There has been a frenzy around artificial intelligence and machine learning (AI/ML) since the “ChatGPT moment” in 2022, and AI/ML is surely going to affect us all. It strikes me that this buzz looks more like a science fiction story (utopia/dystopia) than the old-fashioned Klondike gold-rush craze.
“AI will not replace you. A person using AI will replace you,” we have been told, and people in high places have made it clear that we need to adopt AI/ML. I certainly feel the pressure from those who want to promote more AI/ML in downscaling global climate models, but they don’t seem to know the history of downscaling climate projections.
Nevertheless, I can understand this urge and desire, because it is not just due to large language models (LLMs) such as ChatGPT. A more relevant motivation is likely the impressive success in applying AI/ML to weather forecasting (Bi et al., 2023), with models such as Pangu-Weather, GraphCast, AIFS, Earth-2, and Aurora.
Yet there is a subtle but profound difference between downscaling climate model results for weather forecasting and for climate change, and I have written a few words of caution in a paper recently posted on arXiv.
I fear that methods based on statistics and mathematics are being ditched, and a metaphor for this is the cuckoo laying eggs in other birds’ nests. We have developed methods for downscaling salient climate information based on mathematics and statistics, which I believe will yield more accurate results than present AI/ML algorithms and strategies.
I think it’s important not to forget that the recent success of AI/ML does not diminish the standing of mathematics, statistics and physics. AI/ML is useful when there is a lot of data and incomplete knowledge about how things interact. However, the quality of the data, what it really represents, and its volume have a big impact on the simulations.
There are some concerns that AI/ML gives inaccurate results (as in some Google AI summaries), that it is a “black box”, and that it may “hallucinate”. When it comes to downscaling climate change, the biggest problem is perhaps that AI/ML algorithms are trained on data that are not representative of a future, warmer climate.
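This representativeness problem can be illustrated with a deliberately simple toy example (not from the paper; the “true” relation and the nearest-neighbour predictor are hypothetical stand-ins for real methods). A purely data-driven model fitted to a historical temperature range can interpolate within that range, but when asked about conditions warmer than anything in its training data, it can only echo what it has already seen:

```python
# Toy sketch: a data-driven model trained on a historical climate range
# cannot extrapolate to a warmer one. All numbers here are made up.

def true_relation(temp_c):
    """Assumed 'truth': some climate impact that keeps rising with temperature."""
    return 2.0 * temp_c + 5.0

# Training data only covers the historical range 10..20 deg C
train = [(t, true_relation(t)) for t in range(10, 21)]

def nn_predict(temp_c):
    """1-nearest-neighbour regression: fine for interpolation, blind beyond the data."""
    nearest = min(train, key=lambda pair: abs(pair[0] - temp_c))
    return nearest[1]

print(nn_predict(15.0), true_relation(15.0))  # inside the range: 35.0 vs 35.0
print(nn_predict(25.0), true_relation(25.0))  # warmer future: 45.0 vs 55.0
```

Within the training range the predictions are fine, but at 25 °C the model is stuck at the value for the warmest case it ever saw, while the assumed truth keeps climbing. Physics- and statistics-based downscaling can build in a relationship that extends beyond the sample; a pattern-matching model cannot.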
AI/ML should complement more traditional methods, because they are based on different assumptions and have different strengths and weaknesses, but I fear it may replace them because short-term-thinking accountants and administrators want to cut costs.
Other concerns are the carbon footprint of the data centres needed for AI/ML and how it facilitates dogmatic thinking, in addition to the risk that careless or inappropriate use of AI/ML may lead to maladaptation. More details and references are provided in the arXiv paper Benestad (2026).
References
- K. Bi, L. Xie, H. Zhang, X. Chen, X. Gu, and Q. Tian, "Accurate medium-range global weather forecasting with 3D neural networks", Nature, vol. 619, pp. 533-538, 2023. http://dx.doi.org/10.1038/s41586-023-06185-3
A big worry about replacing the knowledgeable people we have at present, like climate scientists, is the same thing that led to the extinction of millions of species in the past: specialization. If climate scientists (and other scientists) throw in the towel and AI/ML takes over, and then something happens to it down the road (some shutdown, solar flare or other failure), there won’t be anybody to replace it.
This is not to worship human-based science and its sometimes reckless experimentation. AI itself is an example of that: things where we childishly act first and think about the consequences later. Much of the twentieth century is regrettable in this regard. It is just to say that people should continue doing what they are responsibly doing, as a back-up at least. Just in case. I’d say use AI as the back-up, but it might be too late for that.
Agree with the post. Development and validation of specialized models based on sound physics and rigorous statistical analysis take time and effort, but such models generate more reliable and accurate results than LLM-based GenAI, which is not evaluated at all for specific applications. Please, don’t throw in the towel!
A small grounding point that may help non-specialist readers:
“AI” is already being used inside mainstream climate modelling in a very conservative way — not to replace physics, but to assist with calibration and uncertainty exploration within physics-based models.
For example, a recent peer-reviewed article about NASA GISS ModelE research uses ML as a numerical tool to explore parameter uncertainty and calibration, producing a calibrated physics ensemble that better matches satellite observations while retaining the model’s governing physics equations. The AI never generates climate behaviour on its own; it is only used to explore model sensitivity and tuning more transparently.
“We used artificial intelligence (AI) to improve the NASA GISS climate model (ModelE). We used AI to develop a simplified version of the atmosphere model (‘a surrogate’) … to find combinations of coefficients to use in the actual ModelE atmosphere model. Those combinations were used to create a ‘calibrated physics ensemble’ whose members better match observed clouds and radiation within uncertainty ranges.
Each CPE member in the group better matches numerous real-world satellite observations of clouds and radiation within their estimated uncertainty ranges. The process revealed that observational uncertainty is very important for determining the best parameter settings. Additionally, AI revealed that some coefficients previously thought to be less important played a crucial role in creating overall better atmosphere model versions.”
— Elsaesser et al. (2025), Using Machine Learning to Generate a GISS ModelE Calibrated Physics Ensemble
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024MS004713
Author: https://agupubs.onlinelibrary.wiley.com/authored-by/Elsaesser/Gregory+S.
Issues of non-stationarity and representativeness are real — but they apply to all current downscaling and modelling approaches, not uniquely to AI/ML. In practice, AI methods are being evaluated and adopted (or not) through the same cautious, comparative process that has always governed model development.
“What we experience as the chatbot “knowing” something is really the model sampling from a probability distribution shaped by its pre-training data, system prompts, safety layers, and your prompts.” [augmented by profit for its owners and arrogant evangelism from guys who think they’re smarter than they are, and are willing to destroy the planet/universe to prove it]
https://ai.gopubby.com/grok-gemini-claude-and-chatgpt-are-not-what-you-think-they-are-1d65227cb9e4
Caveat: I appreciate that big data is helpful for science, in particular climate and medical science. [Not chatbots: we tend to amalgamate all the parts of a varied field.]