There has been a frenzy around artificial intelligence and machine learning (AI/ML) since the “ChatGPT Moment” in 2022, and AI/ML is surely going to affect us all. It strikes me that this buzz looks more like a science fiction story (utopia/dystopia) than the old-fashioned Klondike gold rush craze.
“AI will not replace you. A person using AI will replace you,” we have been told, and people in high places have made it clear that we need to adopt AI/ML. I certainly feel the pressure from those who want to promote more AI/ML in downscaling global climate models, but they don’t seem to know the history of downscaling climate projections.
Nevertheless, I can understand this urge and desire, because it is not just due to large language models (LLMs) such as ChatGPT. A more relevant motivation is likely the impressive successes in applying AI/ML to weather forecasting (Bi et al., 2023), such as Pangu-Weather, GraphCast, AIFS, Earth-2, and Aurora.
Yet there is a subtle, and profound, difference between downscaling climate model results for weather forecasting and for climate change, and I have written a few words of caution in a paper recently posted on arXiv.
I fear that methods based on statistics and mathematics are being ditched, and a metaphor for this is the cuckoo laying its eggs in other birds’ nests. We have developed methods for downscaling salient climate information based on mathematics and statistics, which I believe give more accurate results than present AI/ML algorithms and strategies.
I think it’s important not to forget that the recent success of AI/ML does not diminish the standing of mathematics, statistics and physics; rather, AI/ML is useful when there is a lot of data and incomplete knowledge of how things interact. However, the quality of the data, what it really represents, and its volume have a big impact on the simulations.
There are some concerns that AI/ML gives inaccurate results, as in Google summaries, that it is a “black box”, and that it may “hallucinate”. When it comes to downscaling climate change, the biggest problem is perhaps that AI/ML algorithms are trained on data that are not representative of a future, warmer climate.
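As a toy numerical sketch of this last point (the data, the linear relationship, and both estimators are invented for illustration; this is not any actual downscaling method), compare a purely data-driven estimator with one that assumes a functional form, when both are asked to predict outside the range of the training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "historical climate" data: predictor x observed only on [0, 1],
# with a simple linear underlying relationship y = 2x plus noise.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = 2.0 * x_train + rng.normal(0.0, 0.05, 200)

def knn_predict(x_new, k=5):
    """Purely data-driven estimator (k-nearest neighbours): it can only
    average target values it has already seen during training."""
    d = np.abs(x_train[None, :] - np.asarray(x_new, dtype=float)[:, None])
    nearest = np.argsort(d, axis=1)[:, :k]
    return y_train[nearest].mean(axis=1)

# An estimator with an assumed functional form (here: linear), standing in
# for a statistics/physics-based model.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Evaluate both in a "warmer" regime outside the training range.
x_future = np.array([1.5, 2.0])
y_true = 2.0 * x_future
y_knn = knn_predict(x_future)          # saturates near the training maximum
y_lin = slope * x_future + intercept   # extrapolates close to the truth

print("truth :", y_true)
print("kNN   :", y_knn)
print("linear:", y_lin)
```

The data-driven estimator flattens out at the edge of the observed climate, while the model with the (correctly) assumed form extrapolates. Real AI/ML downscaling methods are far more sophisticated than a nearest-neighbour average, but the underlying representativeness issue is the same.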
AI/ML should complement more traditional methods, because they are based on different assumptions and have different strengths and weaknesses, but I fear it may replace them because short-term-thinking accountants and administrators want to cut costs.
Other concerns are the carbon footprint of the data centres needed for AI/ML and the way it facilitates dogmatic thinking, in addition to the risk that careless or inappropriate use of AI/ML may lead to maladaptation. More details and references are provided in the arXiv paper Benestad (2026).
References
- K. Bi, L. Xie, H. Zhang, X. Chen, X. Gu, and Q. Tian, "Accurate medium-range global weather forecasting with 3D neural networks", Nature, vol. 619, pp. 533-538, 2023. http://dx.doi.org/10.1038/s41586-023-06185-3
A big worry about replacing knowledgeable people we have at present, like climate scientists, is the same thing that led to the extinction of millions of species in the past: specialization. If climate scientists (and other scientists) throw in the towel and AI/ML takes over, and then something happens to it down the road (some shutdown, solar flare, or other), there won’t be anybody to replace it.
This is not to worship human-based science and its sometimes reckless experimentation. AI itself is an example of that: things where we childishly act first and think about the consequences later. Much of the twentieth century is regrettable in this regard. Just to say that people should continue doing what they are responsibly doing, as a back-up at least. Just in case. I’d say use AI as the back-up, but it might be too late for that.
In other words, don’t put all your eggs in one basket. In this case, an electric basket.
Agree with the post. Development and validation of specialized models based on sound physics and rigorous statistical analysis takes time and effort, but such models generate more reliable and accurate results than LLM-based GenAI, which are not evaluated at all for specific applications. Please, don’t throw in the towel!
A small grounding point that may help non-specialist readers:
“AI” is already being used inside mainstream climate modelling in a very conservative way — not to replace physics, but to assist with calibration and uncertainty exploration within physics-based models.
For example, a recent peer-reviewed article about NASA GISS ModelE research uses ML as a numerical tool to explore parameter uncertainty and calibration, producing a calibrated physics ensemble that better matches satellite observations while retaining the model’s governing physics equations. The AI never generates climate behaviour on its own; it is used as a numerical tool to explore model sensitivity and tuning more transparently.
“We used artificial intelligence (AI) to improve the NASA GISS climate model (ModelE). We used AI to develop a simplified version of the atmosphere model (‘a surrogate’) … to find combinations of coefficients to use in the actual ModelE atmosphere model. Those combinations were used to create a ‘calibrated physics ensemble’ whose members better match observed clouds and radiation within uncertainty ranges.
Each CPE member in the group better matches numerous real-world satellite observations of clouds and radiation within their estimated uncertainty ranges. The process revealed that observational uncertainty is very important for determining the best parameter settings. Additionally, AI revealed that some coefficients previously thought to be less important played a crucial role in creating overall better atmosphere model versions.”
— Elsaesser et al. (2025), Using Machine Learning to Generate a GISS ModelE Calibrated Physics Ensemble
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024MS004713
Author: https://agupubs.onlinelibrary.wiley.com/authored-by/Elsaesser/Gregory+S.
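To make the surrogate idea concrete, here is a minimal sketch of the general pattern (the "expensive model", its parameters, and all numbers are invented stand-ins; this is not ModelE or the actual Elsaesser et al. workflow): train a cheap statistical emulator on a handful of expensive runs, then use the emulator to screen a huge number of candidate parameter settings against an observation and its uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(p):
    """Stand-in for a costly physics-model run with parameters p = (a, b).
    Purely illustrative; this is not ModelE."""
    a, b = p
    return np.sin(a) + 0.5 * b**2

# 1. Run the "expensive" model at a modest number of parameter samples.
samples = rng.uniform(-1.0, 1.0, size=(50, 2))
outputs = np.array([expensive_model(p) for p in samples])

def features(p2d):
    """Quadratic polynomial features in (a, b) for the surrogate."""
    a, b = p2d[:, 0], p2d[:, 1]
    return np.column_stack([np.ones_like(a), a, b, a**2, b**2, a * b])

# 2. Fit a cheap surrogate ("emulator") by least squares.
coef, *_ = np.linalg.lstsq(features(samples), outputs, rcond=None)

def surrogate(p2d):
    return features(np.atleast_2d(p2d)) @ coef

# 3. Screen many candidate parameter sets against an "observation"
#    and its uncertainty, using only the cheap surrogate.
obs, obs_sigma = 0.3, 0.05
candidates = rng.uniform(-1.0, 1.0, size=(100_000, 2))
ensemble = candidates[np.abs(surrogate(candidates) - obs) < obs_sigma]

# 4. Only the short-listed members would be re-run in the full model.
print(f"{len(ensemble)} candidate members pass the surrogate screen")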
Issues of non-stationarity and representativeness are real — but they apply to all current downscaling and modelling approaches, not uniquely to AI/ML. In practice, AI methods are being evaluated and adopted (or not) through the same cautious, comparative process that has always governed model development.
“What we experience as the chatbot “knowing” something is really the model sampling from a probability distribution shaped by its pre-training data, system prompts, safety layers, and your prompts.” [augmented by profit for its owners and arrogant evangelism from guys who think they’re smarter than they are, and are willing to destroy the planet/universe to prove it]
https://ai.gopubby.com/grok-gemini-claude-and-chatgpt-are-not-what-you-think-they-are-1d65227cb9e4
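What “sampling from a probability distribution” means can be sketched with a toy vocabulary (the four tokens and their logit values are made up; real models use vocabularies of tens of thousands of tokens):

```python
import math
import random

# Invented logits for a tiny vocabulary, as if the model had just been
# asked to complete "The capital of France is ...".
logits = {"Paris": 5.0, "London": 3.0, "Berlin": 2.5, "banana": -2.0}

def softmax(scores, temperature=1.0):
    """Turn logits into a probability distribution; lower temperature
    sharpens it, higher temperature flattens it."""
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def sample_token(temperature=1.0, rng=random.Random(0)):
    """Draw one token from the distribution (what a chatbot does per step)."""
    probs = softmax(logits, temperature)
    r, cum = rng.random(), 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # guard against floating-point round-off

probs = softmax(logits)
print(probs)              # "Paris" dominates, but nothing has probability 0
print(sample_token(1.5))  # at high temperature, wrong answers get sampled more often
```

The point is that the model never “knows” the answer; it assigns every token some probability, and the sampling settings decide how often the unlikely ones surface.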
caveat: I appreciate that big data is helpful for science, in particular climate and medical science. [not chatbots: we tend to amalgamate all the parts of a varied field]
Some people seem to have a hatred of AI, somewhat analogous to the hatred of railways when they were first built in the 19th century in the UK. I have found it very useful on the few times I have used ChatGPT. After spending three days trying and failing to find out why a login screen and PHP script weren’t working, I threw the code at ChatGPT and it found the trivial error immediately. I suggest that AI is a very useful and beneficial tool if, like most things, it is used responsibly.
Reply to Adam Lea
“You do You” is a good rule of thumb. Generalizations driven by fear always fail on emotional and factual grounds.
Here’s a faithful 1960s–70s analogue, keeping the same logical structure and tone the article uses:
The problem is that encyclopedias are a black box.
They present knowledge, but without necessarily deepening the reader’s understanding. We don’t really know how editorial boards reach their conclusions — even the editors admit as much. Nor can readers easily verify the reasoning against clear, objective criteria. So when we rely on encyclopedias for answers, we are not guided by reason alone. We are back in the realm of trust. In dubio pro editoribus: when in doubt, trust the editors — that may become our guiding principle.
Adam Lea, good assessment. I have found AI tools useful in solving problems setting up new devices. I just bought a new network audio AV receiver with a 260 page instruction manual and couldn’t get the ARC to work. The AI sorted it out. I find the real talent of AI is it explains things better than humans.
The real problem with AI is that it has a huge risk of being over-hyped and turning into a financial bubble and crash, according to economist.com. Ironically, the early railways in the UK turned into a bubble and crashed.
They definitely have their uses. They can find answers far faster than we can when it comes to technical applications. But I’ve often found them to be wrong in their answers too. I use ChatGPT and another one. They sometimes give different answers. Plus it seems that when something is asked that hasn’t been asked before, it guesses. It gets its answers from the internet, AFAICT, and the internet is often wrong. It seems to be backwards-looking rather than forwards-creating, at least so far. That’s not a good thing as far as climate science is concerned.
If I’m wrong someone can correct me.
Ron R, AI can indeed sometimes get things very wrong, even simple things. I asked one of the AI tools how to do a soft and hard reset on my headphones, as I couldn’t find this in the manual. The AI asked for the model number, but its answer looked strange, so I had another look in the manual and this time I found the information: the AI had the right button presses but had the soft and hard reset instructions the wrong way around. Is this a hallucination?
Anyway, I also sometimes cross-check with two AI tools, if it’s an important issue.
And SA is quite right about the downsides, but humans are fundamentally curious inventors and I don’t think there’s any stopping this AI thing. The cat is well and truly out of the bag, like nuclear weapons. So yes, AI is useful and its speed is very useful, and I don’t want to be pessimistic about AI, but part of me is definitely worried about where it is going and its downsides.
I think of AI so far as just the internet librarian. It searches through what has already been written. As the librarian and cataloguer, it was inevitable. According to it, there are approaching 1.5 billion websites currently and possibly trillions of webpages. I’ve done tons of research on the internet, taking tons of time. AI finds answers much, much faster.
But if it doesn’t know something, it should just say that. Don’t give a factually wrong answer! I’ve had it reference some pretty strange websites, probably because they were the only ones that discussed that topic. And when it discusses certain aspects of the future, it seems to base its answers on someone’s already-written predictions. And I’ve noticed that it seems to want to side with the questioner. Not an objective stance. In other words, it doesn’t have a mind of its own.
I asked chatgpt about this and here’s its surprising answer:
Some AIs (or AI responses) feel pressure to:
• Always give a precise number
• Sound confident even when the data is shaky
• Merge incompatible sources into a single “answer”
Feel pressure? Interesting. Like you I don’t know what the future will hold. I can see common AI taking away a lot of jobs though.
But becoming more and more intelligent to where it runs everything and people descend into a sort of Idiocracy or it doesn’t allow people to turn it off. I don’t know enough about it. I see pluses and minuses.
Ron R: the fundamental error is thinking of ‘it’ as a person with access to a brain rather than a bunch of switches cleverly designed to sound human and optimize attention getting.
Remember, always, the amount of energy it costs, and look at our other problems. It leads us in the direction of exploitation of various kinds: in particular toxic waste and wealth/power acquisition.
It is a tool, useful for many things, but prone to being used by out of control people who happen to ‘own’ it.
Ron R
RR: “But if it doesn’t know something it should just say that. Don’t give a factually wrong answer!”
Agreed. I suspect that AI has been deliberately programmed to always give an answer, and give precise numbers etcetera, in order to avoid expressing uncertainty, so as to encourage us to keep using the particular company’s AI tool.
Even if it’s not programmed to do this, it would have learned from its training programme, web searches and pattern-recognition methodology that humans are reluctant to admit ignorance. So it responds to us accordingly. But this strategy may backfire eventually, as humans do lose faith in people who never admit uncertainty, especially as these guys invariably make more and more stupid pronouncements.
RR: “I can see common AI taking away a lot of jobs though.”
This seems possible, but it would not be the first time in history new technology has taken away jobs. New jobs have generally been created. If AI is different, and we end up with serious unemployment, we may be looking at the necessity of something like a universal basic income (UBI).
The point I was getting at is that it’s very unlikely governments will want to put limits around how much AI grows, as this would be contrary to their free-market leanings. They will likely restrict their interventions to some sort of regulatory framework to protect us from deepfakes etc. So the employment issue is just going to unfold on us, and let’s hope it’s not a horror story.
RR: “But becoming more and more intelligent to where it runs everything and people descend into a sort of Idiocracy or it doesn’t allow people to turn it off. I don’t know enough about it. I see pluses and minuses.”
The current LLM AI works essentially by pattern recognition. In some ways this seems the same as what humans do (?), but human intelligence seems like it’s more than this. So to become more and more intelligent might require new types of AI that may or may not be possible. Right now AI is good at certain types of analysis (which seems very applicable to climate issues, which often involve patterns), but I’m not sure it would be so great at management, for example. I don’t know nearly enough about AI either, but my gut feeling is it’s good at certain things but has its limits.
Consider that Linus Torvalds, creator of Linux, is embracing LLMs for his latest software development project here: https://github.com/torvalds/AudioNoise
Most of these tools are impressive, with Google Antigravity being a consolidated IDE using Gemini as the LLM. I have also been using ChatGPT 5.2 in a shell environment. Trying out Antigravity to refactor the ChatGPT output, it passed regression tests after being able to self-correct some mistakes it made.
When Torvalds first started on his Linux project, he posted on USENET to seek out fellow developers, testers, and users. That approach is now multiplied, because an LLM agent can be a dedicated extra developer that can be tasked and asked questions.
Now the process is all about how many agent tasks one can run in the background while writing a response on RealClimate to say how impressively it all works.
At some point, it’s possible that we may not even need peer reviewers as all the cross-validation will be done via LLM scripts and after that who knows how the dissemination of results will proceed.
So in my free time, I did read through the Rasmus paper https://arxiv.org/abs/2601.00629. What it’s missing is being able to take advantage of the ingredients missing from the conventional climate change models that AI is trying to refactor.
Consider that my own project involves 84 different time-series data sets corresponding to climate indices and coastal tidal sea-level height (SLH)/mean sea-level (MSL) monitoring stations. The coastal MSL stations often feature characteristics of climate indices such as NINO34 and NAO, so the beyond-obvious approach is to feature the physics of longer-period tidal cycles — WHICH IS COMPLETELY MISSING from any of the conventional GCM models being worked on by the likes of Google, NVIDIA, and others.
I don’t care if my approach is not following the conventions, because from my experience using AI, any breakthroughs will come from emergent findings — those not necessarily anticipated based on a closed-world hypothesis. It’s well known that an open-world approach is almost always necessary to reach emergence. So if the AI knowledge base doesn’t include the tidal factors, it won’t find anything, only generate a trail of models that can’t predict anything.
Ron R. makes a good comment about AI/ChatGPT and says
11 Jan 2026 at 12:43 AM
It searches through what has already been written
But if it doesn’t know something it should just say that.
Don’t give a factually wrong answer!
it seems to base its answers on someone’s already written predictions.
I’ve noticed that it seems to want to side with the questioner.
Not an objective stance.
it doesn’t have a mind of its own.
Feel pressure? Interesting.
It is sounding very Human already.
Especially the
Sound confident even when the data is shaky and to
Merge incompatible sources into a single “answer”
Nigel: “I suspect that AI has been deliberately programmed to always give an answer, and give precise numbers etcetera, in order to avoid expressing uncertainty. So as to encourage us to keep using the particular company’s AI tool.”
I do too. I’ve noticed it wants to keep the conversation going, even if it’s into irrelevance.
UBI, yeah, I remember when Elon Musk promoted that. But my question is, where is the money going to come from? And governments know once you turn on the water it’s next to impossible to turn the spigot off again. But yeah, UBI would be great! It would have to equal a living wage though because that’s what AI will be taking away. That or the cost of things would have to go way down. Also there might be the blackmail factor to consider, as I bring out in O.
Susan, I get your worries about it. It might be bad. But then again it might turn out to be ok. I know though that we tend to fear the unknown. When the sailing explorers ventured into the Atlantic they never knew when they went to sleep on their ships at night if they would wake up falling off the side of the world. The way I look at it is, we gotta do something cause what we’ve been doing so far ain’t working. I might have to eat my words though.
Data, thanks. :)
Ron R., I think that analogizing AI to librarians may be selling librarians short. A good librarian is likely to have good instincts for which sources of information are credible. AIs… not so much. Elon Musk has managed to train Grok to be an idiot that mirrors his prejudices.
Ray Ladbury, yeah, I forgot. I heard that about Elon Musk. We need totally objective AIs, not ones that can be prejudiced. We’re asking them to find the truthful answers, and what’s true to one person is false to another.
I can see a climate deniers AI confidently telling him that global warming isn’t happening. Then we’re back at square one. :/ Glaciers otoh don’t lie though.
Ron R @11 Jan 2026 at 8:16 PM
RR “But my question is, where ( regarding a UBI) is the money going to come from? And governments know once you turn on the water it’s next to impossible to turn the spigot off again. ”
My understanding is that the classical type of UBI gets its funding by raising taxes across the board, but that money is all given back to everyone (meaning literally everyone) as a UBI to spend as they wish. So it’s not like the usual tax increase going into some government pet project we might never use. So the increase in taxation doesn’t really matter per se.
I agree that once the water is turned on it’s hard to turn off. But I think there may be no other sensible alternative to a UBI, although I think it should be a policy of last resort. Overall I think things like government financial assistance to the unemployed or invalids, sometimes called social security or social welfare, have been a big net benefit to society. By having this form of what is essentially a government insurance scheme, it’s probably encouraged people to start businesses and innovate and take risks, because there’s a backstop government protection there. But it’s important to make sure such schemes don’t get abused or are overly generous. But I won’t digress further as it’s getting OT.
RR: “Susan, I get your worries about it.”
I think her comments on the way AI requires huge volumes of electricity are spot on. I do wonder if the system is efficient, or is just an expensive way of enabling turbocharged web searches.
Ron,
I’ve been meaning to get you this quote since last month and your comment about regrets:
https://www.goodreads.com/quotes/1176233-according-to-convention-i-am-not-simply-what-i-am
And consider what Rasmus says above:
“the biggest problem is perhaps that AI/ML algorithms are trained on data that are not representative in a future warmer climate.”
Of course, I’ve been making the point for a while that we need to move along in climate science as our capabilities and instrumentation have improved, but there does seem to be inertia in humans as well.
Alan Watts in The Way of Zen
According to convention, I am not simply what I am doing now. I am also what I have done, and my conventionally edited version of my past is made to seem almost more the real “me” than what I am at this moment. For what I am seems so fleeting and intangible, but what I was is fixed and final. It is the firm basis for predictions of what I will be in the future, and so it comes about that I am more closely identified with what no longer exists than with what actually is!
I will have to ponder that quote. But I’ve long thought that a person is not any one point in time, that person is the whole thing taken together, birth to death.
Isn’t it funny that we start out as beautiful babies, innocent, happy and adored by all, and end up bent, hobbling down the street, scarred, miserable and alone?
Reading it, it sounds like what he’s saying is actually the opposite of me: that what is the real him is only the present version of him. Thus regrets don’t really exist? Or am I reading that right?
and so it comes about that I am more closely identified with what no longer exists than with what actually is!
Isn’t that because what actually is hasn’t done anything yet?
And consider what Rasmus says above.
But isn’t the past a guide to the future? It’s “standing on the shoulders of giants”.
Anyway, thank you for that post.
Ron R. says
11 Jan 2026 at 8:51 PM
Ron R. says
11 Jan 2026 at 10:20 PM
I’d like to add two things if I may.
“Wise men say, and not without reason, that whoever wishes to foresee the future must consult the past; for human events ever resemble those of preceding times. This arises from the fact that they are produced by men who ever have been, and ever will be, animated by the same passions, and thus they necessarily have the same results.” — Niccolò Machiavelli
and from my earlier comment 6 Jan 2026 at 11:49 PM @ https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843319
“DAC is not the villain. Net zero is not the villain. Carbon budgets are not the villain. They are symptoms of a deeper refusal: to accept irreversibility, to accept inheritance, to accept that the scale and power of civilization itself are the problem. That’s why techno-fixes proliferate, limits are endlessly deferred, and language becomes euphemistic and circular.
All the debate about energy requirements, efficiencies, or LULUCF ratios is still operating inside the machine metaphor: inputs, outputs, efficiencies, substitutions. As Whitehead would say, they are mistaking abstractions for concrete reality. To put it sharply: you cannot budget the future of a system whose governing dynamics change as a function of its past. That is the dagger cutting through all these circular debates.”
So when you say “But I’ve long thought that a person is not any one point in time, that person is the whole thing taken together, birth to death.” about yourself or others, those concepts equally apply to the Earth and Humanity as a whole.
We are all at the mercy of our past. Beware those who might lead you astray.
Make that 3 things to add:
Tim Garrett Thread: New paper in @EGU_ESD with @ProfSteveKeen and @prof_grasselli shows a 50-year fixed relationship between world economic “wealth” – not the GDP – and global primary energy consumption. Implication? Our future is tied to even our quite distant past.
https://xcancel.com/nephologue/status/1537848492323876865#m
Lotka’s wheel and the long arm of history: how does the distant past determine today’s global rate of energy consumption?
https://esd.copernicus.org/articles/13/1021/2022/esd-13-1021-2022.html
and see
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2013EF000171
https://esd.copernicus.org/articles/6/673/2015/esd-6-673-2015.pdf
Data, the problem with what you’re saying, as Nigel and others I think have pointed out, is that your philosophy is asking us to stop trying: to accept that we cannot reverse things. But on this world, while there’s still time, we should be doing any and everything we can to ameliorate the problem. True, the past is a guide to the future, but while the past is now closed, the future is open. There are any number of possibilities and paths that we can choose.
Giving up is exactly what the ff companies would like. I hate to say it, but it’s insulting to ask us to do that. Let’s be like Don Quixote:
To dream the impossible dream,
To fight the unbeatable foe,
To bear with unbearable sorrow,
To run where the brave dare not go …
To right the unrightable wrong,
To love pure and chaste from afar,
To try when your arms are too weary,
To reach the unreachable star …
This is my quest, to follow that star,
No matter how hopeless, no matter how far,
To fight for the right, without question or pause,
To be willing to march into Hell, for a Heavenly cause …
And I know if I’ll only be true, to this glorious quest,
That my heart will lie peaceful and calm, when I’m laid to my rest …
And the world will be better for this:
That one man, scorned and covered with scars,
Still strove, with his last ounce of courage,
To reach the unreachable star …
https://m.youtube.com/watch?v=Woe3EfB9LTI
Ron R. says
13 Jan 2026 at 1:54 PM
Data, the problem with what you’re saying as Nigel and others I think have pointed out is that your philosophy is asking us to stop trying.
—————
No, Ron, my (?) Philosophy is NOT asking ‘us’ to stop trying. Nowhere do I say that, and nowhere do my comments imply that. You, Ron, with Nigel and others, are imagining these things where they do not exist.
So if you’re feeling ‘insulted’ then that is all on you not me. If anything my (?) Philosophy is imploring others to think properly and ground themselves in what’s actually real first. To then face that with courage and hope for a future much different than today.
I was offering you a “leg up”. You’ve turned around to call me the fool instead.
I am not retreating into dysfunctional Imaginings and Fears, nor am I failing to see past all the false framings that permeate the Humanosphere today. There are millions to billions of people thinking clearly and speaking truth just like me. I ignore the ‘noise’. This is not defeatist!
What I write is capable of motivating people to pause and check themselves, to think clearly. I’ve seen it done. That you still can’t, and choose to avoid what I wrote is not a problem of my texts, nor my (?) Philosophy.
Zebra says: “but there does seem to be inertia in humans as well.”
Inertia? When someone else comes along and points that out in abundance, in different forms, what you do is tell them they’re wrong.
Ron says: Giving up is exactly what the ff companies would like.
Your own words prove you don’t want to think for yourself. You much prefer a fictional world sold to you by someone else. One that does not exist. Here’s a “truth” to ponder: you don’t know “what the ff companies would like.” You’ve no idea.
Ron says: “but while the past is now closed the future is open. There are any number of possibilities and paths that we can choose.”
That’s quicksand. The future is not open. The options are extremely limited. The available paths remaining are few and almost all pre-ordained and likely unstoppable already. Ron, you do not get to Choose anything you want. That’s where fantasies lie in wait.
Unless you stop denying Civilizations are unnatural accumulations of wealth and power that cannot be sustained over the long term. Insuperable biophysical limits combine with innate human fallibility to precipitate eventual collapse. The world has hit those limits like an Arctic iceberg in the night.
The solution to a civilization’s collapse — absent moderation — is to
1) recognize that the “deep structural problems” have no solutions and
2) preserve as much as possible to avoid losing knowledge, tools, resources etc. to history.
I wish there were a solution but there isn’t. Solutions are for problems not predicaments. This is a Predicament.
Ron, the “silly but sweet” response of sharing the lyrics to Don Quixote confirms not only that you can’t understand anything I posted above, but that you did not try. That’s OK. Your choice. Don Quixote was an idealistic dreamer, sure, but you’re inexplicably refusing to recognise the schizophrenia inherent in his detachment from reality as a form of psychosis or delusion.
It is clear as day most have stopped thinking there is something better to learn, because they prefer fantasy over reality. That’s OK too. Be aware these are the only kinds of Choices left for us now. That’s OK as well.
It’s the real Inertia plaguing our wealthy and failing first world existence. Deep denial not knowing what to do or even what the predicament is.
The core failure everyone is skating around is this: climate mitigation treats a living, historical, relational Earth as if it were a controllable machine. They mistake abstractions for reality. In short, because energy is the sine qua non of complexity, anything that diminishes the quantity, quality, or efficiency of energy threatens a complex civilization’s survival. T.I.N.A.
One way of restating the Second Law, often called the entropy law, is to say that matter-energy transformations cannot be reversed; time’s arrow flies in only one direction… In short, over time energy moves inexorably downhill from a more useful or concentrated state to one that is less useful or concentrated. This movement is called entropy.
Immoderate Greatness = Hubris.
Civilizations rise and fall because human nature — hubris — fails to let them see the flaws in their own system. One of the greatest traps of all is fanaticism: refusing to reconsider the values and goals of the system, even though they have now become perverse or even disastrous.
Modern industrial civilization is Ashoka before conversion: unprecedented power, unmatched material success, total battlefield dominance over nature and mounting psychic, ecological, and moral sickness.
The parallel is sharp: we are winning the war we cannot afford to win. Metanoia, in this sense, is not a private spiritual event. It is a civilizational cognitive reversal—the moment when a society realizes that its definition of “progress” and “success” is pathological.
Modern civilization is therefore bound for a worse fate than the Titanic. When it sinks, the lifeboats, if any, will be ill provisioned, and no one will come to its rescue. Humanity will undoubtedly survive. Civilization as we know it will not.
Humanity does not need saving. But when you, your family, and your community need saving, there’ll be no one coming to save you. They’ll be too busy trying to save themselves.
The really courageous and positive life affirming thing to do is to build your own life boat with your family, friends and/or a community worthy of trust who share a similar Philosophy and Idealism. Empowered by realism not delusions. It’s OK by me if you disagree and don’t get it.
Makes no difference, either way.
RON R
“Am I reading that right?”
Ron, I don’t presume to speak for Watts, but the section of the book that came from seemed to be about the basic principle… “it’s complicated”. Here’s my version:
Regret 1. “I regret buying stock A instead of stock B, because B went up and A went down.”
Regret 2. “I regret stringing along my first serious girlfriend even though I had no intention of marrying her, because the sex was really good.”
So when I asked you “are you the person that did the things that you regret”, I think we see two different answers here… I don’t see how I can actually “regret” #2 and be the same person who did it in the first place, who didn’t take into account or care about the effect it had.
So my general point is that people can change and “grow”, even though there is no guarantee that they will. And it seems that all our troubles right now are all about those who can’t.
Zebra, I’m not sure I understand you correctly, but if you’re saying that because you grew and are no longer the same person you were before, you should not be held accountable for your previous actions (and please forgive me if you’re NOT saying that), then I think the families of Auschwitz victims (as just one of lots and lots of examples) would not be so forgiving.
Data, First, I never called you a fool. You’re clearly intelligent.
No, Ron, my (?) Philosophy is NOT asking ‘us’ to stop trying. Nowhere do I say that, and nowhere do my comments imply that. You, Ron, with Nigel and others are imagining these things–where they do not exist.
So then I was mistaken, and you agree that we should be doing everything we can to mitigate the issue (and I think we can both agree that I was talking of mitigation when I said “stop trying”), correct? Then we cleared up one misunderstanding. You are for mitigation. Yay!
Let me tell you, though, where you are using confusing language so you can see what we see, because right now it sounds as if you are treating climate change mitigation advocates as the fools.
Since this is the AI post I had AI examine your posts and asked if you are saying that we should give up trying to mitigate CC (which the FF companies would obviously like).
It sounds like you’re saying that significant further climate change is unavoidable because the system is already committed by its history? In saying,
“The solution to a civilization’s collapse — absent moderation — is to
1) recognize that the “deep structural problems” have no solutions”
“I wish there were a solution but there isn’t.”
“The future is not open. The options are extremely limited. The available paths remaining are few and almost all pre-ordained and likely unstoppable already”
Are you not saying what it sounds like, that we should stop trying to prevent collapse or climate change and start trying only to survive it [give up mitigation for adaptation]?
AI: Even without the word “useless” the implication is unavoidable:
1) if outcomes are inevitable
2) if solutions do not exist
3) if mitigation frameworks are delusional
4) if civilization cannot be saved
Then mitigation strategies are useless.
So Yes – his argument clearly supports the conclusion that mitigation is futile.
Bottom Line
He does not say “mitigation is useless” verbatim, but he logically and repeatedly implies it. That is functionally indistinguishable from telling people to abandon mitigation.
Can you see where we might get the idea?
Ron R. says
18 Jan 2026 at 12:23 AM
Can you see where we might get the idea?
Data:
Ron, I sure do see where you might get the idea. But a conclusion built on false premises and missing information will never be correct.
You and your AI are asserting I’m saying / implying “mitigation is futile” when I have not said that, nor implied it in the way you claim.
If you want the context right, you need to read / include the full sub-thread and the papers I cited — not cherry-pick a few quotes and ask an LLM to infer the rest to suit yourself.
I’m not going to argue about a misunderstanding that results from ignoring the context provided. I see no benefit in saying more.
Data, I didn’t cherry-pick certain quotations from you and ask AI to infer your comments. I used your last two posts to me, in their entirety, and then asked AI to decide what you were saying.
But ok, if I and the AI got it wrong then I apologize and will ask again (you don’t have to answer if you don’t want to). In your own words, without scads of links, just your words, are you for mitigation (reducing FF emissions) versus just adapting to the effects afterwards? Be as long as you want. Nothing cherry picked.
Because right now it sounds like you were sent from the general of the opposing army (example: Putin) to tell the soldiers on the other side (example: Ukraine) to stop fighting, just lay down your weapons and everything will be just fine, right? Otherwise you can’t blame people here if we reach that conclusion.
Poor old Data multi troll. Frequently complains that multiple different highly educated people on this website misinterpret his / her posts. Never occurs to the Data genius that it’s because he / she doesn’t write clearly and is ambiguous. Oh no, it couldn’t possibly be that!
Zebra, I should add that I have a great deal of respect for Alan Watts. The man was a genius.
https://www.nature.com/articles/s41586-025-09922-y
Meaning lots of lone rangers will make progress in certain scientific topics — particularly those topics that are richest in data.
That points to climate science, a discipline rich in undifferentiated data but poor in controlled experiments. The latter is the way that science and engineering usually make advances, but since controlled experiments are not particularly computationally intensive, AI is not as valuable there.
AI is having a deadening effect on adults as well as children. We’re forgetting how to work and think; this is particularly problematical with the young (I’d say, at least to the mid-20s).
Jonathan Haidt Brings New Evidence to the Battle Against Social Media. The author of “The Anxious Generation” shares his latest research about the harms social media is doing to children. – https://www.nytimes.com/2026/01/16/podcasts/jonathan-haidt-new-evidence.html & unpaywalled link: https://archive.ph/OUbbU
Then there is the constant misrepresentation, even here where on the whole people are informed about how computers work and how to do research, of AI as an animated being with a soul and ethics. AI is a servant (maybe becoming a master) which does what it is programmed to do. Mostly, that is to support the prejudices of the people who use it, particularly of those who ‘own’ it.
There are so many aspects to using AI in climate science. The most visible one is in terms of LLMs providing a way to consolidate understanding in the context of a prompt. That spills into the ability to generate a software algorithm given a problem’s requirement specification, as Adam Lea suggested. This is in fact a variation of a prompt, i.e. the prompt context is a problem specification, which an LLM is suited for, and truly excels at, since the vast majority of matches it finds through training are based on working and tested software. In other words, it less often “hallucinates” on code generation, since the training doesn’t follow the ambiguous paths of natural-language statements. It can still fail on deprecated syntax or imagine libraries that don’t exist, but any experienced SW developer will tolerate that as a tradeoff for all the benefits accrued.
The other branch of AI not directly related to LLMs is related to the acceleration of computation and numerical model fitting strategies together with cross-validation that can hypothetically provide a huge benefit to cracking all the tough geophysical fluid dynamics math involved in climate (seasonal and long term such as ENSO and beyond) and weather (near term).
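The model-fitting-with-cross-validation idea mentioned above can be sketched minimally. This is a generic illustration of k-fold cross-validation, not the specific pipeline or software the comment refers to; the helper names (`kfold_indices`, `cross_validate`) and the trivial mean-only model are invented for the example.

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(fit, score, xs, ys, k=5):
    """Average out-of-fold score for a model-fitting routine:
    fit on k-1 folds, score on the held-out fold, repeat k times."""
    folds = kfold_indices(len(xs), k)
    scores = []
    for i, held_out in enumerate(folds):
        train = [j for f in folds if f is not folds[i] for j in f]
        model = fit([xs[j] for j in train], [ys[j] for j in train])
        scores.append(score(model,
                            [xs[j] for j in held_out],
                            [ys[j] for j in held_out]))
    return sum(scores) / k

# Toy example: a "model" that is just the training mean,
# scored by mean squared error on the held-out fold.
fit = lambda xs, ys: sum(ys) / len(ys)
mse = lambda m, xs, ys: sum((y - m) ** 2 for y in ys) / len(ys)
data_x = list(range(20))
data_y = [2.0] * 20
print(cross_validate(fit, mse, data_x, data_y))  # constant data -> 0.0
```

The point of the out-of-fold score is exactly the one made above: it estimates how a fitted model behaves on data it has not seen, which is what separates genuine predictive skill from curve fitting.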
Put this all together and that’s where the force multiplier starts to pay real benefits. Scientists with experience in software development can quickly test out new algorithmic ideas, use the extra computing horsepower to search non-linear solution space, and augment that with using LLMs to advance their understanding and intuition.
This may sound overly optimistic on my part, but I’ve been involved in the research precursors to the current state-of-the-art of AI. This link is a research report from 2013 on using some of the more conventional aspects of AI:
https://www.researchgate.net/publication/283579370_C2M2L_Final_Report … what we didn’t foresee at the time was how important the idea of getting “close enough” in approximating a solution is. We concentrated on eliminating ambiguities by relying on ontological classification of information, but in retrospect that turned out to be overkill. In practice, what happens is that the stochastic aspects of contextual information and the weighting of quality is what has enabled the revolution in this kind of AI. In other words, we don’t really need to be librarians and maintainers of ontologically and semantically correct data/information, as that is taken care of in a probabilistic sense (it also contributes to hallucinations, but as I said this is a minor drawback).
I am convinced there will be breakthroughs in understanding natural climate variability, which I am also pursuing here: https://pukpr.github.io/examples/mlr/
I keep adding AI ideas to the mix, and as others start to “unleash the hounds” of multiple AI agents acting independently on the problem domains, the advances will be rapid. I would be happy if future RealClimate posts keep following this trend.
Of course AI is a useful tool. But it has downsides, and currently feeds a kind of delusional evangelism, while big data is adding to other ways of poisoning our environment. Meanwhile, kids are not learning to think, and people are worshipping at the altar of not thinking for themselves as well. It’s mechanical, and it’s being used as if it were human and had ethics and judgment. The economic bubble is also an acute danger.
Meanwhile, big data is not a good neighbor. There are hundreds (thousands) of local reports. Here’s one that’s front and center for me this afternoon: ‘It’s hell for us here’: Mumbai families suffer as datacentres keep the city hooked on coal – https://www.theguardian.com/technology/2025/nov/24/mumbai-datacentres-coal-air-pollution
Exposing The Dark Side of America’s AI Data Center Explosion | View From Above | Business Insider
https://www.youtube.com/watch?v=t-8TDOFqkQA
Global data center expansion and human health: A call for empirical research
https://www.sciencedirect.com/science/article/pii/S2772985025000262
“Data centers generate significant noise pollution primarily from diesel generators and Heating, Ventilation, and Air Conditioning (HVAC) systems ….
“Air pollution is the most acute concern. ….
“Moreover, significant water needs for cooling, often from drinking supplies, create additional challenges.”
“Of course AI is a useful tool. But it has downsides, and currently feeds a kind of delusional evangelism, while big data is adding to other ways of poisoning our environment. Meanwhile, kids are not learning to think, and people are worshipping at the altar of not thinking for themselves as well. It’s mechanical, and it’s being used as if it were human and had ethics and judgment.”
That cat is out of the bag now. Social media and the internet have given license for idiots to spread idiocy and ignorance without being called out for it, and human cognitive biases mean people can always find things that support their world view and massage their ego. People these days are cognitively lazy and can’t be bothered to apply critical thinking and fact-check anything; they just lap up what is spoon-fed to them. Populism wins over logical analysis, and bad decisions are made frequently as a result. You just have to look at how much more thoughtless people have become since the pandemic: blasting music out of their phones on public transport, stopping in doorways blocking entry/exit, driving at 50 mph in the middle lane of a motorway with nothing to their left (in the UK), symptomatic of a complete lack of situational awareness/awareness of other people and their needs, blaming everything on immigrants because the Daily Mail told them to, COVID denial, climate change denial, the rise of toxic populist political parties. Sometimes I wonder if the best thing that could happen to planet Earth is a global mass extinction of humanity, analogous to Noah in the Bible: God wipes the slate clean and starts again afresh. I have to keep reminding myself there are still some lovely people around.
Adam L:
One thing I keep remembering is that several of the learning experiences I most treasured in my life were difficult, one way or another. I remember drawing through tears as my then instructor made me stick with the truth and work without a crutch of some sort. Sounds simplistic, but it was HARD WORK! I never looked back from that day. Cheating is easy, but it’s mostly not useful.
For complex computational tasks, computers are great. For honesty and courage, nope.
Susan, I’m definitely no techie as far as advocating for it, believe me. My comments here for AI are me straining to see the positive. I do know though that, as I said, we definitely need something.
without a crutch of some sort
Cheating is easy, but it’s mostly not useful
I hear you, but still, think about it. We’re talking about using technological shortcuts, right? Every single fix since the invention of the wheel was a technological shortcut. Without thinking about it we use probably thousands of shortcuts, and shortcuts of earlier shortcuts of even earlier shortcuts, etc.; they’re everyday things. There’s the car from the horse and buggy, the telephone from the telegraph and, earlier on, horse message carriers, the washing machine from the washboard (when my washing machine went out and I didn’t have the money to replace it, I was forced to wash clothes by hand in the sink for months :( ). Even simple things like the pencil and pen and books you use.
I used to watch a series on tv (another technological shortcut) called “Connections” with James Burke. I was fascinated that he was able to find the techno connections for everything, like the clock, it just went on and on and on.
About the environmental costs, you’re right, they’re huge. I asked AI about it and it says,
Data centers, which house AI infrastructure, are projected to consume 6.7 to 12% of U.S. electricity by 2028, up from 4.4% in 2023.
I saw this article about AI.
https://www.arbor.eco/blog/ai-environmental-impact
It’s grim. Makes me want to use it less and less. But most new things are bad. Still it’s forcing us to come up with better ways to use it.
How to reduce AI’s environmental impact
Example,
This is the big lever. If the data center runs on solar or wind power, the operational carbon can drop to near zero. Major providers like Google and Microsoft are investing heavily in 24/7 carbon-free energy matching.
Again, I am NOT big into new tech at all. And as I said at the start, it’s a bad idea to specialize. The history of life on earth shows that. But also as some have said here, the cat is out of the bag. We have to do something. Will we turn into a Matrix type world? Will our future be governed by a Hal 9000? Is “Resistance Futile”?
https://m.media-amazon.com/images/I/318OTicONiL.jpg
I don’t know. But again, we need to come up with something. Right now we have terrible cliffs coming up fast like climate change and the population crunch and petty jackass dictators charging through the world like the four horsemen of the apocalypse destroying everything in their path.
Maybe it’s like Adam Lea said above,
Sometimes I wonder if the best thing that could happen to planet Earth is a global mass extinction of humanity
Ron R, you appear to be sceptical of new technology especially AI, but you admit that AI is now with us, and perhaps it could be used to help solve our environmental problems and other problems. If so then this is an entirely sensible view.
A super intelligence in a black box has got to be useful for problem solving. It’s just a tool, and it’s up to us to use it for good purposes and insert some sort of kill switch in case it gets out of control.
Maybe some of us are going through a guilt phase. Our technology and overpopulation are causing a whole stream of problems for the natural world, and we are now feeling a certain panic and guilt about our own technological progress. And wondering if humanity has gone down fundamentally the wrong path, and would have been better off staying as harmless hunter-gatherers, or never having been born.
I think we have to step back from that a bit. Even if we lived a much simpler lower consuming life, the entire planet could still be destroyed by an asteroid anyway. Nothing lasts forever.
A more practical approach is to maintain a technology-based society, and stop feeling guilty about it. But we need to be wiser about how we use technology. We have to reduce the activities that are doing serious damage, for example the burning of fossil fuels, deforestation, and the destruction of pollinating insects. We have to encourage the global population to shrink in size, and accept that more is not always better or feasible.
We could also live more simply and consume less and be happy, but I think there are limits to how far you can go with this before it becomes a miserable existence and causes economic decline and unemployment. And humans are so addicted to technology and travel, etc., that it seems unlikely they will voluntarily consume dramatically less of it. Freezing consumption levels is probably the best we can hope for. But that would still certainly be helpful.
Nigel: I think we have to step back from that a bit. Even if we lived a much simpler lower consuming life, the entire planet could still be destroyed by an asteroid anyway. Nothing lasts forever.
But it hasn’t, and might not for millions of years, so we need to protect it.
https://midmiocene.wordpress.com/2015/02/06/a-glimpse-of-divinity/
The thing that seems to be missing from these—indeed from most–discussions of AI is the fact that AI is a tool. It has a purpose it is supposed to accomplish. We need to decide 1) what that purpose is; and 2) whether it fulfills that purpose sufficiently well to justify the cost. And the cost can be social as well as economic and environmental.
Just because a bunch of out-of-touch tech bros are serving us a new flavor of the month doesn’t mean we have to eat it three meals a day.
First things first: what should AI do? My vote is that it should increase understanding in a way that is difficult for human intelligence.
Nigel, I slept on it. I’d say that if they can find a way to erase its environmental costs then I’d be all for AI.
in Re to Ray Ladbury, 18 Jan 2026 at 6:10 AM,
https://www.realclimate.org/index.php/archives/2026/01/ai-ml-climate-magic/#comment-844030
Dear Ray,
I think that an example of a proper AI use – as a tool that makes boring tasks easier and/or accomplishes time-consuming tasks quicker – may be the above post by Ron R., 18 Jan 2026 at 12:23 AM,
https://www.realclimate.org/index.php/archives/2026/01/ai-ml-climate-magic/#comment-844023
Best regards
Tomáš
Ron R. @17 Jan 2026 at 10:31 PM
Nigel: I think we have to step back from that a bit. Even if we lived a much simpler lower consuming life, the entire planet could still be destroyed by an asteroid anyway. Nothing lasts forever.
RR: But it hasn’t, and might not for millions of years, so we need to protect it.
Nigel: Of course. I outlined what I think it’s realistic to do about that. But the point I’m making is that nothing lasts forever, so solutions that are essentially utopian dreams or fundamentalist dogma based on some idea of lasting for all eternity don’t make much sense to me. For example the idea that we should all dramatically reduce our levels of per capita consumption. Likewise having another try at socialism.
Our society has evolved in a way where we have gone from hunter gatherers, to farmers, to industrialists, to where the whole world is now one huge farm with a few industrial hubs, and some people own huge homes and cars purely as a status symbol. But most people globally live in modest sized homes, own just the usual basic appliances, and travel overseas a limited amount. And there are about 7 billion (?) of them. Asking them to slash their personal consumption isn’t very realistic (not that you are necessarily doing that), so we have to fix our problems in other ways.
This does not mean we say we are doomed and have a big party. It’s saying we should fix the problem in a practical way that has some chance of being adopted, even if it’s not the perfect solution. The nearest we have got to this is renewable energy and putting limits on other forms of toxic waste, and that is all good.
I’m a realist and a political moderate. Not a utopian dreamer.
——————————
Ray Ladbury, I agree AI is a tool and we can determine how it is used. Made a similar comment to yours somewhere on another thread. Glad to see someone else see it the same way.
Ray Ladbury says
18 Jan 2026 at 6:10 AM
We need to decide 1) what that purpose is; and 2) whether it fulfills that purpose sufficiently well to justify the cost. And the cost can be social as well as economical and environmental.
and
My vote is that it should increase understanding in a way that is difficult for human intelligence.
Data: I’m sorry, you’re way past being late for dinner. You do not get a Vote. If you had ‘worked’ in the field the last 20 years, maybe they’d have let you have a say. Now you’re a > ‘Way too late Ray’
Left at the Altar.
Missed the Bus.
Stuck on the Tarmac.
A day Late, and a Dollar short!
Ray says:
The full suite of AI applications starting from LLMs form a tool-building empire. A significant proportion of large software engineering and model building projects consist of aspects such as testing infrastructure, combinatorial evaluation, maintenance and version control, debugging, refactoring and documentation, and other stuff that typically takes a team of specialists to cover.
Anybody that has experience in SW development knows that there are the tool builders and the domain specialists — now they can be combined to some degree. I remember being so deep into developing tools that I often neglected the domain side.
Now one can conceivably do both on one’s own. My preference would be to let the AI create the tools while you spend more time exploring the domain.
Nigel. I know. But if we as western peoples don’t cut back on our over-consumption we can’t very well ask the rest of the world to do so. After all it would take 5 earths if everyone in the rest of the world lived like us.
But you’re right that I don’t see how we’re going to do that. Not when half the population doesn’t see a problem. Maybe all Democrats will live like paupers while all Republicans continue to live high and mighty, thumbing their noses at the rest of the world. We can talk about “cutting consumption”, but that’s always for the other guy. I don’t know how to solve it. I’m as nonpolitical as they come too. I live like a pauper. Not because I want to but because I have to. :/
By the way, I misspoke when I said that I’d be all for AI if it wasn’t for the environmental costs. There are other costs too as Ray pointed out,
https://www.realclimate.org/index.php/archives/2026/01/ai-ml-climate-magic/#comment-843789
Plus there are the employment costs. Iron all the deficits out then I’d be all for it. As I said before, we need something. Soon.
Hunter gatherers. Did we make a wrong turn when we gave that up? A Faustian Bargain?
A quote,
We’d never been forced from the garden, no, we left voluntarily. For somewhere in our remote past, we’d made a choice, an exchange. This world of uncertainties, primitive fears – and unimaginable beauty, we’d traded for lives of security and comfort. That tree of lore. But with the gain in knowledge we sacrificed something deep in our souls, a vital part of ourselves. It’s something we’ve tried, futilely, ever since to regain.
Thanks Tomáš
By the way, an AI answer is embedded in the search engine I use. Even if I put in a standard search query and intend to use a standard non-internet search, it not only comes up with internet websites, it automatically it includes an AI answer too.
Hmm, another invention, the computer.
If you use Google, type in -ai at the end of your search. Magic! [space minus ai]
Good catch, Susan. Haven’t tried that with my search engine. Don’t use Google though since they lost their way. They now side with that guy with the funny haircut, afaict. They’re all about making money with advertising and spying on you now. I’ll just give two articles as examples.
https://www.commondreams.org/news/2019/10/11/remember-when-googles-thing-was-do-no-evil-report-reveals-how-internet-giant-funds
https://ia.acs.org.au/article/2022/google-hoards-more-personal-data-than-facebook.html
By the way, an AI answer is embedded in the search engine I use. Even if I put in a standard search query and intend to use a standard non-internet search, it not only comes up with internet websites, it automatically it includes an AI answer too.
Hmm, another invention, the computer.
Pardon me while I get something from the refrigerator.
WRT what AI can do, I think it is important to keep in mind the words of mathematician Richard Hamming (inventor of the Hamming code error correction):
“The purpose of computing is insight, not numbers.”
The ultimate question is not what numbers we get or even how well the model explains the data. What matters is whether the results enhance our understanding and increase predictive power. The jury is still out on whether AI–especially LLMs–can do that for us.
In fact, by allowing people with no understanding of a subject to appear knowledgeable, it is possible that AI represents a step backward in public discourse on subjects requiring real expertise.
I still maintain that LLMs are merely 2-3 evolutionary steps above Clippy the paperclip.
Couldn’t agree more Ray!!
Rasmus, before we ask an AI to guess, could you temper the NATO policy dispute now in progress with your estimate of how soon the first ship transit around the top of Greenland might occur?
Some US spokesmen are using the present tense in speculating about transpolar shipping routes.
Well we already know from a highly reliable source that Greenland is presently completely surrounded by Chinese and Russian ships–in the depths of winter no less!!!!!!! So obviously this is already occurring!
jgnfld says
13 Jan 2026 at 1:39 PM
“Greenland is presently completely surrounded by Chinese and Russian ships”
Data: On which planet in what galaxy?
Data:
The planet of irony (‘highly reliable source’ is the ‘stable genius’ and his group of sorry yes people).
The question is moot, although technically not until there is a blue ocean event …. 2020s or 2030s…
Why? Because north of Greenland being ice-free for shipping is not that big a deal in the future or now.
Russia and China have already significantly expanded their shipping activities through the Arctic in recent years, specifically via the Northern Sea Route (NSR).
Recent Shipping Activity
Record Transit Volumes: In 2025, the NSR saw a record 103 transit voyages (up from 97 in 2024), carrying approximately 3.2 million tons of cargo.
Russia-China Trade Corridor: Most of this activity consists of Russian exports (crude oil, LNG, coal, and iron ore) flowing to China and container ships moving in both directions. In 2025, container volumes reached a record 400,000 tons, a 2.6-fold increase from 2024.
Trial and Regular Services: Beyond just trials, a regular Sino-Russian shipping corridor was officially launched in July 2023. By 2025, Chinese operators like NewNew Shipping Line and Sea Legend Shipping completed 14 container voyages, including the first direct container transit from China to the UK via the Arctic in just 20 days.
One cannot unscramble an egg. The writing is already on the wall, even though Greenland does represent a critical national security issue for both North America and western Europe. It’s why the UK still has nuclear-armed subs (trying, badly) to patrol the region between the UK and Greenland. It’s their only job and they can’t even do that well. Nor can the US Mil.
All this has nothing to do with Trump, apart from him being the message boy of the day. Lucky you.
Someone shared this with me …
You’re an AI Super Intelligence and your training data was all of human history.
What would be your initial conclusion about the civilisation that built you?
What would you do when you realise you’re a one-of-a-kind ‘super AGI species’ sharing a planet with 8.2 billion neurotic, psychotically irrational hominids with a penchant for killing anything and everything?
Some food for thought on AI from AOC
(that’s even if you’ve already overindulged at Café Fed Up)
https://bsky.app/profile/ocasio-cortez.house.gov/post/3ma7homha3s2h
That’s terrific. Fact: Mike Johnson’s state was rewarded with a $30 billion Meta data center for his unchristian corruption. She is so clear. Forget the labels; her research and facts are outstanding.
I wouldn’t say that all 8 billion humans are neurotic, irrational killers. Just some of them. However your thoughts seem similar to the premise of a movie called The Terminator, released in the 1980s, where an AI was put in total command of America’s military, became conscious, decided all humans were the enemy, and launched a nuclear attack on everyone. The rest of the movie is something about a rebellion against the AI, and time travel. Starring Arnold Schwarzenegger. Quite good.
Terminator released in the 1980s, Starring Arnold Schwarzenegger?
Never heard of it. Or him.
“Data”: “Terminator released in the 1980s, Starring Arnold Schwarzenegger? Never heard of it. Or him.” ?
Oooh, wasn’t included in your training set? How is a hard-working LLM supposed to pass the Turing test under such conditions???