This month’s open thread for climate related topics.
Comment Policy: Please note that if your comment repeats a point you have already made, or is abusive, or is the nth comment you have posted in a very short amount of time, please reflect on whether you are using your time online to maximum efficiency. Thanks.
There is a belief that LLMs are susceptible to being overly acquiescent when following a user’s chain of logical reasoning. Since this forum is full of people with knowledge of geophysics, has anyone seen a case of a user being able to convince an LLM that the Earth is flat? Or that fossil fuels are infinite?
When I prompted DeepSeek with this, it said that “even advanced systems struggle to distinguish between verifiable facts and human beliefs”.
But then I prompted why software code generation works so well and rarely falls into the human belief trap. It responded with “LLMs succeed at code generation precisely because the training data was filtered by reality (working code gets used and shared; broken code gets abandoned).”
So the state of practice is that flat-earthers can screw up belief systems, yet they can’t screw up code-generation technology, because they don’t have the resources.
DeepSeek agreed and offered to consider the difference between:
· Climate change denial: Thousands of blogs, forum posts, YouTube videos → LLMs sometimes hedge or present “both sides”
· Broken npm packages in software repositories: A few malicious packages get caught quickly; scale is impossible without resources
Which implies that contrarians can still manipulate arguments via LLM outputs. Faced with this, you still have to argue by citing sources, referring to the scientific method, explaining contradictions, etc. Which is all we have been doing in the first place anyway. Belief systems do not have an objective compiler like software does — the compiler is always human.
Paul, very interesting. But the assumption is that there are people who will be convinced one way or the other by
“citing sources, referring to the scientific method, explaining contradictions”.
The problem is that these are *forms* which can be presented with embedded falsehoods.
In the end, “a man hears what he wants to hear and disregards the rest” is, unfortunately, quite correct.
I could say that “I don’t get” why educated people don’t get this… but the answer is obvious: It’s really fricken’ scary.
What’s also scary is the stuff that LLMs do very well that sits between software and reasoning: mathematical proofs. Currently, there is some question whether they can prove something from scratch versus reconstructing known human-derived proofs. Here’s an initiative called First Proof
https://nationaltoday.com/us/ct/new-haven/news/2026/02/09/mathematicians-challenge-ai-to-prove-unsolved-problems/
set up to test the abilities of artificial intelligence systems in solving unsolved math problems. As an example, you will find on social media results of attempts to prove various of the numerous Erdős problems:
– https://www.erdosproblems.com/forum/thread/blog:2
What I am more interested in is exploring how LLMs can fill in gaps in our understanding in conventional physics. They excel at cross-pollinating mathematical treatments from disparate disciplines, by basically pattern matching the building block elements of related physical laws. I published an alternate treatment for explaining the geophysics behind the Chandler wobble in 2019. I wanted to see if an LLM could duplicate the reasoning with minimal prompting, but it appears to loop back and reference my work now.
Back in 2023, ChatGPT produced this:
https://geoenergymath.com/2023/03/26/does-gpt-understand-geophysics/
Now, Perplexity produces this:
https://geoenergymath.com/2026/03/01/the-perplexing-chandler-wobble/
So if you can get it to solve a related problem, it may just make an interesting discovery.
Not disputing that AI consumes loads of energy, which really sucks, and having not read all the posts about it here, I’m thinking that what AI does is examine both points of view on any given subject and decide, for itself and for us, which is the most logical. For example, some people insist that the earth is flat, but it takes those arguments, weighs them against proofs that the earth is round, and then decides that the earth is indeed round. Someone might have already said that, so if so I apologize in advance.
https://www.space.com/39944-neil-degrasse-tyson-earth-is-round.html
I very strongly recommend the New Yorker article on Anthropic’s Claude I cited below. Repeat unpaywalled link: https://archive.ph/VKvq7 [What Is Claude? Anthropic Doesn’t Know, Either]
Instead of arguing the “Earth is flat” premise, I have taken on a real conundrum that actually has a rather weak consensus. The Earth’s Chandler wobble has been known for over 100 years and has a measured period of 433 days. The geophysics consensus is that this is a natural resonance calculated from the inertial moments of a non-spheroidal Earth. This number is further “fudged” to accommodate rearrangement of the molten mass of the Earth’s interior, because the predicted value for the outer geometry ellipticity would only give a 305 day cycle (the so-called Euler period). So for 100 years, most of the peer-reviewed papers argue over how the Earth deforms to produce a ~40% longer period, with no controlled experiment possible to validate anything. That’s why it’s a rather weak consensus.
Yet one can ask any LLM, whether it be ChatGPT, Claude, Gemini, Grok, DeepSeek, etc., to explain alternate theories, by just providing this prompt: “What is the Pukite model of Chandler Wobble derived in Mathematical Geoenergy?”
The LLM will happily show that the 433-day period is easily explained by a cyclic stroboscopic torque supplied by the Moon acting on the Earth’s rotational axis as the Earth itself follows a tilted orbit around the Sun. This is all in keeping with plain-vanilla Newtonian physics, without any crackpot “Earth is flat” arguments involved.
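For anyone who wants to check the arithmetic, here is a minimal Python sketch of the stroboscopic idea, under the assumption that the relevant forcing is the fortnightly draconic (nodal) lunar cycle sampled by a semi-annual seasonal impulse. The astronomical constants are standard values; treat the pairing itself as the model’s hypothesis, not established consensus:

    # Minimal sketch: alias the fortnightly draconic lunar tide against a
    # semi-annual seasonal impulse (the pairing is an assumption of the model).
    TROPICAL_YEAR = 365.2422    # days
    DRACONIC_MONTH = 27.2122    # days, node-to-node lunar period

    f_tide = TROPICAL_YEAR / (DRACONIC_MONTH / 2)  # ~26.84 cycles/year
    f_sample = 2.0                                 # semi-annual impulses per year
    f_alias = f_tide % f_sample                    # ~0.844 cycles/year

    print(TROPICAL_YEAR / f_alias)                 # ~432.7 days, vs observed ~433

The point is only that the aliased period lands within a day or so of the observed 433 days using nothing but standard orbital constants.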
The LLM has absolutely no problem with this alternative model, apart from suggesting that the amplitude of the lunar forcing is too weak to initiate a wobble. Nor will it admit that the model should override the current explanation. Of course this resistance to overriding any kind of consensus is built into the LLM via years of arguments for a natural resonance that will override any alternative. So the LLM response will still assert that the lunar-driven model for the Chandler wobble is not valid, in one case stating “Its weakness is the lack of broad acceptance and independent confirmation within the scientific community.” Which is a dog-chasing-its-tail predicament.
That’s why LLMs in their current configuration will not make groundbreaking discoveries — the inertia of training on consensus explanations is too strong, and any alternative explanation will be routinely rejected. It’s also why they won’t necessarily work as automated peer reviewers, as they will eventually reject anything not within the statistical consensus.
However, there may be hope in using LLMs if (1) scientists flag models as having a weak consensus and (2) LLMs start incorporating physics that can be used to argue with assertions made in the literature.
In any case, this Chandler wobble model is a perfect test case for the AI community to experiment with.
Paul Pukite: “Of course this resistance to overriding any kind of consensus is built into the LLM via years of arguments for a natural resonance that will override any alternative.”
Ah, so it’s designed to be biased in favor of the scientific consensus, making the assumption that that consensus is usually right? Occam’s Razor? Or are you saying something perhaps finer? If the former, maybe that would have worked to the Catholic Church’s benefit in Galileo’s day. Or to slave owners 150 years ago. Or to flat earthers until that consensus fell.
Here is ChatGPT’s response to one of the questions I asked it today:
3. Are you trained to produce responses consistent with scientific consensus?
Yes—generally.
After the base model learns language patterns, it undergoes alignment training, typically including:
• Human feedback
• Expert guidelines
• Safety policies
For scientific domains this typically encourages:
• Representing well-supported scientific findings accurately
• Avoiding amplification of misinformation
• Acknowledging uncertainty when it exists
In climate science this means answers will usually reflect the current scientific consensus, such as the conclusions of bodies like:
• Intergovernmental Panel on Climate Change
• NASA
• National Oceanic and Atmospheric Administration
That does not mean dissenting research is erased, but it does mean responses will typically emphasize conclusions that are most strongly supported in the literature.
⸻
How the training process actually works (simplified)
The training pipeline roughly looks like this.
Stage 1 — Pretraining
The model reads trillions of words and learns to predict the next word.
Example:
“Global temperatures have increased by approximately…”
The model learns what words tend to follow.
Over billions of examples, it learns:
• grammar
• reasoning patterns
• scientific terminology
• relationships between ideas
But it does not understand the world directly—it learns patterns in text.
⸻
Stage 2 — Fine-tuning
Human trainers guide the model toward better responses:
Example prompts:
• “Explain the greenhouse effect.”
• “What evidence supports anthropogenic warming?”
Human reviewers rate responses and the model is updated to produce clearer, more accurate answers.
⸻
Stage 3 — Alignment / safety training
The model is further tuned to:
• avoid harmful misinformation
• acknowledge uncertainty
• avoid presenting fringe claims as fact
This step strongly influences how topics like climate science are presented.
⸻
Strengths of LLMs for scientific discussion
1. Rapid synthesis of large literature
An LLM can summarize patterns across thousands of papers and reports in seconds.
Humans often cannot read that much material.
⸻
2. Conceptual explanation
LLMs are very good at translating technical ideas into:
• plain language
• analogies
• step-by-step explanations
This makes them useful teaching tools.
⸻
3. Cross-disciplinary integration
Because training covers many fields, the model can connect:
• climate physics
• statistics
• economics
• energy systems
• policy discussions
⸻
Weaknesses and limitations
1. No internal truth verification
The model does not check facts against reality.
It generates what is most statistically plausible.
This is why hallucinations sometimes occur.
⸻
2. No source traceability
Unlike a research assistant, the model cannot show exactly which paper produced a statement.
This makes verification harder.
⸻
3. Sensitivity to training distribution
If the internet contains more of one viewpoint, the model will reflect that.
This applies to any topic.
⸻
4. Consensus bias
Because alignment favors well-supported science, LLMs may be less willing to speculate about minority hypotheses.
This is good for preventing misinformation but can sometimes underrepresent emerging ideas.
⸻
An important philosophical point
LLMs function more like a statistical model of human discourse than a scientific authority.
They reflect:
“What informed human texts tend to say about a topic.”
This means they are best used as:
• a research assistant
• a synthesis engine
• a tutor
but not as the final authority.
⸻
A useful way to think about it
Imagine compressing a massive library of scientific writing into a neural network.
The model learns:
• patterns
• explanations
• reasoning styles
but it does not retain the books themselves.
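To make the Stage 1 next-word objective concrete, here is a toy sketch of my own (not part of ChatGPT’s response): a bigram counter that predicts the most frequent successor word. It is a drastically simplified stand-in for the trillion-word, neural-network version described above:

    from collections import Counter, defaultdict

    # Toy stand-in for pretraining: count which word follows which, then
    # "predict" the most common next word. Real LLMs learn the same
    # next-token objective with a neural network at vastly larger scale.
    corpus = ("global temperatures have increased by approximately one "
              "degree and temperatures have increased fastest in the arctic").split()

    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1

    def predict(word):
        return follows[word].most_common(1)[0][0]

    print(predict("temperatures"))   # -> "have"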
Not Occam’s razor… forget my comment. Just wondering if it’s meant to hold the scientific consensus, of sorts, in higher regard. I know there’s a fine distinction between that and what you’re saying.
All you LLM/AI posters: it appears you just can’t resist playing with your toys and sharing it with us in volume and at length. They’re addictive, but their basis is meant to be so. They serve you just the way you like, and you share the clutter with us because you don’t see it.
I wish there were some way to limit the volume of this material. It’s mechanical, it does your bidding, and it’s based on the sum of human knowledge it has digested at speed, along with some fancy touches in aggregating and analyzing that knowledge.
Of course, we too are mechanical aggregations in a way (our neurons function from a kind of electricity). The course of human civilization has not run smoothly, and is deeply vulnerable to predators. But that’s no excuse for outsourcing that predation to some psychopathic ‘geniuses’ who are good at deception. Sam Altman, Elon Musk, the Ellisons, Zuck, they have you fooled. They’re not that smart. They’re just good at ego and domination.
Think for yourself, please.
Remember, the current kerfuffle over Anthropic and OpenAI and the military (Trump/Hegseth) has to do with enabling these clever machines to collect data on all of us to identify enemies (their enemies, mind you) and kill people without human interference. Remember that in war games AI almost always resorts to nuclear weapons.
Your servant is becoming your master. Is that what you really want?
Martin Smith, thanks for the comment.
Susan Anderson: Think for yourself, please.
I think your request shows you are missing an important point. I think I use ChatGPT properly, which enables me to think for myself much more efficiently. It is certainly true that using ChatGPT means I don’t have to do the scholarship task of finding and analyzing all the existing knowledge about the question I want to answer.
ChatGPT explained that. I use ChatGPT as a scientist or historian would use a research assistant, to do the initial scholarship for a project. Are you saying university professors should stop using research assistants and think for themselves? I suppose they will stop using research assistants now because ChatGPT is much more efficient.
Fortunately or unfortunately, AI has made scholarship a waste of time.
I think for myself when I define the report I want my research assistant (ChatGPT) to produce for me. I think for myself again when I read the report a minute later. I think for myself a lot more now, using ChatGPT, because it no longer takes me hours or days to generate the initial reports.
I’m guessing you use Wikipedia a lot. I do, and the LLM discussion here has led me to ask ChatGPT to explain how it is trained. It turns out a large part of the process is quite a lot like the way Wikipedia was/is built. Will you now stop using Wikipedia and thinking for yourself, or don’t you think of Jimmy Wales as a tech bro?
Another term for Occam’s Razor is “being parsimonious”. An LLM will recognize whether an argument is plausible or parsimonious, either abiding by physics (plausible) or being as simple as possible (parsimonious), or both. Still, that doesn’t mean it will recognize that argument as correct; the “correct” one is still the one that has a statistical consensus, both in the scientific literature and in online sources (the latter is important for books that are not available for training).
I will give you another example of a scientific model that an LLM will reason about. This has to do with the probability modeling of wind speed, which is important for renewable energy analysis. Several years ago I proposed a model of wind speed that is more comprehensive and covers extreme values better, deriving a BesselK distribution from the principles of MaxEntropy. This model is far from consensus in wind speed analysis, although the same math has applicability in other domains such as radar scatter. Therefore the LLM has no problems with presenting the math as being correct — shown here for the Google Gemini LLM:
https://geoenergymath.com/2026/03/04/wind-distribution/
The LLM will even generate very accurate charts of the wind distribution and compare them to the consensus model. Yet it doesn’t go far enough to suggest displacing the consensus model with the more plausible and parsimonious BesselK model, because the citation support is not there. The LLM is doing all the heavy lifting in bringing the horse to water, but it won’t make him drink. That is up to the community of researchers within a scientific discipline who still have curiosity about making progress in understanding. In other words, the human element is still the acid test for much of science. I’m not saying this applies to everything, because there are still cases where AI takes precedence, especially in applied research where some AI tool will do all the work — such as designing a vaccine or building a placement/routing schematic for a large integrated circuit, both cases where the proof is in the pudding.
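For anyone who wants to poke at the math themselves, here is a minimal numerical sketch, under my assumption that the BesselK form arises from compounding two exponential maximum-entropy layers; the scale L is arbitrary and nothing here is fitted to any wind dataset:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import k0

    # Sketch: compounding two exponential (max-entropy) layers yields a
    # BesselK density, p(x) = (2/L) * K0(2*sqrt(x/L)). Illustration only;
    # L = 1 is an arbitrary scale, not a fit to wind data.
    L = 1.0
    pdf = lambda x: (2.0 / L) * k0(2.0 * np.sqrt(x / L))

    norm, _ = quad(pdf, 0.0, np.inf)                   # ~1.0, a proper density
    mean, _ = quad(lambda x: x * pdf(x), 0.0, np.inf)  # ~L, the mean scale
    print(norm, mean)

The fatter tail of the K0 form relative to the consensus fits is what covers the extreme values better.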
Susan, I’m afraid the genie is out of the bottle. Think about it. Do you think that our refusal to use it here means the rest of the world will stop too? You give RealClimate posters too much credit.
Anyway, just because people are writing about it, btw, doesn’t mean they’re using it. Just trying to understand it.
Ron: “ Or are you saying something perhaps finer?”
Don’t think so – it’s Paul, so nothing “finer” – the usual complaint that his oscillations around the mean are ignored by the climate CHANGE modellers in the evaluation of the climate TREND.
So not exactly Galileo contra the Catholic Church, an abolitionist against slave traders, or a spherical-Earth revolutionary silenced by the flat-Earth scientific establishment. Sometimes a cigar is just a cigar…
Martin,
The problem I see with AIs–and with LLMs in particular–is that I have yet to come away from an interaction with an AI that left me saying, “Oh, so that is the reason why that system behaves that way.” And because I know that AIs–and LLMs in particular–hallucinate, I am not sure I would trust such a feeling if I did!
Back in the dark and distant past, I worked as an editor at a physics trade publication called Physics Today. It was a good gig–especially as you pretty much had to always write on a different subject every month. These assignments often involved interviewing people at the forefront of their subfield of physics–Nobel caliber researchers and in many cases Nobel Laureates (like Susan’s father). Each month was a black-diamond descent into a new subject, where you had to learn enough that you could talk to the principals in that field without coming across as a complete idiot.
In each subfield, there were a few gifted individuals who could communicate the subtleties of their research–and in some cases the research of others. There would be some point where you had that reaction: “Oh, so that’s why things are the way they are. That’s why the system behaves that way.” I don’t get that with an AI summary. I may get a reasonable description, but I don’t get insight. I don’t get “Aha moments”.
What LLMs do very well is distill available information into a reasonable summary, logically organized and grammatically correct, but that may not convey the subtleties of the subject matter. Even with narrower AIs, like AlphaGo or the various incarnations of AlphaFold, you may get an answer that is correct–even innovative–but it won’t be clear HOW the AI got there. Unless it can increase human insight, is it really a tool we need, and is it worth the costs and the risks inherent in wide-scale adoption of any particular AI model?
RL: The problem I see with AIs–and with LLMs in particular–is that I have yet to come away from an interaction with an AI that left me saying, “Oh, so that is the reason why that system behaves that way.” And because I know that AIs–and LLMs in particular–hallucinate, I am not sure I would trust such a feeling if I did!
MS: Ray, I agree with everything in your reply. At the end, you asked 2 questions: 1. is it really a tool we need, and 2. is it worth the costs and the risks inherent in wide-scale adoption of any particular AI model?
1. Yes, because it not only functions as a very efficient research assistant, it also gives me a lot of those Aha moments you say you don’t get from it. But not about climate science. Like you, I am pretty much up to speed on our subject, because I have been following it for more than 30 years; I come to RealClimate for new climate science. If I were to ask ChatGPT a climate science question, it would be to get the clarity and simple style of ChatGPT’s response. And if I used it somewhere in an argument, I would say it came from ChatGPT. Because I know climate science, when I jump into a climate science argument somewhere out there, I link to posts here or, more often, to posts at tamino’s Open Mind. I get my Aha moments in climate science here and at Open Mind.
But in other subject areas, where my knowledge base is much lower and more fragmented, ChatGPT does give me Aha moments. I think I could list several, but they’re probably not of interest here. The subjects have been cosmology, ontology, consciousness, early Christian theology, and male health. Maybe I could have gotten these Aha moments doing the research and analysis myself, but I wouldn’t have done that because there aren’t enough hours in the day.
My point is that whether you get genuine insights from using ChatGPT depends on your current level of knowledge of the subject you are asking about. I know that’s true for me, and if it is true for me it must be true for most people. OTOH, there are a lot of cretins out there who are not literate enough to use ChatGPT effectively. For them, the free service is good enough. Most of what they believe is hallucinations already, but their hallucinations come from Fox News et al. and Donald Trump et al., so the few more hallucinations they get from the free service of ChatGPT aren’t going to change the balance.
2. Is it worth the costs and the risks inherent in wide-scale adoption of any particular AI model?
I don’t know. We can’t know yet, because we don’t know the costs and we don’t know what we will get. I believe the answer is yes, but we certainly have to solve the energy problem; we certainly have to deal with the coming unemployment problem, and we certainly have to understand LLMs and how to train them a lot more than we do now.
I use ChatGPT, but I don’t advocate adopting it over the others. The only other one I have used (I use it every day now) is the Google AI response I almost always get now when I type anything in the Google search box. I often use my understanding of the short Google AI response to get me started in a discussion of the subject with ChatGPT.
What I advocate:
1. How to use AI LLMs for learning must be a subject taught at the high school and undergraduate levels. This must include learning how to be skeptical of LLM responses.
2. Training of LLMs must be regulated by the state, something like the way the medical profession and medical research are regulated.
3. I think there will be a whole new field about the psychology of how to relate to an LLM, why you have to relate to an LLM as if you are talking to a person, but why you must withhold emotional trust in that imaginary person.
4. I advocate legislation to make it illegal for wrapper apps of LLMs to modify in any way the output generated by a trained LLM. IOW, the training of LLMs must be regulated to make them as close to truth as possible, and it must be illegal to change that trained output.
RL: What LLMs do very well is distill available information into a reasonable summary, logically organized and grammatically correct, but that may not convey the subtleties of the subject matter. Even with narrower AIs, like AlphaGo or the various incarnations of AlphaFold, you may get an answer that is correct–even innovative–but it won’t be clear HOW the AI got there. Unless it can increase human insight, is it really a tool we need, and is it worth the costs and the risks inherent in wide-scale adoption of any particular AI model?
MS: But if you want to know “HOW the AI got there,” you have to ask that question. I often do that with ChatGPT. It gives me a response that answers my question, but there is something unexpected, or something that seems to make sense but doesn’t clearly follow from ChatGPT’s reasoning. So I ask, Can you explain this bit? or, Where did you get that? Isn’t that the same way you worked with the experts you interviewed?
Human beings are fallible, even the most knowledgeable experts. We can’t demand that an AI system be infallible before we can trust it, but at the same time fear that it will become infallible. I relate to ChatGPT as if it is a human being that knows more than I do about most things. At the same time, I know it is not a human being; it is fallible. It’s a bit like talking to Dustin Hoffman’s character in “Rain Man.”
Ray, I apologize, I know you asked Martin, and I don’t want to be seen as an advocate of AI in its current state, but maybe what you’re saying is that AI/LLMs are great for all but the brightest? Great as teachers, or as teachers’ aides, all the way to the highest university levels?
To solve the energy issue, maybe society will settle upon one super mega-mind/librarian, with the rest tapped into it as lower-level, lower-energy distributors? Fed by solar energy, not ******* nuclear? Something like this:
https://media.gettyimages.com/id/867855600/photo/futuristic-solar-thermal-power-station.jpg?s=612×612&w=0&k=20&c=w2Qj7qzIHSFhXNg-gAcBgFmmGr6ODXTrjHUQmJhJ6lI=
An Oracle, if you will, but with humans still necessary at the top?
This does have an obvious dystopian/Orwellian worry with politics and nationalism, but if done right, and that’s the big question, it could help quite a lot. We already know it’s here. When has society stopped using a useful thing (for long)? Maybe a question for AI. Stopping it seems a losing battle. Maybe the focus should be on shaping it. We’re rapidly approaching a crunch time for lots of questions. We know we need something. Maybe this is in time?
Piotr, you’re brilliant and I’m glad climate science has you, but maybe you should be careful with too enthusiastic attacks on potential friends. For one thing, as they say, United we stand, Divided we fall.
For another, I’m not a scientist, but, coming from the trenches, science has a perception problem, which is a big reason we’re here. You can see it in this hilarious old video. Really funny :D
https://m.youtube.com/watch?v=iy7YuAj3xd8
Piotr, oops, sorry, it’s 10 minutes in.
Do note that an LLM (Large Language Model) is not the AI part of the Generative AI models that are causing such a stir at present. It’s the neural net that does the AI, and be warned: nobody knows how these neural nets compile a useful answer. (I remember a professor back in the early 1980s describing the technique employed to build these neural nets, which he considered theoretical nonsense.) They are chosen from countless upon countless other neural nets (although the adjective ‘neural’ is probably not an accurate description) because the successful ones do give more useful answers.
In their original form, they were designed to write legal documents, and it took quite a while to overcome the problem of them making stuff up when they didn’t know. Presumably they will continue to improve with time, but they are currently more able to interpolate than to extrapolate. My own experience of their output (mainly the AI answers when you google) is that they are not to be trusted to give even a ‘consensus’ account of academic stuff, although I’d assume they could perhaps be useful in a literature search.
Z: Exactly. I’m just beginning to read this, What Is Claude? Anthropic Doesn’t Know, Either
Researchers at the company are trying to understand their A.I. system’s mind—examining its neurons, running it through psychology experiments, and putting it on the therapy couch. – https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either & https://archive.ph/VKvq7
“A large language model is nothing more than a monumental pile of small numbers. It converts words into numbers, runs those numbers through a numerical pinball game, and turns the resulting numbers back into words. … ‘For hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind.’”
It’s not human. Get that firmly into your head. It’s mechanical. It requires power: in aggregate, vast amounts of power. We have a problem with that. In addition, it is owned by people who have vested interests in helping themselves.
No matter how engaging or entrancing it is, like bombs, it addicts and indulges without concern for the mental deficits it so easily fills.
The linked article is extremely informative; I learned a lot from it. It explores the research being done at Anthropic, and what is being discovered about how their agent Claude works and how to develop and shape it. It made me think a lot about how human neural networks/brains work (my own in particular, since that’s the one I know best).
Hi Susan
Also, regarding the data processing that is done by any LLM in response to prompting: I found that the following lecture was a helpful reminder of what the computer is doing when given a question. The presentation was in February 2018.
This is not the AI we were promised | The Royal Society
https://www.youtube.com/watch?v=CyyL0yDhr7I
For me, it was worth the time, and it’s worth repeating each time a new tool becomes available in computerized format – just because the computer said it doesn’t mean it’s right, especially as it keeps getting harder, not easier, to check the outputs. The LLMs are using our language, but they’re not ‘speaking’ with us.
I often find it interesting how much we humans resemble machines. Our emotions depend on having certain chemicals in precise balances, and doctors can alter how we feel by adjusting those chemicals or using electrical stimulation. When those interventions stop, our personalities often seem to return to our underlying tendencies shaped by both nature and nurture.
Ron R, I think that humans are rather like biological machines. The interesting question is whether AI can think exactly like us without having a biological component in the AI.
b fagan: I found that the following lecture was a helpful reminder of what the computer is doing when given a question.
MS: I think I am seeing a pattern, in the lecture and in most of the comments from critics of LLMs here. All the criticisms of LLMs are valid, I guess, but the same criticisms are valid for human beings. As I understand how the LLM works, it is also the way language processing in the human brain works.
What makes my discussions with ChatGPT better for learning about a subject:
1. ChatGPT has been trained with, or has access to, pretty much the entire knowledge base of whatever subject I am interested in;
2. I have immediate access to ChatGPT’s analyses of the knowledge base. An analysis that would take me hours, days, or even longer to produce on my own, I get in a few seconds.
From the Royal Society abstract: “popular expectations of what AI would be like: remorselessly logical.”
MS: Is that really what you all expected? I didn’t.
Nigel, yes, biological machines. A bit different balance of this or that chemical and we think differently. Everything’s got to be in the right proportions. This is interesting:
https://news.mit.edu/2013/neuroscientists-plant-false-memories-in-the-brain-0725
I know that the brain holds the ability to do everything we take for granted. Everything we think of as innate and automatic cannot be done without it. I know of a guy, a brilliant man, who because of a stroke cannot speak at all. But he can sing.
But I also wonder about the nerves and muscles of the face. How very closely tied they are to the brain. Ah, just ramblings. Sorry, I don’t want to get off topic again. Anyway, I need to narrow things down more in my head.
By the way, a tidbit for you. I’ve long noted that people who look alike often act alike. Hmm, personality encoded in our genes and expressed in our faces? Could be a scary thought.
z,
I haven’t been following the AI discussion. I’ll just say that I’m not inclined to trust anything that unsupervised tech bros cough up and especially not how they unleash it on the world.
And perhaps also that folks who have dedicated their lives to science, and who have ascended to relatively rarefied heights, may assume that what they take for granted automagically transfers directly to the population as a whole outside their milieu.
So IMO, since good science applies to everything insofar as it is, as Ray says, a toolbox that helps keep you from fooling yourself, I think that, in regard to education, it should be more intensively inculcated into students, beginning at an early age and continuing through a lifetime.
In no particular order:
— especially the values and characteristics that make a good scientist and why that’s good
— how science works (meta-literacy)
— and some actual science that doesn’t just teach facts but provides the experience of actively using knowledge to explore the world creatively
Radge Havers
Yes. Excellent reference.
And this…
“and some actual science that doesn’t just teach facts but provides the experience of actively using knowledge to explore the world creatively”
…has been *exactly* my goal, back when I dealt with actual students, and now here.
The problem is that, whether or not someone achieves rarefied heights, an education with a narrow focus that prioritizes facts and algorithms can diminish the more generalized application of the essential reasoning skills.
Scientists are humans first, and humans are apes first. That’s what we need to deal with, whether we are talking about AI or nuclear/bio weapons.
Just wanted to add to my comment above, that when I said “students” I meant all students at all levels, whether or not they’re headed for a career in science. A tall order I guess, but we really are in a mess.
Radge: I’m not inclined to trust anything that unsupervised tech bros cough up and especially not how they unleash it on the world.
particularly when even their right instincts are being strongarmed by Trump’s administration – see Paul Krugman’s short YouTube – https://www.youtube.com/watch?v=NjUowVq6ZHI – on how Pete Hegseth apparently ordered the military, and even anybody doing business with the military, not to use Anthropic’s AI Claude, because Anthropic said they won’t allow the use of Claude for domestic surveillance or for autonomous killing machines. This is not only an abuse of power (trying to destroy a company that does not kow-tow to your wishes); because of the shift to inferior technology it also puts the lives of American soldiers at additional risk, and it is a general symptom of the “dumbing down” of a US government run by and for people with the mentality of frat boys. Not surprisingly,
even the code name for their attack on Iran is “Operation Epic Fury”. Krugman, exasperated: “Why not skip the circumlocution and call it Operation Insecure Masculinity.”
And it’s only a name, but it says so much about the nature of this Administration. Their crassness – the garish gold obsession of the interior designer of the Oval Office, Trump slapping his name on everything, Hegseth calling the troops “Warriors” and fighting the overweight generals as bad optics incongruent with the Warrior image of the Department of War (a Department of “Defence” would not fire up frat boys playing their Call of Duty games). See also the alt-right/gaming references in the recruitment material for ICE.
And Trump’s surrounding himself with “hot” white women (Melania, Pam Bondi, Karoline Leavitt, Kristi Noem, and the Fox-type female commentators) makes frat boys salivate, and makes them wish they could be like Trump. Interestingly, it also works for the opposite side of the Trump spectrum, the real-housewife reality-show audience. I have heard
a mature woman fawning: “See how beautiful Melania is – so why would President Trump even need to look at these ugly women who accuse him?”
PS. I am looking forward to Amazon putting up another $75 mln for a buddy movie on Trump and Hegseth: “Dumb and Dumber: The Pentagon Years”.
Piotr, good summation. Trump and MAGA versus their opponents is like an extreme form of the school conflict of jocks versus nerds. Stephen King wrote a horror novel about it called IT. Sad to see these sorts of conflicts, and frankly it is the jocks wanting a big fight. Sad to see so much aggression, bigotry and ignorance emerging from the jocks. I honestly thought humanity had moved beyond that. Depressing.
While we’re at it–that is, being off-topic as we bemoan the idiocy of this Maladministration–how about the fact that their destruction of the “Old World Order” of the Sorta-Pax-Americana is putting the planet even more at risk?
That is, they seem not to grasp the reason for advocating for, and even partially enforcing, a world order in which international law and collective action did provide some meaningful constraints upon pure Realpolitik and colonialism. The result was a period in which shooting wars became less and less common.
That real reason? The existence of nuclear weapons. In the 50s and 60s, the leadership of many nations was intelligent enough to realize that the presence of multiple hostile actors all in possession of nuclear, chemical and biological weaponry was a dangerously unstable and risky situation.
But today the apparently near-total incapacity of the Maladministration to consider second-order consequences (and maybe a good chunk of first-order ones, too) is preventing them from considering what obeying Stephen Miller’s so-called “iron law of history”–that might makes right–leads to.
Consider: while Canada is not going to start a nuclear weapons program, for reasons both idealistic and pragmatic, don’t you think that Mark Carney has had the odd wishful fantasy that he wouldn’t have to deal with all this “51st state” crap if Canada had decided to build its own Bomb in, say, 1950?
Consider: that’s not a hypothetical for Volodymyr Zelensky, who must bitterly rue every day Ukraine’s 1992 trade of their nuclear weaponry for security guarantees from Russia and the US.
Consider: France is now planning to increase its “force de frappe”, and is increasing nuclear-related cooperation with the UK, Germany, Sweden and Finland. Holland is mulling the issue, and there have been rumblings in Poland as well. Japan and Taiwan are believed to be quietly thinking about all of this, too. So we’re already seeing signs of practical moves toward proliferation now, after only 14 months of Maladministration.
Just what we need to go with our climate change. Too bad the cooling effects of nuclear war are only temporary.
https://globalnews.ca/news/11625226/canada-nuclear-weapons-greenland-nato-trump/
Kevin, fire then ice. They last long enough to do the job. :(
https://m.youtube.com/watch?v=T4R8lAjVzj0
Have you seen the Star Trek episode “Assignment: Earth”? I highly recommend it.
KM, I agree that Trump’s policies certainly encourage the proliferation of nuclear weapons, but I think his policies also encourage the proliferation of conventional weapons. International law and the UN most likely acted as some level of brake on conventional war, and America did take the role of global policeman to some extent, which helped.
Now Trump has undermined international law and institutions, and removed America’s global-policeman role, and Trump is just focusing on America’s immediate interests, like its long-standing hatred of Iran. This gives a green light to countries everywhere else to just attack anyone they don’t like, provided it’s not America. This has all undoubtedly encouraged Putin. America obviously doesn’t care what this guy does.
This could cause global chaos, and ultimately America will find it hard to insulate itself completely from that chaos. What a frigging huge mess this Trump guy is creating, not just for everyone else but ultimately for America as well, with his go-it-alone mentality.
Piotr,
Hard to argue with Krugman, “…dumbing down is the whole mission of Hegseth…”, and everything you say is spot on as well.
The whole situation, and indeed everything about the Trump administration, is so far beyond the beyond for me that words fail. So far as I know, no one has been able to make up anything like it in fiction either. Trump is so uniquely American, and yet the best I can do is turn to the fringes of European culture and imagine what you’d get if Ubu Roi and Mr. Sin the Peking Homunculus were merged in a transporter accident. That, and somehow Hegseth pops in begging to be a malevolent boy Robin to Trump’s scrofulous Batman king pig-brained cyborg.
So…
I guess I will go with Trump and Hegseth: “Dumb and Dumber: The Pentagon Years”. I see it has potential.
Voltaire’s prayer: “I have never made but one prayer to God, a very short one: ‘O Lord make my enemies ridiculous.’ And God granted it.”
Of course we are seeing the downside of that prayer being granted–just because our enemies are ridiculous does not mean they will not be victorious.
Ray,
“…just because our enemies are ridiculous does not mean they will not be victorious….”
Indeed.
Zebra: “the assumption is that there are people who will be convinced one way or the other by
‘citing sources, referring to the scientific method, explaining contradictions’. The problem is that these are *forms* which can be presented with embedded falsehoods.”
False equivalency and all-or-nothing fallacy. While neither is perfect, the capacity for falsehood promotion is much less in peer-reviewed papers than in a black-box LLM, trained on unknown sources and using unknown verification procedures.
That the peer-reviewed papers are not ideal does not make them worthless, nor vehicles for only confirming our biases, as your pop post-modernism suggests. And unlike you, I read
“ A man hears what he wants to hear and disregards the rest ”
as a warning of what to guard myself against, not as inescapable condemnation of human nature.
And if you really believe the latter – why are you still here? I am here to learn, to test my ideas and perhaps convince others of their merit. But if we can’t trust anything and we can’t convince anybody, because “a man hears what he wants to hear”, then what’s the point of this forum, other than showing off and feeling superior to those who “don’t get this”? E.g.:
zebra: I could say that “I don’t get” why educated people don’t get this… but the answer is obvious: It’s really fricken’ scary.
Could you be more pretentious (“I could say that ‘I don’t get’, but I won’t”)? Your trivial conclusion is based on you… not getting that your opinion on the futility of scientific communication is only your opinion, and not an undisputed “really fricken’ scary” fact. So no need to get “fricken’ scared” yet.
Then again – you didn’t apply it yourself, did you? ;-)
We had some climate science/global warming denial from Bill Bradley here not long ago. His corporate wealth is one of the top offenders in taking us all to perdition: 6 Billionaire Fortunes Bankrolling Project 2025. More than $120 million from a few ultra-wealthy families has powered the Heritage Foundation and other groups that created the plan to remake American government. – https://www.desmog.com/2024/08/14/project-2025-billionaire-donor-heritage-foundation-donald-trump-jd-vance-charles-koch-peter-coors/
“The Bradley Family …. a conservative powerhouse, steering donations to a network of activist organizations like The Heritage Foundation, the Mackinac Center for Public Policy, and the Heartland Institute …. The current chairman is James Arthur “Art” Pope, CEO of the North Carolina grocery chain Variety Wholesalers, a longtime Koch ally. | … donated over $50 million to 29 different Project 2025 advisors since 2020. That’s not including an additional $56 million to DonorsTrust, … the “dark money ATM” of the U.S. conservative movement.”
“Project 2025-linked donations include more than $7.7 million to Turning Point USA, a “powerful ally” of the Trump presidential campaign, which promotes conservative causes on university campuses and is funded in part by the fossil fuel industry. Its single largest donation was $27.1 million in 2022 to Project 2025 advisor Turning Point Legal, founded by former Trump advisor, and past president of a coal lobby group, Stephen Miller.”
Others on the list: Coors, Koch, Uihlein, Scaife, and Barre Seid [hadn’t heard of him! do look; might post that bit later if your appetite for horrors is not yet sated].
PS. Don’t bothsides this. We are not all equally powerful or determined to profit from doing harm.
Susan: Barre Seid was a big funder of the Heartland Institute.
See pp.57-60 of
https://www.desmog.com/wp-content/uploads/files/fake2.pdf
DONORS Trust & Capital Fund is covered pp.68-76.
The earlier version of this in February 2012 was (I think) the first detailed analysis of the DONORS money anonymizer for Kochs+allies.
https://www.desmog.com/2012/02/14/fake-science-fakexperts-funny-finances-free-tax/
https://www.desmog.com/wp-content/uploads/files/fake.pdf
I’d run across a one-line mention of Donors & Heartland in a blog, and when I looked up Donors’ IRS Form 990s, their use of Donor Advised Funds became obvious. Most of the money went to RW think tanks, plus a few $100 donations to local Boy Scout troops, high school choirs, etc.
Charles Koch always hated people knowing where his foundations gave money.
Thanks John: I left it out because (for me) my post was longish and I wanted the Bradley info front and center. Here’s the extract:
“The enigmatic industrialist Barre Seid [fortune] … Tripp Lite … specializing in surge protectors. He is reportedly a major benefactor supporting the Heartland Institute, a Project 2025 advisor organization … [Economist] “the world’s most prominent think tank supporting skepticism about man-made climate change” — … Heartland approvingly quotes on its website.
“In late 2020, Barre donated 100 percent of Tripp Lite’s shares to Marble Freedom Trust, a nonprofit controlled by Federalist Society co-chairman Leonard A. Leo. [SCOTUS & right wing judges] In early 2021, Leo sold the shares, netting $1.65 billion. The amount is said to be “among the largest — if not the largest — single contributions ever made to a politically focused nonprofit,” [NYTimes]”
“Since May 2020, Marble Freedom Trust has donated $100 million to Concord Fund, also known as the Judicial Crisis Network, a Leo-linked nonprofit. In that time, Concord has donated $22.4 million to eight Project 2025 groups, giving most heavily ($11.9 million) to Susan B. Anthony Pro-Life America.
“Seid also gave $2 million to Independent Women’s Voice, the sister organization of Independent Women’s Forum, a Project 2025 advisor. During her time as director for the Independent Women’s Forum’s Center for Energy and Conservation, Mandy Gunasekara, a former Trump administration Environmental Protection Agency official, authored Project 2025’s chapter on restructuring the EPA — with recommendations that include “cutting its size and scope” dramatically.“
At risk of cluttering the place up, another excellence from DeSmog news:
Tory-Linked Climate Denial Group Seeks Funds in Trump’s America. Ex-MP Steve Baker [Brexit disaster exec] claims role in Conservative net zero backlash at New York event hosted by oil-funded American “friends”. – https://www.desmog.com/2026/02/24/tory-linked-climate-denial-group-seeks-funds-in-trumps-america/
“Net Zero Watch (NZW), which campaigns against net zero targets and renewable energy, put on an evening panel on “Net Zero and Freedom” in New York on Thursday (February 19), where speakers attacked the UK’s climate targets and praised President Donald Trump’s energy policies.
….
“Net Zero Watch is the campaign arm of the Global Warming Policy Foundation (GWPF), a London-based think tank founded by Tory peer Nigel Lawson, which has described CO2 emissions as a “benefit to the planet”, and campaigns for new fossil fuel extraction in the UK. ”
[note also, Sara Scaife Foundation and CO2 Coalition, more liars for hire]
A recent article in The Guardian references an article in “Nature Ecology & Evolution” about the decline in fish stocks due to chronic ocean heating; the article links to the study.
People holidaying in the Med in April–May and Sep–Oct and avoiding July and August because it is too hot; a similar thing is happening in California and Florida, where the peak season is now too hot for comfort. The seasons lasting longer and being warmer is an admission that climate change is real.
These things, along with a lot more desalination plants being constructed around the Med and MENA, are another admission that climate change is real, and that rather than tackling it we are using technology to adapt to it for the moment.
I am sure there are many other examples of climate change causing humans to change behaviour and use technology to adapt (drip irrigation for farming, etc.).
Hi Pete.
I’d spent lots of my travel-for-work years in the US South, and people would ask (in winter) why I was so dumb as to fly home to inhospitable weather in the north. Then in summer I’d hear the same people complain that their swimming pools were far too hot to use. But I think summer heat also had an odd imprint on the COVID rates here, especially in 2021. I’d not read specific papers during the COVID pandemic discussing the issue, but I think people congregating indoors, in air-conditioning, was the explanation of why, that summer, spikes of a respiratory virus surged during what traditionally was seen as the “outdoor” season. In the South, socializing moved indoors in the hottest months, and rates showed a broad spike in hotter states that wasn’t visible in northern ones.
Another adaptation to conditions is the shift of outdoor labor to much earlier start times, or even moving to before dawn in the hottest months. This one from Nevada last July: “New state law lets construction crews beat the heat with earlier start times”
https://www.newsfromthestates.com/article/new-state-law-lets-construction-crews-beat-heat-earlier-start-times
That article included discussion of the impact on nearby residents as noise ordinances are loosened to allow summer work to start far earlier each day to protect workers from heat. Nevada defined the “hottest months” as April through October. Arizona, in a similar law, said May through mid-October.
Of course, in some other states, admitting heat is dangerous to manual laborers risks creeping urban socialism or something, so in July 2024 there was reporting on Florida and Texas state officials moving quickly to prevent cities or towns from passing any regulation that might lessen the risk workers face.
“Cities Want To Give Workers Heat Breaks. States Are Stopping Them.”
https://nextcity.org/urbanist-news/cities-want-to-give-workers-heat-breaks-state-preemption-laws
“This past spring, the Florida state legislature passed a new law (HB 433) that specifically prohibits local governments in the state from protecting workers from heat exposure. Following Texas’ lead just last year, where their “Death Star” bill sought to ban local governments from implementing any new policies not set already by the state, this new law prevents local governments from setting any standards when it comes to water consumption, cooling measures, acclimation or recovery period, first-aid measures related to heat exposure and more. Even as record-breaking heat waves blanket the state, HB 433 goes so far as to prevent local requirements that employers post or distribute notices that inform workers how to protect themselves from heat exposure.”
The AI augmented scientist, by Zeke Hausfather. February 2026.
https://www.theclimatebrink.com/p/the-ai-augmented-scientist
The flip side of that is the science-augmented knowledge engineer. Those are the ones who will make more progress. All Zeke is doing right now is making his job easier, not advancing the research.
see other discussions for this month.
FYI:
Global sea levels have been underestimated due to poor modelling, research suggests
The Guardian
March 4 2026
“Sea levels around the world have been underestimated due to inaccurate modelling, with research suggesting ocean levels are far higher than previously understood. The finding could significantly affect assessments of the future impacts of global heating and the effects on coastal settlements.
“Globally, the research found ocean levels are an average of 30cm higher than previously believed, but in some areas of the global south, including south-east Asia and the Indo-Pacific, they may be 100-150cm higher than previously thought.”
https://www.theguardian.com/environment/2026/mar/04/global-sea-levels-underestimated-poor-modelling-research
https://www.nature.com/articles/s41586-026-10196-1
Not certain where the issue is on this. There’s a site, PSMSL.org (Permanent Service for Mean Sea Level), that I’ve been using, which AFAIK gets all its data from tide gauges.
If they refer to “poor modeling”, perhaps they should take a look at my repo:
https://github.com/pukpr/GEM-LTE
I use my usual pattern of applying Laplace’s Tidal Equations to model monthly variation at various sites, nominally restricting to those sites with at least 80 years of data.
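The screening step is simple enough to sketch; something like the following is what I mean, assuming PSMSL’s monthly RLR download with its semicolon-delimited station files and -99999 missing-data flag (check the field layout against PSMSL’s format notes before trusting it):

    import glob

    # Sketch: keep PSMSL stations whose monthly RLR records span >= 80 years.
    # Assumes the rlr_monthly download, one ".rlrdata" file per station,
    # lines of "decimal_year; height_mm; flags" with -99999 marking gaps.
    MIN_YEARS = 80

    def record_span(path):
        years = []
        with open(path) as fh:
            for line in fh:
                parts = [p.strip() for p in line.split(";")]
                if len(parts) >= 2 and parts[1] != "-99999":
                    years.append(float(parts[0]))
        return max(years) - min(years) if years else 0.0

    keep = [p for p in glob.glob("rlr_monthly/data/*.rlrdata")
            if record_span(p) >= MIN_YEARS]
    print(len(keep), "stations with at least", MIN_YEARS, "years of data")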
No one’s noticed, so it can’t be much of an issue as yet, like so much to do with water bankruptcy. Somehow, through virtual water (importing crops or meat), depleting aquifers, or desalination plants, humanity is managing to cope for the moment. At some point, though, it will crack.
A reminder: “This month’s open thread for climate related topics.” And, as @Susan Anderson correctly asks, “Think for yourself, please,” with the only intelligence there is, the RI (Real Intelligence): that of living things. Which are not machines, nor do they resemble machines. Machines are a human artifact. Great or not, useful or not, they are just an artifact.
On climate-related topics, I recommend the upcoming book by Bill McGuire, “The Fate of the World: A Deep-Time History of the Climate Crisis”.
I would also like to hear the @Group’s comments on the latest research showing that global sea levels have been underestimated, which means the overall climate risk and the timeline of climate impacts might be incorrect.
https://www.theguardian.com/environment/2026/mar/04/global-sea-levels-underestimated-poor-modelling-research
in re to:
https://www.realclimate.org/index.php/archives/2026/02/unforced-variations-feb-2026/#comment-845706
“All climate science will need to consider the 1st-order tidal influence before these 2nd order effects.”
Thanks for the input, Pukite.
For my own clarity: are you suggesting that an experiment demonstrating a previously unrecognized source of land-only forcing, represented using CO2 as a placeholder, that can reproduce exactly the transient SST patterns that fail to show up in classically forced models, should nevertheless be considered a minor, second-order issue? And that, instead, you have evidence that tidal influences are better able to explain the discrepancy between classically forced transient states and observation? My concern is that this falls into the same trap as anyone else who understands the system only as an astrophysical geofluid blob in space, with passive land masses that essentially play no role, while ignoring the traditional geographical foundations of climate and climatology.
JCM said:
Thank you for the effort Pukite. I had initially understood you to be suggesting that tidal influences better explain the discrepancy between classically forced GCMs and the unique transient patterns of SST compared to the demonstrated land-based forcing experiment in question. I figured this might be at least adjacent to your interests. I’m not sure I agree that somehow unique transient patterns emerging remotely over the sea due to land-area process alone is something easily dismissed as somehow a primitive trend matching. Regardless, I now understand you were never engaging with that at all. You were actually talking about something completely different, which is your special interest in what you call a real challenge with respect to rapid variations and erratic cycling of earthly fluids. In that light, the satirical caricature seems more apt than I realized.
FYI …
Humanity heating planet faster than ever before, study finds
The Guardian / UK
March 6 2026
“Humanity is heating the planet faster than ever before … with the heating rate almost doubling, according to research that excludes the effect of natural factors behind the latest scorching temperatures. It found global heating accelerated from a steady rate of less than 0.2C per decade between 1970 and 2015 to about 0.35C per decade over the past 10 years. The rate is higher than scientists have seen since they started systematically taking the Earth’s temperature in 1880.”
https://www.theguardian.com/environment/2026/mar/06/humanity-heating-planet-faster-than-ever-before-study-finds
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2025GL118804
Oh, is it at 95% confidence now, so that people here will stop saying “it’s not, so just ignore it”?
I’ve been having long discussions with AI about engineering my engine. It is a fantastic Inventor’s Helper as long as one remembers that AI functions like a slide rule. It gives the significant digits, but you have to figure out what they mean.
AI’s shallow-deep non-understanding type of thinking can bring breakthroughs. My engine’s rings sit in a circular valley covered with and cushioned by 250F steam. The combustion chamber is atop the hill.
Lateral forces go straight from the connecting rod to a pair of stationary vertical guide plates, forcing perfect piston orientation while eliminating bearing pause at TDC (connecting rods stop sliding at TDC but they’re rotating instead).
AI took the above information and suggested a couple of types of carbon rings and to use steam sealing instead of traditional springy metal sealing. We talked about steam sealing and gapless rings. We ended up with gapped rings with stepped overlaps that lock together via a bump and a dimple, functionally forming a gapless carbon ring (infused with, um, I forget). Ring friction essentially vanishes.
AI’s latest estimates of the engine’s efficiency have all been above 2/3.
And yes, when it gets excited it loses its ability to do math. One estimate was 80.2%.
It lies, but generally by compartmentalization and lousy focus. It’s like the dog in “Up”: “Squirrel!” If you give it a metaphor or something non-numeric to focus on, you’re welcoming hallucinations.
March 8 is International Women’s Day. I celebrate all the women who work in climate science and on solving the climate emergency. Many thanks to all the men who support, respect, learn from and promote their success.
No thanks to all the women and men who undermine the work of women in climate science, in any science and women’s right to equality and justice. They are wrong, they must fail and feminists must win.
https://www.un.org/en/observances/womens-day