This month’s open thread. We’re not great ones for New Year’s resolutions, but let’s try. How about we resolve to stay substantive, refrain from abusing one another, and maintain a generosity of spirit when interacting with others?
Lots of things get updated in January and we’ll try and keep up, though possibly with less fanfare than in previous years. In other news, we await the (supposedly imminent) release of a new “National Climate Assessment”, and the (supposedly imminent) engagement of the authors of the DOE ‘climate report’ with the extensive critiques they received. Meanwhile CMIP7 has started, and we expect results to trickle into the databases throughout the year – dig into some of the literature to get a sense of what will change (better models, improved forcings, etc.).
Eppure si riscalda. (“And yet it warms.”)
“Hmm” on 31 Dec gives a link to “solar-energy-developer-secures-415-million-to-power-the-worlds-largest-direct-air-capture-plant”
If this DAC will be done through physical concentration of CO2, then all my criticism stands: the solar energy should be used to displace the burning of fossil fuels, not be wasted on the hugely ineffective (because of the thermodynamics) process of removing CO2 afterwards. It is also an ineffective way to use the limited climate-mitigation subsidies, and as such it endangers the already fickle social support for them (“if it costs THAT much to remove a ton of CO2, then we can’t afford it”), ultimately resulting in abandoning attempts to meet our climate goals.
in Re to Piotr, 1 Jan 2026 at 1:20 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843335
and John Pollack, 1 Jan 2026 at 9:31 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843354
Hallo Piotr,
I fully agree with your opinion that the precious solar energy should be used to displace burning fossil fuels, not be wasted on the hugely ineffective process of removal of CO2 afterwards.
In accordance with my previous posts of 31 Dec 2025 at 7:47 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843318
and of 2 Jan 2026 at 5:17 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843408 ,
I would like to correct you slightly: it is not so much “because of the thermodynamics” but rather because the air volume that has to be processed to extract the required amount of CO2 is huge, and we wish to process it quite quickly. I would therefore perhaps rather say “because of the kinetics and thermodynamics”.
A more important point that I would like to emphasize is that the thermodynamic “because” applies to ALL separation processes, irrespective of whether the separation step is physical, chemical or perhaps biochemical, because what matters in thermodynamics are solely the starting and final states of the system, not the path between them.
My further point is that all direct air capture (DAC) processes that involve technical means necessarily share also the above mentioned kinetic “because”. That is why I still think that it makes sense if the term DAC is used just for these technical direct air capture methods and not for such carbon dioxide removal (CDR) processes that basically exploit “natural” processes only, like wind, sunlight and plant biology.
Hallo John,
Thank you very much for your remark regarding atmospheric pressure above the Antarctic ice sheet.
As I stressed above, my example of a specific, purely physical process was meant to illustrate the kinetic aspect shared by all conceivable separation processes, so it applies not only to freezing CO2 out but also to chemical extraction, adsorption processes and/or membrane processes.
Best regards to both of you
Tomáš
P.S.
A nice, slightly more detailed explanation of the Gibbs energy of mixing was provided by Andrew Dessler on his blog
https://www.theclimatebrink.com/p/thermodynamics-of-air-capture-of ,
with a 2023 update noting that Climeworks provided an estimate
https://www.frontiersin.org/articles/10.3389/fclim.2019.00010/full
that the energy required for separation of 1 t CO2 in their DAC process is about 2000 kWh.
Unfortunately, even he does not mention that the same “kinetic” aspects that multiply the real energy demand in comparison with the theoretical limit also cause the unavoidably huge size (and cost) of the necessary equipment. That is why I tried to draft and add the respective explanation myself.
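As an order-of-magnitude check of those figures, the thermodynamic floor implied by the Gibbs energy of mixing can be computed in a few lines. This is my own back-of-envelope sketch, assuming complete capture from air at ~420 ppm CO2 and ideal-mixture behavior; it is not a figure taken from Dessler’s post or the Climeworks paper:

```python
import math

# Minimum (reversible) work to extract CO2 from a dilute ideal-gas mixture:
# roughly W_min = R * T * ln(1/x) per mole of CO2 captured, where x is the
# CO2 mole fraction. Kinetic losses of real DAC plants are NOT included.
R = 8.314          # J/(mol K), gas constant
T = 298.15         # K, ambient temperature
x_co2 = 420e-6     # mole fraction of CO2 in air (~420 ppm, assumed)
M_co2 = 44.01      # g/mol, molar mass of CO2

w_per_mol = R * T * math.log(1.0 / x_co2)            # J per mole of CO2
moles_per_tonne = 1e6 / M_co2                        # moles in 1 t of CO2
kwh_per_tonne = w_per_mol * moles_per_tonne / 3.6e6  # J -> kWh

print(round(kwh_per_tonne))            # theoretical floor, ~120 kWh per tonne
print(round(2000 / kwh_per_tonne, 1))  # Climeworks' ~2000 kWh/t vs. the floor
```

The roughly sixteen-fold gap between this ideal floor and the reported plant figure is exactly the combined kinetic and engineering overhead discussed above.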
About those 2023–24 warming anomalies
We’ve seen this pattern before. During the so-called hiatus (~1998–2012), global mean surface temperatures remained roughly flat for over a decade. At the time, this was frequently dismissed as “natural variability” or framed as a denier talking point.
What later became clear was not that the physics was wrong, but that the surface temperature record was incomplete and systematically biased — particularly due to sparse Arctic coverage and enhanced ocean heat uptake in the Pacific.
Work by England et al. (2014) and Cowtan & Way (2014–15) showed that warming had not stopped; it had been temporarily redistributed and partially hidden from the dominant surface metrics.
In other words, the apparent discrepancy arose from limitations internal to the observational system and interpretive framework — not from external misinformation.
The lesson is not that today’s explanations are wrong — but that climate science has, in the past, underestimated how observational gaps, framing assumptions, and metric choices can obscure emerging dynamics.
Given the abrupt magnitude of the 2023–24 global warming spike — which remains only partially explained — it is reasonable to ask whether another structural blind spot may exist, involving aerosols, cloud feedbacks, energy imbalance interpretation, or the limits of ECS-based framing — that has not yet been fully confronted.
History suggests this is not an extraordinary claim, but a normal feature of complex system science under evolving observation.
Similarly, the so-called surge from (~1999-2010) should also not be considered representative of a longer term trend, in this case overstating the long term rate of increase. Better, as explained in the introductory portions of this website, to stick with longer periods of around 20 years to establish a significant trend.
It was mostly cherry picking of ’98 temps, which had a very strong El Nino. It was intentional, not accidental nor due to ignorance.
Yes. But I’d rephrase that to say it was mostly cherry-picking a local max and a later local min. And yes, it was intentional, which of course brings in the whole issue of multiple comparisons and of properly adjusting for running those scores, or even hundreds, of comparisons and only reporting the one the denial type wants. This is why Gavin mentioned elsewhere that finding stretches of non-trend in a time series happens in every time series. It’s WHY you must control for scanning across scores or hundreds of tests looking for fake “insignificance”.
Finally, of course, if one understands hypothesis testing at all, one should know that no one can conclude all that much from insignificant results and most certainly cannot conclude that “warming just stopped for X number of years”. That’s an inductive error of the highest order.
Reply to jgnfld et al
rephrase that to say it was mostly cherry-picking a local max and a later local min.
That’s much more than rephrasing; it is fundamentally re-writing history again. It was not a ‘regional’ hiatus in temps, nor ENSO alone – it was a global hiatus in the temperature warming trend, dismissed as merely natural variation when it wasn’t.
Very little of anything was “intentional” on either side of the public discussions and framing, bar a few.
Sure, the then-prolific, now largely defunct climate-denial movement was crowing about a lack of “warming” showing up in global temperature [ not locally ], and as every half-informed or better climate-science advocate said, they were “cherry-picking” from a high max to a low-end cooling period; as Killian also said, the exit out of the record-high ENSO phase of 1998.
That is only half the story, though. From the beginning, pro-climate-science advocates and climate scientists, including here, repeated that point ad nauseam for almost a decade, claiming that the Hiatus – which they eventually agreed was a real phenomenon in the temperature records – was just a shift out of the 1998 El Niño plus undefined short-term natural variation, and that’s it, while also saying “all you climate deniers are wrong and you’re all dumb fools”.
Well, sorry, but that was as wrong as the climate deniers were. Meanwhile you, jgnfld, and others continue to ignore what was really happening in the 2000–2010 period with the “temperature observations”, which were also wrong!
As per work by England et al. (2014) and Cowtan & Way (2014–15), warming had not stopped; it had been temporarily redistributed and partially hidden from the dominant surface metrics.
In other words, the apparent discrepancy arose from limitations internal to the observational system and interpretive framework — not from external misinformation.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843343
The largest portion of the missing heat – that mysterious, unaccounted-for “natural variation” going AWOL somewhere for a decade – was in fact errors in the temperature records: the metrics were WRONG. The entire time, the TEMPERATURE STATS were way out!!!
Once these metrics were corrected, this “natural variation simply disappeared, and a steady warming trend was again observed” in the temperature records from 1998 through 2010 and into 2015.
That’s when the global flood happened… of newly published science papers about this unusual event, explaining why the data was wrong!
Showing that both the climate deniers and the climate scientists and advocates were WRONG at the same time. That’s the correct record that history shows. And it has been recorded.
Why folks here continue to ignore these things, even after having them placed in front of you in the thread, is an astounding level of denial and cognitive dissonance, in my opinion. It’s a joy correcting the “record”, even when my thoughts don’t get through the mental barriers, hurdles, structural limitations, etc. which dominate.
Speaking statistically, this is mostly drivel based on a misunderstanding of what the error term in a regression means – or doesn’t. “Natural variation”, which forms a part of the error term, simply means ‘all other variables’.
Again, looking across a whole series, finding local maxes followed by local mins requires you to be doing multiple comparisons even if you do not calculate each one of them out. The controlled-for alpha probability of being “significantly insignificant” (which is the utterly flawed inductive “logic” being employed here) would go from a 1-in-20 chance to 1 in hundreds of thousands or worse, as you basically have to divide the alpha probability by the number of tests you are carrying out.
Here’s a demonstrational proof I’ve conducted many scores of times in stats classes relating to this very point: flip a coin 100 times. You will find a “significant” run of 5 or more heads in a row 95% of the time, a “very significant” run of 6 or more heads 77% of the time, and a “highly significant” run of 7 or more half the time. You will even get a run of 10 heads in a row 8% of the time, even though the probability of doing that is supposedly .5^10 (~1 in 1000) in your mode of analysis. How is it even possible to see a one-in-a-thousand-level event so regularly even though you are only observing 100 events at a time? Easy: a simple understanding of basic probability.
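The run-length percentages quoted above are easy to check by simulation. Here is a minimal Monte Carlo sketch of my own (counting runs of either face, which appears to be how the quoted figures were tallied; the numbers shift somewhat if one counts heads-only runs):

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def p_run_at_least(k, n_flips=100, trials=10_000, seed=42):
    """Estimated probability of a run of >= k identical faces in n_flips tosses."""
    rng = random.Random(seed)
    hits = sum(
        longest_run([rng.randint(0, 1) for _ in range(n_flips)]) >= k
        for _ in range(trials)
    )
    return hits / trials

for k in (5, 6, 7, 10):
    print(k, round(p_run_at_least(k), 3))
```

With enough trials the estimates land close to the quoted ~95%, ~77%, ~50%, and ~8% figures, illustrating the multiple-comparisons point: “rare” runs are routine once you scan a whole series.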
Physically speaking, the basket of “all other variables” may indeed be explored, mined, and measured by interested parties and then modeled physically rather than statistically, and you may possibly find interesting moderating variables (or may not, as in the case of fair coin flips). This might lead to suggested processes which might modulate the overall warming rate. Or it might not. No one with any scientific merit to their name has suggested that warming stops and starts. That’s just not how radiation physics works, no matter how hard you try to obfuscate.
D: it was a global hiatus in a temperature warming trend dismissed as merely natural variation when it wasn’t.
BPL: Oh? What was it?
I may be missing something, but I’d appreciate clarification on how to interpret the magnitude of inter-model spread in Arctic September sea-ice projections.
In the CMIP3, CMIP5, and CMIP6 figures shown in Gavin Schmidt’s May 2025 article on Arctic sea-ice trends, [ see https://www.realclimate.org/index.php/archives/2025/05/predicted-arctic-sea-ice-trends-over-time/ ] the ensemble mean tracks observations reasonably well, but by ~2020 the full ensemble range spans outcomes from minimal September ice loss to near-collapse under the same historical forcing.
These seem like qualitatively different Arctic states rather than small parametric deviations. My question isn’t about the statistical performance of the ensemble mean, but about interpretation: at what point does widening structural dispersion itself become a signal that some key processes remain insufficiently constrained, even if the mean behaves skillfully?
In such cases, how should we think about the interpretive limits of the ensemble mean?
I’m genuinely interested in how others think about that distinction.
An addendum to addendums
Data says 28 Dec 2025 at 4:49 PM @
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843196
About the “models aren’t tuned” myth
CMIP models are not tuned to match a single historical GMST trend, but their feedbacks and energy balance are constrained through parameterisation and calibration against observed climatology and TOA imbalance.
While CMIP models are not tuned to future outcomes, their apparent historical realism reflects parameterisation and calibration choices that constrain energy balance, feedbacks, and ocean heat uptake.
Claims that CMIP models are “not tuned” usually refer to the absence of a direct GMST target, but overlook the fact that feedbacks and energy balance are constrained via parameterisation against historical observations.
Taken together, this means feedback magnitudes in CMIP models are not independent of parameterisation and calibration; their apparent accuracy reflects constraints imposed during model development.
Recent comments highlight how models can appear skillful at the top level while relying on internally compensating structures. That doesn’t negate their usefulness, but it does explain why uncertainty in feedbacks and attribution remains. A lot of the surrounding friction seems to come from how differently this is understood inside modeling practice versus in public-facing discussion.
This doesn’t imply bad faith, but it does help explain why disagreements persist and why public discussion often talks past itself. Much of the tension arises from the gap between how uncertainty is handled inside modeling practice and how outputs are communicated externally. Recognizing that gap is more productive than relitigating intent or motives.
Senior figures in climate science are under pressure to prioritize narrative coherence and public-facing certainty. This leads to selective engagement with questions and under-acknowledgment of internal structural uncertainties raised by non-core contributors.
From an outside perspective, this pattern can understandably appear as manipulation or selective disclosure. However, it is more accurately understood as a sociological and procedural phenomenon: scientists are managing the tension between transparency, policy relevance, and public trust, rather than deliberately misleading anyone.
And lastly, CMIP6 and other global climate models do not predict the future.
They produce scenario-based projections under defined assumptions about emissions and forcings, constrained by physics, observations, and the best available knowledge, but they cannot know how the future will unfold.
These outputs are plausible pathways, not predictions.
The primary purpose of these projections is to inform policymakers about the potential consequences of different emission pathways, highlighting the urgency of reducing greenhouse gas emissions. Therefore, any suggestion that CMIP outputs, the IPCC, or other frameworks are making a temperature prediction for 2050, 2100, or any specific future period is scientifically invalid; these models do not make such predictions.
Not a shred of published evidence cited anywhere in your screed. Again, climate models accurately/skillfully projected global mean surface temperature (GMST) trends and iTCR:
This accuracy/skill is not explained by tuning, as noted by climate scientists like Dr. Gavin Schmidt and Dr. Zeke Hausfather:
No amount of non-expert, evidence-free, petulant whining changes that.
The discussion isn’t about how many papers or citations can be listed, nor GMST skill scores. The point is structural: CMIP models are constrained through parameterization and feedback calibration — in other words, tuning — which affects how their results should be interpreted. This issue has already been addressed in my prior posts.
The issue is you making stuff up with no cited evidence and in willful avoidance of evidence cited showing you’re wrong, while contradicting informed experts who know more than you regarding the evidence. You’ve done this for years across your sockpuppet accounts, and I already showed you doing it for accurate/skillful modeled projections. Here you do it by 1) pretending Forster 2025 is not peer-reviewed, and 2) willfully ignoring other peer-reviewed sources you were cited, such as IPCC reports, Xu 2018, Dai 2023, and Hansen 2023:
And here you do it by acting like projections reach 3°C by 2050, when they actually don’t reach it:
These will be my final responses to you on these sub-threads since I’ve run out of patience with willful fabricators. I think it’s apparent to informed readers that your claims are not to be trusted, especially when your disinformation on this 3°C example is so obviously wrong. No doubt you’ll make new sockpuppet account(s) soon, as you’ve done before when folks see through your older accounts.
Reply to Atomsk’s Sanakan
5 Jan 2026 at 3:22 AM
I’ve already addressed the specific factual point at issue (AR6 WG1 SPM assessed ranges vs predictions) with direct citations.
Repeating long lists of unrelated papers, again mischaracterizing what has been said by myself and others about modelling and tuning issues, and resorting to personal accusations does not engage that substance. I have nothing further to add here.
It looks to me like:
Skill is basically = 1 – (Model RMSE / Null RMSE)
RMSE = root‑mean‑square error of the model projection vs observations
Null = a “no-change” baseline (i.e. assume no temperature change at all) or another simple reference forecast.
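That definition is simple enough to state in code. A minimal sketch under the stated assumptions — the function names and the toy numbers are mine, purely for illustration, not real GMST data:

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def skill_score(model, obs, null):
    """Skill = 1 - RMSE(model)/RMSE(null).
    1.0 = perfect; 0.0 = no better than the null baseline; < 0 = worse."""
    return 1.0 - rmse(model, obs) / rmse(null, obs)

# Toy illustration (made-up anomaly numbers):
obs   = [0.10, 0.25, 0.35, 0.55, 0.60]  # "observed" anomalies
model = [0.12, 0.22, 0.40, 0.50, 0.65]  # "projected" anomalies
null  = [0.10] * 5                      # persistence / no-change baseline

print(round(skill_score(model, obs, null), 2))
```

Note what the metric does and does not capture: any model tracking the observations more closely than the baseline scores well, regardless of whether it does so for physically right reasons — which is exactly the interpretive caveat raised below.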
Skill is obviously a loaded term. A high skill score doesn’t speak to structural or physical correctness. Tomas provided an illustrative example of how different representations of the motions of celestial bodies can produce correct aggregate behavior but be physically wrong.
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843228
The meaning of skill is entirely conditional on the evaluation framework, which appears to be statistical on an aggregate metric (GMST) rather than mechanistic.
Realclimate.org is basically = a forum to defend the legitimacy of consensus. Within that context, critical nuance is sometimes received as adversarial, and certain words or framings trigger a disproportionate response.
Model = the mathematical and physical representation of the climate system (encoded relationships between variables).
CMIP experiment = a prescribed protocol: inputs, initial conditions, and required output specifications.
Ideally, the model itself is impartial about why climate changes; the perturbation is imposed by the experiment. The model’s analogue is the real Earth system, while the experiment’s analogue is an externally imposed perturbation. In principle, the range of experiments is limited only by imagination.
The model encodes compatibility between variables, not cause-and-effect. In a model of the relations between voltage, current, and resistance V=IR, the form itself does not specify causation. It simply states a constraint that must hold among the three quantities; it does not imply that current causes voltage, or that voltage causes current.
AFAICT it is generally assumed there can only be a single set of physically correct laws in nature, and the existence of multiple simultaneously ‘correct’ but incompatible descriptions tends to be conceptually troubling.
The overwhelmingly dominant focus of CMIP projects is radiative forcing experiments driven by unnatural, human-caused emissions of major trace gases. So the model atmosphere is given an external input of gas, and the idea is that physics should resolve the system back to equilibrium.
Presumably, as we move from one generation of CMIP projects to the next over time, the participating models grow more sophisticated in their representations of atmosphere, ocean, and land, along with the associated interactions. This may include more explicitly resolved physics, better calibrated physics parameters, and/or different combinations/configurations of physics parameters. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024MS004713
The causal nature of things comes from the inputs external to the model. If it can be demonstrated that models submitted for CMIP6 experiments are in fact a more physically robust representation of the Earth compared to version 5, but yield results that are out of bounds of observation and appear worse than version 5, it’s really a guide to re-look at the inputs and underlying assumptions. A model matching observations superficially may do so for the wrong reasons, whereas a robust model may appear worse under experimental input despite being more accurate in its representation of the Earth.
If we assume CMIP6 includes more physically robust model representations overall in the ensemble (which I think we should), the “too hot” output highlights potential limitations in the experiment (e.g. model input, boundary conditions). Comparatively, an increase in ensemble spread reveals previously unrecognized sources of uncertainty in model configuration.
If we assume a model is perfect, and our imposed (virtual) perturbation doesn’t seem to cause what actually happened (such as trends in absorbed solar radiation), the issue can only be traced to the experimental input framing. I think possibilities along these lines should not be ruled out. The models could actually be teaching us something, you know. Conversely, falsely assuming the experimental inputs are perfectly comprehensive in historical runs risks introducing biases into our interpretation of the system’s behavior and into the development of the model itself.
Climate models are not oracles or proof machines; they are tools for learning. Their scientific value lies in exposing the limits of our knowledge and questioning assumptions.
And the response would be the same as was given to Tomáš Kalisz in the thread you linked:
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843237
This isn’t just statistical; it’s process-based. Older models through CMIP5 accurately/skillfully projected GMST trends and iTCR (the ratio of GMST warming vs. forcing) because they’re reasonably accurate on factors like water vapor feedback, the Planck feedback, etc. CMIP6 models with higher sensitivity did worse on projecting GMST and iTCR because they overestimated positive feedbacks. Improving cloud physics for CMIP6 models reduces their sensitivity down to what’s shown for older models, yielding accurate projections of GMST and iTCR matching those of older models.
Bringing up other model details is irrelevant since those are not dominant in accurately projecting GMST and iTCR, just as the mass of Mercury is not dominant in predicting Earth’s orbit (it’s the mass of the Sun and the Earth that primarily matters). It’s no mystery why the models made accurate/skillful predictions, since it has been known since at least Callendar in 1938 what the most dominant feedbacks are (water vapor, Planck, etc.).
To A’sS:
It’s somewhat ironic to invoke Callendar in this context, but I think it provides an excellent case study.
In his time, they imagined that increasing CO2 shifted the effective origin of surface downward radiation (what he called sky radiation) to lower, warmer layers thus increasing the radiative input to the surface. “An increase of carbon dioxide will lower the mean radiation focus, and because the temperature is higher near the surface the radiation is increased.” https://www.rmets.org/sites/default/files/papers/qjcallender38.pdf
During that time there was no concept of energy accumulation by earth energy imbalance, no radiative feedback parameter, and certainly no so-called Planck effect or feedbacks. He imagined a redistribution of downward flux (to the surface) to originate from lower, warmer levels and computed the associated equilibrium temperature difference by 4th root law. I think it was a radiation recycling scheme that seems to hang up a lot of skeptics.
More recently (say since Manabe), the dominant framing of global warming has been inverted. These accounts emphasize an upward shift of emission to higher, colder layers, reducing outgoing longwave radiation (to space) and thereby producing an earth energy imbalance. A radiative feedback parameter (as W/m2 per K) is thus invoked to eliminate the imbalance and suggests an equilibrium climate response (and transient climate response function over time). Manabe teaches the planet must accumulate energy until the temperature of the (now higher) effective radiating level is restored.
The change in perspective from Callendar to Manabe is almost Copernican, akin to replacing Earth with the Sun as the center of celestial motion. Against this background, your claim appears to lean heavily on empirical adequacy or equifinality, making little distinction across major paradigm shifts. The focus narrows to the single question of whether CO2 causes warming (yes or no), as if this alone defined the scope of climate science. It’s not meant to be a judgement or anything, but I do find it interesting.
Re: “During that time there was no concept of energy accumulation by earth energy imbalance, no radiative feedback parameter, and certainly no so-called Planck effect or feedbacks.“
Any reasonable TCR and ECS estimate implicitly includes the Planck feedback since it’s necessary to prevent infinite TCR and ECS values, along with runaway values. And Callendar’s work included water vapor feedback. Callendar was still able to get quite close on iTCR (i.e. the ratio of warming vs. forcing) since the water vapor feedback and Planck feedback are dominant.
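To make the “prevents infinite values” point concrete, here is a minimal sketch of the standard linear-feedback bookkeeping. The numbers are illustrative, roughly AR6-like values of my own choosing, not outputs of any particular model:

```python
# Linear feedback relation: equilibrium warming dT = F / -(net feedback),
# where the Planck term is strongly negative (stabilizing). If the net
# feedback were >= 0, dT would diverge -- hence "infinite ECS" without it.
F_2x       = 3.9    # W/m2, forcing from doubled CO2 (approximate)
lam_planck = -3.2   # W/m2/K, Planck response (stabilizing)
lam_wv_lr  = 1.3    # W/m2/K, water vapor + lapse rate (approximate)
lam_cloud  = 0.4    # W/m2/K, clouds (illustrative; most uncertain term)
lam_albedo = 0.35   # W/m2/K, surface albedo (approximate)

lam_total = lam_planck + lam_wv_lr + lam_cloud + lam_albedo  # must be < 0
ecs = -F_2x / lam_total   # equilibrium climate sensitivity, K per doubling

print(round(lam_total, 2))
print(round(ecs, 1))
```

Remove the Planck term from the sum and the remaining feedbacks are net positive, so no finite equilibrium exists — which is the sense in which any finite ECS estimate implicitly includes it.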
Re: “The focus narrows to the single question of whether CO2 causes warming (yes or no), as if this alone defines the scope of climate science.“
The question was skill at projecting future global warming and iTCR. The denialist Yebo Kando denied that skill. Then they and their fellow denialist ‘Data’ pretended those accurate projections were due to model tuning. I’m pointing out that climate models for decades were skillful in projecting future global warming and iTCR up to CMIP5, where this skill is not explained by model tuning. Bringing up other topics in climate science does nothing to change this. Nowhere did I say the only scope of climate science is whether CO2 causes warming.
Re: “He imagined a redistribution of downward flux (to the surface) to originate from lower, warmer levels and computed the associated equilibrium temperature difference by 4th root law. “
If you mean the Stefan-Boltzmann law, then that entails Planck feedback; i.e. negative feedback resulting from Earth radiating more energy as it warms. So no, Callendar’s work would include the Planck feedback:
In re to: A’sS
Yes, I’ll repeat: you seem to be seeing no distinction, or placing no concern, between two entirely different process descriptions, simply because they arrive at similar numerical outcomes. I think this goes to the heart of our discussion.
If the aim is to arrive at a CO2 warming effect by any description at all, without distinguishing between fundamentally different representations, that’s unusual in science. Major Nobel-prize-level advancements in revised process descriptions are usually celebrated as scientific achievements, and fields do not usually strive to preserve the impression that today’s knowledge has always existed. Quite the opposite, actually.
The perception of an unbroken continuity from the earlier pioneering work may be designed to make physics appear straightforward, as if from day one someone got it exactly right, when that’s not actually true. Why do that? Acknowledging conceptual discontinuities, in my view, gives climate science more credit, not less. Against that backdrop, it is unclear why your reference to Anderson, Hawkins, and Jones (AHJ) is not more explicit in this respect, or who their intended audience is meant to be…
It’s really remarkable, isn’t it, that Callendar’s calculations can be shown to fall neatly within the modern CMIP ensemble despite only using an instantaneous downward sky-radiation effect. It’s an ideal example in the context of what the aim really is. Is there a desire to project a provenance from long ago that things have always been clearly understood? What purpose does that serve? Why not celebrate breakthrough achievements for what they are?
Clearly, the Planck effect is introduced only in the modern context because, unlike Callendar, there is a need to explicitly account for a stabilizing influence in planetary emission. It is known that a 1 K change at around 288 K should correspond to about a -5 W/m2 SB radiation response (stabilizing). But, because the contribution (to space) is overwhelmingly generated in the atmosphere, the relevant figure corresponds to a 1 K change at around 255 K, which is closer to a -4 W/m2 column SB response. The figure is further adjusted to account for the fact that the source term includes both tropospheric and stratospheric contributions, whose CO2 optical properties are different (the optically thick vs. optically thin nature of the layers, respectively).
The stratospheric masking is the dominant deviation between a local (so-called) Planckian and an ordinary SB stabilization. Owing to the increase of atmospheric emission to space with warming, it follows conceptually: -5 W/m2, as if there were no atmospheric contribution and thereby no greenhouse; then -4 W/m2, as if there were a slab atmosphere; and finally the consensus -3.2 (-3.05 to -3.39), owing to the unique properties of the troposphere and stratosphere. A useful description of this stabilizing influence can be found in “How Well do We Understand the Planck Feedback?” https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023MS003729
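For readers who want to check the first two reference values: they follow directly from differentiating the Stefan-Boltzmann law, F = sigma*T^4, giving dF/dT = 4*sigma*T^3. A quick arithmetic sketch:

```python
# Derivative of the Stefan-Boltzmann law: dF/dT = 4 * sigma * T^3,
# i.e. the extra blackbody emission per kelvin of warming at temperature T.
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def sb_response(T):
    """Increase in blackbody emission (W/m2) per 1 K of warming at T."""
    return 4.0 * SIGMA * T**3

print(round(sb_response(288.0), 2))  # at the ~288 K surface temperature
print(round(sb_response(255.0), 2))  # at the ~255 K effective emission temperature
```

This yields about 5.4 W/m2 per K at 288 K and about 3.8 W/m2 per K at 255 K, matching the -5 and -4 W/m2 figures above up to sign convention; the further reduction to the consensus ~-3.2 comes from the tropospheric/stratospheric details just described, not from this formula.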
By contrast, Callendar was talking about a positive sky-radiation contribution to the surface (something like a radiative forcing); a positive value. Certainly not a stabilizing influence against planetary energy accumulation (a negative value). An increasing sky-radiation “force” due to lower altitude of radiation focus; the source term being generated closer to the surface in warmer layers. In Callendar’s framework there was no concept of stabilizing planetary emission and no TCR.
Additionally, he sets the water vapor concentration at a constant value 7.5 mm Hg to generate his Figure 2; a stable background which was translated later into Anderson, Hawkins, and Jones’ Figure 5 demonstration of how wonderfully it fits over years 1880-2000. https://ars.els-cdn.com/content/image/1-s2.0-S0160932716300308-gr5.jpg
No positive water vapor feedback, no stabilizing Planck influence, no energy imbalance, no ocean heat uptake, no distinction between transient and equilibrium response. Just pure instantaneous temperature co-evolution with sky radiation. The result fits neatly into the modern CMIP range, which is perhaps why you don’t see any distinction. I don’t know – it seems to me like an ideal case example of how similar results can be obtained from completely different process imaginings. The only parallel is the recognition that CO2 occupies spectral lines distinct from H2O, and by association that human-caused CO2 production probably has a warming effect.
In modern context, if there is no explicit positive water vapor feedback the greenhouse is thought to remain roughly stationary compared to background, and the outgoing emission continues to stabilize as usual with about -3.2 (-3.05 to -3.39) W/m2 per K GMST. That’s why it’s sometimes called a no-feedback stabilizing response. Greenhouse stays the same.
Feedbacks imply that as the system warms, greenhouse effects intensify compared to the background state. Modern models could not trace temperature evolution if fixed to a stable background in the way Callendar did it. Failing to clearly distinguish Callendar’s stable background from feedback, and linking the disparities only with a too-conservative CO2 rise, is an unwitting oversight by AHJ. I trust you can see that the pieces are not interchangeable, because the process description is entirely different.
Rather than relying on second-hand accounts, I recommend reading directly the 1938 work you referenced several times. I think it’s quite approachable and doesn’t need to be distilled through Anderson, Hawkins, and Jones, who in my view understate the significance of the distinct process imagination involved. Conversely, if the aim is simply to write down a way in which CO2 could cause warming, then any framework will do. However, I don’t see any (scientific) virtue in advertising that each formulation to that effect is equally valid. https://www.rmets.org/sites/default/files/papers/qjcallender38.pdf
Previously you suggested that various essential climate variables, including precipitation, winds, and the circulation patterns are “not relevant since those are not the main drivers”. I remain unconvinced, and not least in the context of your interest in global average temperature change. Perhaps you think they’re not relevant since some types of model descriptions can do without. Conceptually, I suspect the field is entering another transition, in which energy accumulation is increasingly understood to be tied directly to such processes through shortwave reflectivity and more specifically as atmospheric adjustment – not merely as feedback. The long-standing emphasis on the static LW net radiation feedback parameter still reverberates from fifty years ago, and such ideas may again be changing. I would caution against missing the science by accepting narratives that portray the field as effectively stagnant for a century. That simply isn’t true, and promoting it as such can only be damaging.
In Re to JCM, 5 Jan 2026 at 2:47 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843497
and Atomsk’s Sanakan, 7 Jan 2026 at 10:44 AM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843587
Dear Sirs,
First, I would like to thank JCM for the interesting reference to this historical article.
As regards your dispute, I must admit that I am unable to assess whether the way G.S. Callendar arrived at his Table V at the bottom of page 229 was correct. Nevertheless, I tend to the opinion that this table suggests that – irrespective of whether it was derived rigorously or rather by intuition – his physical picture of the greenhouse effect (how the height profile of the infrared radiation flux might change with increasing CO2 concentration) fits at least qualitatively with the present one.
If we look at this table, his “sky radiation” from the bottom layer (which could be understood as today’s “downwelling longwave radiation” towards the Earth’s surface) clearly rises with rising atmospheric CO2 concentration, while the radiation from the upper layer (which could be understood as today’s “outgoing longwave radiation” from the “top of atmosphere” to space) decreases.
Unfortunately, it appears that his equation (5), derived in the following chapter, is incorrect. I understood his sentence “suppose that the sky radiation is changed from S1 to S2 whilst H remains constant” to mean that in both the original steady state with surface temperature T1 and the new steady state with surface temperature T2, the net radiation from the surface equals the absorbed solar radiation, which is assumed to remain constant. This is, as a first approximation, a reasonable simplification and can be seen as scientifically sound. This assumption, however, does not seem to fit with equation (4) – I am afraid that the parentheses therein should, in fact, be omitted. If so, I think the correct formula (5) for T2 should be the fourth root of the expression (T1^4 + (S2 – S1)/sigma).
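To make the proposed correction concrete, here is a small Python sketch (with illustrative numbers of my own, not Callendar’s) of the relation T2 = (T1^4 + (S2 − S1)/sigma)^(1/4), which follows from requiring that the net surface radiation H = sigma·T^4 − S stay constant:

```python
# If net surface radiation H = sigma*T^4 - S is held constant while sky
# radiation rises by dS = S2 - S1, then sigma*T2^4 - S2 = sigma*T1^4 - S1,
# which rearranges to T2 = (T1^4 + dS/sigma)**0.25.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t2_constant_h(t1: float, d_sky: float) -> float:
    """New surface temperature after sky radiation increases by d_sky (W/m2)."""
    return (t1 ** 4 + d_sky / SIGMA) ** 0.25

t1 = 288.0
t2 = t2_constant_h(t1, 2.0)  # an illustrative 2 W/m2 rise in sky radiation
# Consistency check: sigma*T2^4 - S2 equals sigma*T1^4 - S1 by construction.
assert abs((SIGMA * t2 ** 4 - 2.0) - SIGMA * t1 ** 4) < 1e-9
print(f"T2 = {t2:.2f} K, warming = {t2 - t1:.2f} K")
```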
Could you double-check? If Callendar’s equations (4) and (5) are indeed erroneous, I tend to an opinion that he might have arrived at his conclusions (that indeed seem to fit at least qualitatively with contemporary picture) rather intuitively than by a rigorous and scientifically convincing procedure.
Greetings
Tomáš
Atomsk: “I’m pointing out that climate models for decades were skill in projecting future global warming and iTCR up to CMIP5, where this skill is not explained by model tuning. Bringing up other topics in climate science does nothing to change this. Nowhere did I say the only scope of climate science is whether CO2 causes warming.”
Here you go, Atomsk – first – you admit that you defended the skill of the very models which JCM blames for “up to 40% of planet’s land degraded”:
“ It’s hard to imagine denying or actively minimizing the consequences to realclimates due to an artificial fixation and overemphasis on the outputs of trace gas and aerosol forced model estimates. ” [ (c) JCM])
And then, in the same paragraph, you challenged his implications that the climate models are not to be trusted, because they are: “imaginary process mechanisms [using] rules about how things ought to be” (c) JCM July 2024.
If he said this about the very model (Lague et al.) he brought here himself, and turned on it only after we had shown it weakens, not supports, his long-held beliefs – then what must he think about the models he never liked?
So, sorry Atomsk, you never stood a chance …. ;-)
I already explained that Callendar’s 1938 work includes both the Planck feedback and the water vapor feedback. He includes the former insofar as he accepts greater radiation release with greater temperature. And if you can’t spot where he includes the latter, then re-check the paper.
That is why Callendar was able to accurately project GMST trends and iTCR. Your bringing up other details is irrelevant, since those details are not required for accurately projecting GMST and iTCR, topics where the Planck feedback and water vapor feedback are dominant. It’s akin to bringing up differences between relativistic and Newtonian models to avoid the fact that both do fine in projecting the motion of a cannonball shot from a cannon. Those differences between the models are largely irrelevant to accurately projecting that motion, though they can be important for other topics like Mercury’s orbit.
And I’ve already read Callendar’s 1938 paper. That’s how I know it uses multiple lines of evidence to check his iTCR and GMST projections, such as spectroscopic measurements, instrumental warming trends, and paleoclimate data. Much of the same type of evidence is used to estimate TCR today.
You’re attacking straw men, such as claiming there is no distinction between models, or that the field is stagnating, or…. Those were not my point. My point was that older models through CMIP5 accurately projected GMST trends and iTCR because they got processes like Planck feedback, water vapor feedback, etc. largely right. And this was not due to model tuning.
in Re to Atomsk’s Sanakan, 8 Jan 2026 at 3:09 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646
Dear Atomsk’s,
The part of Callendar’s publication that could be construed as the Planck response to a change in CO2 concentration is his equation (5), I think. Is it what you suggested? If so, can I ask again if you think that this equation is correct?
I have not found any explicit mention of the water vapour feedback, in the sense that a temperature change caused by a change in atmospheric CO2 concentration would be further amplified by a change in air humidity. I think the closest sentence that might refer to water feedbacks is perhaps the one in the third paragraph from the bottom on page 230 that reads: “Thus a change of water vapour, sky radiation and temperature is corrected by a change of cloudiness and atmospheric circulation, the former increasing the reflection loss and thus reducing the effective sun heat.” Is that what you meant?
Best regards
Tomáš
Re: “In Callendar’s framework there was no concept of stabilizing planetary emission and no TCR.“
He stated how much global warming would be caused by a given increase in CO2: 0.5°C warming for a 28% CO2 increase over 200 years from 282ppm to 360 ppm (table VI of Callendar 1938). That’s enough to calculate iTCR using the same method as Hausfather 2019 and Supran 2023. So a 28% CO2 increase implies ‘5.35 * ln(360/282)’ of forcing, or 1.31 W/m2. With 0.5°C of warming that implies an iTCR of 1.4°C; that’s within the uncertainty range of the observations shown in Hausfather 2019 and Supran 2023.
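The arithmetic in that paragraph can be checked in a few lines of Python, using the standard simplified forcing approximation F = 5.35·ln(C/C0). This is a sketch of the calculation described above, not Callendar’s own procedure:

```python
# Implied TCR from Callendar 1938's Table VI numbers: 0.5 C of warming
# for a CO2 rise from 282 ppm to 360 ppm, scaled to a doubling of CO2.
from math import log

F_DOUBLING = 5.35 * log(2.0)         # forcing for 2x CO2, ~3.71 W/m2
forcing = 5.35 * log(360.0 / 282.0)  # ~1.31 W/m2 for 282 -> 360 ppm
itcr = 0.5 / forcing * F_DOUBLING    # scale the warming to a doubling
print(f"forcing: {forcing:.2f} W/m2, iTCR: {itcr:.1f} C per doubling")
```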
If you don’t like that iTCR stated relative to 3.71 W/m2 of radiative forcing, then you can state that iTCR relative to a doubling of CO2. You still end up with ~1.4°C per doubling. Anderson 2016 calculated a similar value using an empirical approximation of Callendar 1938’s model: “a doubling of CO2 would lead to a temperature rise of roughly 1.6 °C.”
I’m aware that various older models implicitly (and incorrectly) treated iTCR as equivalent to ECS. As Hausfather 2019 notes: “their assumption that the atmosphere equilibrates instantly with external forcing, which omits the role of transient ocean heat uptake“. That doesn’t change my point on Callendar 1938 accurately projecting iTCR.
JCM typed “they imagined that increasing CO2 shifted the effective origin of surface downward radiation (what he called sky radiation) to lower, warmer layers thus increasing the radiative input to the surface”. Yes, that is what happens. When GHGs in the troposphere increase, the average places at which photons are manufactured that manage to leak out of the top and the bottom of the troposphere move closer to their respective upper and lower boundaries. Thus, photons manufactured in cooler air parcels leak out of the top and photons manufactured in warmer air parcels leak out of the bottom. Thus, a lesser photon flux than before leaks out of the top and a larger photon flux than before leaks out of the bottom. That’s what causes the so-called “greenhouse effect (GHE)” in Earth’s troposphere.
Callendar falls into the same radiative surface budget fallacy that plagued physical understanding for quite some time, one which treated surface LW down as if it determines climate sensitivity. It does not. The wrong framework is one in which the surface heats up to radiate the extra surface LW down away, and makes no accommodation for the (very real) dominant surface-atmosphere turbulent heat exchange (non-radiative). There is no such thing as a surface radiative equilibrium from which sensitivity could be derived. Whatever is going on with empirical matches of temperature evolution into the year 2000 is the result of huge compensating errors – of the missing adjustments, feedbacks, and ocean buffer.

Some consider this distinction unimportant and are satisfied with any formulation that produces warming, even if it uses an unphysical structural description. That is their choice, and the same is promoted by politicized public-facing climate comms. By contrast, the more recent framework is the climate response required to satisfy TOA radiation balance. I do find it striking that enthusiast contributors of realclimate.org can’t bring themselves to concede something so essential.

The center of focus nowadays is an LW radiation forcing around the effective emission level (averaging around 5 km altitude), with special treatments for spectra operating around the troposphere/stratosphere boundary and aloft. Any TOA net radiation surplus results in total system energy accumulation, thereby dragging GMST along with it. Planetary radiation balance of inputs and outputs – a concept that can only be satisfied from the top-of-atmosphere perspective (not at the surface) – is attained in some distant future following a perturbation. No matter how one might retrospectively apply newer knowledge to a Callendar-style description, it just isn’t there in his formulation.
If what he was imagining does in fact represent the more modern structural picture, it might have been prudent to write it down, sketch something to that effect, or at least mention it in passing. It’s no fault of his; it’s a hard problem and things change. Deal with it. His primary contribution was to re-ignite interest in CO2 effects, together with a great deal of effort in compiling temperature records – all done in his spare time. That is remarkable and he sounds very cool.
Again, JCM, you’re simply moving the goalposts from the original points. These were that older models (including Callendar 1938) skillfully/accurately projected iTCR and GMST, with this not being explained by model tuning. I already explained how that iTCR can be stated in terms of warming per doubling of CO2. Your other concerns about model details do nothing to change those points.
in Re to Atomsk’s Sanakan, 10 Jan 2026 at 2:52 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843744
Dear Atomsk’s,
I asked Gemini Pro for an explanation of what the iTCR is and how it differs from the TCR.
According to the response provided by the engine,
1) transient climate response (TCR) is a metric used for comparing various climate models.
To calculate it, climate modelers perform a specific standardized experiment:
– They increase atmospheric CO2 by 1% per year (compounded) starting from pre-industrial levels.
– They wait until the CO2 concentration has exactly doubled (which takes 70 years).
– The TCR is the global average temperature increase at that specific moment (averaged over a 20-year window around year 70);
while
2) iTCR (implied TCR) is a technique used to infer the TCR from data that isn’t the clean 1% experiment described above—such as observed historical warming or widely varying emissions scenarios.
The Logic: Researchers take the observed Temperature Change (Delta T), divide it by the estimated Radiative Forcing (Delta F) that caused it, and scale it up to the forcing of doubled CO2.
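The two definitions above can be illustrated numerically. This is a sketch of the logic as described, with 3.71 W/m2 taken as the standard doubling forcing:

```python
# (1) At 1% per year compound growth, CO2 doubles in about 70 years,
#     which is why the standardized TCR experiment runs ~70 years.
# (2) iTCR scales observed warming by the ratio of the doubling forcing
#     to the forcing actually realized over the period.
from math import log

years_to_double = log(2.0) / log(1.01)
print(f"doubling time at 1%/yr: {years_to_double:.1f} years")  # ~69.7

def implied_tcr(delta_t: float, delta_f: float, f_doubling: float = 3.71) -> float:
    """Warming per realized forcing, scaled to the forcing of a CO2 doubling."""
    return delta_t / delta_f * f_doubling
```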
If this explanation is correct (I have not checked it anyhow), it appears that your term “iTCR projection” sounds like an oxymoron. Have you perhaps meant rather “iTCR estimation”?
Does it mean that when you referred to Callendar, you basically spoke about chapter 6 of his article and omitted the previous chapters that were criticized by JCM?
Thank you in advance for a comment.
Greetings
Tomáš
Re: “I asked the Gemini Pro for an explanation what is the iTCR and how it differs from TCR.“
iTCR is the ratio of global warming to radiative forcing, scaled to the forcing of a doubling of CO2. It’s used in papers like Hausfather 2019 and Supran 2023. Unlike equilibrium climate sensitivity (ECS), iTCR does not account for longer-term lags, since it only includes the amount of global warming that occurred during the time-period of the radiative forcing.
So, for example, assume a hypothetical scenario in which CO2 levels doubled from 300ppm to 600ppm from 1850 to 2030. And assume this caused 2.1°C of global warming from 1850 to 2030. iTCR would then be 2.1°C. ECS would be larger than 2.1°C, since thermal inertia and ocean heat uptake would result in more surface warming after 2030.
The denominator of iTCR does not have to be radiative forcing, but can be restated in terms of a doubling of CO2. For example, take what Gilbert Plass told the general public in 1953:
Plass’ projection of 1.5°F per century translates to 1.06°C per 127 years. And he projected a 1.5-fold CO2 increase for those 127 years. Page 4 of the supporting information of Hausfather 2019 gives the IPCC’s empirically supported formula for converting CO2 changes to forcing. That formula gives 2.17 W/m2 for a 1.5-fold increase of CO2, and 3.71 W/m2 for a 2-fold increase. That gives a ratio of 1.06°C of warming per 2.17 W/m2. Multiplying that ratio by the 3.71 W/m2 from a doubling of CO2 gives an iTCR of 1.8°C per doubling.
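The Plass arithmetic above can be reproduced step by step. This sketch uses the simplified 5.35·ln(C/C0) forcing approximation in place of the supporting-information formula:

```python
# Plass 1953: 1.5 F per century, over a 127-year period with a projected
# 1.5-fold CO2 increase, scaled to a CO2 doubling.
from math import log

warming_c = 1.5 * (5.0 / 9.0) * (127.0 / 100.0)  # 1.5 F/century over 127 yr, ~1.06 C
f_15x = 5.35 * log(1.5)                          # forcing for 1.5x CO2, ~2.17 W/m2
f_2x = 5.35 * log(2.0)                           # forcing for 2x CO2, ~3.71 W/m2
itcr_plass = warming_c / f_15x * f_2x
print(f"iTCR: {itcr_plass:.1f} C per doubling")  # ~1.8
```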
Re: “If this explanation is correct (I have not checked it anyhow), it appears that your term “iTCR projection” sounds like an oxymoron. Have you perhaps meant rather “iTCR estimation”?“
It’s an iTCR projection. Estimates of future global temperature trends and future greenhouse gas levels (with other future forcings) count as projections because they’re for the future. They can then be converted to a future iTCR, as I did above for Plass’ 1953 projections. That’s a modeled iTCR projection because it’s for the future. The modeled iTCR projection can then be compared to the observed iTCR for that projection’s time-period, using observed global temperature trends and observed forcing. That’s what’s done in Hausfather 2019 and Supran 2023.
Re: “Does it mean that when you referred to Callendar, you basically spoke about chapter 6 of his article and omitted the previous chapters that were criticized by JCM?“
I explained the iTCR calculation for Callendar 1938 above.
in Re to Atomsk’s Sanakan, 18 Jan 2026 at 5:30 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844054
Dear Atomsk’s,
Thank you very much for your comments!
As Table VI on page 232 of Callendar’s article still belongs to his Chapter 5, I suppose that the delta T values provided therein were derived from his theoretical curve shown in Fig. 2 on the preceding page 231. If so, I agree that the iTCR values you derived from Table VI could indeed be described as “iTCR projections”.
Nevertheless, I still doubt your assertion of 8 Jan 2026 at 3:09 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646
that his theory somehow includes water vapour feedback. Could you please specify the part of his article that you construed this way?
As I have been unable to find any hint of it, I still rather tend to agree with JCM that the fit of Callendar’s iTCR projection with more recent estimates and/or projections can hardly result from various feedbacks being correctly included in his theory.
Moreover, I would like to turn your attention back to my question of 7 Jan 2026 at 7:03 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843612 ,
regarding Callendar’s equation (5). Until the perceived errors in Callendar’s theory are explained, I rather think that the fit of his results with more advanced models may be purely accidental rather than a result of the correctness of his approach.
Greetings
Tomáš
Atomsk: “ And the response would be the same as was given to Tomáš Kalisz in the thread you linked […] CMIP6 models with higher sensitivity did worse [than older models]. Improving cloud physics for CMIP6 models reduces their sensitivity down to what’s shown for older models, yielding accurate projections of GMST and iTCR
and that’s why JCM, referring to that thread …., ignored your response to Tomas. It all makes psychological sense – both JCM and Tomas have believed for years that they see what the real climate scientists couldn’t or refused to see, namely that because of their “artificial fixation on, and overemphasis on, a trace gas” concentrations, they UNDERESTIMATE the sensitivity of AGW to the water cycle.
And here you come, and strip them of their entire claim to fame by showing that CMIP6 models don’t do well _precisely_ because they OVERESTIMATED the sensitivity of the climate to the water cycle… ;-)
Anyone want to bet that JCM will have the integrity to admit it and to apologize for blaming “up to 40% of planet’s land degraded” on Gavin’s and other climate scientists’ “artificial fixation on a trace gas [CO2 and other GHGs]”?
===== JCM, UV, 5 Jun 2024 at 8:24 AM ==============
” UNCCD reports up to 40% of the planet’s land is degraded and annual net loss of native ecologies continues unabated at >100 million ha / decade. This is a profound forcing to climates and puts our communities at risk. It’s hard to imagine denying or actively minimizing the consequences to realclimates due to an artificial fixation and overemphasis on the outputs of trace gas and aerosol forced model estimates. ”
==================================
The war against information and science continues. Monstrous!
NASA’s Largest Library Is Closing Amid Staff and Lab Cuts. Holdings from the library at the Goddard Space Flight Center, which includes unique documents from the early 20th century to the Soviet space race, will be warehoused or thrown out. – https://archive.ph/Ii6qS
“tens of thousands of books, documents and journals — many of them not digitized or available anywhere else.
….
“The library closure on Friday follows the shutdown of seven other NASA libraries around the country since 2022, and included three libraries this year. As of next week, only three — at the Glenn Research Center in Cleveland, the Ames Research Center in Mountain View, Calif., and the Jet Propulsion Laboratory in Pasadena, Calif. — will remain open.
….
“the Trump administration sped up the closures in a haphazard manner during the recent federal shutdown, when few people were around the Maryland campus, and that there are no plans for new buildings. | Specialized equipment and electronics designed to test spacecraft have been removed and thrown out”
Susan, look up “The Streisand Effect”.
There have been legitimate book bannings (also see The Harm Principle ) but most are illegitimate. Add this to those.
https://www.freedomtoread.ca/resources/bannings-and-burnings-in-history/
I personally watched the conservative govt in Canada haul off a research library in Newfoundland to the landfill back when Harper was trying to erase science from Canada. One of the more sickening days of my life. I forget how many he did this to nationally… something like 12 of them or so. He also tried to melt a bunch of painfully and expensively gained arctic ice cores from the far North, but as I remember they were saved by shipping them out of the country.
This is the most anti-science administration in US history.
What do you expect from a political movement that has based itself on love of “the uneducated”. These people see no value to books beyond the BTUs (and added CO2) they provide when burned.
I don’t know about love of the uneducated; more like pandering, exploitation, and celebration of ignorance and intellectual dishonesty – all for the power, greed and self-aggrandizement of the trashy unworthy.
Nigel: “ But Ive read quite a few articles on the issue and most people including the critics of DAC seem to use DAC in the sense of meaning extracting CO2 form the air with fans”
All it shows is that the promoters of physical extraction succeeded in framing the discussion as if it were the only DAC technology in town.
Nigel: “they refer to tree planting and similar nature solutions as “carbon sequestration.”
That’s conflating two different processes – the capture of CO2 directly from air and sequestering it (storing it afterward). Plants couldn’t sequester if they didn’t first capture, while physical extraction would be meaningless if it weren’t followed by sequestration. So there is no justifiable reason to exclude direct air capture by plants from Direct Air Capture processes.
The actual reason, I suspect, is to contrast the two – the plant-based approach as a non-starter, limited by the available land area and competing with growing food, and the other as having no such space limits. By equating DAC with the physical concentration of CO2 from air (thus ignoring the geology-inspired and chemistry-based alternatives), they make their physical approach the only viable game in town. By doing so they cut off the oxygen (by monopolizing governmental funding as the only viable solution) both from enhancing plant and soil uptake and from the above-mentioned non-physical alternatives: geology-inspired DAC (accelerating the uptake of CO2 via carbonate and silicate rock weathering) and chemistry DAC (in which CO2 is absorbed onto chemicals in a liquid brew and scrubbed from it), neither of which makes so great a demand for energy as the brute-force physical concentration approach.
So their framing of the discussion as “DAC = physical concentration” is completely artificial and self-serving – promoting their technology at the expense of more sensible alternatives. And quite successfully so: since even its critics have accepted the (DAC = physical concentration) framing, the valid criticism of the energy ineffectiveness of the physical concentration approach will be understood by politicians and the public as applying to all methods of DAC.
Thus my characterization of the presentation of the problem by the Physics blog, and the well-viewed video it referred to, as throwing the baby out with the (physical concentration) bathwater.
And as such it is not merely unhelpful, but actively harmful – discouraging methods that, while they cannot replace the decarbonization of the economy, can make it cheaper and therefore more likely to succeed, by removing the need to chase the highest-hanging fruit: the last few % of emissions on the way to net zero that are physically/technically most difficult, if not impossible, to mitigate.
Piotr I’m sure you are right about all of that. However the point I made last month was that in popular usage and even among experts DAC has generally come to mean extraction of CO2 from the air with fans and carbon sequestration is the terminology used for things like tree planting. It’s not technically logical but we are stuck with this terminology and usage. Going against this usage will confuse people. TK said something similar.
Wrong. But that goes without saying. DAC is a process of removal. Storage is a separate issue. Trees sequester, DAC does not unless or until the long-term storage is included. But, also, DAC is bullshit money-grabbing, so…
Not wrong. Both the industrial extraction of CO2 from the air with fans, and tree planting and regenerative agriculture, involve “removal” and “storage”. We are talking definitions, not whether the technology is being used exactly as it should be. That would be like saying tree planting is not a potential form of carbon removal because the trees aren’t being kept long enough. For the record, I don’t promote the industrial form of DAC.
a comment on “Yebo Kando”, 1 Jan 2026 at 10:52 PM,
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843358
and 2 Jan 2026 at 9:10 AM,
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843385 ,
as well as on “Data”, 2 Jan 2026 at 12:50 AM,
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843362
Dear moderators,
To me, it appears that the dispute has crystallized into two contradictory assertions. YK seems to insist on the assertion that certain models, commonly sharing the feature of high climate sensitivity, performed with a very good score in CMIP5, and that (likely not the same) models, again characterized by their high climate sensitivity, totally failed in CMIP6.
I have not noticed that he has ever exactly listed the respective models and/or clearly explained in which aspects of the test they succeeded or failed. Although Yebo’s opponents seem to generally disagree with his assertion, it appears that due to the lack of specificity on Yebo’s side, they cannot specifically disprove this core claim either.
This is a point wherein, in my opinion, an intervention by moderators could resolve the dispute and prevent further endless exchange without any clear outcome. I do not think, however, that simply closing the thread without any conclusion is an optimal solution. Therefore, may I ask you if Yebo’s assertion has any real background? If so, could you perhaps explain the case in more detail?
And, finally, I still have a feeling that “Yebo” may be yet another embodiment of the multitroll that has perhaps simply not attracted your attention yet.
Maybe a comparison of the IP addresses from which he posted during his entire appearance on this website with the IP addresses used by already proven multitroll brands (as well as by “Neurodivergent”, “Jim” and “Data”) could help resolve my questions as well.
Thank you in advance and best regards
Tomáš
TK et al.: Yes, it would be nice if the endless obstinate exchanges would stop. But no one will desist, and the rights of the situation are buried.
Yebo is Obey spelled backwards and Kandu sounds like can do, which should tell us what we need to know.*
I agree that the activity is not a useful contribution. The exchanges, to anyone not informed enough to judge the rights of the situation, appear to be equal. A sensible person would realize this is neither helpful nor useful and desist. This should penetrate the thickest of skulls, but what a hope!
*PS. We all know our future is fraught. Nobody benefits from fighting over just how dire, as that feeds those bent on deception and exploitation.
“Nobody benefits from fighting over just how dire, as that feeds those bent on deception and exploitation.”
False. Nearly impossible to design to a risk that is not acknowledged and/or not understood. You are correct in that the only framing that matters WRT an existential threat is the worst case, so, no, there is no point in arguing about it.
Tomas Kalisz, I think there is some truth in what you say. However, I’m fairly sure YB is claiming the high climate sensitivity CMIP5 models have no skill and are the scribblings of children because some predictions were poor and some of the underlying physics was poor.
I said they still have some overall skill, because they made some good predictions (the warming rate being one apparently) and skill is primarily a test of predictive ability not the underlying physics, and the most important physics was correct. And so the comparison with children scribbling is both exaggerated and insulting. I have several times acknowledged those models clearly did have some deficiencies.
YB ignores all this and claims I don’t address his issues, despite the fact that I just addressed them!
Nigelj says
1 Jan 2026 at 4:16 PM
comment included:
The warming spike in 2024 specifically has been well explained by a combination of AGW, el nino, aerosol reductions (using the mainstream estimate of aerosol forcing, not Hansen’s high number), and the solar cycle; this website did an analysis of that, and I posted a Carbon Brief study on it written by Zeke Hausfather here:
https://www.carbonbrief.org/analysis-what-are-the-causes-of-recent-record-high-global-temperatures/
The unusually intense warming in 2023 has not been as well explained. Even when accounting for AGW, aerosols, el nino and the solar cycle, it’s still not fully explained, as Zeke mentions. It has indeed been framed as within the bounds of natural variability, which isn’t a full explanation.
from Dc https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843342
Zeke’s article included: “…the US National Oceanic and Atmospheric Administration’s (NOAA’s) multivariate ENSO index – show the 2023-24 event was much weaker than indicated in the Niño 3.4 dataset.”
Thanks — I largely agree with your framing, especially that 2023 remains anomalous in a way 2024 does not.
One point I’d push back on is the characterization (via Zeke) of 2023 as a “strong” El Niño. In the canonical sense, it wasn’t. ENSO strength is defined by tropical Pacific SST anomalies (e.g. Niño 3.4, MEI), not by the global mean temperature response. By those metrics, 2023–24 was weak-to-moderate and never approached 1997–98 or 2015–16. That was also how it was forecast.
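For readers who want the thresholds concretely: ENSO event strength is conventionally ranked by the peak three-month Niño 3.4 SST anomaly (as in NOAA’s Oceanic Niño Index), with roughly 0.5 / 1.0 / 1.5 / 2.0 °C often quoted as the weak / moderate / strong / very strong cutoffs. A minimal sketch of that classification follows; the example anomalies are hypothetical, not observed data:

```python
def enso_category(peak_anomaly_c: float) -> str:
    """Classify an ENSO event by its peak Nino 3.4 SST anomaly (deg C),
    using the commonly quoted 0.5/1.0/1.5/2.0 thresholds."""
    a = peak_anomaly_c
    if a >= 2.0:
        return "very strong El Nino"
    if a >= 1.5:
        return "strong El Nino"
    if a >= 1.0:
        return "moderate El Nino"
    if a >= 0.5:
        return "weak El Nino"
    if a <= -0.5:
        return "La Nina"
    return "neutral"

# Hypothetical peak anomalies, for illustration only:
for peak in (0.6, 1.2, 2.3):
    print(peak, "->", enso_category(peak))
```

The point above follows directly: the classification depends only on the tropical Pacific SST anomaly, not on the size of the global-mean temperature response.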
A modest El Niño producing such an outsized GMST response is therefore the phenomenon requiring explanation, not evidence of ENSO strength. That same issue appears in Gavin’s attribution plots: the residual in 2023 remains large even after accounting for ENSO, solar, volcanoes, and mainstream aerosol forcing.
Carbon Brief itself notes declining cloud reflectivity and the possibility that aerosol and cloud effects are underestimated, with implications for a higher effective climate sensitivity (consistent with Hansen et al.). That represents a substantive physical uncertainty, not merely a statistical one — and it cuts directly against attempts to treat 2023 as simply “within variability” and move on.
In short, 2024 fits established attribution narratives reasonably well; 2023 still does not. Hansen may not have the complete answer, but he is asking the right physical questions, and his “acid test” remains consistent with temperature behaviour through 2025 to date. Despite time having passed, there is still no definitive synthesis — only partial, variable, and model-dependent attributions.
In Dec UV thread
Atomsk’s Sanakan says 2 Jan 2026 at 10:52 AM
Nigelj says 2 Jan 2026 at 2:41 AM
AS also provided a link to the warming projections in the last IPCC report, and even the worst-case RCP8.5 scenario didn’t get to 3 degrees by 2050. So is the IPCC report crap as well?
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843372
Even the worst-case RCP8.5 scenario includes a 3 °C outcome roughly between 2040–2060, so the claim that it doesn’t reach 3 °C by 2050 is misleading. See my earlier summary here:
Data says 1 Jan 2026 at 10:45 PM
“Allow me to summarize and hopefully put this to bed once and for all.”
see https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843357
and here:
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843362
Readers can follow all my preceding comments and references leading up to this point.
Thanks for revealing your character, sockpuppet account, as per the other post. Again, you said:
But that’s a disingenuous fabrication, since Forster 2025 was peer-reviewed. There’s even a tab at the paper’s link naming the peer reviewers, showing their reviews, etc. You also left out other peer-reviewed sources that were cited to you, such as IPCC reports, Xu 2018, Dai 2023, and Hansen 2023. A good illustration of how these Data/Jim/Neurodivergent/Mo Yunus/… sockpuppet accounts try to deceive people.
So again: how you respond (or don’t respond) to being caught on this obvious disinformation should further reveal your character. You’ve already failed once. Try being honest, for once across your sockpuppet accounts.
And this is a response to more willful disinformation you posted on the other thread:
Re: “The crucial thing is: Notice what didn’t happen.
No one answered:
why high-ECS models passed CMIP5 skill tests“
Because, sockpuppet account, CMIP6 models with higher sensitivity (>4.7°C) did worse on those skill tests; they did worse than older models at projecting global mean surface temperature (GMST) and the ratio of warming to forcing (iTCR). This has been explained to you and Yebo Kando multiple times, including with cited evidence. But denialism means neither of you ever honestly addresses evidence. So your point is as worthless as asking ‘why does 2+2=5’ even after you’ve been shown that ‘2+2=4’. It was shown again above that improving the cloud physics of CMIP6 models reduces their sensitivity down to that of the older models, with accurate/skillful projected GMST and iTCR. That’s the case no matter how much you continue pretending and willfully ignoring the published evidence on this.
Re: “why “skill” wasn’t falsified until CMIP6“
And like Yebo Kando, you pretend points were not answered when they actually were. The fact that CMIP6 models with higher sensitivity were less skillful at projecting iTCR and GMST does not change the fact that older models were skillful. They were less skillful because their sensitivity was higher than what the evidence supported and higher than that of older models.
Re: “how compensating errors are handled quantitatively“
You haven’t shown any compensating errors, sockpuppet. You’ve just made stuff up with no cited evidence, and then willfully ignored any published evidence showing you’re wrong. You’re disingenuously moving the goalposts from the fact that you’ve been cited papers showing that models accurately/skillfully projected GMST and iTCR by virtue of getting other aspects right, like the positive water vapor feedback, the Planck feedback, etc. Yet you never honestly address that research, such as Hausfather 2019, IPCC 2021, Supran 2023, Frame 2013, Lapenis 2020, etc. All you have is baseless whining about experts who publish the evidence you don’t understand. Speaking of which…
Re: “Instead, we got:
appeals to authority,
tone correction,
admission of ignorance (“I don’t know the answer”),
and a request that critics defer to experts.“
So in addition to not understanding climate science, you don’t understand critical thinking and epistemology. You also contradicted what your sockpuppet account Jim said. You lack the knowledge of informed experts, just like a belligerent passenger who thinks they know better than the pilot how to fly the plane and whines that it’s an ‘appeal to authority’ when sensible passengers defer to pilots to fly planes. Maybe one day you’ll finally stop trolling and realize how you implicitly trust experts (i.e. appeal to authority) every day, such as in relying on them for the safety and structural integrity of buildings you enter, the water you drink, etc. But I won’t hold my breath. Neither you nor Yebo Kando will ever know better than experts who publish evidence, including those confirming model skill and accuracy. Neither you nor Yebo Kando will ever honestly address the evidence-based explanations you’ve been given, nor honestly address the questions you’ve been asked on that. Stew on it.
Data: “Even the worst-case RCP8.5 scenario includes a 3 °C outcome roughly between 2040–2060, so the claim that it doesn’t reach 3 °C by 2050 is misleading. See my earlier summary here:”
Your summary does not provide any source, link, or copy and paste for your claim. The graph of the IPCC projections is here.
https://www.ipcc.ch/report/ar6/wg1/figures/summary-for-policymakers/figure-spm-8
Based on that, you can see even RCP8.5 does not get close to 3 degrees by 2050, even when taking into account the full uncertainty range, which is shaded in. So the German geophysical societies’ claim that warming could be 2–3 degrees by 2050 isn’t backed up by the IPCC.
Re: “Based on that, you can see even RCP 8.5 does not get close to 3 degrees by 2050, even when taking into account the full uncertainty range which is shaded in.“
Yet the sockpuppet pretends otherwise by assuming folks won’t notice that 2041-2060 is not the same as 2050, especially in a projection where slope increases with time:
So as you said, the IPCC projections contradict the ‘3°C by 2050’ projection Chuck originally mentioned, despite Data’s pretense.
Anyway, feel free to take Susan Anderson’s advice to ignore their trolling, especially when they refuse to be honest about even basic points like this.
Nigelj says
3 Jan 2026 at 9:17 PM
Data: “Even the worst-case RCP8.5 scenario includes a 3 °C outcome roughly between 2040–2060, so the claim that it doesn’t reach 3 °C by 2050 is misleading. See my earlier summary here:”
Note: These are scenario-assessed ranges, not predictions, and cannot be extrapolated from recent observed trends to a specific year, 2050. They represent a range of possible temperatures across the full period, not a single outcome or date.
They cannot even be constrained to 2045–2050.
Re: “Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C.“
That table clearly shows what the AR6 SSP scenarios presented: for the mid-term (2041–2060) under SSP5-8.5, a very likely range of 1.9 to 3.0 °C.
Ref: AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Alternative PDF, page 14:
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
Unrelated references remain irrelevant, no matter how often they are repeated. If a citation does not address the specific definition or methodological point under discussion, it adds nothing.
References (previously cited):
1) IPCC AR6 WG1 – Figure SPM.8
https://www.ipcc.ch/report/ar6/wg1/figures/summary-for-policymakers/figure-spm-8
2) Archive magnified snapshot of SSP5-8.5 ensemble spread ( Figure SPM.8 (a) )
https://d1z9kwz1j57ckx.archive.is/nToNi/e519ee9c1a8e60b3d5af48d3148238bb390a924e.jpg
3) AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Data, I’m not sure what you are saying there, or how a projection range means single years are irrelevant. Surely if they fall in the range they are still relevant?
This is my take on the issue. The Summary for Policymakers Table SPM.1 RCP8.5 entry that you found and linked (a good find) shows warming could in fact potentially reach 3 degrees by 2050, but Figure SPM.8, which I mentioned, shows it doesn’t get near 3 degrees by 2050, so they seem inconsistent.
But one thing is certain: reaching 3 degrees by around 2050 would require the very highest emissions scenario, and my understanding is that this has already been cancelled by the progress made in moving away from coal. So the claim of 3 degrees by 2050, or very soon after, still lacks credibility.
The sockpuppet isn’t worth it. They can’t even keep their story straight, a sign someone is trolling and not telling the truth. Hence why, for example, they contradict themselves on whether Forster 2025 is a peer-reviewed paper.
Reply to Data (2 Jan 2026 at 5:16 PM)
To clarify for the record: Figure SPM.8 and Table SPM.1 do not make predictions. They show scenario-conditioned ensemble ranges. Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C. This is a statement about possible outcomes within the ensemble, not an extrapolation from recent observations. Conflating these categories is a methodological error.
References (previously cited):
1) IPCC AR6 WG1 – Figure SPM.8
https://www.ipcc.ch/report/ar6/wg1/figures/summary-for-policymakers/figure-spm-8
2) Archive magnified snapshot of SSP5-8.5 ensemble spread ( Figure SPM.8 (a) )
https://d1z9kwz1j57ckx.archive.is/nToNi/e519ee9c1a8e60b3d5af48d3148238bb390a924e.jpg
3) AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Re: “Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C.“
No, it doesn’t. I know that since I’m the one who originally posted the 2nd link you listed. No projection here gets to 3°C by 2050, the original end-year under discussion:
https://d1z9kwz1j57ckx.archive.is/nToNi/e519ee9c1a8e60b3d5af48d3148238bb390a924e.jpg
And you again avoided your fabrication being exposed on Forster 2025, where you falsely acted like it was not peer-reviewed. These will be my final responses to you on these sub-threads since I’ve run out of patience with disingenuous fabricators. I think it’s apparent to informed readers that your claims are not to be trusted, especially when your disinformation on this 3°C example is so obviously wrong. No doubt you’ll make new sockpuppet account(s) soon, as you’ve done before when folks see through your older accounts.
Reply to Data
Re: “Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C.“
3) AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
and on page 14
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
and presented by Carbon Brief
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Note: These are scenario-assessed ranges, not viable predictions, and cannot be used to extrapolate from recent observed trends into the future.
That table clearly shows what the AR6 SSP scenarios presented: for the mid-term (2041–2060) under SSP5-8.5, a very likely range of 1.9 to 3.0 °C.
Summary in plain language
1) AR6 SPM Figure SPM.8a does show a spread of model outcomes for the SSP scenarios, including higher warming ranges above ~2 °C under SSP5-8.5.
2) The SPM graph and table do not “predict” warming, nor state exact dates for thresholds like 3 °C.
3) Atomsk’s predictions about specific timings (e.g., “for ~2°C by 2045-2050”) are interpretations and extrapolations, not text from the IPCC reports.
More broadly, as discussed elsewhere, widening CMIP6 ensemble spread relative to CMIP5 underscores why these scenario-assessed ranges should not be treated as date-specific predictions.
–
Atomsk’s Sanakan says
5 Jan 2026 at 3:22 AM
“These will be my final responses to you on these sub-threads”
If only. We live in hope.
An alternative reference to the original SPM.1 Table (page 14) showing mid-term, 2041–2060:
Table SPM.1 | Changes in global surface temperature, assessed from multiple lines of evidence, for selected 20-year periods and the five illustrative emissions scenarios.
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
Note: These are scenario-assessed ranges, not viable predictions, and cannot be used to extrapolate from recent observed trends into the future.
in Re to MA Rodger, 1 Jan 2026 at 1:31 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843336
Hallo MA,
Unfortunately, what really matters cannot be found anywhere in such PR journalism – namely, the following conclusion of the article about DAC thermodynamics:
“it would be better to use this electrical energy to avoid these emissions, rather than using DAC to later remove what has already been emitted.”
Please note that the author of the cited “thermodynamic” article
https://andthentheresphysics.wordpress.com/2025/12/17/direct-air-capture/
arrived at this conclusion even though (s)he omitted another, “kinetic” aspect of the DAC issue: for a finite throughput of such a separation process, we must necessarily handle huge volumes of the very dilute starting mixture. Due to natural limitations on mass and energy flow rates, you will unavoidably need huge (and commensurately expensive) equipment.
From the practical point of view, this kinetic aspect is even more important than the thermodynamic one, because economically, the costs of the necessary equipment (and of the necessary “excess” energy) will always be orders of magnitude higher than expenses calculated merely on the basis of the thermodynamic limit (the Gibbs energy of mixing).
Although the kinetic aspect may look more complex and/or less “scientific” than the thermodynamic aspect, I think that scientists and journalists should explain both. I am afraid that only then will it become clear why arranging ANY technical measure for direct carbon dioxide removal from the atmosphere, while energy is still produced by burning carbon-based fuels, is a waste of money.
Let me repeat again: it is because replacing the fuel with another energy source, or exploiting a natural carbon dioxide removal (CDR) method, will surely be cheaper than removing the produced carbon dioxide from the air by technical means. The reason why separating something from its dilute solution is so expensive lies, however, predominantly in the practical “kinetic” aspect, not in the value of the respective Gibbs energy of mixing.
As regards the PR article
https://carboncredits.com/solar-energy-developer-secures-415-million-to-power-the-worlds-largest-direct-air-capture-plant/ ,
I am pretty sure that if the Stratos plant is indeed operated one day, the “solar” electricity from the $415 million Swift Air Solar project will form only a part of its entire energy supply (maybe even quite a small one). I do not suppose that the Stratos plant (the last figure for it, from 2023, being USD 1.3 bln) will be operated only when the sun shines in Ector County, or, should that be so, that it will be capable of reaching the projected capacity.
I can only add the answer of Google search engine to the following question:
“How many years is the Stratos DAC plant expected to be operable / what is the expected final price for one ton (1000 kg) of the captured carbon dioxide?”
It said:
“Oxy’s Stratos DAC plant aims for full operation by late 2025, with a lifespan tied to technology advancements and carbon markets, but specifics on its total years of operation aren’t fixed, while the price per ton of captured CO2 is highly variable, with current contracts for credits sold at significant prices (e.g., $500-$1000/ton range) but aiming lower long-term, with projections around $200-$600/ton for the coming years.”
I very much doubt they will ever bring their figures significantly below $1000/ton, which is, according to an article by House et al.,
https://www.pnas.org/doi/epdf/10.1073/pnas.1012253108 ,
a value that can be derived from a comparison of the best available technology for separation of air components with technologies presently used for purification of flue gases.
Personally, I consider the high numbers based on this critical approach much more reliable than the flood of optimistic numbers presented (alongside House et al., but, unfortunately, without any suggestion as to why they should be correct and House et al. overly pessimistic) in a more recent review article by Fasihi et al.:
https://www.sciencedirect.com/science/article/pii/S0959652619307772 ,
I am therefore somewhat sceptical about the conclusions of the said review article, which read:
• CO2 Direct Air Capture could be a potential climate change mitigation solution.
• Direct Air Capture technologies are already commercialised.
• Massive implementation would significantly reduce the CO2 capture costs.
• CO2 capture costs below 50 €/tCO2 are achievable by 2040.
Greetings
Tomáš
Tomas: “Let me repeat again: It is because replacing the fuel with other energy source or exploiting a natural carbon dioxide removal (CDR) method will be surely cheaper than removing from the air the produced carbon dioxide by technical means.”
While true for the majority of our emission sources, it may not be true for the last, say, 10% of emissions on the way to net zero. This is the “highest-hanging fruit” – those emissions that have been left to the end precisely because we can’t reduce them, or the reduction is prohibitively expensive.
I am not sure what you mean by “natural carbon dioxide removal (CDR)”:
– if natural carbon sinks, then they are not a part of net zero (we count on them to reduce atmospheric CO2 after we stop adding new CO2);
– if you mean nature-inspired methods, then yes, they should be looked at first, since they may have collateral benefits: reforesting deforested areas not only takes up CO2 and protects the C stored in soil, but also protects biodiversity and might produce additional cooling by increasing cloudiness; regenerative agriculture/biochar improving soil C storage also reduces the environmental impacts of industrial agriculture; and spreading glacial flour from Greenland, or basalt ground in situ, on fields not only takes up CO2 but also increases crop yields.
However, all of these may not be enough, or the implementation of some may be too expensive – and then there would be space for the most effective of the technological DAC methods.
TK: “ The reason why separating something from its dilute solution is so expensive is, however, predominantly in the practical “kinetic” aspect, not in the value of the respective Gibbs energy of mixing.”
Except that most of the methods – all the nature-inspired ones and most technological DACs – do NOT physically concentrate CO2 out of the air,
hence they incur neither the thermodynamic nor your “kinetic” energy costs of CO2 concentration.
Sure, they will have various other costs, specific to each particular method – but those cannot be quantified from the energy cost of a process they don’t use.
Therefore, your scepticism toward “CO2 capture costs below 50 €/tCO2 are achievable by 2040” would need another justification.
in Re to Piotr, 4 Jan 2026 at 1:25 AM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843447
Hallo Piotr,
Indeed, by CDR methods based on “natural” processes I meant e.g. tree planting. Instead of the thermodynamics of gas mixing, what applies here is the thermodynamics of CO2 reduction to the reactive intermediates of biochemical sugar synthesis. In fact, the Gibbs energy of this process is significantly more positive than the Gibbs energy for separating gaseous components from each other; however, we do not need to worry about it, because plants exploit solar energy for that purpose.
Moreover, in forests, the sequestration process builds the necessary “equipment” (trees) in parallel, so that we may not need to pay for the equipment at all. Finally, the excess energy necessary for a continuous CO2 supply (my “kinetic” component of the overall energy demand) is in this case also supplied “for free”, by wind. Therefore, my scepticism about DAC does not apply to carbon dioxide removal by tree planting. This is in accordance with both references that I cited, because House as well as Fasihi deal solely with DAC in its usual sense (CDR processes exploiting technical means).
Greetings
Tomáš
Tomas Kalisz: “Therefore, my scepticism about DAC does not apply for carbon dioxide removal by tree planting.”
My point was that scepticism about DAC achieving its price targets cannot depend:
a) on the blogger’s calculation of the Gibbs free energy of mixing – since the (great?) majority of DAC binds CO2 by reacting with it chemically, not by physically reversing mixing, which is what the blogger’s number applies to;
b) on your “kinetic” cost argument, which may be applicable only to the extent that operators have to supplement wind for your “continuous air supply”, plus any other energy requirements of operation – particularly removing the absorbed CO2 from the absorbent molecules (assuming they want to recycle the absorbent instead of just burying it), typically done by heating the absorbent medium, the cost of which may be reduced by combining it with thermal power plants or industrial sources of waste heat.
But without knowing how big these costs are, you don’t have enough data to guess whether “CO2 capture costs below 50 €/tCO2 are achievable by 2040” or not.
In Re to Piotr, 5 Jan 2026 at 2:26 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843495
Hallo Piotr,
I think that the great majority of articles discussing carbon dioxide removal (CDR) from the ambient atmosphere by technical means, usually shortened to “DAC”, discuss chemical or physical-chemical processes (typically absorption in a solvent or chemical reagent with an affinity for CO2, or physical adsorption on a solid with a high specific surface) that presuppose recycling of the working medium by periodic CO2 release in concentrated form. The thermodynamics reported by Andrew Dessler applies equally to all these cases, and the same holds for the non-thermodynamic contributions to the overall costs that I described as the “kinetic” aspect of the process.
As regards CDR processes using an absorption medium that is not recycled but, instead, sequestered together with the captured CO2, the primary focus is on alkaline rocks that do not require any treatment other than mining, milling and proper spreading, so that their natural weathering by reaction with CO2 speeds up by about six orders of magnitude. Here we exploit natural “free bases” produced by geothermal energy / volcanism to deliver the negative Gibbs energy required for spontaneous CO2 binding, and, by spreading the rock in the environment, we resolve the “kinetic aspect” of the process.
You are right that this approach is smarter than the seemingly more straightforward “direct” DAC, because it replaces handling huge amounts of very dilute CO2 in air with handling a several-orders-of-magnitude smaller amount of the necessary base, concentrated in the rock. If CO2 will not come to the rock, the rock will come to the CO2.
Nevertheless, the joy is not yet completely free. The following figures came from a short Perplexity search; I have not checked the cited sources for correctness.
It appears that for a sufficiently high weathering speed – so that the CO2 absorption process is significantly quicker than the natural equilibration of the Earth energy imbalance (EEI) after “net zero” (which, without any further human intervention, is expected to take several millennia) – we need to mill our absorbent rock to a quite fine powder: particle sizes below 0.1 mm to ensure full weathering within 1000 years, or below 0.01 mm to enable full weathering within 100 years.
A typical equivalent of a sufficiently abundant basic rock necessary for absorbing 1 ton of CO2 appears to be about 2 tons. For sequestering 1 Gt CO2, we thus need to mine, crush, mill and spread about 2 Gt of rock. For basalt with a bulk density of about 2800-3000 kg/m3, this corresponds to a volume of about 0.7 km3. If we consider loading the selected land with 2 kg/m2 of the obtained material (a layer less than 1 mm high), the powdered rock will have to be spread over an area of about 1 000 000 km2.
Milling 1 t of rock to 0.1 mm grain requires, according to the literature, about 20-40 kWh of energy, and reducing the grain size below 0.01 mm can require a further 80-200 kWh/t. Provided that milling to the required grain size represents the main share of the total energy demand, sequestering 1 Gt CO2 by ERW may consume something between 50 and 500 TWh, which still sounds significantly more reasonable than the figures for any “pure” DAC process comprising separation of neat CO2.
I think that in this case, overall costs of about (or even below) 50 USD or EUR per tonne of sequestered CO2 could indeed be achieved. We have to take into account, however, that the sequestration will not be instant, because this cost estimate applies to sequestering the said 1 t of CO2 on a timescale of roughly 100 to 1000 years. In this example, the “kinetic aspect” of the separation process manifests more in the prolonged process duration than in the overall energy consumption and/or the costs of the necessary equipment.
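The back-of-envelope numbers above can be checked in a few lines. This is a sketch using only the stated assumptions (2 t of rock per t of CO2, basalt bulk density ~2900 kg/m3, a loading of 2 kg/m2, and the quoted milling energies); note that at 2 kg/m2 the required area works out to about a million km2:

```python
# Sanity check of the enhanced-rock-weathering (ERW) back-of-envelope
# figures; all inputs are the assumptions stated in the comment above.
CO2_T = 1e9           # tonnes of CO2 to sequester (1 Gt)
ROCK_PER_CO2 = 2.0    # tonnes of basic rock per tonne of CO2 (assumed)
DENSITY = 2900.0      # kg/m^3, mid-range bulk density of basalt
LOADING = 2.0         # kg of rock powder spread per m^2 of land

rock_t = CO2_T * ROCK_PER_CO2          # ~2e9 t of rock
rock_kg = rock_t * 1000.0

volume_km3 = rock_kg / DENSITY / 1e9   # m^3 -> km^3
area_km2 = rock_kg / LOADING / 1e6     # m^2 -> km^2
layer_mm = (rock_kg / DENSITY) / (area_km2 * 1e6) * 1000.0

# Milling energy: 20-40 kWh/t to reach 0.1 mm grain, plus a further
# 80-200 kWh/t to get below 0.01 mm.
low_twh = rock_t * 20.0 / 1e9          # kWh -> TWh
high_twh = rock_t * (40.0 + 200.0) / 1e9

print(f"volume ~{volume_km3:.2f} km^3, area ~{area_km2:,.0f} km^2, "
      f"layer ~{layer_mm:.2f} mm, milling {low_twh:.0f}-{high_twh:.0f} TWh")
```

The volume (~0.7 km3), sub-millimetre layer thickness, and milling-energy band (roughly 40–480 TWh, i.e. the quoted 50–500 TWh to one significant figure) are all reproduced by these inputs.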
Greetings
Tomáš
Tomas Kalisz: “The thermodynamics reported by Andrew Dessler applies equally for all [DAC technologies]”
How? His calculations are of the energy of (reverse) mixing – which would be applicable to processes that use this (reverse) mixing, i.e. the physical concentration of 400 ppm CO2 to pure CO2.
Most (all?) DAC technologies do not preconcentrate ambient CO2 to pure CO2 before running it over the absorbent, hence the energy of (reverse) mixing is irrelevant to them – it’s the energy of the chemical reaction that matters, and this will be different for different absorbents (as it depends on the affinity of the absorbent for CO2).
So not only do you not get a single value for it – all the individual values will be DIFFERENT from the single value (“500 kJ/kg of CO2”) calculated for the physical concentration of CO2 from 400 ppm to pure CO2 that your Dessler, the And Then There’s Physics guy and his YouTube inspiration calculate.
You can’t disparage DAC methods by assigning to them the high thermodynamic cost of a process they… don’t use.
in Re to Piotr, 5 Jan 2026 at 2:26 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843495
Hallo Piotr,
Thank you for raising your objections again. I hope that following explanation helps show that, in fact, usually both points a), b) are important – more generally than many people think.
To your point a)
The Gibbs energy of mixing is the minimum work you must spend if you wish to separate a neat component from its dilute mixture, irrespective of how you accomplish the task. It does not matter whether you use chemical, physical or perhaps biological processes, nor in which order you perform the individual operations. It is a fundamental principle of thermodynamics that the route between the initial and final states of the system does not matter – all that counts is the difference between the two states.
In other words, if you use a powerful concentrated chemical agent that binds CO2 very strongly, like calcium oxide, you may indeed easily separate CO2 from a significant volume of air and then release it in neat form by calcining the formed calcium carbonate at a temperature above 900 °C. In this case, the necessary Gibbs work for the desired separation has been done at the expense of the even much higher Gibbs energy of CaCO3 formation, which is spontaneously released during the separation step. You have not “saved” or in any way circumvented the unavoidable “payback” of the CO2-air mixing Gibbs energy in this process. You had to pay the entire Gibbs energy of CaCO3 formation back during its decomposition, with huge interest due to significant energy losses in the calcination furnace, while this Gibbs energy alone was much higher than the Gibbs energy of CO2 mixing with air that you intended to save.
In still other words, you are perfectly right that the respective energies spent on various chemical DAC processes will differ significantly from each other, depending on the reagent used. All these energies will, however, be significantly higher than the theoretical Gibbs energy of CO2 mixing with air. The high affinity for CO2 (commensurate with the high Gibbs energy of the respective reaction) is the necessary driving force of the CO2 extraction process and the reason why all these chemical DAC processes work at all.
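The minimum separation work being argued about here can be estimated in a few lines. This is a sketch under ideal-mixture assumptions: the ~428 ppm mole fraction and 25 °C are my inputs, and the small work term associated with the residual air stream is neglected, so the exact minimum differs slightly. The result (~440 kJ per kg of CO2) is consistent in magnitude with the “500 kJ/kg” figure cited elsewhere in this thread:

```python
import math

# Dilute-limit estimate of the minimum (reversible) work to extract neat CO2
# from air, i.e. the Gibbs energy of de-mixing per unit of CO2 removed.
# Only the R*T*ln(1/x) term is kept; the contribution from the residual
# air stream is neglected.
R = 8.314        # J/(mol*K), gas constant
T = 298.15       # K, assumed ambient temperature
x_co2 = 428e-6   # mole fraction of CO2 in air (~428 ppm, assumed)
M_co2 = 0.04401  # kg/mol, molar mass of CO2

w_min_mol = R * T * math.log(1.0 / x_co2)  # J per mol of CO2
w_min_kg = w_min_mol / M_co2 / 1000.0      # kJ per kg of CO2
print(f"minimum work ~{w_min_mol/1000:.1f} kJ/mol ~ {w_min_kg:.0f} kJ/kg CO2")
```

Any real process – chemical absorption included – must deliver at least this much work per kilogram of CO2 separated to neat form, which is the point being made above.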
By the way, the above-mentioned cycle of calcium carbonate formation and thermal decomposition is the basis of the billion-dollar STRATOS process. In combination with the reasons explained below, you may be pretty sure that this plant is a perfect machine for converting money (whether US tax dollars, carbon credits or any other investments) into thermal energy of the Earth’s atmosphere, rather than for anything else.
As you correctly pointed out, the only way you can (seemingly) avoid spending the Gibbs energy of mixing (by letting someone else pay it for you) is to desist from releasing the captured component (in our case CO2) in its neat form.
If you obtain from someone a suitable CO2-binding agent ready to use, it will be energetically “for free” for you. If we have the alkaline rocks, provided in this reactive form as a gift of Nature, there remains only the problem of how to bring them into contact with the atmospheric carbon dioxide.
To your point b)
If we simply wait millions years, the Nature will do the necessary work for us. If we decide to proceed quicker, we must pay for it with energy and with investments into equipment (which, basically, also represent various forms of spent energy). In the example with enhanced rock weathering, I tried to show that avoiding an active handling with exorbitant volumes of air and replacing it with handling with muhc smaller mass and volume of the concentrated rock only is a smart idea. Nevertheless, if you will insist that your separation process has to be complete within years or decades, you will have to mill all your rock to nanoparticles and thus skyrocket your expenses. Otherwise, you still have to count with processing in the timeframe of centuries.
In other words, although they may not look as obvious as the thermodynamic limits on the minimal necessary energy, the kinetic limits on the throughput of separation processes (which ultimately also translate into energy – or money as a representation of energy) are very important as well. In the separation of very dilute components, they regularly play a more important role than the thermodynamic aspect.
Greetings
Tomáš
in Re to Piotr, 11 Jan 2026 at 6:23 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843774
Hallo Piotr,
I cited Andrew Dessler merely because I was aware of his explanation of the Gibbs energy of CO2 mixing with air and thought it more consistent and more comprehensible than the one provided in the And Then There’s Physics article.
I must admit that I did not find the discussions about the “Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate” attractive and therefore completely missed that Dr. Dessler played a leading role in the critique of this document. Only thanks to your remark did I become curious who “my” Andrew Dessler actually is, and realized
https://en.wikipedia.org/wiki/Andrew_Dessler
that he is a prominent climate scientist.
Hereby, I would like to assure you that it was not my intention to argue from his authority (of which I was unaware). I cited his blog because I liked his clarity.
Greetings
Tomáš
Tomas Kalisz says: “The Gibbs energy of mixing is minimal work you must spend if you would like to separate a neat component from its diluted mixture, irrespective how you accomplish this task. It does not matter if you use chemical, physical or perhaps biological processes, nor in which order you perform individual operations. It is a fundamental principle of thermodynamics that the route between initial and final state of the system does not matter – what only does count is the difference between both states.”
This is exactly what I thought. The difference between industrial DAC and planting trees is that with DAC we have to provide all the energy, while with trees photosynthesis does some of the work naturally. And quite quickly. The problem being that there is only limited land.
But I agree with Piotr that we have to use some form of CO2 extraction to deal with the last 10% of emissions that would be very costly or impossible to stop. Perhaps a combination of different approaches is the best thing.
Tomas Kalisz 12 Jan 1:31 PM “ Re to Piotr, 5 Jan 2:26 PM To your point a) ”
Huh? Why do you reply to my “Piotr, 5 Jan 2:26 PM”, to which… you already replied on 6 Jan? We are well past that – on Jan 11 I already replied to your Jan 6 reply, namely to your claim
TK 6 Jan: The thermodynamics reported by Andrew Dessler applies equally for all [DAC technologies]
I have proven it false:
– “the single value (“500 kJ/kg of CO2”) for the physical concentration of CO2 from 400 ppm to pure CO2 (that your Dessler, the And Then There’s Physics guy and his YouTube inspiration calculate) is inapplicable to most (all?) DAC methods, which do not concentrate CO2 from 400 ppm to pure CO2 [but react with the ambient CO2 via chemical or physical absorbents]”
For such reactions only the energy of reaction is relevant, NOT the energy of (reverse) mixing from 400 ppm to pure CO2. Sure, an entropy element would be included in this energy of reaction (via the ratio of the concentrations of the products/reagents),
but the overall Delta G (kJ/kg CO2) won’t be = “500 kJ/kg of CO2”.
This is not to say that the energy costs of various DAC techniques are high or low –
all I have been saying is that one can’t dismiss DAC using thermodynamic costs (“500 kJ/kg of CO2”) that are inapplicable to it.
Using an inapplicable number is worse than using no number at all – because it is false knowledge – it suggests QUANTITATIVE ACCURACY WHERE THERE IS NONE.
And as I wrote to you a couple posts back:
P: “Therefore, your scepticism toward “CO2 capture costs below 50 €/tCO2 are achievable by 2040” would need another justification [than Dessler et al. claim of “500 kJ/kg of CO2”].
Nigel: “This is exactly what I thought. The difference between industrial DAC and planting trees is with DAC we have to provide all the energy and with trees photosynthesis does some of the work naturally.”
I’d suggest you re-examine this conclusion. DAC binds CO2 from the air to absorbents. The absorbents are chosen such that the reaction is spontaneous – which means that you don’t have to provide external energy for it to happen.
For the thermodynamic analysis of the process – the relevant quantity is the energy of reaction, NOT the energy of physical mixing.
That is not to say that DACs do not require human-provided energy – they use energy, but for DIFFERENT processes and in DIFFERENT amounts than the “500 kJ/kg of CO2” calculated by Tomas’s sources (Dessler, the And Then There’s Physics guy and his YouTube inspiration).
Applying the 500 kJ/kg of CO2 from the calculation of the Gibbs free energy of physical (reverse) mixing to DAC is akin to 1930s aeronautical engineers applying the equations used to model large, fixed-wing aircraft to insects and concluding that insect flight was impossible. The difference is that anybody who saw a bee knew those engineers must have used inapplicable equations – something not immediately obvious to lay people who see DAC disparaged under the authority of “the fundamental rules of thermodynamics”.
If you want more detail – see my post on 12 Jan 10:04 PM. And my overall conclusion there: “Using an inapplicable number is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“.
in Re to Piotr, 13 Jan 2026 at 5:57 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843857
and 12 Jan 2026 at 10:04 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843822
Hallo Piotr,
I apologize for referring to an earlier post – I mistakenly wrote January 5 instead of January 11.
As regards the value of the mixing Gibbs energy, 432 kJ/kg CO2 (the work necessary for reversible separation of 1 kg of neat CO2 at standard temperature and pressure from ambient air containing 430 ppm CO2), it corresponds to 19 kJ/mol. It is exactly the difference between the Gibbs energy of CaCO3 formation from CaO and CO2 at standard conditions (-131 kJ/mol) and the Gibbs energy of CaCO3 formation from CaO and CO2 at the atmospheric concentration of 430 ppm (-112 kJ/mol). In a cyclic process, this cost never disappears; it is just hidden inside the chemical potential of the respective spontaneous reaction.
The 131 kJ/mol for CaCO3 decomposition would suffice only if we could run this reaction at normal temperature in a reversible electrochemical cell. In the thermal process used in the Stratos plant, the enthalpy of the reaction applies instead, which is 179 kJ/mol. This corresponds to ca. 4 MJ/kg CO2. Thus, if we exploit CaO as the sorbent for CO2 separation, we will unavoidably need roughly nine times more energy than the theoretical thermodynamic limit.
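For readers who want to check the 432 kJ/kg and 19 kJ/mol figures, here is a minimal Python sketch of the ideal-gas mixing term dG = RT·ln(p_total/p_CO2). The temperature and concentration (298.15 K, 430 ppm) are my own assumptions, which is presumably why the per-kg result lands slightly above the quoted 432 kJ/kg:

```python
import math

R = 8.314        # gas constant, J/(mol K)
T = 298.15       # assumed standard temperature, K
x_co2 = 430e-6   # assumed ambient CO2 mole fraction (430 ppm)
M_co2 = 0.04401  # molar mass of CO2, kg/mol

# Minimum (reversible) work to separate 1 mol of neat CO2 from ambient air:
dG_mol = R * T * math.log(1.0 / x_co2) / 1000.0  # kJ/mol
dG_kg = dG_mol / M_co2                           # kJ/kg

print(f"{dG_mol:.1f} kJ/mol")  # ~19 kJ/mol, as stated above
print(f"{dG_kg:.0f} kJ/kg")    # ~437 kJ/kg, close to the quoted 432
```

Any consistent choice of temperature within the usual “standard” range reproduces the quoted value to within a few percent.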
My primary point, however, was that the costs of handling at least 2500 t of air for each 1 t of separated CO2 will, at the required throughputs of the process, very likely significantly exceed the cost of the thermodynamically unavoidable energy required for the separation step alone.
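The 2500 t figure can be sanity-checked with simple mass-ratio arithmetic; note that the 60% single-pass capture fraction below is my own illustrative assumption, not a number taken from the thread:

```python
M_CO2 = 44.01   # molar mass of CO2, g/mol
M_AIR = 28.96   # mean molar mass of dry air, g/mol
x_co2 = 430e-6  # ambient CO2 mole fraction

# Mass of CO2 contained in 1 kg of ambient air:
mass_fraction = x_co2 * M_CO2 / M_AIR

# Tonnes of air that must pass through the unit per tonne of captured CO2:
air_per_t_co2 = 1.0 / mass_fraction
print(f"{air_per_t_co2:.0f} t air / t CO2 at 100% capture")  # ~1530

# No real sorbent strips all the CO2 from the air stream; at an assumed
# 60% single-pass capture fraction the handled mass rises accordingly:
print(f"{air_per_t_co2 / 0.60:.0f} t air / t CO2 at 60% capture")  # ~2550
```

So “at least 2500 t of air per tonne of CO2” follows directly once one allows for incomplete single-pass capture.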
Greetings
Tomáš
Tomas Kalisz: “I apologize for referring to an earlier post – I put mistakenly January 5 instead January 11“.
The _dates_ were correct – what was perplexing was why you had answered my Jan. 5 post TWICE (first on Jan 6, and then again on Jan 12).
TK: the value of mixing Gibbs energy [=] 19 kJ/mol. It is exactly the difference between Gibbs energy of CaCO3 formation from CaO and CO2 at standard conditions (-131 kJ/mol) and Gibbs energy of CaCO3 formation from CaO and CO2 having atmospheric concentration 430 ppm (-112 kJ/mol).
You have just unwittingly proved my point – Dessler’s energy of mixing (+19 kJ/mol) is already INCLUDED in the energy of reaction (-112 kJ/mol), therefore using +19 kJ/mol to
QUANTIFY the energy (in)efficiency of DAC is not only inappropriate, but actively misleading:
Piotr Jan 12 and Jan 13: “Using an inapplicable number [+19 kJ/mol, instead of -112 kJ/mol] is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none”. The same goes for your defense of the people doing it – TK: “The thermodynamics reported by Andrew Dessler applies equally for all these cases”
in Re to Piotr, 17 Jan 2026 at 7:17 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844005
Hallo Piotr,
Thank you for your reply! I was afraid that my post got buried under the pile of “Data” production and the responses thereto, so I am happy that you found it and provided your feedback.
As regards my comparison of the Gibbs energies of CaCO3 formation under standard conditions and under ambient CO2 partial pressure, I introduced it deliberately, with the aim of supporting what I had strived to express before – namely that the Gibbs energy of CO2 dilution into the ambient air is the MINIMAL work that you MUST unavoidably carry out if you wish to extract the CO2 from the air back into its undiluted state, irrespective of the process used. Furthermore, I hope this example also showed that if you use such a spontaneous chemical process, the thermodynamically required minimal energy for the CO2 release becomes several times higher than the ultimate thermodynamic limit of 19 kJ/mol.
Nevertheless, I would like to repeat that the core of my objections, which apply to all DAC processes characterized by CO2 separation in neat form, was not these thermodynamic hurdles but the kinetic aspect, about which both the “And Then There’s Physics” article
https://andthentheresphysics.wordpress.com/2025/12/17/direct-air-capture/
as well as Andrew Dessler
https://www.theclimatebrink.com/p/thermodynamics-of-air-capture-of
remained silent.
Greetings
Tomáš
in addition to my post of 18 Jan 2026 at 3:46 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844046
Hallo Piotr,
To prevent further possible misunderstandings: the Gibbs energy change is the work that must be SUPPLIED to a system to bring it REVERSIBLY from a state A to a state B (at constant temperature and pressure). The Gibbs energy change for thermodynamically spontaneous (exergonic) processes is thus by definition negative, whereas for the opposite (endergonic) processes it is positive.
In my example, if we could carry out the CaCO3 formation in a reversible electrochemical cell, we could extract from it 131 kJ of electrical energy per mol of CaCO3 formed / CaO and CO2 consumed, provided that our CaO reacts with neat CO2 at standard temperature and pressure.
If we switch this cell to ambient air instead of neat CO2, we will be able to extract only 112 kJ of electricity for the same extent of reaction.
If we had a machine that could generate work by reversibly diluting CO2 with CO2-free air down to the final concentration of 430 ppm, we could obtain 19 kJ of energy for each mol of CO2.
If we extract 1 mol of CO2 from ambient air by its reaction with CaO, the released 112 kJ of energy (“-112 kJ/mol”) is dispersed into the environment as waste heat. As soon as you recycle the CaO and release the CO2 for the intended storage, you will have to spend the same energy (“+112 kJ/mol”) in the form of electricity consumed in our hypothetical reversible electrochemical cell, provided that you release the CO2 back into the ambient air. If you instead release the CO2 as a neat gas at standard conditions, you will need to supply our reversible cell with +131 kJ of electricity per mol of CO2.
As no such reversible electrochemical cell is available and we must decompose CaCO3 thermally in a kiln, the high positive entropy of this decomposition goes unexploited. We therefore need to supply the entire enthalpy of the CaCO3 decomposition, +179 kJ per mol of CO2, instead of the Gibbs energy. The consumed energy cannot be re-used or recycled, because it was spent breaking the strong chemical bonds spontaneously formed when CaO reacted with CO2. The only energy that can be partly recuperated is the excess heat (thermal energy that was not consumed in breaking chemical bonds) remaining in the hot CO2 and CaO leaving the kiln.
You see that the minimal energy requirement of +179 kJ per mol of separated CO2 in the cyclic process based on the spontaneous reaction of CO2 with CaO is significantly higher than the +19 kJ/mol thermodynamic limit for a reversible reversal of the spontaneous dilution of neat CO2 into ambient air. As an analogous analysis can be done for any separation process based on a spontaneous chemical reaction, I hope it is now clearer why 19 kJ is indeed the UNIVERSAL lower limit for the energy that MUST be spent if we wish to separate CO2 as a neat gas at standard conditions from the ambient air.
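The bookkeeping above can be checked numerically. This sketch takes the -131 kJ/mol and +179 kJ/mol standard-state values as given in the comment (textbook approximations I have not independently verified) and only checks that the pieces are mutually consistent:

```python
import math

R, T = 8.314, 298.15  # gas constant J/(mol K), assumed temperature K
x_co2 = 430e-6        # ambient CO2 mole fraction

dG_neat = -131.0                           # kJ/mol: CaO + CO2(1 atm) -> CaCO3
dG_mix = R * T * math.log(x_co2) / 1000.0  # kJ/mol: diluting neat CO2 to 430 ppm

# Working from 430 ppm air instead of neat CO2 forfeits the mixing term:
dG_ambient = dG_neat - dG_mix
print(f"{dG_ambient:.0f} kJ/mol")  # ~ -112, matching the cell run on ambient air

# Thermal calcination must supply the full enthalpy, not the Gibbs energy:
dH_calcination = 179.0  # kJ/mol
print(f"{dH_calcination / -dG_mix:.1f}x the reversible 19 kJ/mol minimum")
```

So the -131, -112, and 19 kJ/mol figures are internally consistent, and the kiln route costs roughly nine times the reversible minimum.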
Greetings
Tomáš
Tomáš Kalisz ( and also Piotr)
TK says: “namely that the Gibbs energy of CO2 dilution to the ambient air is the MINIMAL work that you MUST unavoidably carry out if you wish to extract the CO2 from the air back in its undiluted state”
Correct, and I’ve generally gone along with your various explanations on the issue. My initial impression was that all CO2 extraction processes, such as industrial DAC, enhanced rock weathering, and tree planting, require this minimal work, just achieved in different ways.
But I think Piotr is saying that industrial DAC, using fans and chemical substrates to bond with CO2 in the air, doesn’t actually restore CO2 to its undiluted state. It bonds the CO2 in the air to a substrate chemical, so the entire Gibbs free energy argument doesn’t apply, and the energy required is really just a summation of the processes involved. And this applies to other forms of extracting CO2 from the air. I hope I’m interpreting Piotr correctly.
In Re to Nigelj, 19 Jan 2026 at 2:44 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844090
Hallo Nigel,
Two weeks ago, on 6 Jan 2026 at 12:51 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843544 ,
I posted a quite detailed analysis of enhanced rock weathering (ERW), which is, besides tree planting, basically the only carbon dioxide removal (CDR) process that does not rely on artificial extraction of CO2 from the atmosphere in neat form (direct air capture, DAC).
I tried to clarify that for all DAC processes, whether chemical or purely physical, 19 kJ/mol CO2 (the opposite of the Gibbs energy of diluting neat CO2 with CO2-free air to the present atmospheric CO2 concentration of about 430 ppm) applies as the lower limit for the energy necessary for such an extraction.
I did so repeatedly and in various formulations because Piotr seemed to insist on the opinion that for DAC processes based on a chemical extraction step, this lower limit does not apply.
See e.g. his reply of 11 Jan 2026 at 6:23 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843774 ,
asserting
“Most (all?) DAC technologies do not preconcentrate ambient CO2 to pure CO2 before running it over the absorbent, hence energy of (reverse) mixing is irrelevant to them – it’s the energy of chemical reaction that matters – and this would be different for different absorbents (as it’s dependent on the affinity of absorbent to CO2).”
I repeatedly tried to explain that although this assertion is perfectly correct, the conclusions drawn by Piotr therefrom, summarized in his sentence
“You can’t disparage DAC methods by assigning to them the high thermodynamic cost of a process they… don’t use.”
are not.
It is because
(i) all these DAC techniques exploiting spontaneous CO2 sorption have their own thermodynamic limits that are significantly HIGHER than the lower limit derived for CO2 dilution with air, and thus require (significantly) MORE energy than 19 kJ/mol of extracted CO2, and
(ii) in all DAC methods, the major burden is not the thermodynamic limit on the necessary energy, but rather the costs incurred by the necessity of artificially handling huge amounts of air, which is unavoidable at the desired throughput of the process.
Finally, let me comment on your last paragraph, in which you seem to suppose that there are DAC processes in which the sorbent saturated with absorbed CO2 is not recycled but disposed of (pardon, “stored”). If you indeed think so, I am pretty sure that you are wrong (and I do not believe that Piotr made the same mistake).
It is because in such a hypothetical DAC process,
a) the most relevant kinetic aspect (ii) would apply anyway, and
b) apart from the alkaline rocks discussed in the above-mentioned post on ERW, you do not have ANY other sorbent available.
Please note that if you would like to use e.g. CaO or NaOH as the respective “disposable” CO2 sorbents, you will first have to manufacture them from available raw materials. There is no active CO2 sorbent on the Earth’s surface that you could simply mine from some natural deposit – precisely because we have CO2 in the atmosphere and everywhere in the environment. And as soon as you start manufacturing the respective chemical agents artificially (e.g. by thermal splitting of CaCO3 into CaO and CO2, or by electrolysis of an aqueous NaCl solution into NaOH, H2 and Cl2), the disputed thermodynamic limit of 19 kJ/mol applies again, because the required chemical affinity must (significantly) exceed it for the sorption process to run spontaneously.
For the above reasons, I am extremely sceptical towards the various assertions that any DAC technology (and likely any thinkable CDR method other than ERW and tree planting) could become as cheap as 50 USD per 1 t of extracted CO2.
Am I now clear?
Greetings
Tomáš
MA Rodger is factually wrong and misrepresenting Hansen’s position (and my own) again. Let me break it down clearly:
Hansen’s “acid test” is about the spike itself through 2023-2024 — the abrupt, physically meaningful warming and its hemispheric/global distribution. It is not about whether extreme anomalies persist indefinitely. The spike happened in 2023–24, so the test is valid.
Reality check: 2023 spiked well above the expected background warming (~0.5°C above 2022 expectations) without a strong El Niño, just like 2025 remains elevated (~0.3°C above expectations). Both are clear, measurable deviations. You cannot claim that 2023 was “Bananas!!” but 2025 somehow invalidates the event. That’s logically incoherent.
In the middle came the record 2024 temperatures, bookended by the 2023 and 2025 non-El Niño years at the same GMST.
MA Rodger is denying observable reality by insisting that the absence in 2025 of extreme anomalies as large as 2024’s somehow negates what happened. That’s common-sense-defying and irrelevant to the physical phenomenon.
His ideological bias toward discrediting anything Hansen says has been persistent for 15+ years. That might explain the intensity of his confusions, his pushback, and his lack of engagement with the actual data – what it means and what Hansen actually says about it.
Bottom line: 2023’s GMST spike is real, measurable, and physically significant. 2025’s temperatures do not contradict it. They are more of the same. Anyone claiming otherwise is ignoring data and logic.
Data or whatever you call yourself,
You really have lost the plot here. A reply setting out all you have got badly wrong is a waste of my time and yours.
MAR: indeed. Meanwhile, he appears to be unaware that Dr. Hansen himself would not support his embrace of conflict über alles – fighting with each other rather than doing something to improve our prospects for the future and overcome the liars and bullies whose extremist views endanger us all.
Meanwhile, year-over-year prices for Li-Ion battery storage (stationary down approximately 40%, all types down approximately 8%) continue to fall:
https://bsky.app/profile/mzjacobson.bsky.social/post/3mb5p2qjf5c2i
Direct link to article below:
https://www.ess-news.com/2025/12/09/bnef-lithium-ion-battery-pack-prices-fall-to-108-kwh-stationary-storage-becomes-lowest-price-segment/
Another step away from the outdated polluting and destructive fossil fuel industry and the problematic and expensive nuclear fuel industry, as their time gradually fades.
So, Trump welcomes the New Year by bombing Venezuela and abducting (and of course by implication deposing) Maduro. No US personnel were killed; no word on Venezuelan casualties. I suppose they aren’t considered to matter.
Well, that’s one way to raise oil prices in the short term, at least.
Meanwhile, storage is now drastically cheaper than even four years ago, despite some economic headwinds offsetting the declining price trend. BNEF puts EV batteries globally at $99/kWh (for the second year now), and stationary storage at just $70/kWh.
https://www.ess-news.com/2025/12/09/bnef-lithium-ion-battery-pack-prices-fall-to-108-kwh-stationary-storage-becomes-lowest-price-segment/
Nothing like fighting to hijack yet more of a rapidly failing and highly dangerous technology. So much winning!
Obama started the ball rolling with EO 13692. Then, during his first term, Trump put a bounty on Maduro for $15,000,000. Bribem increased that bounty to $25,000,000. OH YES, Bribem did it:
https://www.snopes.com/fact-check/biden-bounty-maduro/
So, Trump just continued what Obama and Bribem started.
The oil price seems stable so far at ~$58.00/barrel. It’s been about the same for quite a while.
https://www.marketwatch.com/
US oil companies developed their oil infrastructure, so it rightly belongs to us.
https://www.opb.org/article/2026/01/04/five-things-to-know-about-oil-in-venezuela/
Quote: “U.S. oil companies like Chevron began drilling in Venezuela about one hundred years ago and played a key role in developing the country’s oil sector.”
Don’t build a house near a grid-scale battery storage facility:
https://www.zerohedge.com/energy/unreported-story-grid-scale-battery-fires
Trump is going to save us. His company is at the leading edge of science and engineering:
https://world-nuclear-news.org/articles/trump-media-announces-merger-with-fusion-firm-tae-technologies
The future looks bright!
#WINNING
KiA: “Bribem”
Your children must be so proud.
KIA: Bribem
BPL: Don’t spread lies.
https://www.youtube.com/watch?v=FDHakBGGwyk
Wink, wink.
The caption under the video says: “Back in 2018, President Joe Biden boasted about using a billion-dollar loan agreement with Ukraine as leverage to get a prosecutor fired as he was investigating corruption at Burisma, the Ukrainian energy company where Hunter Biden was a director.”
Parts of this caption are false. Biden did try to get rid of the prosecutor, but it had nothing to do with Hunter Biden. Biden was acting under bipartisan policy to get rid of the prosecutor because he WASN’T acting to stop corruption. There’s nothing in the video about Hunter Biden, and there’s no evidence the prosecutor was investigating Burisma. Biden was also cleared of any wrongdoing in several REPUBLICAN investigations. Refer:
https://en.wikipedia.org/wiki/Biden%E2%80%93Ukraine_conspiracy_theory
KIA is a gullible fool.
Do your children know the extent to which you’ve sold them out?
https://www.newyorker.com/cartoon/a16995
Our Know It All Sophist’s choice: Waves hand, “Take them both. I just need the attention!”
https://revjrknott.blogspot.com/2018/07/a-few-favorite-cartoons-about-god.html
Creator’s remorse: What the hell was I thinking
and a few other good ones!
UAH has reported for December, with a global TLT anomaly of +0.30ºC, down on November’s +0.43ºC and indeed the lowest monthly anomaly since June 2023.
This puts December 2025 as the 6th warmest December in the UAH TLT record, behind 2023 (+0.74ºC), 2024 (+0.61ºC), 2019 (+0.43ºC), 2015 (+0.35ºC) & 2017 (+0.31ºC), with the rest of the top-ten warmest Decembers running 7th 2003 (+0.26ºC), 1987 (+0.25ºC), 2022 (+0.22ºC) & 2016 (+0.16ºC).
So there’s a lot of wobble in that 6th-place warmest Dec ranking.
Ignoring the wobbles, these UAH TLT numbers haven’t seen much of a drop since the start of the year with the NH showing no significant drop since May 2025 and the SH pretty flat since Jan 2025.
With the cool start to 2023, this puts 2025 warmer than 2023 and the 2nd warmest year on the UAH TLT record.
This annual ranking now runs:-
2024 … +0.77ºC
2025 … +0.47ºC
2023 … +0.43ºC
2016 … +0.39ºC
1998 … +0.35ºC
2020 … +0.35ºC
2019 … +0.30ºC
The RSS Browser Tool has also published to December for TLT etc. (NOAA STAR is still stuck in July, presumably having been ‘Trumped’.)
RSS TLT numbers have not been greatly different to UAH or STAR since 2005 so no great difference between the RSS & UAH rankings by year, except for 1998 which is well out of the top RSS rankings. (The big difference with RSS, and why their Browser Tool shows a warming rate of +0.23ºC/decade global warming while UAH & STAR only manage +0.16ºC/decade, is the calibration of the satellite data thro’ 1990-2005 when RSS somehow found an extra +0.35ºC of warming.)
RSS TLT Global Warmest Year Rankings (UAH in brackets)
1st ….. 2024 … +1.22ºC … … (2024)
2nd … 2025 … +0.93ºC … … (2025)
3rd …. 2023 … +0.91ºC … … (2023)
4th …. 2020 … +0.82ºC … … (2016)
5th …. 2016 … +0.81ºC … … (1998)
6th …. 2019 … +0.76ºC … … (2020)
7th …. 2017 … +0.69ºC … … (2019)
[Response: Go here for the NOAA STAR data. – gavin]
Just to be clear, my comments here were never about whether CMIP models (or others) can reproduce historical GMST trends reasonably well, nor whether CO₂ causes warming — those points are well established and not in dispute.
The issue raised months ago concerned interpretation, not detection: specifically, how ensemble spread, tuning choices, and compensating errors across interacting processes (clouds, aerosols, ocean heat uptake, circulation) limit the extent to which apparent GMST “skill” can be generalized to broader claims about climate system adequacy, scenario timing, or confidence in higher-order outcomes.
Narrowing the discussion to historical GMST alone while setting aside these structural issues sidesteps the question rather than resolving it.
Concerns of this kind are well known within the field. For example, Palmer & Stevens (2019) discuss the implications of model structural uncertainty and equifinality for interpretation and decision-relevance, independent of surface temperature skill:
https://www.pnas.org/doi/full/10.1073/pnas.1906691116
I don’t see much value in revisiting this further here, but I wanted to clarify the scope of the discussion for the record.
Multi_troll, sock-puppet version: “Data”: “ DAC is not the villain. Net zero is not the villain. Carbon budgets are not the villain. They are symptoms of a deeper refusal: to accept irreversibility”
Doomers and Deniers – similar in methods, similar in their fruits:
– Deniers try to discredit the climate science not telling them what they want to hear, Doomers try to discredit the climate science not telling them what they want to hear,
– Deniers try to discredit the political/economic mechanisms and the technologies for reducing CO2 emissions using the “All-or-Nothing” fallacy (as if the world at 450 ppm were no worse than the world at 900 ppm). Doomers dismiss renewables, DAC, net zero etc. on the same All-or-Nothing fallacy.
– Deniers use their All-or-Nothing narrative to sow apathy by portraying efforts to do anything about climate change as hopeless.
Doomers, using the same All-or-Nothing fallacy, sow apathy by portraying political/economic mechanisms and technologies to reduce CO2 as not only hopeless, but in fact “symptoms of a refusal to accept irreversibility”. And if AGW is irreversible, what’s the point of trying?
Les extrêmes se touchent (the extremes meet).
My original comment, in full and in context, without the distortions, misrepresentations, or baseless mischaracterizations:
Data says
31 Dec 2025 at 8:50 PM
I checked ATTP’s discussion. Mostly circular.
Most discussions around DAC, net zero, and carbon budgets sound like a lot of technical back-and-forth, but beneath the noise there’s a deeper structural problem that few acknowledge. The core failure everyone is skating around is this: climate mitigation treats a living, historical, relational Earth as if it were a controllable machine — and mistakes abstractions for reality.
DAC is not the villain. Net zero is not the villain. Carbon budgets are not the villain. They are symptoms of a deeper refusal: to accept irreversibility, to accept inheritance, to accept that the scale and power of civilization itself are the problem. That’s why techno-fixes proliferate, limits are endlessly deferred, and language becomes euphemistic and circular.
All the debate about energy requirements, efficiencies, or LULUCF ratios is still operating inside the machine metaphor: inputs, outputs, efficiencies, substitutions. As Whitehead would say, they are mistaking abstractions for concrete reality. To put it sharply: you cannot budget the future of a system whose governing dynamics change as a function of its past. That is the dagger cutting through all these circular debates.
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843319
The AndThenThere’sPhysics post for a proper context and background.
https://andthentheresphysics.wordpress.com/2025/12/17/direct-air-capture/
The RC DAC thread started by nigelj if you’re interested
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843268
Multitroll Data: My original comment, in full and in context, without the distortions, misrepresentations, or baseless mischaracterizations
That Dharma lady doth protest too much.
Put your money where your mouth is. Prove how quoting the rest of your self-satisfied logorrhea would have dramatically CHANGED the meaning of the words I quoted:
– how does your lecturing them on their “refusal to accept irreversibility” of climate change NOT imply that CO2 mitigation technologies, policies and science are not only pointless, but a sign of delusion (as in: “refusal: to accept irreversibility”)?
– how is your lecturing people trying to mitigate CO2 about “symptoms of a deeper refusal: to accept irreversibility” NOT sowing apathy, NOT telling people there is no point in doing anything, because climate change is “irreversible”?
– how is your attacking mitigation and net-zero targets NOT benefiting the oil and gas corporations and oligarchs, and NOT defending the interests of Russia, Saudi Arabia and Iran – whose economies, and therefore the survival of their regimes and their ability to wage wars and/or sponsor terrorism, RELY on the world’s apathy toward reducing the demand for fossil fuels (if climate change is “irreversible”, then why bother)?
By your fruits – not by your accusations of “distortions, misrepresentations, or baseless mischaracterizations”, which you are incapable of proving.
Incidentally, the characteristics with which you try to discredit your opponents fit your own mass production on RC to a tee. Hence – a clinical example of projection.
Reply to Piotr et al
Everyone chooses what they will believe. My role is not to hold anyone’s hand or prove them wrong.
As Brandolini’s Law reminds us, the energy needed to refute nonsense is an order of magnitude greater than that needed to produce it.
I’ve said what I needed to say. The rest is up to others to discern.
Useful ref : https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
Data said: “It’s not my role to hold their hand, say it’s OK or prove they’re wrong.”
Then why do you spend so much time on this website trying to prove people wrong?
Data: “Reply to Piotr et al Everyone chooses what they will believe”
and this in your brain proves your accusations of “ distortions, misrepresentations, or baseless mischaracterizations” – how?
Since you declared yourself “neurodivergent” – I provided you points, proving which would have shown that you are not all bark and no bite. So what happened?
=== P. 7 Jan =======
” That Dharma lady doth protest too much.
Put your money where your mouth is. Prove how quoting the rest of your self-satisfied logorrhea would have dramatically CHANGED the meaning of your words I quoted:
– how does your lecturing them on their “refusal to accept irreversibility” of climate change NOT imply that CO2 mitigation technologies, policies and science are not only pointless, but a sign of delusion (as in: “refusal: to accept irreversibility”)?
– how is your lecturing people trying to mitigate CO2 about “symptoms of a deeper refusal: to accept irreversibility” NOT sowing apathy, NOT telling people there is no point in doing anything, because climate change is “irreversible”?
– how is your attacking mitigation and net zero targets NOT benefitting oil and gas corporations and oligarchs, and not defending the interests of Russia, Saudi Arabia and Iran – whose economies, and therefore the survival of their regimes and their ability to wage wars and/or sponsor terrorism, RELY on the world’s apathy toward reducing the demand for fossil fuels (if climate change is “irreversible”, then why bother?)
Incidentally – the characteristics with which you try to discredit your opponents (“distortions, misrepresentations, or baseless mischaracterizations”) fit your mass production on RC to a tee. Hence – a clinical example of projection.”
==================== end of quote =======================
Nigelj says
8 Jan 2026 at 1:18 PM
Unfortunately, you are directing that question to the wrong person. Wrong target, yet again. Persisting in it doesn’t improve the argument nor your subjective opinion.
Data says @8 Jan 2026 at 8:17 PM “Unfortunately, you are directing that question to the wrong person. Wrong target, yet again. Persisting in it doesn’t improve the argument nor your subjective opinion.”
I think I’m directing the question at exactly the right person, as follows. You claimed “It’s not my role to hold their hand, say it’s OK or prove they’re wrong.” My response was: “why then do you spend so much time on this website trying to prove people wrong?” (something you did not deny doing in your reply). Nobody else is making the claim “it’s not my role to hold their hand, say it’s OK or prove they’re wrong,” so I think I replied to exactly the right person.
So do you think you can answer this time, or do we get yet more evasions and deflections?
Reply to Nigelj
9 Jan 2026 at 2:14 PM
I concede it’s a possibility you can’t help yourself and don’t know what you’re doing. But that is not my responsibility. Everyone ultimately chooses what they believe. It is not my role to hold anyone’s hand, reassure them, or prove them wrong. I’ve said what I needed to say.
Reply to Barton Paul Levenson
9 Jan 2026 at 9:28 AM
BPL: This is the third time I’ve read this in 2 areas. Straw man, and bloody condescending as well.
Are you sure about that? Maybe have another look and reconsider what you think you saw.
DATA,
I am not familiar with your posting, so do not know if you and I are aligned in our thinking or not, but I do know two things:
1. The specific response you gave to the fools in this thread is an absolutely accurate view.
2. If you are being harassed by the Peanut Gallery, you are either a denier or speaking truth to power to these fools. I.e., there’s a well-better-than-even chance you’re promoting sense into a vacuum of ideology and bullying bullshit.
I am not stating support for your posting generally because I really do not know, but in this thread? The trolls are the ones who have thrown that word around for 10-20 years to intimidate anyone who doesn’t suckle at the IPCC-as-gold-standard teat. They aren’t objective enough to realize the IPCC reports are nothing more than glorified literature reviews.
Whatever you are doing to raise their hackles, it’s likely far more intelligent than anything these fools produce. For the record, they produce nothing, thus their defensiveness and aggression bred by insecurity.
Killian says: “2. If you are being harassed by the Peanut Gallery, you are either a denier or speaking truth to power to these fools. I.e., there’s a well-better-than-even chance you’re promoting sense into a vacuum of ideology and bullying bullshit.”
Killian can’t stand it when people politely criticise his views, so he claims he’s being bullied. The irony is that Killian is the single biggest bully on this website. A man who calls people fools (see above) and for years harassed me (and others), calling me an idiot and telling me to get lost and to shut up. Bullying is defined as relentless personal abuse and intimidation. Pick up a dictionary, Killian.
Killian: “They aren’t objective enough to realize the IPCC reports are nothing more than glorified literature reviews.”
Of course the IPCC do literature reviews, because that is their job. The IPCC look at all the literature out there, some of it conflicting, and make a decision on which literature is the most convincing. They do this carefully, using teams of volunteer experts in their fields. So we end up with the best possible review of the science. And it is the result of a team effort and a consensus procedure, rather than putting one person in charge of deciding what is right, with all the risks that would have. It’s not possible anyway; there is too much literature for one person to review.
And what is Killian’s better alternative?
Has anyone gotten Killian to share the papers they claimed showed a 5% chance of human extinction from climate change? You know, the papers that likely don’t exist because they were made up. Hence why Killian never shared them no matter how many people asked.
https://www.realclimate.org/index.php/archives/2025/10/unforced-variations-oct-2025/#comment-840307
https://www.realclimate.org/index.php/archives/2025/10/unforced-variations-oct-2025/#comment-840324
Reply to Piotr et al #2
Whatever you say to them they come back with something else more ridiculous. I call it “the conspiracy loop”, and it’s usually a sign that you’re dealing with someone operating in bad faith or they’re a lost cause.
Or both. I’ve said what I needed to say. The rest is up to others to discern.
A useful ref : https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
Interested readers can see my original comment, in full and in context, without the distortions, misrepresentations, or baseless mischaracterizations:
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843319
Multi-troll “Data” Whatever you say to them they come back with something else more ridiculous
Yeah, expecting Multi-troll to prove his insinuations – RIDICULOUS!
=== Piotr 7 Jan =======
Put your money where your mouth is:
Prove how quoting the rest of your self-satisfied logorrhea – would have dramatically CHANGED the meaning of YOUR words that I did quote:
1. how is your lecturing of the people involved in renewables, DAC, net zero and in studying climate – on their “refusal to accept irreversibility” of climate change – NOT YOU implying that CO2 mitigation is not only pointless, but a sign of delusion (“refusal: to accept irreversibility”)?
2. how is your pontificating that climate change is “irreversible” NOT sowing apathy (what’s the point of doing anything, if the damage to the climate is “irreversible”?)
3. how is YOUR attacking mitigation and net zero targets NOT benefitting oil and gas corporations and oligarchs?
4. how is YOUR attacking mitigation and net zero targets NOT defending the interests of Russia, Saudi Arabia and Iran – whose economy, and therefore survival of their regimes and ability to wage wars and/or sponsor terrorism – RELIES on the world’s apathy toward reducing the demand for fossil fuels, apathy promoted by your claim that nothing can be done because climate change is “irreversible”?)
===========================================
You are and will always be an ass.
Killian says: “2. If you (Data) are being harassed by the Peanut Gallery, you are either a denier or speaking truth to power to these fools. I.e., there’s a well-better-than-even chance you’re promoting sense into a vacuum of ideology and bullying bullshit.”
Killian can’t stand it when people politely criticise his views, so he claims he’s being bullied. The irony is that Killian is the single biggest bully on this website. A man who calls people fools (see above) and for years harassed me (and others), calling me an idiot and telling me to get lost and to shut up. Bullying is defined as relentless personal abuse and intimidation. Pick up a dictionary, Killian.
Killian: “They aren’t objective enough to realize the IPCC reports are nothing more than glorified literature reviews.”
Of course the IPCC do literature reviews, because that is their job. The IPCC look at all the literature out there, some of it conflicting, and make a decision on which literature is the most convincing. They do this carefully, using teams of volunteer experts in their fields. So we end up with the best possible review of the science. And it is the result of a team effort and a consensus procedure, rather than putting one person in charge of deciding what is right, with all the risks that would have. It’s not possible anyway, because there is too much literature for one person to review.
And what is Killian’s better alternative? Since he claims he’s so “productive”?
I’ve noticed that this was discussed a couple of years ago, and the warming *has* been a bit higher than the models’ projections, and I was wondering if this is a partial explanation? The water vapor doesn’t usually get to the stratosphere, and this didn’t dissipate as fast as a normal sort of pulse in the lower atmosphere would.
https://www.nasa.gov/earth/tonga-eruption-blasted-unprecedented-amount-of-water-into-stratosphere/
Might it be worth a note on the temperature records?
BJC: This was discussed here over time after the event. Simple answer: partly. Sorry, I am too lazy to look up links for you.
Yes, that eruption is the most likely cause of the rapid increase in measured global temperatures and it was predicted by several studies. From Wikipedia:
“Large volcanic eruptions can inject large amounts of sulfur dioxide into the stratosphere, causing the formation of aerosol layers that reflect sunlight and can cause a cooling of the climate. In contrast, during the Hunga Tonga–Hunga Haʻapai eruption this sulfur was accompanied by large amounts of water vapour, which by acting as a greenhouse gas overrode the aerosol effect and caused a net warming of the climate system. One study estimated a 7% increase in the probability that global warming will exceed 1.5 °C (2.7 °F) in at least one of the next five years, although greenhouse gas emissions and climate policy to mitigate them remain the major determinant of this risk. Another study estimated that the water vapor will stay in the stratosphere for up to eight years, and influence winter weather in both hemispheres. More recent studies have indicated that the eruption had a slight cooling effect.”
https://en.wikipedia.org/wiki/2022_Hunga_Tonga%E2%80%93Hunga_Ha%CA%BBapai_eruption_and_tsunami#Climate_and_atmospheric_impact
It likely isn’t.
“The record-high global surface temperatures in 2023/2024 were not due to the Hunga eruption.”
https://juser.fz-juelich.de/record/1049154/files/Hunga_APARC_Report_full.pdf
https://x.com/hausfath/status/2001756184554979837
https://x.com/AtomsksSanakan/status/1938001187611066784
Thanks to all of you.
I’ve pulled the report down for reference, but it seems roughly as I expected.
You’re welcome. And thanks for the question; it’s definitely a topic that’s worth exploring.
Powerplays, politics and panic – has BIG OIL wrestled back control? Dave Borlace, Just Have a Think, another useful review. Below, some quotes from transcript (qv: “more” along with sources).
https://www.youtube.com/watch?v=e6vQo4oE7L8
“And if the fossil fuel behemoths have their way, that demand will keep trundling on long after that. Of course, what those billionaire oligarchs, and arguably the IEA themselves, don’t appear to be factoring in here is the fact that on that trajectory there may not be a coherent global market to sell into anymore anyway, largely as a result of the damage those increased fossil emissions will have caused by then. But that is probably an entirely separate video’s-worth of ranting, so let’s not go there now, eh? So, what about renewables growth then?”
—
“perhaps the clearest commentary of all three reports on this Primary Energy Fallacy thing that we’ve all been hearing so much about recently.
“While primary energy peaks and starts to drop off, they say, USEFUL energy consumption actually grows through to 2050. And they explain that that’s because today’s energy system is highly INEFFICIENT. “To deliver the energy services necessary to support economic development and wellbeing, we do not need to replace all primary energy … only the useful share that actually powers economic activity.”
““As renewables and electrification accelerate, losses fall dramatically. Electric motors are 3-4 times more efficient than combustion engines, and new heat pumps deliver 3-4 units of heat per unit of electricity. These technological shifts … fundamentally change the conversion 8:06 efficiency of the global energy system.””
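As a rough back-of-the-envelope check of the efficiency argument quoted above, here is a minimal sketch. All numbers are illustrative assumptions (loosely based on the quoted ~30% useful share for combustion and ~3x efficiency for motors/heat pumps), not figures from the reports:

```python
# Toy illustration (not from the reports): why useful energy can grow
# while primary energy falls. All numbers are illustrative assumptions.

def useful_energy(primary, efficiency):
    """Useful energy delivered from a given primary-energy input."""
    return primary * efficiency

# Fossil pathway: roughly 30% of primary energy ends up as useful energy.
fossil_primary = 100.0  # arbitrary units
fossil_useful = useful_energy(fossil_primary, 0.30)

# Electrified pathway: half the primary input, but ~3x the conversion efficiency
# (electric motors vs combustion; heat pumps deliver ~3 units of heat per unit
# of electricity).
electric_primary = 50.0
electric_useful = useful_energy(electric_primary, 0.90)

print(round(fossil_useful, 6))    # 30.0 useful units from 100 primary
print(round(electric_useful, 6))  # 45.0 useful units from only 50 primary
```

So in this toy accounting, primary energy halves while useful energy delivered goes up by half, which is the shape of the claim being quoted.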
https://ocean2climate.org/2026/01/05/a-highway-of-heat-to-the-arctic-why-a-vital-ocean-current-is-losing-its-chill/
Another interesting article regarding the AMOC and how some of it isn’t losing its heat.
Paul Beckwith found it, but I’m just posting the link to the article and paper, and not his video.
If the Trump regime INTENDED to accelerate global warming with the GOAL of destroying the Earth’s biosphere and human civilization along with it at the earliest possible date, what would they be doing differently?
Not much.
Several recent comments have continued to frame CMIP outputs as forecasts of future GMST. Since that framing is incorrect, I want to clarify the record once, for readers — in this thread, and across others for months now.
In this respect, I agree with JCM’s recent point that similar outcomes do not imply identical representations or understanding. That distinction matters — but it has been repeatedly blurred in this thread.
The central misrepresentation originates with Atomsk’s repeated claim that the debate concerns the “skill at projecting future global warming and iTCR.” This has already been clarified in-thread, but it bears restating clearly because the same misframing keeps being repeated.
CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error.
Recasting methodological critique as denial of “predictive skill” therefore misrepresents both the models and what was actually being argued. CMIP does not make forecasts; it explores conditional responses. Claiming forecast skill on that basis — and then accusing critics of denying it — is a strawman built on a false premise.
That is why extended discussions of Planck feedbacks, Callendar vs. Manabe physics, or water-vapour dominance do not address the issue. Those are defensive distractions. They do not repair the category error.
Similarly, the recent rhetorical piling-on — including Piotr’s framing — adds volume but no substance. It is a textbook illustration of Brandolini’s Law: producing confusion is far easier than correcting it. Repetition does not convert misframing into validity.
This Ref explains it all: https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
The central false claim being advanced — repeatedly, and now despite correction — is this: “The question was skill at projecting future global warming and iTCR.”
That statement is not merely debatable. It is categorically false. CMIP outputs are conditional projections, not forecasts of future GMST. Treating them as forecasts, and then claiming “skill” in the sense of predictive success, is a category error.
This distinction is critical for interpreting claims about model skill and for understanding the broader rhetorical patterns we see in these debates.
Why this is not a matter of opinion. CMIP / AR6 with SSPs do not generate predictions of what will happen. They produce conditional projections, not forecasts.
A forecast requires:
– a probability distribution over outcomes
– likelihoods attached to drivers
– evaluation against out-of-sample future realizations
CMIP / AR6 has none of these:
– SSPs are storylines, not likelihood-weighted futures
– ensemble spread does not correspond to real-world probabilities
– hindcast agreement does not convert conditional simulations into forecasts
This is precisely why the IPCC itself uses the word projection, not prediction.
Why “historical skill” is not decisive.
Once this distinction is respected, the rhetoric collapses.
Agreement with historical GMST does not uniquely validate:
– circulation
– clouds
– hydrology
– land–atmosphere coupling
– extremes
– degradation impacts
These are precisely the domains where models are acknowledged to struggle, and where equifinality — multiple structures producing similar outputs — dominates.
Equifinality is the buried landmine here:
– similar GMST trajectories can arise from different parameterisations
– compensating errors and structural calibration blur causal attribution
– tuning vs. physics becomes inseparable
– convergence stops being probative
Outcome coincidence ≠ correct system understanding.
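The equifinality point above can be made concrete with a toy numerical sketch. This is a hypothetical two-parameter linear response, nothing like a real GCM, and all parameter values are made up purely for illustration:

```python
# Toy equifinality demo (hypothetical linear model, illustrative parameters only):
# two different (sensitivity, aerosol_scale) pairs produce essentially the same
# temperature trajectory, so matching the trajectory alone cannot tell them apart.

def toy_gmst(years, sensitivity, aerosol_scale, ghg_per_year=0.05):
    """T(t) = sensitivity * F_total(t), with aerosol forcing assumed
    proportional to GHG forcing: F_total = F_ghg(t) * (1 + aerosol_scale)."""
    return [sensitivity * ghg_per_year * t * (1.0 + aerosol_scale)
            for t in range(years)]

run_a = toy_gmst(50, sensitivity=0.8, aerosol_scale=-0.2)  # low sensitivity, weak aerosol offset
run_b = toy_gmst(50, sensitivity=1.6, aerosol_scale=-0.6)  # high sensitivity, strong aerosol offset

# 0.8 * (1 - 0.2) and 1.6 * (1 - 0.6) both equal 0.64, so the two
# trajectories coincide despite very different parameter choices.
print(all(abs(a - b) < 1e-9 for a, b in zip(run_a, run_b)))  # True
```

In this toy, compensating parameter errors make the output identical, which is the sense in which trajectory agreement alone is not probative of the underlying parameterisation.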
This is why the cannonball analogy fails. Climate models are not asymptotic limits of an exact theory. Newtonian mechanics is. The analogy is invalid.
What’s actually happening
What we are seeing is not inquiry, but performative epistemology:
– authority asserted rather than uncertainty resolved
– boundary enforcement instead of explanation
– certainty used as a social signal
This pattern is not confined to one poster. It is reinforced by several regulars who have, over months, converged on the same rhetorical substitution: methodological critique → denial → moralised dismissal.
That may maintain group cohesion, but it does not advance understanding.
Everyone ultimately chooses what they believe.
It is not my role to hold anyone’s hand, reassure them, or prove them wrong.
This comment is simply to put the record straight — in black and white — and to draw a line under a framing that has now been repeated long past the point of honest disagreement.
D: It is not my role to hold anyone’s hand, reassure them, or prove them wrong.
BPL: This is the third time I’ve read this in 2 areas. Straw man, and bloody condescending as well.
Yup. It’s particularly tedious because the sockpuppet failed on basic points that have been explained on RealClimate for years. At least other folks have the integrity and intellect needed to learn. The RealClimate post below, for example, went over generation of GMST forecasts. The modeling had three different scenarios, with each scenario having its own conditional projection for ‘if this forcing trend, then this GMST trend’. Observed forcing was between scenarios B and C, so the forecasted GMST trend was between scenarios B and C. That GMST forecast matched observed warming, confirming model skill. I have no illusions the sockpuppet account will ever honestly admit how this shows they’re wrong, any more than I have illusions that most AGW denialists will admit when experts have shown they’re wrong.
https://www.realclimate.org/images/h88_proj_vs_real6.png
https://www.realclimate.org/index.php/archives/2018/06/30-years-after-hansens-testimony/
I made a single comment which Atomsk’s Sanakan has replied to 3 times via proxies, when he is actually addressing me. 3 posts in under an hour to reply to my one comment? This is disruptive trolling 101. His comments bear no relationship to anything I have said, or to any reference I have shared here. IOW, it’s verbose nonsense.
I will make one reply here to these 3 posts by Atomsk’s Sanakan:
1) 10 Jan 2026 at 12:30 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843737
2) 10 Jan 2026 at 12:40 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843738
3) 10 Jan 2026 at 1:29 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843739
My comment of 8 Jan 2026 at 7:53 PM above is a double-edged sword. It was meant to kill two birds with one stone by showing what the definitions are, leaving an opportunity to add in more factual data.
Comments like this one: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error.” can simultaneously address two things at once.
1) The definitions surrounding the AR6 Table SPM.1, and 2) the unfounded, unscientific Temperature Change Predictions given by Atomsk’s Sanakan back on Dec 23 2025.
This bout of disinformation and incessant trolling harassment by Atomsk’s Sanakan all began with Chuck’s comment of 20 Dec 2025 at 7:56 PM,
with predictions of 3 °C by 2050 and 5 °C by the end of the century!
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843026
His link was to “The German Physical Society (DPG) and the German Meteorological Society (DMG) jointly issued a dire warning, stating if the current trends continue, global warming could reach 3°C above pre-industrial levels by about 2050, and up to 5°C by 2100.“
https://www.responsiblealpha.com/post/scientists-3-c-by-2050-5-c-by-2100
They backed up their predictions with a published article and a science research paper. Agree or disagree, that is basic science being done.
Atomsk’s Sanakan immediately pushed back presenting his own Predictions
I will quote him in full, verbatim:
Atomsk’s Sanakan says 23 Dec 2025 at 7:23 AM
Those are incorrect claims from the German Physical Society (DPG) and the German Meteorological Society (DMG). They’re cherry-picking a small number of CMIP6 models that are known to overestimate warming, including warming up to 2025. If those models are too warm in 2025, then there’s good reason to think they’ll also be too warm for 2050. The observed warming trend, better observationally-constrained CMIP6 models, CMIP5 models, the IPCC, etc. instead show we’re on pace for ~2°C by 2045-2050, 3°C by 2075-2090 (2060 at the absolute earliest), and ~3.5°C by the end of the century. [My emphasis]
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843058
Atomsk’s Sanakan has nothing to support his “claim” that their Predictions are incorrect.
Points to note:
1) The German Physical Society (DPG) and the German Meteorological Society (DMG) are not “cherry-picking a small number of CMIP6 models.” That is plainly false. Atomsk’s Sanakan has provided no evidence to support such an accusation against them.
2) It is a high-quality, credible, detailed climate science research paper under the HafenCity Universität Hamburg and the two institutions already mentioned. It has compelling supporting references. This publication is not equivalent to a throwaway comment by an internet blogger, non-expert commenter or internet troll.
3) Their conclusions are compelling: “Human-induced global warming poses a real threat to the survival of human civilization.”
4) The “observed warming trend” is not in fact constrained by existing “CMIP6 models, CMIP5 models, the IPCC, etc.”, nor vice-versa. That is false.
5) There are no “observationally-constrained CMIP6 models, CMIP5 models, the IPCC, etc.” showing anything of the kind regarding temperatures.
6) The numbers in Atomsk’s Sanakan’s Predictions are his own, plucked out of thin air apparently. He offers no supporting evidence of how he arrived at these figures, unlike everyone else.
I put a number of challenges to Atomsk’s Sanakan, to which he has failed to supply any proof or supporting science for his many distorted claims or his GMST Anomaly Predictions.
Data says 23 Dec 2025 at 11:41 PM
– You need to prove that allegation is true, science based, then show your work and that list of “cherry-picked CMIP6 models”
– no one needs a model to see what the warming [is] up to 2025. The observational Data is available.
– RE If those models are too warm in 2025, then there’s good reason to think they’ll also be too warm for 2050. This is just nonsense. Prove that is true, show your work and reference everything why it matters.
– RE Listing his predictions as shown,
Data says: Prove that is true, show your work and reference everything.
– I summarized my comment as: “Predictions of ~2°C by 2045-2050 [etc] is blatantly false and unfounded. Peer review doesn’t confirm a paper’s analysis and findings are in fact correct. Scattered paintballs of ‘claims’ on a wall [posted by an Atomsk’s Sanakan] isn’t coherent science, let alone valid evidence of anything.”
To date there has been silence on these questions, along with more incoherent trolling distractions in return. A flat refusal to show his “work”, with no scientific basis for his Predictions.
Original comment in full: https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843073
Re: “Predictions of ~2°C by 2045-2050 [etc] is blatantly false and unfounded”
From the same sockpuppet account that tried to disinform people on whether Forster 2025 was peer-reviewed.
That disinformation was given to avoid acknowledging the projection of 2°C by approximately 2048 in that peer-reviewed paper’s Climate Change Tracker.
Data: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error….CMIP does not make forecasts; it explores conditional responses. Claiming forecast skill on that basis — and then accusing critics of denying it — is a strawman built on a false premise…..etc,etc”
No. The terms forecast applied to future warming is just SHORTHAND or an abbreviation for “scenario conditioned simulations”. It’s implicitly acknowledged and understood that the forecasts are dependent on many variables and conditions. But if those conditions are met, then we can say a model made accurate forecasts, so it had skill. And so the CMIP5 models had skill. So no strawman. Whether the models had good underlying physics is mostly a separate issue.
Reply to Nigelj
Then you are assuming I was making a criticism of the CMIP6 Modelling / language? But why would you think that?
Because you referred to “CMIP outputs,” which is a general statement for all CMIP models. Could that be it?
About what Data said
10 Jan 2026 at 1:49 AM
Reply to Nigelj
Then you are assuming I was making a criticism of the CMIP6 Modelling / language? But why would you think that?
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843720
Ah yes, I see now. The bold distracted you terribly. I should have put what came before in Bold too, so you didn’t miss it. Let me correct that oversight now:
“The central misrepresentation originates with Atomsk’s repeated claim that the debate concerns the “skill at projecting future global warming and iTCR.” This has already been clarified in-thread, but it bears restating clearly because the same misframing keeps being repeated.”
Yes, it definitely bears repeating. This too, most probably:
“Several recent comments have continued to frame CMIP outputs as forecasts of future GMST. Since that framing is incorrect, I want to clarify the record once, for readers — in this thread, and across others for months now.”
Fixed. The only conclusion possible is there was no criticism of the CMIP6 Modelling / language by me.
Proving again the truth behind Brandolini’s Law, the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
See https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
And so much for my “I want to clarify the record once, for readers” being enough. It wasn’t.
I refer new readers to my original comment in full:
Data says 8 Jan 2026 at 7:53 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843658
Data: Your response doesn’t change the issue I raised as follows:
“Data: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error….CMIP does not make forecasts; it explores conditional responses. Claiming forecast skill on that basis — and then accusing critics of denying it — is a strawman built on a false premise…..etc,etc”
“No. The terms forecast applied to future warming is just SHORTHAND or an abbreviation for “scenario conditioned simulations”. It’s implicitly acknowledged and understood that the forecasts are dependent on many variables and conditions. But if those conditions are met, then we can say a model made accurate forecasts, so it had skill. And so the CMIP5 models had skill. So no strawman. Whether the models had good underlying physics is mostly a separate issue.”
Refer also to AS responses @ 10 Jan 2026 at 12:30 PM @10 Jan 2026 at 12:40 PM
You have made no substantive response to the points made by me or AS, which are roughly saying the same things. So you’re a game playing, evasive, time wasting troll.
It’s just the sockpuppet spouting evidence-free whining again because the published evidence and expert assessments shows they’re wrong. About as worthless as a non-expert flat Earther rambling online about geologists being wrong on Earth’s shape.
Anyway, experts rightly call these forecasts. There are different conditional projections/scenarios that implicitly say ‘if this amount of forcing, then this GMST trend’. That ratio, combined with observed forcing, gives you the GMST forecast. Hence why experts select the conditional projection whose projected forcing most closely matches observed forcing.
It’s no different than conditional projections/scenarios in other contexts. So, for instance ‘if you walk to work, then it will take 30 minutes to get there.’ And ‘if you drive to work, then it will take 5 minutes to get there.’ And so on. You then plug in the route you took to work as the antecedent for the conditional, and the consequent of the conditional gives your forecasted time for getting to work.
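The 'plug in the observed antecedent' step described above can be sketched in a few lines. The scenario names and every number here are hypothetical, chosen only to illustrate the selection logic, not taken from any model run:

```python
# Sketch of selecting a conditional projection (illustrative numbers only):
# each scenario is an 'if this forcing trend, then this GMST trend' pair;
# the scenario whose assumed forcing best matches observations supplies
# the forecast.

scenarios = {
    # name: (assumed forcing trend, projected GMST trend) -- hypothetical values
    "A": (0.45, 0.32),
    "B": (0.30, 0.22),
    "C": (0.15, 0.11),
}

def select_forecast(observed_forcing, scenarios):
    """Return the scenario whose assumed forcing is closest to what was observed,
    along with that scenario's projected GMST trend."""
    name = min(scenarios, key=lambda k: abs(scenarios[k][0] - observed_forcing))
    return name, scenarios[name][1]

name, gmst_trend = select_forecast(0.28, scenarios)
print(name, gmst_trend)  # scenario "B" is the closest match, so its trend is the forecast
```

The point of the sketch is only that the conditional projections themselves are fixed in advance; it is the observed forcing, supplied afterwards, that turns one of them into the operative forecast.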
Why are you engaging the sockpuppet, though, Nigelj? They lack both the integrity and intellect needed to accept these points, unlike the experts they whine about.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843443
These points on conditional projections and ‘if this forcing trend, then this GMST trend’ were already dumbed down multiple times for the sockpuppet account. But of course they’ll pretend otherwise; like denialists, they’re incapable of honestly admitting when evidence shows they’re wrong. For instance:
Reply to Nigelj
9 Jan 2026 at 2:02 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843698
After clearing up Nigelj’s error of flawed framing, in that “I was NOT making a criticism of the CMIP6 Modelling / language” at any time. I was raisng mutiple issues about how commenters here misrepresent those matters. Nigelj included.
Thankfully he is still stuck in these misconceptions and has conveniently responded to me again, making it even easier for me to walk through this door as planned.
Nigelj says 12 Jan 2026 at 3:04 PM
Data: Your response doesn’t change the issue I raised as follows:
Refer also to AS’s responses @ 10 Jan 2026 at 12:30 PM and @ 10 Jan 2026 at 12:40 PM.
You have made no substantive response to the points made by me or AS, which are roughly saying the same things. So you’re a game-playing, evasive, time-wasting troll.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843801
Like Atomsk’s Sanakan, he appears incapable of laying a finger on me. So let’s dig into his many errors and misrepresentations so far, shall we?
Moving on from: “Clarifying a persistent misframing here: CMIP / AR6 SSP outputs are conditional projections, not forecasts of GMST — historical agreement ≠ system validation; full explanation here: https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843658”
Nigelj alleges the following is true: “The terms forecast applied to future warming is just SHORTHAND or an abbreviation for ‘scenario conditioned simulations’.”
He fails to provide even one piece of evidence that this is true. It’s merely his personal “belief”. Or perhaps another deliberate attempt to defend the indefensible. I don’t know, but I challenge this assertion with a series of requests for evidence:
1) Can you show where climate scientists use such a shorthand? In their everyday “commentaries” and online articles that might be referenced back to here?
2) Can you show where online commentators use such a shorthand to mean specifically the AR6 “scenario conditioned simulations”?
3) Can you show me where you yourself, on a regular basis, have used the term “forecast” as a shorthand that meant “future warming” and/or as an abbreviation for “scenario conditioned simulations”?
4) I am not only singling out times where someone is addressing what “skill” various CMIP and other models might have in measuring future warming changes over time, but any occasion where either the CMIP or AR6 “scenario conditioned simulations” arise.
Let me be very clear here. I do not deny people use the word “forecast” about temperatures. I am specifically calling you out in the context in which you have chosen to argue against my points made in the post above, in particular in relation to CMIP and AR6 modelling activities and outputs. You must be able to show that the shorthand fits the context of what I was saying above, a context you have conceded fits. You appear to be suggesting that this “shorthand” is a common thing; otherwise why would you be raising it in the first place?
Because if it is not common, then what is your point? For it says absolutely nothing against my commentary above and the clear points I was making in it.
A few examples here and there are not sufficient. If what you claim is true then people everywhere will be using this “shorthand” — you are the one, Nigelj, saying “And so the CMPI5 models had skill. So no strawman.”
If the term is being used, who decided doing so is “legitimately accurate”? You? If, as I suspect, you find there is no general use of this term as a “shortcut”, this leaves the question: where do you get all these woolly ideas from? The nearby sheep?
Let’s see you prove this is correct using evidence. You may like to start by showing us whether such a shorthand is used in the AR6, in their SPM perhaps. Your moment to shine has arrived, Nigelj.
Good Luck.
AR6 WG1 SPM
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
Or in Zeke’s summary of AR6 WG1 (and Table SPM.1)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Or in the IPCC Core Concepts Central to this Special Report-Box SPM.1
https://www.ipcc.ch/sr15/chapter/spm/spm-core-concepts/
SR15 Summary for Policymakers
https://www.ipcc.ch/sr15/chapter/spm/
Anywhere that RC Authors use such a “shorthand” appropriately and meaningfully?
In any of Zeke’s articles?
https://www.carbonbrief.org/analysis-what-are-the-causes-of-recent-record-high-global-temperatures/
https://www.theclimatebrink.com/p/the-great-acceleration-debate
Maybe Hansen et al http://www.columbia.edu/~jeh1/mailings/
Forster et al. (IGCC 2024/2025) https://essd.copernicus.org/articles/17/2641/2025/ or discussions about it.
Grant Foster
https://tamino.wordpress.com/2025/05/28/how-fast-is-the-world-warming/
https://www.researchsquare.com/article/rs-6079807/v1
https://www.realclimate.org ?
https://www.realclimate.org/index.php/archives/2019/06/absence-and-evidence/
In the recent +3C German paper?
https://www.dpg-physik.de/veroeffentlichungen/publikationen/stellungnahmen-der-dpg/klima-energie/klimaaufruf/stellungnahme
Scientists: 3°C by 2050, 5°C by 2100
https://www.responsiblealpha.com/post/scientists-3-c-by-2050-5-c-by-2100
Beaulieu (2024)
A recent surge in global warming is not detectable yet
Claudie Beaulieu, Colin Gallagher, Rebecca Killick, Robert Lund & Xueheng Shi
https://arxiv.org/abs/2403.03388
Folks shouldn’t trust the sockpuppet on the links they spam, especially after they were caught pretending Forster 2025 wasn’t peer-reviewed.
They pretended that to avoid acknowledging the projection of 2°C by approximately 2048 in that paper’s Climate Change Tracker. They continue to willfully ignore that projection since it shows they were wrong.
Data, you are just splitting hairs / being pedantic.
Your original comment on the issue: “Data: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error….”
It’s such a hair-splitting view of things and so pedantic, and it just doesn’t make much sense, because the climate model simulations effectively forecast a variety of futures dependent on various conditions being met. Certain model outputs are then used as the basis for IPCC forecasts, based on likely / possible future conditions. This is hugely simplified but it’s what happens. The warming forecasts in the IPCC reports are not derived from examining the entrails of a goat or looking at the patterns of the tea leaves. They are based in large part on the results of climate modelling. Do you want me to prove that as well? Would you also like me to prove that night follows day?
And I’m not trying “to lay a finger on you”. You take things way too personally. I have several times questioned Piotr’s conclusions as well. I would criticise a lot more people’s comments, but I don’t like to spend too much time on such things. You post a lot of material and make a lot of controversial statements, so you just get a bit more on my radar.
Nigelj says
14 Jan 2026 at 2:29 PM
It’s such a hair-splitting view of things and so pedantic,
Science is like that. Maybe you’re on the wrong blog site, Nigel? [heat / kitchen]
Nigelj: I suggest you not amplify Data’s endlessness. Part of it is vanity, part of it is using RealClimate in place of forming his own blog, because he gets an audience here. Don’t feed him (her). It’s not enough to be right if the discussion bores the pants off the rest of us.
I forgot to mention, Data is not wrong about a lot of stuff (except most of the personal attacks, and that applies to his/her respondents as well). The problem is not the facts, it’s the volume and the emotion, which get in the way of actual understanding.
Exploiting RC in this way does a disservice to us all.
I disagree on that, Susan Anderson. Data regularly makes stuff up, such as claiming work is not peer-reviewed when it actually was, pretending a warming projection for 2048 does not exist when it actually does, etc. They’re basically here to disinform across their sockpuppet accounts and never honestly admit when they’re wrong.
Atomsk: If you read my quite short comments for content, you will realize that I’m mostly complaining about amplifying the endlessness.*
Without wading through it all and setting aside the personal attacks and labeling, any lurker will be scratching their head rather than taking in the facts.
* Using AI enhancements makes this so much worse, because it generates content at speed and volume, obeying the directions of whoever is using it without regard for truth or falsehood.
Associating Newton’s work with an unphysical construction obviously distorts his legacy. Suppose NASA tried to send a vessel to a distant destination using a version of Newton’s laws that happened to get the endpoint right but misrepresented or omitted the relations of velocity, momentum, mass, and gravity. Because the underlying structure is inconsistent, it would be incapable of computing the fuel budget or resolving trajectories. Nobody would rely on such a framework. Most importantly, it would never lead to Einstein’s later insights, which depend on the underlying symmetries that are revealed only by the consistent system Newton outlined. If the governing relations are unresolved, you cannot account for the energy (Joules) required to move the system.

By analogy, Callendar’s 1938 framework arrives at a plausible GMST at some time by disregarding the required energy accumulation. Through compensating omissions, his account fails to track the actual Joules moving through the system; it produces the right GMST for the wrong energy, which is essentially nonsense, since global warming is diagnostic of energy. In modern terms, his scheme is not a transient response but an equilibrium mapping. The transient climate response is defined by the time-dependent evolution of temperature under a persistent top-of-atmosphere (TOA) radiative imbalance, governed by the coupled dynamics of atmospheric adjustment and ocean thermal inertia. The temperature evolution is explicitly a transient process, where the system state is determined by the integral of TOA net radiation over time, not by instantaneous surface radiative closure. Treating his equilibrium GMST snapshots as equivalent to the modern concept of a transient state dragged by energy accumulation is categorically false and a historical fiction – incompatible accounts that realscience should clearly distinguish.
Again, none of what you’re saying prevents one from recognizing that Callendar in 1938 accurately projected iTCR, i.e. the ratio of global warming vs. forcing. If you don’t like iTCR in terms of forcing then the iTCR can be stated as the amount of global warming per doubling of CO2. Bringing up differences between Callendar’s modeling vs. that of others does nothing to change that.
“Several decades after the publication of Arrhenius’ study, Callendar (1938) made a renewed attempt to estimate the magnitude of surface temperature change in response to the increase in atmospheric carbon dioxide, using absorptivities of carbon dioxide and water vapour that are more realistic than that used by Arrhenius.”
https://tellusjournal.org/articles/10.1080/16000870.2019.1620078
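The restatement mentioned above (an implied TCR per unit forcing expressed instead as warming per doubling of CO2) is a one-line unit conversion. A sketch with hypothetical warming and forcing numbers; the only standard figure used is the canonical ~3.7 W/m² radiative forcing for a CO2 doubling:

```python
# Restating an implied TCR (warming per unit forcing) as warming per CO2
# doubling. The warming and forcing values below are hypothetical.

warming = 0.6    # K of warming over some period (hypothetical)
forcing = 1.2    # W/m^2 of forcing over the same period (hypothetical)
F_2X = 3.7       # W/m^2, canonical radiative forcing for a doubling of CO2

itcr_per_forcing = warming / forcing          # K per (W/m^2)
itcr_per_doubling = itcr_per_forcing * F_2X   # K per CO2 doubling

print(round(itcr_per_doubling, 2))  # 1.85
```

Whatever one thinks of the underlying physics, the two statements of iTCR differ only by this constant factor.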
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843744
I do feel that the distinction matters, because the extent to which GMST is dragged at a certain time under a forced TOA imbalance depends on the feedback parameter, lambda, and the ocean heat uptake coefficient, gamma. Although you repeat claims to the contrary, Callendar includes no analogue of a lambda or a gamma. As previously discussed, Callendar’s temperature evolution is the change required to maintain a surface radiation equilibrium; snapshots that have no connection to modern concepts of TCR.
It’s worth repeating that we have already defined skill as outcome agreement, which has no dependence on explanatory correctness. But outcome agreement between fundamentally different quantities, a TCR (in a state far from TOA energy balance) versus a surface radiation equilibrium snapshot, is meaningless. The only relation is that both are numbers representing a temperature anomaly.
Number-matching without regard to explanatory correctness may suffice in some engineering contexts, but even there, conflating fundamentally different quantities carries serious risk. For example, imagine confusing the transient state of a hydraulic model of river flood stage under a 1% 24-hour rainfall distribution with the 1% regulatory flood elevation. That would be seriously messed up, and such a thing does not meet a basic engineering standard, much less a realscientific one.
No one particularly cares whether US Army Corps of Engineers tools like HEC-RAS resolve physics or rely on empirical parameterizations, so long as the outputs are skillful. But it would be super weird if engineers fail to retrospectively distinguish real-time transient states from regulatory benchmarks in an attempt to preserve a continuous-history narrative.
It is worth noting that scrutiny here matters, because once someone comes along with the right idea in principle (that CO2 should cause a warming influence), all it takes is finding a way to make a line of increasing temperature correspond to a line of increasing CO2 with the same shape. Once you actually get down into it, it’s a fascinating and complex issue which has come with major conceptual advances along the way that should be celebrated rather than glossed over or suppressed.
JCM says 18 Jan 2026 at 4:13 PM in Reply to Atomsk’s Sanakan 14 Jan 2026 at 4:47 PM https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843898 and @ Atomsk’s Sanakan 8 Jan 2026 at 3:09 PM https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646
But it would be super weird if engineers fail to retrospectively distinguish real-time transient states from regulatory benchmarks in an attempt to preserve a continuous-history narrative.
Indeed, it would be super weird. What isn’t weird is that a small cadre of commentators who make mistakes in one area make mistakes and misrepresentations in others as well.
Like neither accepting nor openly acknowledging that CMIP is a computational exercise, not a scientific experiment.
Or ignoring and repeatedly distracting from:
“Statistical significance thresholds are human conventions for managing error, not discoveries about nature. Whether a trend or acceleration is “significant” depends on choices about confidence level, error tolerance, and model assumptions. Treating 95% CI as a hard scientific boundary confuses decision rules with physical reality and invites overconfidence rather than clarity.”
Because we know that statement is:
Technically correct
Unassailable
Deeply inconvenient to absolutists
But it is not inconvenient to real scientists.
Statistics is a lens, not a judge. And anyone who pretends otherwise is mistaking the map for the terrain — with confidence intervals drawn in thick black ink.
This is really sad, because the public is being led to think science speaks in certainties. It does not. Unfortunately, many amateur pro-science activists do speak with certainty.
Some scientists encourage that illusion. And others weaponize thresholds to shut down discussion and to deny uncertainties.
It’s worthwhile never to confuse a real scientist practicing science with a lawyer.
Why Elon Musk’s Mars Plan is SCIENTIFICALLY Impossible | Neil deGrasse Tyson
https://www.youtube.com/watch?v=ldRtMa1FX4M
“Neil deGrasse Tyson explains why Elon Musk’s plan to put one million people on Mars by 2050 is scientifically impossible. From deadly radiation exposure to toxic soil and atmospheric challenges, discover the physics problems that make Mars colonization a dangerous fantasy rather than achievable reality.”
“Learn why SpaceX would need over 10,000 launches in 25 years, why Martian radiation would cause cancer within months, and how perchlorate-contaminated soil makes agriculture impossible. Tyson breaks down the fundamental biological and physical barriers that no amount of money or technology can overcome.”
“This isn’t science fiction speculation – this is rigorous physics analysis of Silicon Valley’s most ambitious claims. Understand why tech billionaires’ future predictions often ignore basic scientific principles, and how their Mars obsession diverts resources from solving Earth’s actual problems.”
“Essential viewing for anyone interested in space exploration, Mars missions, scientific reality versus tech hype, and why astrophysicists remain skeptical of ambitious colonization timelines.”
Susan Anderson: “ Neil deGrasse Tyson explains why Elon Musk’s plan to put one million people on Mars by 2050 is scientifically impossible. SpaceX would need over 10,000 launches in 25 years, Martian radiation would cause cancer within months, and perchlorate-contaminated soil makes agriculture impossible.
Let’s meet halfway – send to Mars only one rocket – with Musk, DJ Trump, Stephen Miller, and Cruella de Vil Kristi Noem on board (no puppies, though). I can already see the last Truth Social post from the Donald J. Trump ballroom of the Space-Force 1, sent minutes before landing:
“We have to own Mars, whether you Martians like it or not. We can do it the easy way or the hard way. If we don’t own Mars – Russians, Chinese, Somalis, Sleepy Joe, drug cartels and Antifa will have it. I won’t let it happen. It will be tremendous. When you become the 51st state – we will cherish you, Martians, as nobody has cherished you before! Make Mars Great Again!
Thank you for your attention to this matter”.
Piotr: good one! “perfect”, as the gobbler in chief would say.
Susan: “Piotr: good one! “perfect”, as the gobbler in chief would say.”
Thank you Susan. I would have accepted also: “Beautiful!”, “Tremendous!”, “Unlike anything anybody seen before!”, “Best in the history of the USA and very likely, the world!”.
I am a little disappointed, though, that you didn’t follow up with a nomination for the Nobel Prize that I deserved bigly, more than anybody in history, but those crooked Norwegians stole it from me. And this is how they repay me for my wishing we had more immigrants from Norway! (I’ll leave it to my adorable press secretary to tell you how ungrateful that was.)
This is even better. It includes an interesting discussion of the morals of science fiction vs. the belief system of tech billionaires who miss the point. I love all the laughter. Unfortunately, it’s rather long. One might call it scientific locker room talk, if one wasn’t looking for vulgar sexism but good clean fun. I’m a fan of astrophysicist Adam Becker’s work and this video also increased my respect for NdGT’s communication chops.
https://www.youtube.com/watch?v=o6UdRXloqGc
“billionaires who miss the point” is a self-contradiction because the sole purpose of Life is competition to the death.
BEF: ick
Money and power are not the sole measures of value. I suggest you watch the video.
There are very few, if any, biologists who would accept this statement uncritically, especially if you are assuming individual competition between two individual organisms. If you are thinking in terms of whole species finding successful ecological niches, well, that’s a bit more modern thinking, but it still takes much nuance, much data, and a very large bank of computers.
“Nature red in tooth and claw” is a 19th century notion, and it really doesn’t capture actual natural selection mechanisms and their feedbacks and feedforwards in the least.
Question: what particular “competition” does a random apex predator “win” if it overeats its prey population? Or the herbivore equivalent: what particular competition does an herbivore win if it eats all the plants in its environment? (In this second regard, consider the Isle Royale experience with moose and wolves.)
Barry E Finch,
The concept of “the sole purpose of Life is competition to the death” is, in a business sense of course, the entire world of these ‘tech bros’, who have never worked with the public good in mind, to the point that instigating serious public harm places no direct restriction on their business plans.
When these characters are described, I do rather dislike use of the word ‘tech’. Even if you consider software a technology, these are not technical people, not technicians. They gain their positions in business by grift, by conning investors into thinking that they are the best thing since sliced bread, and also by conning mass customers into their orbit to demonstrate their ridiculous business models are not entirely vapourware. To that end, to achieve a functional product quickly and cheaply, they will cut corners and hack out that functional product without proper process, without being mindful of what they are actually trying to sell. And if they do achieve their goal of a bulging pile of investment capital, they use it to buy out competitors and establish an effective monopolistic position.
The only difference between the ‘tech bros’ and the Donald may be that they have the ability to hear what their competent advisers are telling them. And maybe they are ‘able’.
(Singling out Elon, he is completely ‘unable’ in this respect. His story does demonstrate how a tidy wad of cash from your dad and your roommate during the dot-com bubble could be, by parts and with single-mindedness, used to bully and fool the rest of the world into funding insanity. And where does that foolish insanity come from?)
But for your average ‘tech bro’, when the game is properly afoot and the likes of AI technology is the football, the ‘tech bros’ will likely be as deaf to such advice as the Donald (or Elon). And that is a bigly big big worry!!
Lynn Margulis would have disagreed with you.
https://www.science.org/doi/10.1126/science.252.5004.378
He also points out the simple logical conclusion that if we *could* terraform Mars, then reterraforming Earth would be a hell of a lot easier, so…. why focus on Mars?
My answer? More money in it for Musk, et al.
Remember when Musk proposed nuking the poles to warm up the planet?
I propose redistributing all that wealth to urgent problems here on earth.
For clarity and ease I’m moving this comment away from the very noisy thread of replies it comes from.
Atomsk’s Sanakan says 10 Jan 2026 at 2:52 PM
“… you’re simply moving the goalposts from the original points.”
That’s a bit rich. Another distortion fills out the pattern.
Data says 1 Jan 2026 at 6:53 PM
An addendum to addendums
Data says 28 Dec 2025 at 4:49 PM @
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843196
About the “models aren’t tuned” myth…
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843345
JCM responded correctly in context:
JCM says 4 Jan 2026 at 12:09 PM
It looks to me like:
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843457
Whereas the A’sS response is a clusterf*** of unrelated non-specific verbose noise.
Atomsk’s Sanakan says 3 Jan 2026 at 7:55 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843443 and
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843479 and
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843480 and here
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843578 and still more again
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843587 more here
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646 only to double down yet again
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843655
Finally ending in a whimper:
Atomsk’s Sanakan says 10 Jan 2026 at 2:52 PM
“Again, JCM, you’re simply moving the goalposts from the original points. These were that older models (including Callendar 1938) skillfully/accurately projected iTCR and GMST, with this not being explained by model tuning. I already explained how that iTCR can be stated in terms of warming per doubling of CO2. Your other concerns about model details do nothing to change those points.”
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843744
Emphasising yet again it is hard to discuss anything with anyone who has not read, or has not understood, what the individual references they provided actually mean beyond the misleading rhetoric that accompanies them.
Be discerning who you choose to listen to. Repetitive overconfident noise ≠ an expert nor an honest broker. It’s just noise.
Re: “Emphasising yet again it is hard to discuss anything with anyone who has not read, or has not understood, what the individual references they provided actually mean beyond the misleading rhetoric that accompanies them.“
What then to say of a sockpuppet pretending a reference they neither read nor understood (Forster 2025) wasn’t peer-reviewed?
Or what to say of a sockpuppet account pretending a projection of 2°C by approximately 2048 wasn’t included in that paper’s Climate Change Tracker?
Atomsk’s Sanakan says
12 Jan 2026 at 6:19 PM
Re: “Emphasising yet again it is hard to discuss anything with anyone who has not read, or has not understood, what the individual references they provided actually mean beyond the misleading rhetoric that accompanies them.“
What then to say of a sockpuppet pretending a reference they neither read nor understood (Forster 2025) wasn’t peer-reviewed?
Data says: “Atomsk offers a mix of non-peer-reviewed “references” — Forster 2025, Copernicus, Carbon Brief, ERA5 —”
Or what to say of a sockpuppet account pretending a projection of 2°C by approximately 2048 wasn’t included in that paper’s Climate Change Tracker?
– Data says: “Atomsk’s Sanakan claims: the observed warming trend, better observationally-constrained CMIP6 models, CMIP5 models, the IPCC, etc., show we’re on pace for ~2°C by 2045-2050, 3°C by 2075-2090 (2060 at the earliest), and ~3.5°C by the end of the century.
[…]
I have checked each source individually; none definitively supports the precise numbers he asserts. These references provide plausible ranges or scenario envelopes, not point predictions.”
– Here from Climate Change Tracker in Forster 2025
– Forster 2025: “We have published a set of selected key indicators of global climate change via Climate Change Tracker (https://climatechangetracker.org/, Climate Change Tracker, 2025), a platform which aims to provide reliable, user-friendly, high-quality interactive dashboards, visualisations, data, and easily accessible insights of this paper.“
One more time makes no difference.
And the sockpuppet account continues to willfully ignore the projection that shows they’re wrong (2°C by 2048), just as they willfully pretended Forster 2025 was not peer-reviewed. Same sort of avoided of evidence that’s displayed by AGW denialists.
An honest person by now would have admitted that:
1) Forster 2025 was peer-reviewed,
2) Forster 2025 made a projection of 2°C by 2048 via its Climate Change Tracker,
3) that projection is consistent with the projection I gave of 2°C by 2045-2050.
But admitting that would require the sockpuppet to honestly admit they’re wrong. That’s something they’re incapable of doing, much like many other science denialists. So they’ll keep deflecting from answering that point.
Atomsk’s Sanakan says
14 Jan 2026 at 8:22 AM
And the sockpuppet account continues to willfully ignore the projection that shows they’re wrong (2°C by 2048), just as they willfully pretended Forster 2025 was not peer-reviewed. Same sort of avoided of evidence that’s displayed by AGW denialists.
An honest person by now would have admitted that:
1) Forster 2025 was peer-reviewed,
2) Forster 2025 made a projection of 2°C by 2048 via its Climate Change Tracker,
3) that projection is consistent with the projection I gave of 2°C by 2045-2050.
But admitting that would require the sockpuppet to honestly admit they’re wrong. That’s something they’re incapable of doing, much like many other science denialists. So they’ll keep deflecting from answering that point.
One more time makes no difference.
Nice to see the sockpuppet account reduced to simply copy-and-pasting rebuttals of what they said, since they can’t cogently address those rebuttals. That confirms the prediction that they’d deflect.
And “Same sort of avoided” should instead read “Same sort of avoidance“.
Data, do you seriously believe any sane person is going to waste their time reading your long list of links? What really happened is this: the discussion of models, tuning, and skill started when Yebo Kando made some comments claiming the CMIP5 climate models lacked skill, and AS posted a counter-argument, including the point that their results are not simply a result of “tuning”. At no point did AS claim models weren’t tuned.
Now during all this discussion, BPL entered with a claim that models weren’t tuned (which doesn’t look like a valid claim to me). So in most of your replies to AS about how models are in fact tuned, you are responding to the wrong person! You have misinterpreted things, which is understandable because the discussion became confused and messy. There’s more to it, but those are the crucial points.
Nigelj says
13 Jan 2026 at 3:54 PM
Data do you seriously believe any sane person is going to waste their time reading your long list of links [to Atomsk’s Sanakan’s repetitive, unrelated, non-specific, verbose, noisy commentary]?
D: Let me answer that this way: anyone who read that many comments by Atomsk’s Sanakan in one sitting would never be the same again. So you’re more or less correct, I think.
As for your journalistic attempt to “tell the story of what happened”: don’t give up your day job or your retirement to be a journo. Sorry, you don’t have what it takes; you’re as bad as the rest of the media at informing the public of the really important facts within a meaningful and properly accurate context. You seem to have a lot in common with our Atomsk’s Sanakan fellow, unfortunately.
If you believe a model is very good, then its response to a controlled (virtual) perturbation experiment may reasonably be interpreted as giving “wrong answers for the right reasons,” when evaluated against criteria such as the observed accounting of shortwave and longwave components of net radiation.
In this situation, the model is producing a physically consistent response to the imposed inputs, even though its output disagrees with observation.
The danger arises when this discrepancy is taken as evidence that the model physics are wrong, leading to retroactive re-tuning of physics parameters to force agreement. Such tuning can degrade the model, because it implicitly assumes the experimental inputs are exact and shifts error into the model structure itself.
At this time, I don’t think it is known why many of the models most “skillful” at representing GMST do not match the observed shortwave and longwave components of the driving net radiation at all.
On the one hand, some people blame the aerosol inputs (and associated “indirect” effects); on the other, some blame “feedback”, the latter of which is explicitly an expression of model structure.
To compound interpretive difficulty, others invoke the wildcard of unforced variation (unexplained low frequency wobbles).
Diving even deeper, especially with respect to realclimates, recent analysis suggests CMIP experiments show bizarre trans-simulation consistencies:
“Despite potentially large differences in how different models simulate aerosol and cloud physics, the spatial patterns of the trends are very consistent across the ensemble-mean of each model”
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2025GL119493
“Observed and Modeled Trends in Downward Surface Shortwave Radiation Over Land: Drivers and Discrepancies” from Karen McKinnon and Isla Simpson.
So there is an implicit tuning driving similar responses to aerosol concentration using different physical structures, resulting in the same biased spatial patterns of “brightening”.
From Figure 3, the spatial correlation of ERA5 and CMIP trends of surface SW down show basically no relation, and massive continental areas are outside the range of any simulation.
We see how the simulations, in one way or another, produce strong SW trends coincident with historically major aerosol emission areas (eastern USA, western Europe, India, eastern China), and totally miss the mark across major regions of USA, South America, Africa, Middle East, and Asia.
Figure 3B is quite clear, where we see ERA5 trend outranking hundreds of simulations, while 3A shows the practically exclusive dependence on aerosol patterns for CMIP MMM. https://agupubs.onlinelibrary.wiley.com/cms/asset/82c91524-1fc5-4c20-b12f-501afa15cac2/grl71765-fig-0003-m.jpg
This shows that while the models capture aerosol signal, they systematically fail to reproduce trends where no aerosol signal is to be found.
So what’s to be done? Re-tune model structure, or trust that the models actually might be useful, and therefore capable of diagnosing and identifying novel and previously underappreciated experimental input problems?
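For anyone who wants to poke at this kind of claim themselves, the pattern (spatial) correlation behind Figure 3-style comparisons takes only a few lines to compute. A minimal sketch with invented mini-grids and cos-latitude area weighting – the fields, grid, and weighting choice below are illustrative assumptions, not the paper’s actual data:

```python
import math

def pattern_correlation(field_a, field_b, lats):
    """Area-weighted Pearson correlation between two lat-lon fields.

    field_a, field_b: lists of rows (one row per latitude band).
    lats: latitude (degrees) of each row; each row is weighted by cos(lat).
    """
    xs, ys, ws = [], [], []
    for row_a, row_b, lat in zip(field_a, field_b, lats):
        w = math.cos(math.radians(lat))
        for a, b in zip(row_a, row_b):
            xs.append(a); ys.append(b); ws.append(w)
    wsum = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / wsum
    my = sum(w * y for w, y in zip(ws, ys)) / wsum
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    vx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    vy = sum(w * (y - my) ** 2 for w, y in zip(ws, ys))
    return cov / math.sqrt(vx * vy)

# Toy 3-band x 4-point "trend maps" (invented numbers):
obs = [[1.0, 0.5, -0.2, 0.3], [0.8, 0.1, 0.4, -0.1], [0.2, 0.6, 0.0, 0.5]]
lats = [60.0, 30.0, 0.0]

print(pattern_correlation(obs, obs, lats))      # identical fields -> 1.0
flipped = [[-v for v in row] for row in obs]
print(pattern_correlation(obs, flipped, lats))  # sign-flipped -> -1.0
```

A correlation near zero between observed and simulated trend maps, as reported for ERA5 vs the CMIP simulations, is what “basically no relation” means in this metric.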
Nigelj says
13 Jan 2026 at 3:54 PM
At no point did AS claim models weren’t tuned.
Oh boy. The one who has misinterpreted things, again, and gets it all back to front and upside down.
Yeah, the 3+ month-long discussion is confused. I am not.
A’sS is not a credible reporter of the Real Climate Science.
HANSEN’S Latest post
https://jimehansen.substack.com/p/three-things-all-at-once-all-the
Prologue: Part I
Why a prologue, as well as a Preface? The prologue is a guide to a story with many facets. My goal with Sophie’s Planet is to use my life experience – my gradual education as a scientist and political independent – to help you understand climate change on our home planet, its origin in the energies that improve our lives, and the politics that make it difficult to achieve good public policy. I include my opinion about policies, but my aim is to illuminate my journey and let you form your own opinion. If you are not a scientist, the prologue will be hard because it includes science topics without full explanations. You can skim over things that are not clear, because the science will be explained from an elementary level in the book that follows.
Jan 14, 2026
Hansen’s reflections on his career are interesting, but one figure in particular stood out to me: Jule Charney. What Hansen says about Charney is not only insightful, but also seems timeless. It speaks directly to where climate science is today, and it applies now—especially in the way the field deals with uncertainty, authority, and “scientific certainty.”
That’s why I think these few extracts are so relevant right now.
Extracts from Hansen’s Charney section:
“Charney essentially ignored the implied request for policy guidance. He realized that there was a basic issue to be solved before it would be possible to say how serious the climate threat was.”
“Charney’s genius was to cut through a thicket of uncertainties and focus on the single quantity – climate sensitivity – that would largely determine the magnitude of human-made climate change.”
“Of course, we must also know the climate forcing that drives climate change: climate change is the product of the forcing times the sensitivity of the system.”
“Charney focused the problem by asking: what would the eventual global warming be in response to a doubling of atmospheric carbon dioxide?”
“Climate response to even such simple forcing is complex because of climate feedbacks.”
“Charney chose to evaluate climate sensitivity with ice sheets fixed, leaving ice sheet change for future research.”
“Charney’s best estimate for the equilibrium (eventual) global warming in response to doubled carbon dioxide was 3°C… but with great uncertainty such that there was only a 50 percent chance that sensitivity was in the range 1.5–4.5°C.”
“That range covered everything from modest climate impacts to global catastrophe; it was no wonder that Charney avoided policy statements in his report.”
“More than 40 years later, the 2021 United Nations climate assessment still gave 3°C as the most likely response… and with still a very large uncertainty range.”
“Charney would be surprised at how long it took to obtain data needed to understand the role of clouds in climate change.”
“However, Charney’s framework for studying climate change is finally beginning to pay off.”
“He would not be surprised about the role of clouds, especially moist convection… in explaining not only high climate sensitivity, but growing climate extremes…”
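Charney’s “forcing times sensitivity” framing quoted above can be put into a few lines of arithmetic. A minimal sketch, using the standard simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) W/m² and Charney’s 3°C-per-doubling best estimate – the concentrations and sensitivity below are illustrative inputs, not Hansen’s own calculation:

```python
import math

F_2XCO2 = 5.35 * math.log(2.0)   # ~3.7 W/m^2 forcing for doubled CO2
S_CHARNEY = 3.0                  # deg C equilibrium warming per doubling (best estimate)

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified radiative forcing (W/m^2) for CO2 at c_ppm vs preindustrial c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, sensitivity=S_CHARNEY):
    """Eventual warming = forcing x (sensitivity per unit forcing)."""
    return sensitivity * co2_forcing(c_ppm) / F_2XCO2

# Doubled CO2 recovers the headline number by construction:
print(round(equilibrium_warming(560.0), 2))   # 3.0
# Today's ~425 ppm implies roughly 1.8 deg C at equilibrium, for this sensitivity:
print(round(equilibrium_warming(425.0), 2))
```

Swapping in 1.5 or 4.5 for the sensitivity reproduces the wide spread Charney flagged – which is exactly why he avoided policy statements.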
A feast of temperature data has arrived in close order. Copernicus ERA5 re-analysis and a UK Met Office report giving a 2025 average for HadCRUT5 has been joined by Berkeley Earth and GISTEMP and NOAA reporting numbers for December.
With the December anomaly well down on Sept, Oct & Nov, all but the GISS series record 2025 in 3rd-warmest spot, behind top-spot 2024 and second-spot 2023. And for all those breathy ‘scorchyisimoists’, ERA5, the only series here using SAT over the oceans (so its ocean temperatures aren’t that dastardly SST), was the only one putting the three-year average 2023–25 above the magic +1.5ºC.
The table below shows these temperature series with an 1850–1900 anomaly base, with the exception of GISS (1880–1920). Note the NOAA series is a lot warmer through the 1850–1900 base period, resulting in its lower anomalies.
Annual … … … 2023 … … … 2024 … 2025 … (& 3-year ave)
ERA5 … … …+1.48ºC … +1.60ºC … +1.47ºC … (+1.52ºC)
HadCRUT… +1.47ºC … +1.53ºC … +1.41ºC … (+1.45ºC)
GISS … … …+1.45ºC … +1.56ºC … +1.46ºC … (+1.49ºC)
NOAA … … ..+1.36ºC … +1.46ºC … +1.34ºC … (+1.39ºC)
BEST … … …+1.47ºC … +1.52ºC … +1.44ºC … (+1.48ºC)
“2025 Global Climate Highlights, Copernicus Climate Change Service”
“2025 ranks as the third-warmest year on record, following the unprecedented temperatures observed in 2023 and 2024. It was marginally cooler than 2023, while 2024 remains the warmest year on record and the first year with an average temperature clearly exceeding 1.5°C above the pre-industrial level. 2025 saw exceptional near-surface air and sea surface temperatures, extreme events, including floods, heatwaves and wildfires. Preliminary data indicate that greenhouse gas concentrations continued to increase in 2025.”
https://climate.copernicus.eu/sites/default/files/custom-uploads/GCH-2025/GCH2025-full-report.pdf
Just posting the report and link for information. I haven’t had time to read the whole report, but it does appear to be a nicely presented summary. One interesting thing in the report is how sea surface temperatures remain unusually warm despite La Niña conditions.
“The (supposedly imminent) engagement of the authors of the DOE ‘climate report’ with the extensive critiques they received.”
Evidently not that imminent! If I knew how I’d embed an amusing screenshot in this comment.
However, I clicked the link which presumably quite recently led to the web site of the (in)famous DoE Climate Working Group?
Now it states:
“climateworkinggroup.org is parked free, courtesy of Godaddy.com. Get this domain”
ROFL?
Jim Hunt
Hi there! We’re on an extremely sticky wicket these days, aren’t we.
All:
Strongly recommend this (Greenland, very short). Turn the volume up and watch!
https://www.youtube.com/watch?v=hS0wFiWpU4U
Hat tip, excellent (also admirably short) Krugman blog: The Stupidest Trump Move So Far
https://paulkrugman.substack.com/p/the-stupidest-trump-move-so-far
Long time no see Susan!
“Ooh, let the drums mark this day”
Highly recommended indeed, and also available for dissemination purposes on my LinkedIn:
https://www.linkedin.com/posts/soulsurfer_greenland-defense-front-the-hungry-giant-activity-7418697369594159104–AZZ
BlueSky:
https://bsky.app/profile/did:plc:l63d5xaf5hislz4gm3jphfqm/post/3mcpdkxutv22q
and a few other (anti)social media sites that have no doubt slipped my mind
The Institutionalization of Denial: Reality or Ritual?
What we’re seeing today is not merely disagreement or misunderstanding. It’s the moment where a system doesn’t just do harm — it builds a legal and cultural architecture to justify it. And yes, the cognitive dissonance is unbelievable, because the system doesn’t just tolerate contradiction.
It requires it. It runs on it.
The pattern is always the same:
Mass harm occurs
A reform follows
The reform protects the elite
The public is taught to feel “progress”
The system continues unchanged under a new label
That’s why “colonialism” and “abolition” become myths of redemption, not real transformations. Victims get no compensation, no justice, no structural change — and are told to be grateful to live inside the system that destroyed them
That is not denial. That is moral laundering. And the same mechanism is at work in climate discourse:
The system produces harm
It offers reform as a symbolic fix
It compensates the institutions that caused the harm
It teaches the public to feel good about the fix
Then sells it back to them as moral progress.
Paris Agreement. Net Zero. “Climate emergency.” Not solutions — it’s not a movement so much as a rebranding operation.
A familiar pattern emerges: trust the science, trust the institutions, trust the party — and treat fossil fuel workers as moral pariahs. That’s the cultural script.
It isn’t about solving anything; it’s about maintaining the system. So the system protects itself by turning doubt into sin, and dissent into villainy. The public follow along.
But the truth is harsher: The system doesn’t just resist truth. It monetizes it, repackages it, and sells it back to us as virtue. And the people who suffer are still expected to say thank you.
That’s the obscene part.
That’s the rot.
That’s the institutionalization of denial.
Because the system requires denial:
It can’t admit the truth without threatening its own existence.
It can’t stop the engine without stopping itself.
It can’t be honest without collapsing its own legitimacy.
So it does what all successful systems do: It manufactures narratives that keep it alive.
And we call that progress.
D: and treat fossil fuel workers as moral pariahs.
BPL: NO ONE–and I mean NO ONE–in climate science or in the renewable energy movement says to treat fossil fuel workers as pariahs. One straw man argument in your whole, long list of straw men.
Atomsk’s Sanakan says (21 Nov 2025 at 11:31 AM):
To Geoff Miell
“This isn’t a question of policy. It’s a question of statistical significance in science.”
https://www.realclimate.org/index.php/archives/2025/11/unforced-variations-nov-2025/#comment-842265
The problem is:
Statistical significance is not the same thing as real-world risk. You can’t use a p-value to dismiss the possibility of catastrophe, especially when the stakes are existential.
And the irony is:
The only people who keep insisting “it’s not significant” are the ones who want to avoid acting until the crisis is undeniable. That’s not science — it’s denial dressed up as rigor.
Here are some hot takes (fresh from a recent talk) about AI technology today in the context of climate challenges, systems stress, and the ongoing misreading of what both AI LLM and Climate Science is and isn’t:
Fossil fuels reshaped the physical economy.
“AI is doing something similar now.
[00:19:34] But in the cognitive economy, same dynamics, but a different and a bigger game board.”
“Artificial intelligence, large language models, multiplies our cognitive armies by scaling pattern recognition, prediction, coordination, content generation, as many things we used to use our own brains for.”
“[00:20:32] Once trained, these systems can operate at near zero marginal cost. A model built once can be copied, endlessly deployed.”
“Pretty much everywhere and run continuously as long as there’s enough electricity, which is a big if of course, and water and supporting systems.”
“So a small number of organizations with access to data and compute and capital can now perform tasks that once required thousands or hundreds of thousands of people spread across institutions.”
“[00:21:05] And this has consequences. first, it accelerates extraction, not just of energy and materials, but of human attention and creativity and hominid decision space.”
“Human time and attachment now becomes a resource to be harvested, optimized, and nudged, and then monetized at scale.”
“[00:21:43] Yeah, training data comes from everywhere, but the benefits concentrate in relatively few places.”
“This is the same ownership dynamic we saw with industrial machinery, but only faster and less obvious and much more concentrated.”
“Third, it’s a turbo boost for our current cultural aspirations and goals and metrics.”
“[00:22:08] AI is really good at optimizing for what we ask it to optimize for, but if soil health and ecosystem stability and the plight of the dolphins or future generations are not part of the game plan, they won’t be part of the outcome either.”
“[00:22:42] But without new boundaries, new aspirations. It’s also gonna shorten feedback loops, and the decisions are gonna get faster and the responses are gonna get automated, and scale will increase before the consequences become fully visible or visible at all. Systems move more quickly than human governance culture or our ethics can adapt to.”
“[00:23:09] And all of this, it draws on the natural world. So AI does not introduce a new set of dynamics, at least not yet… artificial intelligence. If successful, compresses time and amplifies whatever existing incentives are already in place.”
“[00:23:35] …technology was, is and will continue to be. Powerful and important, but it’s intertwined with physics and ecology in the hierarchy of our human reality… there’s no silver bullet response to these dynamics.”
“[00:24:29] …when wealth is defined narrowly as financial claims and digits in the bank, rather than the underlying real world stocks and flows, then accelerating drawdown can look like. Prosperity…”
“[00:24:55] …Markets, kind of along the lines of the maximum power principle, they reward speed scale. And monetization. They don’t measure what’s being depleted underneath.”
“So record financial wealth can coexist with declining real wealth almost by design.”
Why does this matter? Speaking for myself, this is not about “AI as a magical solution” or “AI as a doom machine.” It’s about systems, incentives, and the way modern technology amplifies existing dynamics — especially those driving extraction, inequality, and ecological overshoot. Meanwhile, the ongoing public debate — with its distractions, paranoia, and weak “solutions” like deleting search results — misses the important point: AI is not a new actor. It’s a new amplifier.
It accelerates the same incentives that already pushed us toward deeper systemic stress, and it does so at a scale that governance and ethics cannot keep pace with.
“CMIP is a computational exercise, not a scientific experiment.”
Multi-troll, ver. “Data”, trying to discredit science on… semantics: “These are coordination conventions, not truths.”
By following your reasoning to its logical end – we can’t say anything about anything – because EVERY word we use can be dismissed as a “just coordination convention”. E.g.: “It depends upon what the meaning of the word ‘is’ is.”
What’s your alternative – to throw up our hands because we will never know what the real world is, or to start EVERY sentence with the preamble “In the context of the coordination conventions used here”? Repeatedly stating the obvious is a very inefficient form of communication – and unnecessary: if you come to somebody’s house, you follow their rules – so if you are joining a scientific discussion that uses scientific terms and scientific conventions, you should use the said terms and conventions.
If you don’t like it, nobody will cry after you – go and create your own science and demand that your followers use YOUR coordination conventions (say, that the year 1772 in your science should from now on be known as year 72315.5, Anno Multi-troll Domini).
An analogy can do more to illuminate scientific method than many people realise:
Americans didn’t discover 1776 in nature.
Humans didn’t derive right-hand traffic from physics.
Statisticians didn’t uncover 95% CI in the fabric of the universe.
These are coordination conventions, not truths.
And therefore, crucially: Statistics does not demand a particular confidence level — only humans do.
Hence the framing “This isn’t policy, it’s statistical significance.” is false.
Therefore the quote “This isn’t a question of policy. It’s a question of statistical significance in science.” is categorically incorrect.
Why?
Because choosing 95% instead of 90% or 97.5% is a policy choice — not a scientific choice, merely a convenient choice for standardization by bureaucrats.
Choosing Type I vs Type II error tolerance is a value judgement.
Choosing what counts as ‘detectable’ is a decision rule, not a fact.
In climate risk contexts especially: False negatives (missing real acceleration) matter.
The costs are asymmetric. Therefore, epistemic caution cuts both ways. Pretending otherwise is scientism, not science.
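The point that the threshold is a choice, not a fact, can be made concrete with three lines of arithmetic. Suppose (purely hypothetical numbers) an estimated trend of 0.20 with a standard error of 0.11; a minimal sketch using the usual normal-approximation critical values:

```python
# Hypothetical example: the same estimate "passes" at 90% and "fails" at 95%,
# showing the verdict hangs on a chosen convention, not on the data alone.
trend, se = 0.20, 0.11          # invented trend estimate and standard error
z = trend / se                  # ~1.82

Z_90 = 1.645                    # two-sided 90% critical value (normal approx.)
Z_95 = 1.960                    # two-sided 95% critical value

print(f"z = {z:.2f}")
print("significant at 90%?", z > Z_90)   # True
print("significant at 95%?", z > Z_95)   # False
```

Nothing about the data changed between the two print statements; only the convention did.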
There’s a deeper philosophy of science problem going on here that needs exposure on a ‘science’ forum. What we’re really pointing at isn’t statistics — it’s epistemic authoritarianism.
Some people (unfortunately):
Treat statistical thresholds as moral boundaries
Treat conventions as laws
Treat uncertainty as ignorance rather than structure
Treat lay confusion as a weapon, not a communication failure
This is exactly how gullibility and misunderstandings about Climate Science are manufactured in the public observer. That is not science; it is rhetoric. Sophistry, if you will. Monckton did it – now A’sS et al are doing it.
This relates to my recent comment here:
https://www.realclimate.org/index.php/archives/2025/12/1-5oc-and-all-that/#comment-844102
Yes, these conventions can be fickle things. We know that for claims of CO2 signals in local extreme event attribution, like fires or floods, it’s not even technically possible to reach 95% confidence, because these things happen so rarely that you just don’t have enough samples – and yet such things are usually reported as attributed with high confidence. The other issue is that claims based on the distributions of extremes in a null vs forced virtual experiment depend on the models selected, with their uncertain structures and inputs. The meaning of statistics in this context is, I think, a matter of debate. I rarely hear concerns from the climate community about this, which suggests that strict adherence to conventional thresholds of statistical significance isn’t treated as a hard rule but rather is context-dependent.
Additionally, considering the WMO lists a wide array of “essential climate variables”, it would be interesting to hear attributional claims related to other types of input forcings, especially in local and regional contexts. Exploring these could expand the applications of climate science models and strengthen their perceived societal value. https://gcos.wmo.int/site/global-climate-observing-system-gcos/essential-climate-variables
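The small-sample point is easy to demonstrate. Suppose (invented numbers, purely for illustration) a nominally 1-in-30-year event has occurred 3 times in the last 30 years; a minimal sketch of an exact binomial test against the old rate:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Null: the event is still 1-in-30 per year. Observed: 3 events in 30 years.
p_value = binom_tail(3, 30, 1 / 30)
print(f"p = {p_value:.3f}")   # ~0.077

# An apparent tripling of the event rate, yet:
print("significant at 95%?", p_value < 0.05)   # False
print("significant at 90%?", p_value < 0.10)   # True
```

Even an apparent tripling of the rate fails a strict 95% test with only 30 years of data, which is exactly why rigid thresholds sit so awkwardly with rare-event attribution.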
Cont. from https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-842771 , https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-842798
Linear B and 3 sinusoidal B posts (2 more in production), with the skin-temp post now in context:
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/09/for-asymptotic-radiances-ppia-linear-and-general-cases-wip-awaiting-final-proofread-double-check-diagrams-pending/
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/24/for-asymptotic-radiances-ppia-linear-b%cf%84/
https://scienceopinionsfunandotherthings.wordpress.com/2024/12/10/directionally-averaged-radiance-and-the-semi-gray-skin-temperature-wip-awaiting-final-proofread-double-check-diagrams-pending/
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/30/for-asymptotic-radiances-ppia-sinusoidal-b%cf%84-part-1/
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/26/evaluating-this-term/ (I’m rather proud of this work)
https://scienceopinionsfunandotherthings.wordpress.com/2026/01/01/for-asymptotic-radiances-ppia-sinusoidal-b%cf%84-part-2/
https://scienceopinionsfunandotherthings.wordpress.com/2026/01/18/for-asymptotic-radiances-ppia-sinusoidal-b%cf%84-part-3-sinusoidal-bvertical-mass-path/
PS thank you to the European leaders standing up to “Don Quitrumpte”. We don’t need to own Greenland, I don’t want Venezuela’s oil. Save polar, winter, and mountain ice, but F*ck ICE.
New paper in Nature Climate Change:
Accounting for ocean impacts nearly doubles the social cost of carbon