This month’s open thread. We’re not great ones for New Year’s resolutions, but let’s try. How about we resolve to stay substantive, refrain from abusing one another, and maintain a generosity of spirit when interacting with others?
Lots of things get updated in January and we’ll try and keep up, though possibly with less fanfare than in previous years. In other news, we await the (supposedly imminent) release of a new “National Climate Assessment”, and the (supposedly imminent) engagement of the authors of the DOE ‘climate report’ with the extensive critiques they received. Meanwhile CMIP7 has started, and we expect results to trickle into the databases throughout the year – dig into some of the literature to get a sense of what will change (better models, improved forcings, etc.).
Eppur si riscalda. (And yet it warms.)
“Hmm” on 31 Dec gives a link to “solar-energy-developer-secures-415-million-to-power-the-worlds-largest-direct-air-capture-plant”
If this DAC is done through the physical concentration of CO2, then all my criticism stands – the solar energy should be used to displace burning fossil fuels, not be wasted on the hugely ineffective (because of the thermodynamics) process of removing CO2 afterwards. It is also an ineffective way to use the limited climate mitigation subsidies – and as such it endangers the already fickle social support for them (“if it costs THAT much to remove a ton of CO2, then we can’t afford it”) – resulting in abandoning attempts to meet our climatic goals.
in Re to Piotr, 1 Jan 2026 at 1:20 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843335
and John Pollack, 1 Jan 2026 at 9:31 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843354
Hallo Piotr,
I fully agree with your opinion that the precious solar energy should be used to displace burning fossil fuels, not be wasted on the hugely ineffective process of removal of CO2 afterwards.
In accordance with my previous posts of 31 Dec 2025 at 7:47 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843318
and of 2 Jan 2026 at 5:17 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843408 ,
I would like to offer a slight correction: it is not so much “because of the thermodynamics” but rather because the air volume that has to be processed to extract the required amount of CO2 is huge, and we wish to process it quite quickly (see the rough numbers sketched below). I would therefore perhaps rather say “because of the kinetics and thermodynamics”.
A more important point that I would like to emphasize is that the thermodynamic “because” applies to ALL separation processes, irrespective of whether the separation step is physical, chemical or perhaps biochemical – because what matters in thermodynamics are solely the starting and final states of the system, not the path between them.
My further point is that all direct air capture (DAC) processes that involve technical means necessarily also share the above-mentioned kinetic “because”. That is why I still think that it makes sense if the term DAC is used just for these technical direct air capture methods and not for such carbon dioxide removal (CDR) processes that basically exploit only “natural” processes, like wind, sunlight and plant biology.
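To put the kinetic point into rough numbers, here is a minimal sketch of my own (assuming about 430 ppm CO2, an ideal-gas molar volume near room conditions, and complete capture, which no real process achieves):

# Rough estimate of the air volume that must be processed to capture 1 t of CO2.
# Assumptions (mine, for illustration only): 430 ppm CO2, ideal gas at ~20 degC
# and 1 atm, 100 % capture efficiency (real processes capture only a fraction per pass).
CO2_MOLAR_MASS = 44.01   # g/mol
MOLAR_VOLUME = 0.024     # m3 per mol of air at roughly 20 degC and 1 atm
CO2_FRACTION = 430e-6    # mole fraction of CO2 in ambient air

mol_co2 = 1e6 / CO2_MOLAR_MASS       # moles of CO2 in one tonne (~22,700 mol)
mol_air = mol_co2 / CO2_FRACTION     # moles of air containing that much CO2
volume_air = mol_air * MOLAR_VOLUME  # m3 of air to be processed

print(f"Air volume per tonne of CO2 captured: about {volume_air:,.0f} m3")
# roughly 1.3 million m3 of air per tonne, before accounting for incomplete capture

Over a million cubic metres of air per captured tonne is the reason why the contactors must be so large, whatever separation chemistry or physics is used downstream.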
Hallo John,
Thank you very much for your remark regarding atmospheric pressure above the Antarctic ice sheet.
As I stressed above, my example of a specific, purely physical process was meant to show the kinetic aspect shared generally by all thinkable separation processes, so it does not apply only to freezing CO2 out but also to chemical extraction, adsorption processes and/or membrane processes.
Best regards to both of you
Tomáš
P.S.
A nice, slightly more detailed explanation of the Gibbs energy of mixing was provided by Andrew Dessler on his blog
https://www.theclimatebrink.com/p/thermodynamics-of-air-capture-of ,
with an update from the year 2023, in which Climeworks provided an estimate
https://www.frontiersin.org/articles/10.3389/fclim.2019.00010/full
that the energy required for separation of 1 t CO2 in their DAC process is about 2000 kWh.
Unfortunately, even he does not mention that the same “kinetic” aspects that multiply the real energy demand in comparison with the theoretical limit also cause the unavoidably huge size (and cost) of the necessary equipment. That is why I tried to draft and add the respective explanation myself.
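As a rough cross-check of the numbers above, here is a minimal sketch of my own (assuming roughly 430 ppm ambient CO2, a temperature of about 300 K, ideal mixing and capture to a pure CO2 stream; real processes will of course differ):

import math

# Thermodynamic minimum work to separate CO2 from air (ideal, dilute-limit estimate).
R = 8.314        # J/(mol K)
T = 300.0        # K, assumed ambient temperature
x_co2 = 430e-6   # assumed ambient CO2 mole fraction

w_min_per_mol = R * T * math.log(1.0 / x_co2)      # J per mol of captured CO2
mol_per_tonne = 1e6 / 44.01                        # mol CO2 per tonne
w_min_kwh = w_min_per_mol * mol_per_tonne / 3.6e6  # convert J per tonne to kWh per tonne

print(f"Ideal minimum work: about {w_min_kwh:.0f} kWh per tonne of CO2")
print(f"Climeworks estimate: about 2000 kWh per tonne, i.e. roughly {2000.0 / w_min_kwh:.0f} times the ideal limit")

The gap between the idealized lower bound (on the order of 100 kWh per tonne) and the real figure is exactly where the kinetic penalties and the equipment size enter.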
About those 2023–24 warming anomalies
We’ve seen this pattern before. During the so-called hiatus (~1998–2012), global mean surface temperatures remained roughly flat for over a decade. At the time, this was frequently dismissed as “natural variability” or framed as a denier talking point.
What later became clear was not that the physics was wrong, but that the surface temperature record was incomplete and systematically biased — particularly due to sparse Arctic coverage and enhanced ocean heat uptake in the Pacific.
Work by England et al. (2014) and Cowtan & Way (2014–15) showed that warming had not stopped; it had been temporarily redistributed and partially hidden from the dominant surface metrics.
In other words, the apparent discrepancy arose from limitations internal to the observational system and interpretive framework — not from external misinformation.
The lesson is not that today’s explanations are wrong — but that climate science has, in the past, underestimated how observational gaps, framing assumptions, and metric choices can obscure emerging dynamics.
Given the abrupt magnitude of the 2023–24 global warming spike — which remains only partially explained — it is reasonable to ask whether another structural blind spot may exist, involving aerosols, cloud feedbacks, energy-imbalance interpretation, or the limits of ECS-based framing, that has not yet been fully confronted.
History suggests this is not an extraordinary claim, but a normal feature of complex system science under evolving observation.
Similarly, the so-called surge (~1999–2010) should also not be considered representative of a longer-term trend, in this case overstating the long-term rate of increase. Better, as explained in the introductory portions of this website, to stick with longer periods of around 20 years to establish a significant trend.
It was mostly cherry picking of ’98 temps, which had a very strong El Nino. It was intentional, not accidental nor due to ignorance.
Yes. But I’d rephrase that to say it was mostly cherry-picking a local max and a later local min. And yes, it’s intentional, which of course brings in the whole issue of multiple comparisons and proper adjustments for taking those scores – or even hundreds – of multiple comparisons and only reporting the one the denial type wants. This is why Gavin mentioned elsewhere that finding stretches with no apparent trend happens in every time series. It’s WHY you must control for scanning across scores or hundreds of tests looking for fake “insignificance”.
Finally, of course, if one understands hypothesis testing at all, one should know that no one can conclude all that much from insignificant results and most certainly cannot conclude that “warming just stopped for X number of years”. That’s an inductive error of the highest order.
Reply to jgnfld et al
rephrase that to say it was mostly cherry-picking a local max and a later local min.
That’s much more than rephrasing; it is fundamentally re-writing history again. It was not a ‘regional’ hiatus in temps nor ENSO alone – it was a global hiatus in a temperature warming trend, dismissed as merely natural variation when it wasn’t.
Very little of anything was “intentional” on either side of the public discussions and framing, bar a few.
Sure, the then prolific (now largely dead) climate-science-denial movement was crowing about a lack of “warming” showing up in global temperature [not locally], and as every half-informed-or-better climate science advocate said, they were “cherry-picking” from a high max to a low-end cooling period; as Killian also said, the exit out of the record-high El Niño phase of 1998.
That is only half the story, though. From the beginning, pro-climate-science advocates and climate scientists, including here, repeated that point ad nauseam for almost a decade, claiming the hiatus (which they eventually agreed was a real phenomenon in the temperature records) was just a shift out of the 1998 El Niño plus undefined short-term natural variation, and that’s it — while also saying “all you climate deniers are wrong and you’re all dumb fools”.
Well, sorry, that was as wrong as the climate deniers were. Meanwhile you, jgnfld, and others continue to ignore what was really happening in the 2000–2010 period with the “temperature observations”, which were also wrong!
As per the work by England et al. (2014) and Cowtan & Way (2014–15), warming had not stopped; it had been temporarily redistributed and partially hidden from the dominant surface metrics.
In other words, the apparent discrepancy arose from limitations internal to the observational system and interpretive framework — not from external misinformation.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843343
The largest portion of missing heat, that mysterious unaccounted-for “natural variation” going AWOL somewhere for a decade, was in fact the ERRORS IN the temperature records; the metrics were WRONG. The entire time the TEMPERATURE STATS were way out!!!
Once these metrics were corrected, this “natural variation” simply disappeared and a steady warming trend was again observed in the temperature records from 1998 through 2010 and into 2015.
That’s when global flood happened. … of new published science papers about this unusual event. Why the Data was wrong!
Showing that Both the Climate Deniers and the Climate Scientists and Advocates were WRONG at the same time. That’s the correct record that history shows. And has been recorded.
Why folks here continue to ignore these things, even after having it placed in front of you in the thread, is an astounding level of denial and cognitive dissonance in my opinion. It’s a joy correcting the “record” even when my thoughts don’t get through the mental barriers, hurdles, Structural limitations etc which dominate.
Speaking statistically, this is mostly drivel based on a misunderstanding of what the error term in a regression means – or doesn’t. “Natural variation” – which forms a part of the error term – simply means ‘all other variables’.
Again, looking across a whole series, finding local maxes followed by local mins requires you to be doing multiple comparisons even if you do not calculate each one of them out. The controlled-for alpha probability for being “significantly insignificant” (which is the utterly flawed inductive “logic” being employed here) would go from a 1 in 20 chance to 1 in hundreds of thousands or worse, since you basically have to divide the alpha probability by the number of tests you are carrying out.
Here’s a demonstrational proof I’ve conducted many scores of times in stats classes relating to this very point: Flip a coin 100 times. You will find a “significant” run of 5 or more heads in a row 95% of the time, a “very significant” run of 6 or more heads 77% of the time, and a “highly significant” run of 7 or more half the time. You will even get a run of 10 heads in a row 8% of the time, even though the probability of doing that is supposedly 0.5^10 (~1 in 1000) in your mode of analysis. How is it even possible to see a one-in-a-thousand-level event so regularly even though you are only observing 100 events at a time??? Easy. A simple understanding of basic probability.
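Here is a minimal simulation sketch anyone can run to check those orders of magnitude (my own; the exact percentages depend on whether one counts heads-only runs or runs of either face, and the figures above are closer to the latter):

import random

# Monte Carlo sketch: how often does a run of k or more identical outcomes
# (heads or tails) appear in 100 fair coin flips?  Counting heads-only runs
# gives noticeably smaller frequencies.
def longest_run(n_flips):
    longest, current, prev = 0, 0, None
    for _ in range(n_flips):
        face = random.random() < 0.5
        current = current + 1 if face == prev else 1
        prev = face
        longest = max(longest, current)
    return longest

def freq_of_run(k, n_flips=100, trials=20000):
    return sum(longest_run(n_flips) >= k for _ in range(trials)) / trials

for k in (5, 6, 7, 10):
    print(f"P(run of {k} or more in 100 flips) ~ {freq_of_run(k):.2f}")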
Physically speaking the basket of “all other variables” may indeed be explored, mined, and measured by interested parties and then modeled physically rather than statistically and you possibly will find interesting moderating variables. (Or may not as in the case of fair coin flips). This might lead to suggested processes which might modulate the overall warming rate. Or it might not. NO one with any scientific merit to their names has suggested that warming stops and starts. That’s just not how radiation physics works no matter how hard you try to obfuscate.
D: it was a global hiatus in a temperature warming trend dismissed as merely natural variation when it wasn’t.
BPL: Oh? What was it?
I may be missing something, but I’d appreciate clarification on how to interpret the magnitude of inter-model spread in Arctic September sea-ice projections.
In the CMIP3, CMIP5, and CMIP6 figures shown in Gavin Schmidt’s May 2025 article on Arctic sea-ice trends, [ see https://www.realclimate.org/index.php/archives/2025/05/predicted-arctic-sea-ice-trends-over-time/ ] the ensemble mean tracks observations reasonably well, but by ~2020 the full ensemble range spans outcomes from minimal September ice loss to near-collapse under the same historical forcing.
These seem like qualitatively different Arctic states rather than small parametric deviations. My question isn’t about the statistical performance of the ensemble mean, but about interpretation: at what point does widening structural dispersion itself become a signal that some key processes remain insufficiently constrained, even if the mean behaves skillfully?
In such cases, how should we think about the interpretive limits of the ensemble mean?
I’m genuinely interested in how others think about that distinction.
An addendum to addendums
Data says 28 Dec 2025 at 4:49 PM @
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843196
About the “models aren’t tuned” myth
CMIP models are not tuned to match a single historical GMST trend, but their feedbacks and energy balance are constrained through parameterisation and calibration against observed climatology and TOA imbalance.
While CMIP models are not tuned to future outcomes, their apparent historical realism reflects parameterisation and calibration choices that constrain energy balance, feedbacks, and ocean heat uptake.
Claims that CMIP models are “not tuned” usually refer to the absence of a direct GMST target, but overlook the fact that feedbacks and energy balance are constrained via parameterisation against historical observations.
Taken together, this means feedback magnitudes in CMIP models are not independent of parameterisation and calibration; their apparent accuracy reflects constraints imposed during model development.
Recent comments highlight how models can appear skillful at the top level while relying on internally compensating structures. That doesn’t negate their usefulness, but it does explain why uncertainty in feedbacks and attribution remains. A lot of the surrounding friction seems to come from how differently this is understood inside modeling practice versus in public-facing discussion.
This doesn’t imply bad faith, but it does help explain why disagreements persist and why public discussion often talks past itself. Much of the tension arises from the gap between how uncertainty is handled inside modeling practice and how outputs are communicated externally. Recognizing that gap is more productive than relitigating intent or motives.
Senior figures in climate science are under pressure to prioritize narrative coherence and public-facing certainty. This leads to selective engagement with questions and under-acknowledgment of internal structural uncertainties raised by non-core contributors.
From an outside perspective, this pattern can understandably appear as manipulation or selective disclosure. However, it is more accurately understood as a sociological and procedural phenomenon: scientists are managing the tension between transparency, policy relevance, and public trust, rather than deliberately misleading anyone.
And lastly, CMIP6 and other global climate models do not predict the future.
They produce scenario-based projections under defined assumptions about emissions and forcings, constrained by physics, observations, and the best available knowledge, but they cannot know how the future will unfold.
These outputs are plausible pathways, not predictions.
The primary purpose of these projections is to inform policymakers about the potential consequences of different emission pathways, highlighting the urgency of reducing greenhouse gas emissions. Therefore, any suggestion that CMIP outputs, the IPCC, or other frameworks are making a temperature prediction for 2050, 2100, or any specific future period is scientifically invalid; these models do not make such predictions.
Not a shred of published evidence cited anywhere in your screed. Again, climate models accurately/skillfully projected global mean surface temperature (GMST) trends and iTCR:
This accuracy/skill is not explained by tuning, as noted by climate scientists like Dr. Gavin Schmidt and Dr. Zeke Hausfather:
No amount of non-expert, evidence-free, petulant whining changes that.
The discussion isn’t about how many papers or citations can be listed, nor GMST skill scores. The point is structural: CMIP models are constrained through parameterization and feedback calibration — in other words, tuning — which affects how their results should be interpreted. This issue has already been addressed in my prior posts.
The issue is you making stuff up with no cited evidence and in willful avoidance of evidence cited showing you’re wrong, while contradicting informed experts who know more than you regarding the evidence. You’ve done this for years across your sockpuppet accounts, and I already showed you doing it for accurate/skillful modeled projections. Here you do it by 1) pretending Forster 2025 is not peer-reviewed, and 2) willfully ignoring other peer-reviewed sources you were cited, such as IPCC reports, Xu 2018, Dai 2023, and Hansen 2023:
And here you do it by acting like projections reach 3°C by 2050, when they actually don’t reach it:
These will be my final responses to you on these sub-threads since I’ve run out of patience with willful fabricators. I think it’s apparent to informed readers that your claims are not to be trusted, especially when your disinformation on this 3°C example is so obviously wrong. No doubt you’ll make new sockpuppet account(s) soon, as you’ve done before when folks see through your older accounts.
Reply to Atomsk’s Sanakan
5 Jan 2026 at 3:22 AM
I’ve already addressed the specific factual point at issue (AR6 WG1 SPM assessed ranges vs predictions) with direct citations.
Repeating long lists of unrelated papers, again mischaracterizing what has been said by myself and others about modelling and tuning issues, and resorting to personal accusations does not engage that substance. I have nothing further to add here.
It looks to me like:
Skill is basically = 1 – (Model RMSE / Null RMSE)
RMSE = root‑mean‑square error of the model projection vs observations
Null = a “no-change” baseline (i.e. assume no temperature change at all) or another simple reference.
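As a minimal sketch of that definition, with made-up illustrative numbers rather than an actual model evaluation:

import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill(pred, obs, null):
    # Skill = 1 - RMSE(model)/RMSE(null baseline); 1 is perfect, 0 is no better than the baseline.
    return 1.0 - rmse(pred, obs) / rmse(null, obs)

# Toy numbers (invented for illustration): observed anomalies, a model projection,
# and a "no change" baseline that just persists the first value.
obs   = [0.10, 0.18, 0.25, 0.31, 0.40]
model = [0.12, 0.15, 0.27, 0.33, 0.38]
null  = [0.10] * len(obs)

print(f"skill = {skill(model, obs, null):.2f}")  # close to 1, so the model beats the no-change baseline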
Skill is obviously a loaded term. A high skill score doesn’t speak to structural or physical correctness. Tomas provided an illustrative example of how different representations of the motions of celestial bodies can produce correct aggregate behavior but be physically wrong.
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843228
The meaning of skill is entirely conditional on the evaluation framework, which appears to be statistical on an aggregate metric (GMST) rather than mechanistic.
Realclimate.org is basically = a forum to defend the legitimacy of consensus. Within that context, critical nuance is sometimes received as adversarial, and certain words or framings trigger a disproportionate response.
Model = the mathematical and physical representation of the climate system (encoded relationships between variables).
CMIP experiment = a prescribed protocol: inputs, initial conditions, and required output specifications.
Ideally, the model itself is impartial about why climate changes; the perturbation is imposed by the experiment. The model’s analogue is the real Earth system, while the experiment’s analogue is an externally imposed perturbation. In principle, the range of experiments is limited only by imagination.
The model encodes compatibility between variables, not cause-and-effect. In a model of the relations between voltage, current, and resistance V=IR, the form itself does not specify causation. It simply states a constraint that must hold among the three quantities; it does not imply that current causes voltage, or that voltage causes current.
AFAICT it is generally assumed there can only be a single set of physically correct laws in nature, and the existence of multiple simultaneously ‘correct’ but incompatible descriptions tends to be conceptually troubling.
The overwhelmingly dominant focus of CMIP projects is radiative forcing experiments driven by human-caused emissions of major trace gases. So the model atmosphere is given an external input of gas, and the idea is that physics should resolve the system back to equilibrium.
Presumably as we move from one generation to the next of CMIP projects over time the participating models grow more sophisticated in their representations of atmosphere, ocean, and land, along with the associated interactions. This may include more explicitly resolved physics, better calibrated physics parameters, and/or different combinations/configurations of physics parameters. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024MS004713
The causal nature of things comes from the inputs external to the model. If it can be demonstrated that models submitted for CMIP6 experiments are in fact a more physically robust representation of the Earth compared to version 5, but yield results that are out of bounds of observation and appear worse than version 5, it’s really a guide to re-look at the inputs and underlying assumptions. A model matching observations superficially may do so for the wrong reasons, whereas a robust model may appear worse under experimental input despite being more accurate in its representation of the Earth.
If we assume CMIP6 includes more physically robust model representations overall in the ensemble (which I think we should), the “too hot” output highlights potential limitations in the experiment (e.g. model input, boundary conditions). Comparatively, an increase in ensemble spread reveals previously unrecognized sources of uncertainty in model configuration.
If we assume a model is perfect, and our imposed (virtual) perturbation doesn’t seem to cause what actually happened (such as trends in absorbed solar radiation), the issue can only be traced to the experimental input framing. I think possibilities along these lines should not be ruled out. The models could actually be teaching us something, you know. Conversely, falsely assuming the experimental inputs are perfectly comprehensive in historical runs risks introducing biases into our interpretation of the system’s behavior and the development of the model itself.
Climate models are not oracles or proof machines; they are tools for learning. Their scientific value lies in exposing the limits of our knowledge and in questioning assumptions.
And the response would be the same as was given to Tomáš Kalisz in the thread you linked:
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843237
This isn’t just statistical; it’s process-based. Older models through CMIP5 accurately/skillfully projected GMST trends and iTCR (the ratio of GMST warming vs. forcing) because they’re reasonably accurate on factors like water vapor feedback, the Planck feedback, etc. CMIP6 models with higher sensitivity did worse on projecting GMST and iTCR because they overestimated positive feedbacks. Improving cloud physics for CMIP6 models reduces their sensitivity down to what’s shown for older models, yielding accurate projections of GMST and iTCR matching those of older models.
Bringing up other model details is irrelevant since those are not dominant in accurately projecting GMST and iTCR, just as the mass of Mercury is not dominant in predicting Earth’s orbit (it’s the mass of the Sun and the Earth that primarily matters). It’s no mystery why the models made accurate/skillful predictions, since it’s been known since at least Callendar in 1938 what the most dominant feedbacks are (water vapor, Planck, etc.).
To A’sS:
It’s somewhat ironic to invoke Callendar in this context, but I think it provides an excellent case study.
In his time, they imagined that increasing CO2 shifted the effective origin of surface downward radiation (what he called sky radiation) to lower, warmer layers thus increasing the radiative input to the surface. “An increase of carbon dioxide will lower the mean radiation focus, and because the temperature is higher near the surface the radiation is increased.” https://www.rmets.org/sites/default/files/papers/qjcallender38.pdf
During that time there was no concept of energy accumulation by earth energy imbalance, no radiative feedback parameter, and certainly no so-called Planck effect or feedbacks. He imagined a redistribution of downward flux (to the surface) to originate from lower, warmer levels and computed the associated equilibrium temperature difference by 4th root law. I think it was a radiation recycling scheme that seems to hang up a lot of skeptics.
More recently (say since Manabe), the dominant framing of global warming has been inverted. These accounts emphasize an upward shift of emission to higher, colder layers, reducing outgoing longwave radiation (to space) and thereby producing an earth energy imbalance. A radiative feedback parameter (as W/m2 per K) is thus invoked to eliminate the imbalance and suggests an equilibrium climate response (and transient climate response function over time). Manabe teaches the planet must accumulate energy until the temperature of the (now higher) effective radiating level is restored.
The change in perspective from Callendar to Manabe is almost Copernican, akin to replacing Earth with the Sun as the center of celestial motion. Against this background, your claim appears to lean heavily on empirical adequacy or equifinality, making little distinction across major paradigm shifts. The focus narrows to the single question of whether CO2 causes warming (yes or no), as if this alone defines the scope of climate science. It’s not meant to be a judgement or anything, but I do find it interesting.
Re: “During that time there was no concept of energy accumulation by earth energy imbalance, no radiative feedback parameter, and certainly no so-called Planck effect or feedbacks.“
Any reasonable TCR and ECS estimate implicitly includes the Planck feedback since it’s necessary to prevent infinite TCR and ECS values, along with runaway values. And Callendar’s work included water vapor feedback. Callendar was still able to get quite close on iTCR (i.e. the ratio of warming vs. forcing) since the water vapor feedback and Planck feedback are dominant.
Re: “The focus narrows to the single question of whether CO2 causes warming (yes or no), as if this alone defines the scope of climate science.“
The question was skill at projecting future global warming and iTCR. The denialist Yebo Kando denied that skill. Then they and their fellow denialist ‘Data’ pretended those accurate projections were due to model tuning. I’m pointing out that climate models for decades were skillful in projecting future global warming and iTCR up to CMIP5, where this skill is not explained by model tuning. Bringing up other topics in climate science does nothing to change this. Nowhere did I say the only scope of climate science is whether CO2 causes warming.
Re: “He imagined a redistribution of downward flux (to the surface) to originate from lower, warmer levels and computed the associated equilibrium temperature difference by 4th root law. “
If you mean the Stefan-Boltzmann law, then that entails Planck feedback; i.e. negative feedback resulting from Earth radiating more energy as it warms. So no, Callendar’s work would include the Planck feedback:
In re to: A’sS
Yes, I’ll repeat: you seem to be seeing no distinction, or placing no concern, between two entirely different process descriptions, simply because they arrive at similar numerical outcomes. I think this goes to the heart of our discussion.
If the aim is to arrive at a CO2 warming effect by any description at all, without distinguishing between fundamentally different representations, that’s unusual in science. Major Nobel-prize advancements in revised process description are usually celebrated as scientific achievements, and fields do not usually strive to preserve the impression that today’s knowledge has always existed. Quite the opposite actually.
The perception of an unbroken continuity from the earlier pioneering work may be designed to make physics appear straightforward, as if from day one someone got it exactly right, when that’s not actually true. Why do that? Acknowledging conceptual discontinuities, in my view, gives climate science more credit, not less. Against that backdrop, it is unclear why your reference to Anderson, Hawkins, and Jones (AHJ) is not more explicit in this respect, or who their intended audience is meant to be…
It’s really remarkable, isn’t it, that Callendar’s calculations can be shown to fall neatly within the modern CMIP ensemble despite only using an instantaneous downward sky-radiation effect. It’s an ideal example in the context of what the aim really is. Is there a desire to project a provenance from long ago that things have always been clearly understood? What purpose does that serve? Why not celebrate breakthrough achievements for what they are?
Clearly the Planck effect is introduced only in the modern context because, unlike in Callendar’s time, there is a need to explicitly account for a stabilizing influence in planetary emission. It is known that a 1 K change at around 288 K should correspond to around -5 W/m2 of SB radiation (stabilizing). But, because the contribution (to space) is overwhelmingly generated in the atmosphere, this figure corresponds to a 1 K change at around 255 K, which is closer to a -4 W/m2 column SB response. The figure is further adjusted to account for the fact that the source term includes both tropospheric and stratospheric contributions, whose CO2 gas optical properties are different (the optically thick vs. optically thin nature of the layers, respectively).
The stratospheric masking is the dominant deviation between a local (so-called) Planckian and ordinary SB stabilization. Owing to the increase of atmospheric emission to space with warming, it follows conceptually: -5 W/m2, as if there is no atmospheric contribution and thereby no greenhouse; to -4 W/m2, as if there is a slab atmosphere; and finally to the consensus -3.2 (-3.05 to -3.39), owing to the unique properties of the troposphere and stratosphere. A useful description of this stabilizing influence can be found in “How Well do We Understand the Planck Feedback?” https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023MS003729
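For orientation, the first two of those figures can be reproduced with nothing more than the Stefan-Boltzmann law. A minimal sketch of my own (blackbody only, so it cannot reproduce the consensus ~-3.2 W/m2 per K value, which needs the spectral and stratospheric treatment discussed in the linked paper):

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_response(temp_k):
    # d(sigma*T^4)/dT = 4*sigma*T^3: extra blackbody emission per 1 K of warming, in W/m2 per K
    return 4.0 * SIGMA * temp_k ** 3

print(f"288 K (surface, 'no atmosphere' case): -{sb_response(288):.1f} W/m2 per K")  # ~ -5.4, roughly the -5 above
print(f"255 K (effective emission level):      -{sb_response(255):.1f} W/m2 per K")  # ~ -3.8, roughly the -4 above
# The assessed Planck feedback near -3.2 W/m2 per K is smaller in magnitude than either
# blackbody estimate because of the tropospheric/stratospheric details this sketch ignores.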
By contrast, Callendar was talking about a positive sky-radiation contribution to the surface (something like a radiative forcing); a positive value. Certainly not a stabilizing influence against planetary energy accumulation (a negative value). An increasing sky-radiation “force” due to lower altitude of radiation focus; the source term being generated closer to the surface in warmer layers. In Callendar’s framework there was no concept of stabilizing planetary emission and no TCR.
Additionally, he sets the water vapor concentration at a constant value 7.5 mm Hg to generate his Figure 2; a stable background which was translated later into Anderson, Hawkins, and Jones’ Figure 5 demonstration of how wonderfully it fits over years 1880-2000. https://ars.els-cdn.com/content/image/1-s2.0-S0160932716300308-gr5.jpg
No positive water vapor feedback, no stabilizing Planck influence, no energy imbalance, no ocean heat uptake, no distinction between transient and equilibrium response. Just pure instantaneous temperature co-evolution with sky radiation. The result fits neatly into the modern CMIP range, which is perhaps why you don’t see any distinction. I don’t know – it seems to me like an ideal case example of how similar results are obtained using completely different process imaginings. The only parallel is the recognition of spectral CO2 effects occupying different lines than H2O and, by association, that human-caused CO2 production probably has a warming effect.
In modern context, if there is no explicit positive water vapor feedback the greenhouse is thought to remain roughly stationary compared to background, and the outgoing emission continues to stabilize as usual with about -3.2 (-3.05 to -3.39) W/m2 per K GMST. That’s why it’s sometimes called a no-feedback stabilizing response. Greenhouse stays the same.
Feedbacks imply that as the system warms greenhouse effects intensify compared to the background state. Modern models could not trace temperature evolution if fixing to a stable background in the way Callendar did it. Failure to clearly distinguish Callendar’s stable background from feedback, and linking disparities only with too conservative CO2 rise, is an unwitting oversight by AHJ. I trust you can see that the pieces are not interchangeable, because the process description is entirely different.
As opposed to second-hand accounts, I recommend reading the 1938 work you referenced several times directly. I think it’s quite approachable and doesn’t need to be distilled through Anderson, Hawkins, and Jones, who in my view understate the significance of the distinct process imagination involved. Conversely, if the aim is simply to write down a way in which CO2 could cause warming, then any framework will do. However, I don’t see any (scientific) virtue in advertising that each formulation to that effect is equally valid. https://www.rmets.org/sites/default/files/papers/qjcallender38.pdf
Previously you suggested that various essential climate variables, including precipitation, winds, and the circulation patterns are “not relevant since those are not the main drivers”. I remain unconvinced, and not least in the context of your interest in global average temperature change. Perhaps you think they’re not relevant since some types of model descriptions can do without. Conceptually, I suspect the field is entering another transition, in which energy accumulation is increasingly understood to be tied directly to such processes through shortwave reflectivity and more specifically as atmospheric adjustment – not merely as feedback. The long-standing emphasis on the static LW net radiation feedback parameter still reverberates from fifty years ago, and such ideas may again be changing. I would caution against missing the science by accepting narratives that portray the field as effectively stagnant for a century. That simply isn’t true, and promoting it as such can only be damaging.
In Re to JCM, 5 Jan 2026 at 2:47 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843497
and Atomsk’s Sanakan, 7 Jan 2026 at 10:44 AM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843587
Dear Sirs,
First, I would like to thank JCM for the interesting reference to this historical article.
As regards your dispute, I must admit that even though I am not able to assess whether the way G.S. Callendar arrived at his Table V at the bottom of page 229 was correct, I tend to the opinion that at least this table suggests that – irrespective of whether it was derived rigorously or rather by intuition – his physical picture of the greenhouse effect (how the height profile of the infrared radiation flux might change with increasing CO2 concentration) seems to fit, at least qualitatively, with the present one.
If we look at this table, his “sky radiation” from the bottom layer (which could be understood as today’s “downwelling longwave radiation” towards the Earth’s surface) clearly rises with rising atmospheric CO2 concentration, while the radiation from the upper layer (which could be understood as today’s “outgoing longwave radiation” from the “top of atmosphere” to space) decreases.
Unfortunately, it appears that his equation (5), derived in the following chapter, is incorrect. I understood his sentence “suppose that the sky radiation is changed from S1 to S2 whilst H remains constant” to mean that in both the original steady state with surface temperature T1 and the new steady state with surface temperature T2, the net radiation from the surface equals the absorbed solar radiation, which is assumed to remain constant. This is, to a first approximation, a reasonable simplification and can be seen as scientifically correct. This assumption, however, does not seem to fit with equation (4) – I am afraid that the parentheses therein should, in fact, be omitted. If so, I think that the correct formula (5) for T2 should be the fourth root of the expression (T1^4 + (S2 – S1)/sigma).
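For illustration, the corrected relation I propose, evaluated with arbitrary example values of my own choosing (not Callendar’s numbers):

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t2_from_sky_radiation_change(t1, delta_s):
    # Proposed corrected form of Callendar's eq. (5): with constant absorbed solar heat H,
    # sigma*T2^4 - S2 = sigma*T1^4 - S1, hence T2 = (T1^4 + (S2 - S1)/sigma)^(1/4).
    return (t1 ** 4 + delta_s / SIGMA) ** 0.25

t1 = 288.0  # K, example surface temperature
for delta_s in (1.0, 2.0, 4.0):  # example increases of downward sky radiation, W/m2
    dt = t2_from_sky_radiation_change(t1, delta_s) - t1
    print(f"delta_S = {delta_s:.0f} W/m2  ->  delta_T = {dt:.2f} K")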
Could you double-check? If Callendar’s equations (4) and (5) are indeed erroneous, I tend to the opinion that he might have arrived at his conclusions (which indeed seem to fit at least qualitatively with the contemporary picture) rather intuitively than by a rigorous and scientifically convincing procedure.
Greetings
Tomáš
Atomsk: “I’m pointing out that climate models for decades were skillful in projecting future global warming and iTCR up to CMIP5, where this skill is not explained by model tuning. Bringing up other topics in climate science does nothing to change this. Nowhere did I say the only scope of climate science is whether CO2 causes warming.”
Here you go, Atomsk – first – you admit that you defended the skill of the very models which JCM blames for “up to 40% of planet’s land degraded”:
“ It’s hard to imagine denying or actively minimizing the consequences to realclimates due to an artificial fixation and overemphasis on the outputs of trace gas and aerosol forced model estimates. ” [ (c) JCM])
And then, in the same paragraph, you challenged his implications that the climate models are not to be trusted, because they are: “imaginary process mechanisms [using] rules about how things ought to be” (c) JCM July 2024.
If he said this about the very model (Lague et al.) he brought here himself, and turned on it only after we had shown it weakens, not supports, his long-held beliefs – then what must he think about the models he never liked?
So, sorry Atomsk, you’ve never stood the chance …. ;-)
I already explained that Callendar’s 1938 work includes both the Planck feedback and the water vapor feedback. He includes the former insofar as he accepts greater radiation release with greater temperature. And if you can’t spot where he includes the latter, then re-check the paper.
Hence why Callendar was able to accurately project GMST trends and iTCR. You bringing up other details is irrelevant, since those details are not required for accurately projecting GMST and iTCR, topics where Planck feedback and water vapor feedback are dominant. It’s akin to bringing up differences between relativistic models vs. Newtonian models to avoid the fact that both do fine in projecting the motion of a cannonball shot from a cannon. Those differences between the models are largely irrelevant to accurately projecting that motion, though those differences can be important for other topics like Mercury’s orbit.
And I’ve already read Callendar’s 1938 paper. That’s how I know it uses multiple lines of evidence to check his iTCR and GMST projections, such as spectroscopic measurements, instrumental warming trends, and paleoclimate data. Much of the same type of evidence is used to estimate TCR today.
You’re attacking straw men, such as claiming there is no distinction between models, or that the field is stagnating, or…. Those were not my point. My point was that older models through CMIP5 accurately projected GMST trends and iTCR because they got processes like Planck feedback, water vapor feedback, etc. largely right. And this was not due to model tuning.
in Re to Atomsk’s Sanakan, 8 Jan 2026 at 3:09 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646
Dear Atomsk’s,
The part of Callendar’s publication that could be construed as the Planck response to a change in CO2 concentration is his equation (5), I think. Is it what you suggested? If so, can I ask again if you think that this equation is correct?
I have not found any explicit mention of the water vapour feedback, in the sense that a temperature change caused by a change in atmospheric CO2 concentration will be further amplified by an air humidity change. I think that the closest sentence that might refer to water feedbacks could perhaps be the one in the third paragraph from the bottom on page 230 that reads: “Thus a change of water vapour, sky radiation and temperature is corrected by a change of cloudiness and atmospheric circulation, the former increasing the reflection loss and thus reducing the effective sun heat.” Is that what you meant?
Best regards
Tomáš
Re: “In Callendar’s framework there was no concept of stabilizing planetary emission and no TCR.“
He stated how much global warming would be caused by a given increase in CO2: 0.5°C warming for a 28% CO2 increase over 200 years from 282ppm to 360 ppm (table VI of Callendar 1938). That’s enough to calculate iTCR using the same method as Hausfather 2019 and Supran 2023. So a 28% CO2 increase implies ‘5.35 * ln(360/282)’ of forcing, or 1.31 W/m2. With 0.5°C of warming that implies an iTCR of 1.4°C; that’s within the uncertainty range of the observations shown in Hausfather 2019 and Supran 2023.
If you don’t like that iTCR stated relative to 3.71 W/m2 of radiative forcing, then you can state that iTCR relative to a doubling of CO2. You still end up with ~1.4°C per doubling. Anderson 2016 calculated a similar value using an empirical approximation of Callendar 1938’s model: “a doubling of CO2 would lead to a temperature rise of roughly 1.6 °C.”
I’m aware that various older models implicitly (and incorrectly) treated iTCR as equivalent to ECS. As Hausfather 2019 notes: “their assumption that the atmosphere equilibrates instantly with external forcing, which omits the role of transient ocean heat uptake“. That doesn’t change my point on Callendar 1938 accurately projecting iTCR.
JCM typed “they imagined that increasing CO2 shifted the effective origin of surface downward radiation (what he called sky radiation) to lower, warmer layers thus increasing the radiative input to the surface”. Yes, that is what happens. When GHGs in the troposphere increase, that causes the average places at which photons are manufactured that manage to leak out of the top and the bottom of the troposphere to be closer to their respective upper & lower boundaries. Thus, photons manufactured from cooler air parcels leak out of the top and photons manufactured from warmer air parcels leak out of the bottom. Thus, a lesser photon flux than before leaks out of the top and a larger photon flux than before leaks out of the bottom. That’s what causes the so-called “greenhouse effect (GHE)” in Earth’s troposphere.
Callendar falls into the same radiative surface-budget fallacy that plagued physical understanding for quite some time, one which treated surface LW down as if it determines climate sensitivity. It does not. The wrong framework is one in which the surface heats up to radiate the extra surface LW down away, and makes no accommodation for the (very real) dominant surface-atmosphere turbulent heat exchange (non-radiative). There is no such thing as a surface radiative equilibrium from which sensitivity could be derived. Whatever is going on with empirical matches of temperature evolution into the year 2000 is the result of huge compensating errors – of the missing adjustments, feedbacks, and ocean buffer. Some consider this distinction unimportant and are satisfied with any formulation that produces a warming, even if it uses an unphysical structural description. That is their choice, and the same is promoted by politicized public-facing climate comms.

By contrast, the more recent framework is the climate response to satisfy TOA radiation balance. I do find it striking that enthusiast contributors of realclimate.org can’t bring themselves to concede something so essential. The center of focus nowadays is an LW radiation forcing around the effective emission level (averaging around 5 km altitude), with special treatments for spectra operating around the troposphere/stratosphere boundary and aloft. Any TOA net radiation surplus results in total system energy accumulation, thereby dragging GMST along with it. Planetary radiation balance of inputs and outputs, which is a concept that can only be satisfied from the top-of-atmosphere perspective (not at the surface), is obtained in some distant future following the perturbation.

No matter how one might retrospectively apply newer knowledge to a Callendar-style description, it just isn’t there in his formulation. If what he was imagining does in fact represent the more modern structural representation, it might have been prudent to write it down, sketch something to the effect, or at least mention it in passing. It’s no fault of his; it’s a hard problem and things change. Deal with it. His primary contribution was to re-ignite interest in CO2 effects, along with a great deal of effort in compiling temperature records – all done in his spare time. That is remarkable and he sounds very cool.
Again, JCM, you’re simply moving the goalposts from the original points. These were that older models (including Callendar 1938) skillfully/accurately projected iTCR and GMST, with this not being explained by model tuning. I already explained how that iTCR can be stated in terms of warming per doubling of CO2. Your other concerns about model details do nothing to change those points.
in Re to Atomsk’s Sanakan, 10 Jan 2026 at 2:52 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843744
Dear Atomsk’s,
I asked Gemini Pro for an explanation of what iTCR is and how it differs from TCR.
According to the response provided by the engine,
1) transient climate response (TCR) is a metric used for comparing various climate models.
To calculate it, climate modelers perform a specific standardized experiment:
– They increase atmospheric CO2 by 1% per year (compounded) starting from pre-industrial levels.
– They wait until the CO2 concentration has exactly doubled (which takes 70 years).
– The TCR is the global average temperature increase at that specific moment (averaged over a 20-year window around year 70);
while
2) iTCR (implied TCR) is a technique used to infer the TCR from data that isn’t the clean 1% experiment described above—such as observed historical warming or widely varying emissions scenarios.
The Logic: Researchers take the observed Temperature Change (Delta T), divide it by the estimated Radiative Forcing (Delta F) that caused it, and scale it up to the forcing of doubled CO2.
If this explanation is correct (I have not checked it), it appears that your term “iTCR projection” sounds like an oxymoron. Did you perhaps mean “iTCR estimation” instead?
Does it mean that when you referred to Callendar, you basically spoke about chapter 6 of his article and omitted the previous chapters that were criticized by JCM?
Thank you in advance for a comment.
Greetings
Tomáš
Re: “I asked the Gemini Pro for an explanation what is the iTCR and how it differs from TCR.“
iTCR is the ratio of global warming vs. radiative forcing for a doubling of CO2. It’s used in papers like Hausfather 2019 and Supran 2023. Unlike equilibrium climate sensitivity (ECS), iTCR does not account for longer-term lags since it only includes the amount of global warming that occurred during the time-period of radiative forcing.
So, for example, assume a hypothetical scenario in which CO2 levels doubled from 300ppm to 600ppm from 1850 to 2030. And assume this caused 2.1°C of global warming from 1850 to 2030. iTCR would then be 2.1°C. ECS would be larger than 2.1°C since thermal inertia and ocean heat uptake would result in more surface warming after 2030:
The denominator of iTCR does not have to be radiative forcing, but can be restated in terms of doubling of CO2. For example, take what Gilbert Plass told to the general public in 1953:
Plass’ projection of 1.5°F per century translates to 1.06°C per 127 years. And he projected a 1.5-fold CO2 increase for those 127 years. Page 4 of the supporting information of Hausfather 2019 gives the IPCC’s empirically supported formula for converting CO2 changes to forcing. That formula gives 2.17 W/m2 for a 1.5-fold increase of CO2, and 3.71 W/m2 for a 2-fold increase of CO2. That gives a ratio of 1.06°C of warming per 2.17 W/m2. Multiplying that ratio by the 3.71 W/m2 from a doubling of CO2 gives an iTCR of 1.8°C for a doubling of CO2.
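For anyone wanting to check the arithmetic, here is a minimal sketch (using the simplified 5.35*ln(C/C0) forcing approximation; the warming and CO2 figures are those stated above for Plass 1953 and earlier in this thread for Callendar 1938):

import math

def itcr_per_doubling(warming_c, co2_ratio):
    # Scale the warming over a period to the forcing of a CO2 doubling,
    # using the simplified forcing formula F = 5.35 * ln(C/C0) in W/m2.
    forcing = 5.35 * math.log(co2_ratio)
    forcing_2x = 5.35 * math.log(2.0)  # about 3.71 W/m2
    return warming_c * forcing_2x / forcing

# Plass 1953: ~1.06 degC over 127 years for a 1.5-fold CO2 increase
print(f"Plass-implied iTCR: {itcr_per_doubling(1.06, 1.5):.1f} C per doubling")              # ~1.8 C
# Callendar 1938: 0.5 degC for a CO2 rise from 282 to 360 ppm (about 28%)
print(f"Callendar-implied iTCR: {itcr_per_doubling(0.5, 360.0 / 282.0):.1f} C per doubling")  # ~1.4 C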
Re: “If this explanation is correct (I have not checked it anyhow), it appears that your term “iTCR projection” sounds like an oxymoron. Have you perhaps meant rather “iTCR estimation”?“
It’s an iTCR projection. Estimates of future global temperature trends and future greenhouse gas levels (with other future forcings) count as projections because they’re for the future. They can then be converted to a future iTCR, as I did above for Plass’ 1953 projections. That’s a modeled iTCR projection because it’s for the future. The modeled iTCR projection can then be compared to the observed iTCR for that projection’s time-period, using observed global temperature trends and observed forcing. That’s what is done in Hausfather 2019 and Supran 2023.
Re: “Does it mean that when you referred to Callendar, you basically spoke about chapter 6 of his article and omitted the previous chapters that were criticized by JCM?“
I explained the iTCR calculation for Callendar 1938 :
in Re to Atomsk’s Sanakan, 18 Jan 2026 at 5:30 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844054
Dear Atomsk’s,
Thank you very much for your comments!
As the Table VI on page 232 of Callendar’s article still belongs to his Chapter 5, I suppose that the delta T values provided therein were derived from his theoretical curve shown in Fig. 2 on the preceding page 231. If so, I agree that the iTCR values that you derived from the Table VI could be indeed assigned as “iTCR projections”.
Nevertheless, I still doubt about your assertion of 8 Jan 2026 at 3:09 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646
that his theory somehow includes water vapour feedback. Could you please specify the part of his article that you construed this way?
As I have been unable to find any hint of it, I still rather tend to agree with JCM that the fit of Callendar’s iTCR projection with more recent estimations and/or projections thereof can hardly result from various feedbacks being correctly included in his theory.
Moreover, I would like to turn your attention back to my question of 7 Jan 2026 at 7:03 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843612 ,
regarding Callendar’s equation (5). Until the perceived errors in Callendar’s theory are explained, I rather think that the fit of his results with more advanced models may be purely accidental rather than resulting from the correctness of his approach.
Greetings
Tomáš
in addition to my post of 19 Jan 2026 at 12:44 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844085
Dear Atomsk’s,
It appears that the above mentioned post got buried under content produced by the multitroll and commenters thereon.
May I remind you of my question about where in Callendar’s article you have found support for your opinion that it deals also with the water vapour feedback?
Furthermore, I asked if you think that his equation (5) is correct, and if so, if you could explain in more detail why.
Thank you in advance and greetings
Tomáš
Atomsk: “ And the response would be the same as was given to Tomáš Kalisz in the thread you linked […] CMIP6 models with higher sensitivity did worse [than older models]. Improving cloud physics for CMIP6 models reduces their sensitivity down to what’s shown for older models, yielding accurate projections of GMST and iTCR
and that’s why JCM, referring to that thread, ignored your response to Tomas. It all makes psychological sense – both JCM and Tomas have believed for years that they see what the real climate scientists couldn’t or refused to see, namely that because of their “artificial fixation on, and overemphasis on, a trace gas” concentrations, they UNDERESTIMATE the sensitivity of AGW to the water cycle.
And here you come, and strip them of their entire claim to fame by showing that CMIP6 models don’t do well _precisely_ because they OVERESTIMATED the sensitivity of the climate to the water cycle… ;-)
Anyone want to bet that JCM will have the integrity to admit it and to apologize for blaming “up to 40% of planet’s land degraded” on Gavin’s and other climate scientists’ “artificial fixation on a trace gas [CO2 and other GHGs]”?
===== JCM, UV, 5 Jun 2024 at 8:24 AM ==============
” UNCCD reports up to 40% of the planet’s land is degraded and annual net loss of native ecologies continues unabated at >100 million ha / decade. This is a profound forcing to climates and puts our communities at risk. It’s hard to imagine denying or actively minimizing the consequences to realclimates due to an artificial fixation and overemphasis on the outputs of trace gas and aerosol forced model estimates. ”
==================================
The war against information and science continues. Monstrous!
NASA’s Largest Library Is Closing Amid Staff and Lab Cuts. Holdings from the library at the Goddard Space Flight Center, which includes unique documents from the early 20th century to the Soviet space race, will be warehoused or thrown out. – https://archive.ph/Ii6qS
“tens of thousands of books, documents and journals — many of them not digitized or available anywhere else.
….
“The library closure on Friday follows the shutdown of seven other NASA libraries around the country since 2022, and included three libraries this year. As of next week, only three — at the Glenn Research Center in Cleveland, the Ames Research Center in Mountain View, Calif., and the Jet Propulsion Laboratory in Pasadena, Calif. — will remain open.
….
“the Trump administration sped up the closures in a haphazard manner during the recent federal shutdown, when few people were around the Maryland campus, and that there are no plans for new buildings. | Specialized equipment and electronics designed to test spacecraft have been removed and thrown out”
Susan, look up “The Streisand Effect”.
There have been legitimate book bannings (also see The Harm Principle), but most are illegitimate. Add this to those.
https://www.freedomtoread.ca/resources/bannings-and-burnings-in-history/
I personally watched the conservative govt in Canada haul off a research library in Newfoundland to the landfill back when Harper was trying to erase science from Canada. One of the more sickening days of my life. Forget how many he did this to nationally… something like 12 of them or so. Also tried to melt a bunch of painfully and expensively gained Arctic ice cores from the far North, but as I remember they were saved by shipping them out of the country.
This is the most anti-science administration in US history.
What do you expect from a political movement that has based itself on love of “the uneducated”. These people see no value to books beyond the BTUs (and added CO2) they provide when burned.
I don’t know about love of the uneducated; more like pandering, exploitation, and celebration of ignorance and intellectual dishonesty — all for the power, greed and self-aggrandizement of the trashy unworthy.
Nigel: “ But Ive read quite a few articles on the issue and most people including the critics of DAC seem to use DAC in the sense of meaning extracting CO2 form the air with fans”
All it shows is that the promoters of the physical extraction succeeded in framing the discussion as if it were the only DAC technology in town.
Nigel: “they refer to tree planting and similar nature solutions as “carbon sequestration.”
That’s conflating two different processes – the capture of CO2 directly from air with sequestering it (storing it afterward). Plants couldn’t sequester if they didn’t first capture, while physical extraction would be meaningless if it weren’t followed by sequestration. So there is no justifiable reason to exclude direct air capture by plants from Direct Air Capture processes.
The actual reason, I suspect, is to contrast the two – plants as a non-starter, since they are limited by the available land area and compete with growing food, versus an approach with no such space limits.
By equating DAC with the physical concentration of CO2 from air (thus ignoring the geology-inspired and chemistry-based alternatives), they make their physical approach the only viable game in town.
By doing so they cut off the oxygen (by monopolizing governmental funding as the only viable solution) both from enhancing plant and soil uptake, and from the non-physical alternatives mentioned above: geology-inspired DAC (accelerating the uptake of CO2 via carbonate and silicate rock weathering) and chemistry DAC (in which CO2 is absorbed onto chemicals in a liquid brew and then scrubbed from it), neither of which makes so great a demand for energy as their brute-energy physical concentration approach.
So their framing of the discussion as “DAC = physical concentration” is completely artificial and self-serving – promoting their technology at the expense of the more sensible alternatives. And quite successful at that – since even the critics accepted the (DAC = physical concentration) framing, the valid criticism of the energy ineffectiveness of the physical concentration approach will be understood by politicians and the public as applying to all methods of DAC.
Thus my characterization of the presentation of the problem by the Physics blog, and the well-viewed video it referred to, as throwing the baby out with the (physical concentration) bathwater.
And as such it is not merely unhelpful, but actively harmful: it discourages methods that, while they cannot replace the decarbonization of the economy, can make it cheaper and therefore more likely to succeed – by removing the need to go after the highest-hanging fruit, the last few % of emissions on the way to net zero that are physically/technically most difficult, if not impossible, to mitigate.
Piotr I’m sure you are right about all of that. However the point I made last month was that in popular usage and even among experts DAC has generally come to mean extraction of CO2 from the air with fans and carbon sequestration is the terminology used for things like tree planting. It’s not technically logical but we are stuck with this terminology and usage. Going against this usage will confuse people. TK said something similar.
Wrong. But that goes without saying. DAC is a process of removal. Storage is a separate issue. Trees sequester, DAC does not unless or until the long-term storage is included. But, also, DAC is bullshit money-grabbing, so…
Not wrong. Both the industrial extraction of CO2 from the air with fans, and tree planting and regenerative agriculture, involve “removal” and “storage”. We are talking definitions, not whether the technology is being used exactly as it should be. That would be like saying tree planting is not a potential form of carbon removal because the trees aren’t being kept long enough. For the record I don’t promote the industrial form of DAC.
a comment on “Yebo Kando”, 1 Jan 2026 at 10:52 PM,
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843358
and 2 Jan 2026 at 9:10 AM,
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843385 ,
as well as on “Data”, 2 Jan 2026 at 12:50 AM,
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843362
Dear moderators,
To me, it appears that the dispute has crystallized into two contradictory assertions. YK seems to insist on the assertion that certain models, commonly sharing the feature of high climate sensitivity, performed with a very good score in CMIP5 and that (likely not the same) models, again characterized by their high climate sensitivity, totally failed in CMIP6.
I have not noticed that he has ever exactly listed the respective models and/or clearly explained in which aspects of the test they succeeded or failed. Although Yebo’s opponents seem to generally disagree with his assertion, it appears that due to the lack of specificity on Yebo’s side, they cannot specifically disprove this core claim either.
This is a point wherein, in my opinion, an intervention by moderators could resolve the dispute and prevent further endless exchange without any clear outcome. I do not think, however, that simply closing the thread without any conclusion is an optimal solution. Therefore, may I ask you if Yebo’s assertion has any real background? If so, could you perhaps explain the case in more detail?
And, finally, I still have a feeling that “Yebo” may be yet another embodiment of the multitroll that has perhaps simply not attracted your attention yet.
Maybe a comparison of the IP addresses from which he posted during his entire appearance on this website with the IP addresses used by already proven multitroll brands (as well as by “Neurodivergent”, “Jim” and “Data”) could help resolve my questions as well.
Thank you in advance and best regards
Tomáš
TK et al.: Yes, it would be nice if the endless obstinate exchanges would stop. But no one will desist, and the rights of the situation are buried.
Yebo is Obey spelled backwards and Kando sounds like “can do”, which should tell us what we need to know.*
I agree that the activity is not a useful contribution. The exchanges, to anyone not informed enough to judge the rights of the situation, appear to be equal. A sensible person would realize this is neither helpful nor useful and desist. This should penetrate the thickest of skulls, but what a hope!
*PS. We all know our future is fraught. Nobody benefits from fighting over just how dire, as that feeds those bent on deception and exploitation.
“Nobody benefits from fighting over just how dire, as that feeds those bent on deception and exploitation.”
False. Nearly impossible to design to a risk that is not acknowledged and/or not understood. You are correct in that the only framing that matters WRT an existential threat is the worst case, so, no, there is no point in arguing about it.
Tomas Kalisz, I think there is some truth in what you say. However, I’m fairly sure YB is claiming the high climate sensitivity CMIP5 models have no skill and are the scribblings of children because some predictions were poor and some of the underlying physics was poor.
I said they still have some overall skill, because they made some good predictions (the warming rate being one apparently) and skill is primarily a test of predictive ability not the underlying physics, and the most important physics was correct. And so the comparison with children scribbling is both exaggerated and insulting. I have several times acknowledged those models clearly did have some deficiencies.
YB ignores all this and claims I don’t address his issues despite the fact I just addressed them!
Nigelj says
1 Jan 2026 at 4:16 PM
comment included:
The warming spike in 2024 specifically has been well explained by a combination of AGW, El Niño, aerosol reductions (using the mainstream estimate of aerosol forcing, not Hansen’s high number), and the solar cycle; this website did an analysis on that and I posted a study by Carbon Brief on it written by Zeke Hausfather here:
https://www.carbonbrief.org/analysis-what-are-the-causes-of-recent-record-high-global-temperatures/
The unusually intense warming in 2023 has not been as well explained. Even when accounting for AGW, aerosols, El Niño and the solar cycle, it’s still not fully explained, as Zeke mentions. It has indeed been framed as within the bounds of natural variability, which isn’t a full explanation.
from Dc https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843342
Zeke’s article included – (the US National Oceanic and Atmospheric Administration’s (NOAA’s) multivariate ENSO index – show the 2023-24 event was much weaker than indicated in the Niño 3.4 dataset.)
Thanks — I largely agree with your framing, especially that 2023 remains anomalous in a way 2024 does not.
One point I’d push back on is the characterization (via Zeke) of 2023 as a “strong” El Niño. In the canonical sense, it wasn’t. ENSO strength is defined by tropical Pacific SST anomalies (e.g. Niño 3.4, MEI), not by the global mean temperature response. By those metrics, 2023–24 was weak-to-moderate and never approached 1997–98 or 2015–16. That was also how it was forecast.
A modest El Niño producing such an outsized GMST response is therefore the phenomenon requiring explanation, not evidence of ENSO strength. That same issue appears in Gavin’s attribution plots: the residual in 2023 remains large even after accounting for ENSO, solar, volcanoes, and mainstream aerosol forcing.
Carbon Brief itself notes declining cloud reflectivity and the possibility that aerosol and cloud effects are underestimated, with implications for a higher effective climate sensitivity (consistent with Hansen et al.). That represents a substantive physical uncertainty, not merely a statistical one — and it cuts directly against attempts to treat 2023 as simply “within variability” and move on.
In short, 2024 fits established attribution narratives reasonably well; 2023 still does not. Hansen may not have the complete answer, but he is asking the right physical questions, and his “acid test” remains consistent with temperature behaviour through 2025 to date. Despite time having passed, there is still no definitive synthesis — only partial, variable, and model-dependent attributions.
In Dec UV thread
Atomsk’s Sanakan says 2 Jan 2026 at 10:52 AM
Nigelj says 2 Jan 2026 at 2:41 AM
AS also provided a link to the warming projections in the last IPCC report, and even the worst case RCP8.5 scenario didn’t get to 3 degrees by 2050. So is the IPCC report crap as well?
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843372
Even the worst-case RCP8.5 scenario includes a 3 °C outcome roughly between 2040–2060, so the claim that it doesn’t reach 3 °C by 2050 is misleading. See my earlier summary here:
Data says 1 Jan 2026 at 10:45 PM
“Allow me to summarize and hopefully put this to bed once and for all.”
see https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843357
and here:
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843362
Readers can follow all my preceding comments and references leading up to this point.
Thanks for revealing your character, sockpuppet account, as per the other post. Again, you said:
But that’s a disingenuous fabrication since Forster 2025 was peer-reviewed. There’s even a tab at the paper’s link naming peer reviewers, showing their reviews, etc. You also left out other peer-reviewed sources that were cited to you, such as IPCC reports, Xu 2018, Dai 2023, and Hansen 2023. Good illustration of how these Data/Jim/Neurodivergent/Mo Yunus/… sockpuppet accounts try to deceive people.
So again: how you respond (or don’t respond) to being caught on this obvious disinformation should further reveal your character. You’ve already failed once. Try being honest, for once across your sockpuppet accounts.
And this is a response to more willful disinformation you posted on the other thread:
Re: “The crucial thing is: Notice what didn’t happen.
No one answered:
why high-ECS models passed CMIP5 skill tests“
Because, sockpuppet account, CMIP6 models with higher sensitivity (>4.7°C) did worse on those skill tests; they did worse than older models on projecting global mean surface temperature (GMST) and the ratio of warming vs. forcing (iTCR). This has been explained to you and Yebo Kando multiple times, including with cited evidence. But denialism means both of you never honestly address evidence. So your point is as worthless as asking ‘why does 2+2=5’, even after you’ve been shown that ‘2+2=4’. It was shown again above that improving cloud physics for CMIP6 models reduces their sensitivity down to what’s shown for older models, with accurate/skillful projected GMST and iTCR. That’s the case no matter how much you continue pretending and willfully ignoring published evidence on this.
Re: “why “skill” wasn’t falsified until CMIP6“
And like Yebo Kando you pretend points were not answered when they actually were. The fact that CMIP6 models with higher sensitivity were less skillful in projecting iTCR and GMST, does not change the fact that older models were skillful. They were less skillful because their sensitivity was higher than what the evidence supported and higher than older models.
Re: “how compensating errors are handled quantitatively“
You haven’t shown any compensating errors, sockpuppet. You’ve just made stuff up with no cited evidence, and then willfully ignored any published evidence showing you’re wrong. You’re disingenuously moving the goalposts from the fact that you’ve been cited papers showing that models accurately/skillfully projected GMST and iTCR, in virtue of getting other aspects right, like positive water vapor feedback, the Planck feedback, etc. Yet you never honestly address that research, such as Hausfather 2019, IPCC 2021, Supran 2023, Frame 2013, Lapenis 2020, etc. All you have is baseless whining about experts who publish evidence you don’t understand. Speaking of which…
Re: “Instead, we got:
appeals to authority,
tone correction,
admission of ignorance (“I don’t know the answer”),
and a request that critics defer to experts.“
So in addition to not understanding climate science, you don’t understand critical thinking and epistemology. You also contradicted what your sockpuppet account Jim said. You lack the knowledge of informed experts, just like a belligerent passenger who thinks they know better than the pilot how to fly the plane and whines that it’s an ‘appeal to authority’ when sensible passengers defer to pilots on flying planes. Maybe one day you’ll finally stop trolling and realize how you implicitly trust experts (i.e. appeal to authority) every day, such as in relying on them for the safety and structural integrity of buildings you enter, water you drink, etc. But I won’t hold my breath. Neither you nor Yebo Kando will ever know better than experts who publish evidence, including those confirming model skill and accuracy. Neither you nor Yebo Kando will ever honestly address the evidence-based explanations you’ve been given, nor honestly address the questions you’ve been asked on that. Stew on it.
Data: “Even the worst-case RCP8.5 scenario includes a 3 °C outcome roughly between 2040–2060, so the claim that it doesn’t reach 3 °C by 2050 is misleading. See my earlier summary here:”
Your summary does not provide any source, link, or copy and paste for your claim. The graph of the IPCC projections is here.
https://www.ipcc.ch/report/ar6/wg1/figures/summary-for-policymakers/figure-spm-8
Based on that, you can see even RCP8.5 does not get close to 3 degrees by 2050, even when taking into account the full uncertainty range, which is shaded in. So the German geophysical societies’ claim that warming could be 2–3 degrees by 2050 isn’t backed up by the IPCC.
Re: “Based on that, you can see even RCP 8.5 does not get close to 3 degrees by 2050, even when taking into account the full uncertainty range which is shaded in.“
Yet the sockpuppet pretends otherwise by assuming folks won’t notice that 2041-2060 is not the same as 2050, especially in a projection where slope increases with time:
So as you said, the IPCC projections contradict the ‘3°C by 2050’ projection Chuck originally mentioned, despite Data’s pretense.
Anyway, feel free to take Susan Anderson’s advice to ignore their trolling, especially when they refuse to be honest about even basic points like this.
Nigelj says
3 Jan 2026 at 9:17 PM
Data: “Even the worst-case RCP8.5 scenario includes a 3 °C outcome roughly between 2040–2060, so the claim that it doesn’t reach 3 °C by 2050 is misleading. See my earlier summary here:”
Note: These are scenario-assessed ranges, not predictions, and cannot be extrapolated from recent observed trends to a specific year, 2050. They represent a range of possible temperatures across the full period, not a single outcome or date.
They cannot even be constrained to 2045–2050.
Re: “Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C.“
That table clearly shows what the AR6 SSP scenarios present:
Mid-term (2041–2060), SSP5-8.5: very likely range 1.9 to 3.0 °C
Ref: AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
alt pdf on page 14
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
Unrelated references remain irrelevant, no matter how often they are repeated. If a citation does not address the specific definition or methodological point under discussion, it adds nothing.
References (previously cited):
1) IPCC AR6 WG1 – Figure SPM.8
https://www.ipcc.ch/report/ar6/wg1/figures/summary-for-policymakers/figure-spm-8
2) Archive magnified snapshot of SSP5-8.5 ensemble spread ( Figure SPM.8 (a) )
https://d1z9kwz1j57ckx.archive.is/nToNi/e519ee9c1a8e60b3d5af48d3148238bb390a924e.jpg
3) AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Data, not sure what you are saying there or how a projection range means single years are irrelevant. Surely if they fall in the range they are still relevant?
This is my take on the issue. The Summary for Policymakers Table SPM.1 that you found and linked (a good find) shows warming could in fact potentially reach 3 degrees by 2050 under the highest scenario, but Figure SPM.8, which I mentioned, shows it doesn’t get near 3 degrees by 2050, so they seem inconsistent.
But one thing is certain: reaching 3 degrees by around 2050 would require the very highest emissions scenario, and my understanding is that this has already been ruled out by the progress made in moving away from coal, so the claim of 3 degrees by 2050, or very soon after, still lacks credibility.
The sockpuppet isn’t worth it. They can’t even keep their story straight, a sign someone is trolling and not telling the truth. Hence why, for example, they contradict themselves on whether Forster 2025 is a peer-reviewed paper.
Reply to Data (2 Jan 2026 at 5:16 PM)
To clarify for the record: Figure SPM.8 and Table SPM.1 do not make predictions. They show scenario-conditioned ensemble ranges. Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C. This is a statement about possible outcomes within the ensemble, not an extrapolation from recent observations. Conflating these categories is a methodological error.
References (previously cited):
1) IPCC AR6 WG1 – Figure SPM.8
https://www.ipcc.ch/report/ar6/wg1/figures/summary-for-policymakers/figure-spm-8
2) Archive magnified snapshot of SSP5-8.5 ensemble spread ( Figure SPM.8 (a) )
https://d1z9kwz1j57ckx.archive.is/nToNi/e519ee9c1a8e60b3d5af48d3148238bb390a924e.jpg
3) AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Re: “Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C.“
No, it doesn’t. I know that since I’m the one who originally posted the 2nd link you listed. No projection here gets to 3°C by 2050, the original end-year under discussion:
https://d1z9kwz1j57ckx.archive.is/nToNi/e519ee9c1a8e60b3d5af48d3148238bb390a924e.jpg
And you again avoided your fabrication being exposed on Forster 2025, where you falsely acted like it was not peer-reviewed. These will be my final responses to you on these sub-threads since I’ve run out of patience with disingenuous fabricators. I think it’s apparent to informed readers that your claims are not to be trusted, especially when your disinformation on this 3°C example is so obviously wrong. No doubt you’ll make new sockpuppet account(s) soon, as you’ve done before when folks see through your older accounts.
Reply to Data
Re: “Under SSP5-8.5, the assessed mid-century range extends up to ~3 °C.“
3) AR6 WG1 Table SPM.1 (2041–2060 assessed ranges)
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
and on page 14
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
and presented by Carbon Brief
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Note: These are scenario-assessed ranges, not viable predictions, and cannot be used to extrapolate from recent observed trends into the future.
That table clearly shows what the AR6 SSP scenarios present:
Mid-term (2041–2060), SSP5-8.5: very likely range 1.9 to 3.0 °C
Summary in plain language
1) AR6 SPM Figure SPM.8a does show a spread of model outcomes for SSP scenarios, including higher warming ranges above ~2 °C under SSP5-8.5.
2) The SPM graph and table do not “predict” warming nor state exact dates for thresholds like 3 °C.
3) Atomsk’s predictions about specific timings (e.g., “for ~2°C by 2045-2050”) are interpretations and extrapolations, not text from the IPCC reports.
More broadly, as discussed elsewhere, widening CMIP6 ensemble spread relative to CMIP5 underscores why these scenario-assessed ranges should not be treated as date-specific predictions.
–
Atomsk’s Sanakan says
5 Jan 2026 at 3:22 AM
“These will be my final responses to you on these sub-threads”
If only. We live in hope.
An alternative reference to the original SPM.1 Table (page 14) showing mid-term, 2041–2060:
Table SPM.1 | Changes in global surface temperature, assessed from multiple lines of evidence, for selected 20-year periods and the five illustrative emissions scenarios.
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
Note: These are scenario-assessed ranges, not viable predictions, and cannot be used to extrapolate from recent observed trends into the future.
in Re to MA Rodger, 1 Jan 2026 at 1:31 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843336
Hallo MA,
Unfortunately, what really matters cannot be found anywhere in such PR journalism. Namely, it is the following conclusion of the article about DAC thermodynamics:
“it would be better to use this electrical energy to avoid these emissions, rather than using DAC to later remove what has already been emitted.”
Please note that the author of the cited “thermodynamic” article
https://andthentheresphysics.wordpress.com/2025/12/17/direct-air-capture/
arrived at this conclusion even though (s)he omitted another, “kinetic” aspect of the DAC issue, namely that for a finite throughput of such a separation process, we must necessarily handle huge volumes of the starting, very dilute mixture. Due to natural limitations on mass and energy flow rates, you will unavoidably need huge (and commensurately expensive) equipment.
This kinetic aspect is, from the practical point of view, even more important than the thermodynamic aspect, because economically, the costs of the necessary equipment (and of the necessary “excess” energy) will always be orders of magnitude higher than the expenses calculated merely on the basis of the thermodynamic limit (Gibbs energy of mixing).
Although the kinetic aspect may look more complex and/or less “scientific” than the thermodynamic aspect, I think that scientists and journalists should explain both. I am afraid that only then will it become clear why arranging ANY technical measure for direct carbon dioxide removal from the atmosphere, while energy is still produced by burning carbon-based fuels, is a waste of money.
Let me repeat again: it is because replacing the fuel with another energy source, or exploiting a natural carbon dioxide removal (CDR) method, will surely be cheaper than removing the produced carbon dioxide from the air by technical means. The reason why separating something from its dilute solution is so expensive lies, however, predominantly in the practical “kinetic” aspect, not in the value of the respective Gibbs energy of mixing.
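To give a sense of the scale of this air-handling problem, here is a minimal back-of-the-envelope sketch in Python (my own, not taken from any of the cited articles). The 430 ppmv concentration is today’s ambient value; the capture fractions are purely hypothetical illustrations:

# How much ambient air must pass through a DAC contactor per tonne of CO2,
# assuming 430 ppmv CO2 and a hypothetical single-pass capture fraction.
M_AIR = 28.97e-3    # kg/mol, mean molar mass of dry air
M_CO2 = 44.01e-3    # kg/mol
X_CO2 = 430e-6      # mole fraction of CO2 in ambient air
RHO_AIR = 1.2       # kg/m3, near-surface air density (rough value)

def air_per_tonne_co2(capture_fraction):
    """Return (tonnes of air, millions of m3 of air) per tonne of CO2 captured."""
    co2_mass_fraction = X_CO2 * M_CO2 / M_AIR                   # ~6.5e-4 kg CO2 per kg air
    air_mass_t = 1.0 / (co2_mass_fraction * capture_fraction)   # t air per t CO2
    air_volume_millions_m3 = air_mass_t * 1000 / RHO_AIR / 1e6
    return air_mass_t, air_volume_millions_m3

for frac in (1.0, 0.6, 0.3):   # hypothetical capture fractions
    mass_t, vol = air_per_tonne_co2(frac)
    print(f"capture fraction {frac:.0%}: ~{mass_t:,.0f} t of air, ~{vol:.1f} million m3 per t CO2")

Even with complete capture, more than 1,500 tonnes (over a million cubic metres) of air must be moved through the equipment for every tonne of CO2; at a more realistic partial capture the figure rises towards the roughly 2,500 t of air per tonne of CO2 mentioned later in this thread.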
As regards the PR article
https://carboncredits.com/solar-energy-developer-secures-415-million-to-power-the-worlds-largest-direct-air-capture-plant/ ,
I am pretty sure that if the Stratos plant is indeed ever operated, the “solar” electricity from the $415 million Swift Air Solar project will form only a part of the entire energy supply (maybe even quite a small one). I do not suppose that the Stratos plant (the last cost figure for it, from 2023, being USD 1.3 bln) will be operated only when the sun shines in Ector County, or, should that be so, that it will be capable of reaching the projected capacity.
I can only add the answer of Google search engine to the following question:
“How many years is the Stratos DAC plant expected to be operable / what is the expected final price for one ton (1000 kg) of the captured carbon dioxide?”
It said:
“Oxy’s Stratos DAC plant aims for full operation by late 2025, with a lifespan tied to technology advancements and carbon markets, but specifics on its total years of operation aren’t fixed, while the price per ton of captured CO2 is highly variable, with current contracts for credits sold at significant prices (e.g., $500-$1000/ton range) but aiming lower long-term, with projections around $200-$600/ton for the coming years.”
I very much doubt they will ever bring their figures significantly below $1000/ton, which is, according to an article by House et al.
https://www.pnas.org/doi/epdf/10.1073/pnas.1012253108 ,
a value that can be derived from a comparison of best available technology for separation of air components with technologies presently used for purification of flue gases.
Personally, I consider the high numbers based on this critical approach much more reliable than the flood of optimistic numbers presented (alongside House et al, but, unfortunately, without any suggestion as to why they should be correct and House et al overly pessimistic) in a more recent review article by Fasihi et al
https://www.sciencedirect.com/science/article/pii/S0959652619307772 ,
I am therefore somewhat sceptical about the conclusions of the said review article, which read:
• CO2 Direct Air Capture could be a potential climate change mitigation solution.
• Direct Air Capture technologies are already commercialised.
• Massive implementation would significantly reduce the CO2 capture costs.
• CO2 capture costs below 50 €/tCO2 are achievable by 2040.
Greetings
Tomáš
Tomas: “Let me repeat again: It is because replacing the fuel with other energy source or exploiting a natural carbon dioxide removal (CDR) method will be surely cheaper than removing from the air the produced carbon dioxide by technical means.”
While true for the majority of our sources of emissions, it may not be true for the last, say, 10% emissions on the way to net zero. This is the “highest hanging fruit” – those emissions that have been left to the end precisely because we can’t reduce them or the reduction is prohibitively expensive.
I am not sure what you mean by “natural carbon dioxide removal (CDR)”:
– if natural carbon sinks, then they are not a part of net zero (we count on them to reduce atmospheric CO2 after we stop adding new CO2);
– if you mean nature-inspired methods, then yes, they should be looked at in the first place, since they may have collateral benefits: reforesting deforested areas not only takes up CO2 and protects carbon stored in soil, but also protects biodiversity and might produce additional cooling by increasing cloudiness; regenerative agriculture/biochar improving soil carbon storage also reduces the environmental impacts of industrial agriculture; spreading glacial flour from Greenland or basalt ground in situ on fields, in addition to CO2 uptake, also increases crop yields.
However, all these may not be enough, or implementation of some may be too expensive – then there would be space for the most effective of the technological DAC methods.
TK: “ The reason why separating something from its dilute solution is so expensive is, however, predominantly in the practical “kinetic” aspect, not in the value of the respective Gibbs energy of mixing.”
Except that most of the methods – all the nature-inspired ones and most of the technological DACs – do NOT physically concentrate CO2 out of the air,
hence they incur neither the thermodynamic nor your “kinetic” energy costs of the CO2 concentration.
Sure, they will have various other costs, applicable to their particular method – but those cannot be quantified from the energy cost of the process they don’t use.
Therefore, your scepticism toward “CO2 capture costs below 50 €/tCO2 are achievable by 2040” would need another justification.
in Re to Piotr, 4 Jan 2026 at 1:25 AM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843447
Hallo Piotr,
Indeed, by CDR methods based on “natural” processes I meant, e.g., tree planting. Instead of the thermodynamics of gas mixing, what applies here is the thermodynamics of CO2 reduction to the reactive intermediates of biochemical sugar synthesis. In fact, the Gibbs energy of this process is significantly more positive than the Gibbs energy for separating gaseous components from each other; however, we do not need to worry about it, because plants exploit solar energy for this purpose.
Moreover, in forests, the sequestration process builds the necessary “equipment” (trees) in parallel, so that we may not need to pay for the equipment at all. Finally, the excess energy necessary for a continuous CO2 supply (my “kinetic” component of the overall energy demand) is in this case also supplied “for free”, by wind. Therefore, my scepticism about DAC does not apply to carbon dioxide removal by tree planting. This is in accordance with both references that I cited, because House as well as Fasihi deal solely with DAC in its usual sense (CDR processes exploiting technical means).
Greetings
Tomáš
Tomas Kalisz: “Therefore, my scepticism about DAC does not apply for carbon dioxide removal by tree planting.”
My point was that scepticism about DAC achieving its price targets cannot depend
a) on the blogger’s calculation of the Gibbs free energy of mixing – since the (great?) majority of DAC methods bind CO2 by reacting with it chemically, not by physically reversing mixing, which is what the blogger’s number applies to;
b) on your “kinetic” cost argument, which may be applicable only to the extent that they have to supplement wind for the continuous air supply, plus any other energy required for operation, particularly for removing the absorbed CO2 from the absorbent molecules (assuming they want to recycle the absorbent instead of just burying it), typically done by heating the absorbent medium – the cost of which may be reduced if combined with thermal power plants or industrial sources of waste heat.
But without knowing how big these costs are, you don’t have enough data to guess whether “CO2 capture costs below 50 €/tCO2 are achievable by 2040” or not.
In Re to Piotr, 5 Jan 2026 at 2:26 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843495
Hallo Piotr,
I think that the great majority of articles discussing carbon dioxide removal (CDR) from the ambient atmosphere by technical means, usually shortened to “DAC”, discuss chemical or physical-chemical processes (typically absorption in a solvent or chemical reagent having an affinity towards CO2, or physical adsorption on a solid with a high specific surface) that presuppose recycling of the working medium by periodic CO2 release in concentrated form. The thermodynamics reported by Andrew Dessler applies equally to all these cases, and the same applies to the non-thermodynamic contributions to the overall costs that I labelled the “kinetic” process aspect.
As regards CDR processes using an absorption medium that is not recycled but, instead, sequestered together with the captured CO2, the primary focus is on alkaline rocks that do not require any treatment other than mining, milling and proper spreading, so that their natural weathering by reaction with CO2 speeds up by about six orders of magnitude. Here we exploit natural “free bases”, produced by geothermal energy / volcanism, to deliver the negative Gibbs energy required for spontaneous CO2 binding, and, by spreading the rock in the environment, we resolve the “kinetic aspect” of the process.
You are right that this approach is smarter than the seemingly more straightforward “direct” DAC, because it replaces handling huge amounts of very dilute CO2 in air with handling a several-orders-of-magnitude smaller amount of the necessary base, concentrated in the rock. When CO2 does not come to the rock, the rock will come to CO2.
Nevertheless, the joy is not completely free. The following figures came from a short Perplexity search; I have not checked the cited sources for correctness.
It appears that for a sufficiently high weathering speed – so that the CO2 absorption process is significantly quicker than the natural equilibration of the Earth energy imbalance (EEI) after “net zero” (which, without any further human intervention, is expected to take several millennia) – we need to mill the absorbent rock to quite a fine powder: particle size below 0.1 mm to secure full weathering within 1000 years, or below 0.01 mm to enable full weathering within 100 years.
A typical equivalent of a sufficiently abundant basic rock necessary for absorbing 1 ton of CO2 appears to be about 2 tons. For sequestering 1 Gt CO2, we thus need to mine, crush, mill and spread about 2 Gt of rock. For basalt with a bulk density of about 2800–3000 kg/m3, this corresponds to a volume of about 0.7 km3. If we consider loading the selected land with 2 kg/m2 of the obtained material (a layer of the order of 1 mm), the powdered rock will have to be spread over an area of about 1 000 000 km2.
Milling 1 t of rock to 0.1 mm grain requires, according to the literature, about 20–40 kWh of energy, and reducing the grain size below 0.01 mm can require a further 80–200 kWh/t. Provided that milling to the required grain size represents the main share of the total energy demand, sequestering 1 Gt CO2 by ERW may consume something between 50 and 500 TWh, which still sounds significantly more reasonable than the figures for any “pure” DAC process comprising separation of neat CO2.
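The arithmetic above can be checked with a few lines of Python (my own sketch; the inputs are the figures quoted from the Perplexity search, and the second loading value of 3 kg/m2 is just an illustrative variation showing how sensitive the treated area is to that choice):

# Rough reproduction of the enhanced-rock-weathering (ERW) estimate above.
GT = 1e12                     # kg per gigatonne
rock_per_co2 = 2.0            # t of basic rock per t of CO2 (quoted)
rock_mass_kg = 1.0 * GT * rock_per_co2   # ~2 Gt of rock for 1 Gt CO2
rock_density = 2900           # kg/m3, basalt (quoted range 2800-3000)

print(f"rock volume: ~{rock_mass_kg / rock_density / 1e9:.2f} km3")   # ~0.7 km3

for loading in (2.0, 3.0):    # kg of rock powder per m2 of treated land
    area_km2 = rock_mass_kg / loading / 1e6
    print(f"loading {loading} kg/m2 -> area ~{area_km2:,.0f} km2")    # ~1,000,000 / ~670,000 km2

# Milling energy, assuming milling dominates the total demand (quoted figures):
# 20-40 kWh/t to reach 0.1 mm, plus a further 80-200 kWh/t to get below 0.01 mm.
rock_mass_t = rock_mass_kg / 1000
low_twh = rock_mass_t * 20 / 1e9             # coarse milling only, low end
high_twh = rock_mass_t * (40 + 200) / 1e9    # fine milling, high end
print(f"milling energy: ~{low_twh:.0f} to ~{high_twh:.0f} TWh per Gt CO2")   # ~40 to ~480 TWh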
I think that in this case, overall costs of about (or even below) 50 USD or EUR per 1 t of sequestered CO2 could indeed be achieved. We have to take into account, however, that the sequestration will not be instant, because this cost estimate applies to sequestering the said 1 t of CO2 on a timescale of roughly 100 to 1000 years. In the present example, the “kinetic aspect” of the separation process shows up more in the prolonged process duration than in the overall energy consumption and/or the costs of the necessary equipment.
Greetings
Tomáš
Tomas Kalisz: “The thermodynamics reported by Andrew Dessler applies equally for all [DAC technologies]”
How? His calculations are of the energy of (reverse) mixing – which would be applicable to the processes using this (reverse) mixing – i.e. physical concentration of 400 ppm CO2 to pure CO2.
Most (all?) DAC technologies do not preconcentrate ambient CO2 to pure CO2 before running it over the absorbent, hence the energy of (reverse) mixing is irrelevant to them – it’s the energy of the chemical reaction that matters, and this would be different for different absorbents (as it depends on the affinity of the absorbent for CO2).
So not only do you not get a single value for it – all the individual values will be DIFFERENT from the single value (“500 kJ/kg of CO2”) calculated for the physical concentration of CO2 from 400 ppm to pure CO2 that your Dessler, the Then There is Physics guy and his YouTube inspiration calculate.
You can’t disparage DAC methods by assigning to them the high thermodynamic cost of a process they… don’t use.
in Re to Piotr, 5 Jan 2026 at 2:26 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843495
Hallo Piotr,
Thank you for raising your objections again. I hope that the following explanation helps show that, in fact, both points a) and b) are usually important – more generally than many people think.
To your point a)
The Gibbs energy of mixing is the minimal work you must spend if you would like to separate a neat component from its dilute mixture, irrespective of how you accomplish this task. It does not matter whether you use chemical, physical or perhaps biological processes, nor in which order you perform the individual operations. It is a fundamental principle of thermodynamics that the route between the initial and final state of the system does not matter – what counts is only the difference between the two states.
In other words, if you use a powerful concentrated chemical agent that binds CO2 very strongly, like calcium oxide, you may indeed easily separate CO2 from a significant volume of air and then release it in neat form by calcining the formed calcium carbonate at a temperature above 900 °C. In this case, the necessary Gibbs work for the desired separation has been done at the expense of the even much higher Gibbs energy of CaCO3 formation that is spontaneously released during the separation step. You have not “saved” or in any way circumvented the unavoidable “payback” of the CO2-air mixing Gibbs energy in this process. You had to pay the entire Gibbs energy of CaCO3 formation back during its decomposition, with a huge interest due to significant energy losses in the calcination furnace, while this Gibbs energy alone was much higher than the Gibbs energy of CO2 mixing with air that you intended to save.
In still other words, you are perfectly right that the respective energies spent on various chemical DAC processes will significantly differ from each other, depending on the reagent used. All these energies will be, however, significantly higher than the theoretical Gibbs energy of CO2 mixing with air. The high affinity towards CO2 (commensurate with the high Gibbs energy of the respective reaction) is the necessary driving force of the CO2 extraction process and the reason why all these chemical DAC processes work at all.
By the way, the above-mentioned circular calcium carbonate formation and its thermal decomposition is the basis of the billion-dollar STRATOS process. In combination with the reasons explained below, you may be pretty sure that this plant is a perfect machine for converting money (whether US tax dollars, carbon credits or whatever other investments) into thermal energy of the Earth’s atmosphere, rather than for anything else.
As you correctly pointed out, the only way you can (seemingly) avoid spending the Gibbs energy of mixing (by letting someone else pay it for you) is to desist from releasing the captured component (in our case CO2) in its neat form.
If you obtain from someone a suitable CO2-binding agent ready to use, it will be energetically “for free” for you. If we have alkaline rocks, provided in this reactive form as a gift of Nature, the only remaining problem is how to bring them into contact with atmospheric carbon dioxide.
To your point b)
If we simply wait millions of years, Nature will do the necessary work for us. If we decide to proceed quicker, we must pay for it with energy and with investments into equipment (which, basically, also represent various forms of spent energy). In the example with enhanced rock weathering, I tried to show that avoiding active handling of exorbitant volumes of air, and replacing it with handling only the much smaller mass and volume of the concentrated rock, is a smart idea. Nevertheless, if you insist that your separation process has to be complete within years or decades, you will have to mill all your rock to nanoparticles and thus skyrocket your expenses. Otherwise, you still have to count on processing on a timescale of centuries.
In other words, although it may not look as obvious as the thermodynamic limits on the minimal necessary energy, the kinetic limits on the throughput of separation processes (which in the end also transform into energy – or money as a representation of energy) are also very important. In the separation of very dilute components, they regularly play a more important role than the thermodynamic aspect.
Greetings
Tomáš
in Re to Piotr, 11 Jan 2026 at 6:23 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843774
Hallo Piotr,
I cited Andrew Dessler merely because I was aware of his explanation of the Gibbs energy of CO2 mixing with air and thought that it is more consistent and more comprehensible than that provided in the Then There is Physics article.
I must admit that I did not find the discussions about the “Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate” particularly attractive and therefore completely missed that Dr. Dessler played a leading role in the critique of this document. Only thanks to your remark did I become curious about who “my” Andrew Dessler actually is, and realized
https://en.wikipedia.org/wiki/Andrew_Dessler
that he is a prominent climate scientist.
Hereby, I would like to assure you that it was not my intention to argue from his authority (of which I was unaware). I cited his blog because I liked his clarity.
Greetings
Tomáš
Thomas Kalisz says:”The Gibbs energy of mixing is minimal work you must spend if you would like to separate a neat component from its diluted mixture, irrespective how you accomplish this task. It does not matter if you use chemical, physical or perhaps biological processes, nor in which order you perform individual operations. It is a fundamental principle of thermodynamics that the route between initial and final state of the system does not matter – what only does count is the difference between both states.”
This is exactly what I thought. The difference between industrial DAC and planting trees is with DAC we have to provide all the energy and with trees photosynthesis does some of the work naturally. And quite quickly. The problem being there is only limited land.
But I agree with Piotr that we have to use some form of CO2 extraction to deal with the last 10% of emissions that would be very costly or impossible to stop. Perhaps a combination of different approaches is the best thing.
Tomas Kalisz 12 Jan 1:31 PM “ Re to Piotr, 5 Jan 2:26 PM To your point a) ”
Huh? Why do you reply to my “Piotr, 5 Jan 2:26 PM”, to which… you already replied on 6 Jan? We are well past that – on Jan 11 I already replied to your Jan 6 reply, namely to
your claim
TK 6 Jan : The thermodynamics reported by Andrew Dessler applies equally for all {DAC technologies]
I have proven it false:
– “the single value (“500 kJ/kg of CO2”) for the physical concentration of CO2 from 400ppm to pure CO2 (that your Dessler, the Then There is Physics guy and his YouTube inspiration calculate) are inapplicable to the most (all?) DAC method that do not concentrate CO2 from 400ppm to pure CO2, [but react with the ambient CO2 by chemical or physical absorbents]”
For such reactions only the energy of reaction is relevant, NOT the energy of (reverse) mixing from 400ppm to pure CO2. Sure, the element of the entropy would be included in this energy of reaction (via the ratio of the concentrations of the products/reagents)
but the overall Delta G (kJ/kg CO2) won’t be = 500 kJ/kg of CO2”.
This is not to say that the energy costs of various DAC techniques are high or low –
all I have been saying is that one can’t dismiss DAC using thermodynamic costs that are inapplicable to it (“500 kJ/kg of CO2”).
Using an inapplicable number is worse than using no number at all – because it is false knowledge – it suggests QUANTITATIVE ACCURACY WHERE THERE IS NONE.
And as I wrote to you a couple posts back:
P: “Therefore, your scepticism toward “CO2 capture costs below 50 €/tCO2 are achievable by 2040” would need another justification [than Dessler et al. claim of “500 kJ/kg of CO2”].
Nigel: “This is exactly what I thought. The difference between industrial DAC and planting trees is with DAC we have to provide all the energy and with trees photosynthesis does some of the work naturally.”
I’d suggest you might re-examine this conclusion. DAC binds CO2 from the air to absorbents. The absorbents chosen are such that the reaction is spontaneous – which means that you don’t have to provide external energy for it to happen.
For the thermodynamic analysis of the process – the relevant quantity is the energy of reaction, NOT the energy of physical mixing.
That is not to say that DACs do not require human provided energy – they use energy but for DIFFERENT processes and in DIFFERENT amounts than the “500 kJ/kg of CO2” calculated by Tomas’s sources (Dessler, the Then There is Physics guy and his YouTube inspiration).
Applying the 500 kJ/kg of CO2 from the calculation of the Gibbs free energy of physical (reverse) mixing to DAC is akin to 1930s aeronautical engineers applying the equations used to model large, fixed-wing aircraft to insects and concluding that insect flight was impossible. The difference is that anybody who saw a bee knew that those engineers must have used inapplicable equations – something not immediately obvious to lay people who see DAC disparaged under the authority of “the fundamental rules of thermodynamics”.
If you want more detail – see my post on 12 Jan 10:04 PM. And my overall conclusion there: “Using an inapplicable number is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“.
in Re to Piotr, 13 Jan 2026 at 5:57 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843857
and 12 Jan 2026 at 10:04 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843822
Hallo Piotr,
I apologize for referring to an earlier post – I mistakenly put January 5 instead of January 11.
As regards the value of the mixing Gibbs energy, 432 kJ/kg CO2 (the work necessary for the reversible separation of 1 kg of neat CO2 at standard temperature and pressure from ambient air containing 430 ppm CO2), it corresponds to 19 kJ/mol. This is exactly the difference between the Gibbs energy of CaCO3 formation from CaO and CO2 at standard conditions (-131 kJ/mol) and the Gibbs energy of CaCO3 formation from CaO and CO2 at the atmospheric concentration of 430 ppm (-112 kJ/mol). In a cyclic process, this cost never disappears; it is just hidden inside the chemical potential of the respective spontaneous reaction.
131 kJ/mol for CaCO3 decomposition would be sufficient only if we could run this reaction at normal temperature in a reversible electrochemical cell. In the thermal process used in the Stratos plant, the enthalpy of the reaction applies instead, which is 179 kJ/mol. This corresponds to ca 4 MJ/kg CO2. Thus, if we exploit CaO as the sorbent for CO2 separation, we will unavoidably need roughly nine times more energy than the theoretical thermodynamic limit.
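For anyone who wants to reproduce the 19 kJ/mol and ~430 kJ/kg figures, here is a minimal ideal-gas sketch in Python (my own; the exact kJ/kg value shifts slightly with the assumed temperature):

# Minimum work to extract 1 mol of neat CO2 from air at mole fraction x
# (the opposite of the per-mole Gibbs energy of mixing): w_min = -R*T*ln(x).
import math

R = 8.314        # J/(mol K)
x_co2 = 430e-6   # ambient CO2 mole fraction
M_CO2 = 0.04401  # kg/mol

for T in (293.15, 298.15):    # 20 C and 25 C
    w_min = -R * T * math.log(x_co2)   # J/mol
    print(f"T = {T:.0f} K: {w_min/1000:.1f} kJ/mol = {w_min/M_CO2/1000:.0f} kJ/kg CO2")
# ~18.9-19.2 kJ/mol, i.e. roughly 430-440 kJ/kg, consistent with the figures above.

The same 19 kJ/mol reappears as the difference between the two quoted CaCO3 formation Gibbs energies, -131 - (-112) = -19 kJ/mol.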
My primary point, however, was that the costs of handling at least 2500 t air for each 1 t of separated CO2 will, at required throughputs of the process, very likely significantly exceed the costs of the thermodynamically unavoidable energy required for the separation step alone.
Greetings
Tomáš
Tomas Kalisz: “I apologize for referring to an earlier post – I put mistakenly January 5 instead January 11“.
The _dates_ were correct – what was perplexing was why you had answered my Jan. 5 post TWICE (first on Jan 6, and then again on Jan 12).
TK: the value of mixing Gibbs energy [=] 19 kJ/mol. It is exactly the difference between Gibbs energy of CaCO3 formation from CaO and CO2 at standard conditions (-131 kJ/mol) and Gibbs energy of CaCO3 formation from CaO and CO2 having atmospheric concentration 430 ppm (-112 kJ/mol).
You have just unwittingly proved my point – Dessler’s energy of mixing (+19 kJ/mol) is already INCLUDED in the energy of reaction (-112 kJ/mol), therefore using +19 kJ/mol to QUANTIFY the energy (in)efficiency of DAC is not only inappropriate, but actively misleading:
Piotr Jan 12 and Jan 13: “Using an inapplicable number [+19kJ/mol, instead of -112kJ/mol] is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“. As is your defense of people doing it: TK: “ The thermodynamics reported by Andrew Dessler applies equally for all these cases”
in Re to Piotr, 17 Jan 2026 at 7:17 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844005
Hallo Piotr,
Thank you for your reply! I was afraid that my post got buried under the pile of “Data” production and responses thereto, so I am happy that you found it and provided your feedback.
As regards my comparison of the Gibbs energies of CaCO3 formation under standard conditions and under ambient CO2 partial pressure, I introduced it deliberately, with the aim of supporting what I strived to express already before – namely that the opposite of the Gibbs energy of CO2 dilution into ambient air is the MINIMAL work that you MUST unavoidably carry out if you wish to extract the CO2 from the air back in its undiluted state, irrespective of the process used. Furthermore, I hope that in parallel, this example also showed that if you use such a spontaneous chemical process, the minimal energy thermodynamically required for the CO2 release becomes several times higher than the ultimate thermodynamic limit of 19 kJ/mol.
Nevertheless, I would like to repeat that the core of my objections, which apply to all DAC processes characterized by CO2 separation in neat form, was not these thermodynamic hurdles but the kinetic aspect, about which both the “ThenThere’sPhysics” article
https://andthentheresphysics.wordpress.com/2025/12/17/direct-air-capture/
as well as Andrew Dessler
https://www.theclimatebrink.com/p/thermodynamics-of-air-capture-of
remained silent.
Greetings
Tomáš
in addition to my post of 18 Jan 2026 at 3:46 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844046
Hallo Piotr,
To prevent further possible misunderstandings: the Gibbs energy change is defined as the energy that must be SUPPLIED to a system to bring it REVERSIBLY from a state A to a state B. The Gibbs energy change for thermodynamically spontaneous (exergonic) processes is thus by definition negative, whereas for the opposite (endergonic) processes it is positive.
In my example, if we could carry out the CaCO3 formation in a reversible electrochemical cell, we could extract from it 131 kJ of electrical energy per mol of CaCO3 formed / CaO and CO2 consumed, provided that our CaO reacts with neat CO2 at standard temperature and pressure.
If we switch this cell to ambient air instead of neat CO2, we will be able to extract only 112 kJ of electricity for the same extent of reaction.
If we had a machine that could generate work by reversible CO2 dilution with CO2-free air to a final concentration of 430 ppm, we could obtain 19 kJ of energy for each mol of CO2.
If we extract 1 mol of CO2 from ambient air by its reaction with CaO, the released 112 kJ of energy (“-112 kJ/mol”) becomes dispersed in the environment as waste heat. As soon as you recycle the CaO and release the CO2 for the intended storage, you will have to spend the same energy (“+112 kJ/mol”) in the form of electricity consumed in our hypothetical reversible electrochemical cell, provided that you release the CO2 back into the ambient air. If you release CO2 as a neat gas at standard conditions instead, you will need to supply our reversible cell with +131 kJ of electricity per mol of CO2.
As no such reversible electrochemical cell is available and we must decompose CaCO3 thermally in a kiln, the high positive entropy of this decomposition becomes useless. We will therefore need to supply the entire enthalpy of the CaCO3 decomposition, +179 kJ per mol of CO2, instead of the Gibbs energy. The consumed energy cannot be re-used or recycled in any way, because we spent it to break the strong chemical bonds spontaneously formed when CaO reacted with CO2. The only energy that can be partly recuperated is the excess heat (additional thermal energy that was not consumed for breaking chemical bonds) remaining in the hot CO2 and CaO leaving the kiln.
You see that the minimal energy of +179 kJ spent for each mol of separated CO2 in the cyclic process based on the spontaneous CO2 reaction with CaO is significantly higher than the +19 kJ/mol thermodynamic limit for a reversible reversal of the spontaneous dilution of neat CO2 in ambient air. As an analogous analysis can be done for any separation process based on a spontaneous chemical reaction, I hope it becomes clearer now why 19 kJ is indeed the UNIVERSAL lower limit for the energy that MUST be spent if we wish to separate CO2 as a neat gas at standard conditions from the ambient air.
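For readers following the bookkeeping, the same accounting can be written down in a few lines of Python (my own sketch; the kJ/mol values are the ones quoted above, and the only thing the script does is check that they are mutually consistent):

# Bookkeeping of the CaO/CaCO3 cycle described above (quoted values, kJ per mol CO2).
dG_form_pure    = -131.0  # CaO + CO2 (neat, standard conditions) -> CaCO3
dG_form_ambient = -112.0  # CaO + CO2 (430 ppm in air) -> CaCO3
dH_calcination  = +179.0  # thermal CaCO3 -> CaO + CO2 in a kiln (enthalpy, not Gibbs energy)

# The difference between the two formation Gibbs energies should reproduce the
# ~19 kJ/mol minimum work of extracting CO2 from ambient air.
w_min = dG_form_ambient - dG_form_pure
print(f"implied minimum separation work: {w_min:.0f} kJ/mol")

# Energy that must actually be supplied per mol of CO2 released in neat form:
print(f"hypothetical reversible cell: {abs(dG_form_pure):.0f} kJ/mol")
print(f"thermal calcination (Stratos-type): {dH_calcination:.0f} kJ/mol, "
      f"i.e. roughly {dH_calcination / w_min:.0f}x the {w_min:.0f} kJ/mol limit")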
Greetings
Tomáš
Tomáš Kalisz ( and also Piotr)
TK says: “namely that the Gibbs energy of CO2 dilution to the ambient air is the MINIMAL work that you MUST unavoidably carry out if you wish to extract the CO2 from the air back in its undiluted state”
Correct and I’ve generally gone along with your various explanations on the issue. My initial impression was that all CO2 extraction processes such as industrial DAC, enhanced rock weathering, and tree planting require this minimal work, just achieved in different ways.
But I think Piotr is saying that industrial DAC, using fans and chemical substrates to bond with CO2 in the air, doesn’t actually reduce CO2 to its undiluted state. It is bonding CO2 in the air with a substrate chemical, so the entire Gibbs free energy argument doesn’t apply, and the energy required is really just a summation of the processes involved. And this applies to other forms of extracting CO2 out of the air. I hope I’m interpreting Piotr correctly.
In Re to Nigelj, 19 Jan 2026 at 2:44 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844090
Hallo Nigel,
Two weeks ago, on 6 Jan 2026 at 12:51 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843544 ,
I posted a quite detailed analysis of enhanced rock weathering (ERW), which is, besides tree planting, basically the only carbon dioxide removal (CDR) process that does not rely on artificial CO2 extraction from the atmosphere in neat form (direct air capture, DAC).
I tried to clarify that for all DAC processes, irrespective of whether they are chemical or purely physical, 19 kJ/mol CO2 (the opposite value of the Gibbs energy of dilution of neat CO2 with CO2-free air to the present atmospheric CO2 concentration of about 430 ppm) applies as the lower limit for the energy necessary for such an extraction.
I did so repeatedly and in various modifications because Piotr seemed to insist on the opinion that for DAC processes based on a chemical extraction step, this lower limit does not apply.
See e.g. his reply of 11 Jan 2026 at 6:23 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843774 ,
asserting
“Most (all?) DAC technologies do not preconcentrate ambient CO2 to pure CO2 before running it over the absorbent, hence energy of (reverse) mixing is irrelevant to them – it’s the energy of chemical reaction that matters – and this would be different for different absorbents (as it’s dependent on the affinity of absorbent to CO2).”
I repeatedly tried to explain that although this assertion is perfectly correct, the conclusions drawn by Piotr therefrom, summarized in his sentence
“You can’t disparage DAC methods by assigning to them the high thermodynamic cost of a process they… don’t use.”
are not.
It is because
(i) all these DAC techniques exploiting spontaneous CO2 sorption have their own thermodynamic limits that are significantly HIGHER than the lower limit derived for CO2 dilution with air, and thus require (significantly) MORE energy than 19 kJ/mol of extracted CO2, and
(ii) in all DAC methods, the major burden is not the thermodynamic limit on the necessary energy, but rather the costs incurred by the necessity of artificially handling huge amounts of air, which is unavoidable for the desired throughput of the process.
Finally, let me comment on your last paragraph, wherein you seem to suppose that there are DAC processes in which the sorbent saturated with absorbed CO2 is not recycled but disposed of (pardon, “stored”). If you indeed think so, I am pretty sure that you are wrong (and I do not believe that Piotr made the same mistake).
It is because in such a hypothetical DAC process,
a) the most relevant kinetic aspect (ii) would still apply anyway, and
b) apart from the alkaline rocks discussed in the above-mentioned post treating ERW, you do not have ANY other sorbent available.
Please note that if you would like to use e.g. CaO or NaOH as the respective “disposable” CO2 sorbents, you will first have to manufacture them from available raw materials. There is no active CO2 sorbent available on the Earth’s surface that you could simply mine from some natural deposit – just because we have CO2 in the atmosphere and everywhere in the environment. And as soon as you start manufacturing the respective chemical agents artificially (e.g. by thermal splitting of CaCO3 into CaO and CO2, or by electrolysis of aqueous NaCl solution into NaOH, H2 and Cl2), the disputed thermodynamic limit of 19 kJ/mol will apply again, because the required chemical affinity must exceed it (significantly) to enable the sorption process to run spontaneously.
For the above reasons, I am extremely sceptical of various assertions that any DAC technology (and likely any thinkable CDR method other than ERW and tree planting) could become as cheap as 50 USD per tonne of extracted CO2.
Am I now clear?
Greetings
Tomáš
Tomáš Kalisz @ 20 Jan 2026 at 4:46 AM
Your explanations do seem clear. I get the main points you are making. I struggle to understand some of it, but this is because I don’t have a science degree. But I like trying to figure this stuff out. I was a very high achiever at school and nearly did a chemistry degree. I’m grateful to be able to ask you guys the occasional question, when I can’t understand things and Google doesn’t supply an answer.
I’m sure you’re right that the industrial form of DAC is likely to remain expensive.
I did misinterpret Piotr. I got it into my head that the CO2 was bound to a chemical agent and the agent buried and stored, when in fact the CO2 is extracted from the sorbent and stored in liquid form. So Gibbs energy calculations do apply to industrial DAC, because the CO2 in the air is ultimately reduced to its pure form.
But when Piotr uses the term “DAC” he assumes this includes any process that extracts CO2, including tree planting. I thought he was saying that applying the Gibbs analysis to every form of CO2 extraction doesn’t make sense and is misleading, because for example with tree planting CO2 isn’t reduced and isolated to its pure form. And so the very high costs of industrial DAC don’t apply to tree planting.
Tomas Kalisz, using his post to Nigel to misrepresent my critique of his claims:
TK: “I did so repeatedly and in various modifications because Piotr seemed to insist on the opinion that for DAC processes based on a chemical extraction step, this lower limit does not apply.”
Stop “so repeatedly” misrepresenting my position – what I “insist on” was explained to you on 12, 13 and 17 Jan:
==== Piotr 17 Jan at 7:17 PM
” You have just unwittingly proved my point – Dessler’s energy of mixing (+19 kJ/mol) is already INCLUDED in the energy of reaction (-112 kJ/mol), therefore using +19 kJ/mol to QUANTIFY the energy (in)efficiency of DAC is not only inappropriate, but actively misleading – the point I have already made
Piotr Jan 12 and Jan 13: “Using an inapplicable number [+19kJ/mol, instead of -112kJ/mol] is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“. As is your defense of people doing it: TK: “ The thermodynamics reported by Andrew Dessler applies equally for all these cases”
======
You haven’t addressed this, but instead …. wrote several pages … rephrasing what you had already said before and what I already included in my “You have just unwittingly proved my point”, or going on unrelated tangents that I had already said were irrelevant to my point.
To put this to bed, I’ll repeat:
“Using an inapplicable number [+19kJ/mol, instead of -112 (= – 131+19)kJ/mol – is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“.
Furthermore – different DAC methods would use different reactions, hence their energies of reaction are different – so you can’t use a SINGLE value for all of them, and even more so if this single value [+19 kJ/mol] does not quantify the NET energy of reaction in ANY of them.
Nigel: “I thought [Piotr] was saying that applying the Gibbs analysis to every form of CO2 extraction doesn’t make sense and is misleading, because for example with tree planting CO2 isn’t reduced and isolated to its pure form. And so the very high costs of industrial DAC don’t apply to tree planting.”
Errr, not exactly – my argument is not limited to tree planting; it applies to all forms of DAC:
The thermodynamics of DAC is determined by the Gibbs free energy of reaction of the particular method used – NOT by a mixing-energy value of +19 kJ/mol that has already been implicitly included in the said energy of reaction^*.
^* (in Tomas’s example the energy of reaction to capture CO2 from 400 ppm was -112 kJ/mol, ALREADY including the +19 kJ/mol of energy of mixing).
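To make that bookkeeping explicit, here is a minimal sketch in Python using the two numbers quoted in this thread (-131 kJ/mol for uptake of pure CO2 onto the sorbent, +19 kJ/mol for demixing); treat them as illustrative, not as vetted data for any particular absorbent.

dG_uptake_pure = -131.0   # kJ/mol, CO2 (pure gas) + sorbent -> sorbent.CO2 (value quoted above)
dG_demix_air   = +19.0    # kJ/mol, work to bring CO2 from ~400 ppm up to pure gas

# Capturing CO2 directly from ambient air combines both steps, so the
# demixing term is already folded into the net energy of reaction:
dG_uptake_from_air = dG_uptake_pure + dG_demix_air
print(dG_uptake_from_air)  # -112.0 kJ/mol, the figure discussed above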
Therefore +19 kJ/mol is inapplicable to quantifying the energetics of this reaction (which was here -112 kJ/mol). Nor is it applicable to characterizing the energy costs of the reaction that recycles the absorbent:
– some DAC, like enhanced weathering (absorption of CO2 by crushed rock), don’t recycle; nor does net production by plants.
– others might do only limited recycling (if their absorbent can only go through a limited number of cycles).
– even for the recycling stage, it is the energy of the given (recycling) reaction that counts, NOT the “+19 kJ/mol”.
So you can’t assign +19 kJ/mol as the number that determines the energy consumption of all DAC in order to calculate what % of global electricity production they would require, as Dessler, the And Then There’s Physics guy, and the video that inspired him do.
Particularly if the energy needed to recycle the absorbent is not electricity but waste heat – I referred to the recent paper from Finland where they use an absorbent that can be recycled by heating it to 70°C for 30 minutes. So if you locate your DAC next to a nuclear power plant or data centre and use their waste heat, in addition to the waste heat of your own exergonic CO2 uptake reaction, you get most (all?) of the heat needed for recycling for free. Of course there will be some operational energy needed – but it WON’T be “19 kJ/mol”.
Hence my main point:
Piotr Jan 12, Jan 13, Jan 17: “Using an inapplicable number [+19kJ/mol] is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“
in Re to Nigelj, 20 Jan 2026 at 3:25 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844135
Hallo Nigel,
Thank you very much for your feedback! Without it, I would be still unsure if my explanations, in which I invested considerable time and effort, are understandable.
I would like to summarize my conclusion once again:
Provided that under “direct air capture” (DAC) are conventionally understood technical methods for CO2 extraction from ambient air as a neat compound, I am extremely sceptical of various assertions that any DAC technology (and likely any thinkable carbon dioxide removal (CDR) method other than enhanced rock weathering (ERW) and tree planting) could become as cheap as 50 USD per tonne of extracted CO2.
I am happy that with respect to ERW and tree planting, my view seems to overlap with Piotr’s, and likely differs only in that Piotr insists on also calling these CDR methods “DAC”.
Greetings
Tomáš
in Re to Piotr, 20 Jan 2026 at 10:05 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844151
Dear Piotr,
If you think that I deliberately misinterpret your words, I respectfully disagree with this opinion.
I believe that in the entire thread, including the long posts that you object to as repetitive and tangential, I did my best to address your particular arguments as well as your concluding assertion that I try to
“.. disparage DAC methods by assigning to them the high thermodynamic cost of a process they… don’t use.”
and strived to explain in detail why I cannot agree therewith. I reviewed the thread once again now and see that I have hardly anything to add.
I am sorry that we have not reached any mutual agreement on the disputed topic, however, I cannot afford to invest even more time and effort in this discussion.
Thank you for your understanding and best regards
Tomáš
Tomáš Kalisz says 21 Jan 12:22 PM If you think that I deliberately misinterpret your words, I respectfully disagree with this opinion.
The lady doth respectfully disagree too much. By her fruits, not her “respectful” declarations, you shall know her:
I wrote: Piotr 17 Jan: “You have just unwittingly proved my point – Dessler’s energy of mixing (+19 kJ/mol) is already INCLUDED in the energy of reaction (-112 kJ/mol), therefore using +19 kJ/mol [instead of -112 kJ/mol] to QUANTIFY the energy (in)efficiency of DAC is not only inappropriate, but actively misleading.”
Which you portrayed on Jan 20:
TK 20 Jan: “I did so repeatedly and in various modifications because Piotr seemed to insist on the opinion that for DAC processes based on a chemical extraction step, this lower limit does not apply.”
That’s a DIFFERENT argument – hence your misrepresentation of mine, which was that your source’s +19 kJ/mol is “inapplicable” for characterizing the energy (in)efficiency of DAC. What is applicable are the energies of reaction (in your example, -112 kJ/mol).
This and other points are presented also in my response to Nigel.
(Piotr 21 Jan at 12:31 AM)
in Re to Piotr, 23 Jan 2026 at 10:08 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844300
Hallo Piotr,
To avoid possible further misunderstandings, please consider that the following text does not deal with other processes for carbon dioxide removal (CDR) from the ambient air, such as tree planting or enhanced rock weathering (ERW), and applies solely to CDR processes that extract CO2 in its neat form. In accordance with common use of this term, I will call these specific CDR processes DAC (from “direct air capture”).
I can basically agree with your premise that
“Dessler’s energy of mixing is already INCLUDED in the energy of reaction (-112 kJ/mol)”.
With a small correction (+19 kJ/mol is, in fact, the Gibbs energy of CO2 “demixing”), I perceive this sentence as correct.
I must, however, disagree with your conclusion
“therefore using +19 kJ/mol [instead of -112 kJ/mol] to QUANTIFY the energy (in)efficiency of DAC is not only inappropriate, but actively misleading.”
This sentence does not properly reflect what I strived to explain, and I would like to clarify once again why.
Your conclusion would have been correct if you could neglect the energy necessary for providing the respective reagent (in our example CaO). This is not possible because on Earth, you cannot find any geological deposit of this reactive substance. You must prepare it from natural calcite or recycle it from the calcite formed in the discussed CO2 extraction process.
In both cases, the thermodynamic limit (the lowest energy that you unavoidably must spend in the process) is the enthalpy of CaCO3 decomposition (179 kJ/mol CaO). This value is several times higher than the generic thermodynamic limit of 19 kJ/mol CO2 derived for the reversible process of “CO2 demixing”. The difference between the two values (let us call it the “thermodynamic penalty”) is the price that you must pay for using a spontaneous chemical reaction in an irreversible arrangement as the separation step in your CO2 extraction process.
You are right that the thermodynamic penalty will decrease if you use another chemical agent that binds the CO2 less strongly than CaO does. This is the case for various organic sorbents, like the one used by the Finnish group you have mentioned. The negative enthalpy of the spontaneous binding step will, however, still get lost as waste heat in all such processes. Please consider that the enthalpy will be released in a system that, besides the reagent and the CO2, comprises at least 2325 times more moles of other air components, which will unavoidably absorb the released enthalpy as well. There are no reasonable technical means by which you could efficiently recuperate heat “diluted” to this extent.
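A back-of-envelope sketch (Python) of the “thermodynamic penalty” as defined above, using only the numbers quoted in this thread; they are illustrative values for the CaO/CaCO3 example, not vetted data for any real plant.

dH_CaCO3_split = 179.0   # kJ/mol, enthalpy to decompose CaCO3 into CaO + CO2 (value quoted above)
w_min_demix    = 19.0    # kJ/mol, reversible minimum for extracting CO2 from ~430 ppm air

penalty = dH_CaCO3_split - w_min_demix
print(f"thermodynamic penalty ~ {penalty:.0f} kJ/mol CO2")   # ~160 kJ/mol

# Why the heat released by the spontaneous capture step is hard to recover:
# at a 430 ppm mole fraction, each mol of captured CO2 arrives accompanied by
x_co2 = 430e-6
print(f"~{(1 - x_co2) / x_co2:.0f} mol of other air components")  # ~2325, as stated above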
My point, that I strived to demonstrate already in my posts of 31 Dec 2025 at 7:47 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843318
and 1 Jan 2026 at 7:21 PM,
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843346
and that I further repeated throughout the entire following discussion, was, however, different.
I strived to show that even if we consider a perfectly reversible process without any thermodynamic penalty, such as freezing CO2 out with theoretical (100 % efficient) heat recuperation, and thus indeed (at least theoretically) arrive at the generic (LOWEST) thermodynamic limit of 19 kJ/mol (0.43 MJ/kg) of separated CO2, this will in no way be the number decisive for estimating the technical and economic feasibility of DAC processes.
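For orientation, the unit conversion behind those two figures is simple arithmetic (Python); the per-tonne value is just the same number re-expressed, not an estimate of any real process.

w_min = 19e3        # J per mol CO2, the reversible lower limit discussed above
M_co2 = 44.01e-3    # kg per mol CO2

per_kg    = w_min / M_co2              # J per kg of CO2
per_tonne = per_kg * 1000 / 3.6e6      # kWh per tonne of CO2

print(f"{per_kg / 1e6:.2f} MJ/kg, ~{per_tonne:.0f} kWh per tonne CO2")   # 0.43 MJ/kg, ~120 kWh/t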
In this respect, I have not defended or promoted Dessler’s critique of DAC processes but rather criticized it as incomplete and/or insufficient. I strived to show that the relatively moderate value of that generic thermodynamic limit will not play any important role in practical processes that have to secure the desired process throughput, because in this case, kinetic limits (and the costs incurred thereby) will strongly prevail.
In other words, I strived to show that although Dessler is perfectly correct and his general thermodynamic limit applies to all DAC processes as defined above, the true hurdle that makes DAC processes economically unfeasible is not this general thermodynamic limit but the necessity to move and handle huge volumes and masses of air within a limited time span.
Am I clear now?
Greetings
Tomáš
Tomáš Kalisz
Can you or someone clarify something for me? My understanding is that Dessler’s minimum energy requirement of +19 kJ/mol to extract CO2 to its pure form obviously holds for DAC where fans are used and where the CO2 is ultimately made liquid. But enhanced rock weathering, for example, does not reduce CO2 to its pure form, does it, so the 19 kJ/mol would not apply, or a lesser number would apply? But I assume that because of all the factors involved in the process you would still probably need more than 19 kJ in reality.
Tomas Kalisz 24 Jan.
“ I can basically agree with your premise that “Dessler’s energy of mixing is already INCLUDED in the energy of reaction (-112 kJ/mol)”.
I must, however, disagree with your conclusion “therefore using +19 kJ/mol [instead of -112 kJ/mol] to QUANTIFY the energy (in)efficiency of DAC is not only inappropriate, but actively misleading.” Your conclusion would have been correct if you could neglect the energy necessary for providing the respective reagent (in our example CaO).
====
So, you “can basically agree with” …. YOUR OWN argument (that +19 kJ/mol is already INCLUDED in the energy of reaction, -112 kJ/mol), but then disagree with … the argument I didn’t make, but you assigned to me? And then patronizingly explain to me what I know, framing it as proof that I am wrong in the argument I …. wasn’t making (my point does not depend on the “energy necessary for providing the respective reagent”)?
How gracious of you.
As I have explained several times already – all DAC methods, whether they recycle the absorbents or not (CO2 uptake by photosynthesis, crushed rock), will have energy costs – some larger, some smaller – but in the end the sum of ALL positive and negative Delta Gs, i.e. the net energy cost of uptake of 1 mol of CO2, may be larger or smaller than +19 kJ/mol CO2, but barring some cosmic coincidence WON’T BE = +19 kJ.
Hence the conclusion:
Piotr Jan 12, Jan 13, Jan 17, Jan 23: “Using an inapplicable number [+19kJ/mol] is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“
And that’s all he wrote.
As for whether the practical energy cost would be higher or lower than these +19 kJ/mol – it is pointless to discuss that in the abstract here, since the answer has to be specific: it will depend on the standard energies of reaction of all processes used in a given DAC, the temperature and pressure of the reactions, how much of the energy of the exergonic portion of the DAC can be harnessed, and whether other forms of cheap energy (say, waste heat from a nearby thermal power plant or data centre) can be used, etc.
– and these answers will depend on which, and how, a given DAC method is used.
And that’s the reason why various DAC proof-of-concept projects are pursued nowadays – to see if we can identify methods in which the cost of the uptake of CO2 by DAC would be lower than the cost of cutting the last, thus most difficult and expensive, 60 times total of current annual emissions of CO2) and concluded that the energy cost of that is, I didn’t see it coming – rather high.
—–
in Re to Nigelj, 24 Jan 2026 at 4:07 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844329
and Piotr, 24 Jan 2026 at 11:11 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844339 .
Sirs,
Thank you very much for your feedback.
Nigel,
As I tried to explain on 6 Jan 2026 at 12:51 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843544 ,
Dessler’s general thermodynamic limit does not apply to enhanced rock weathering (ERW).
In this sole case of CO2 extraction from the ambient air, we do not need to invest energy in any chemical conversion, because we use natural alkaline reagents prepared by the Earth’s volcanism, and we do not recycle the spent CO2 absorbent at all.
The only correction I would like to add is that the value 19 kJ/mol CO2 has been calculated for CO2 separated as a gas at normal atmospheric pressure (101,325 Pa) and a temperature of 293 K. Liquefying it would consume additional energy.
The “only” problem with ERW is the slow reaction at the gas-solid interface, which we try to overcome by milling the rock into small particles. It appears that to ensure that all the buffering capacity of the rock is spent within a few years, we would need to mill it down to nanoparticles, which would indeed be significantly more energy-expensive than 19 kJ/mol of captured CO2. If we accept a longer time limit for the complete reaction with CO2, of about 100 years, the energy spent can be around 19 kJ/mol CO2, and at even longer times it can drop below it.
Piotr,
My point was that all the DAC processes tested in various pilot projects (Climeworks, STRATOS etc.) rely on CO2 extraction in pure form and on sorbent recycling, and thus both the universal thermodynamic limit derived by Prof. Dessler and the even more important kinetic constraints unavoidably apply to them.
I therefore think that the discussion of these limits definitely makes sense – it is why I strived (and still strive) to bring it to a conclusion. Can you now, in the light of the provided arguments, agree that estimation of the lower limit for the respective costs of CO2 extraction by such processes on the basis of the best available technology for gas separations, as provided by House et al.
https://www.pnas.org/doi/epdf/10.1073/pnas.1012253108 ,
was a reasonable and clever approach and thus could be suitable as a quite solid basis for a comparison of the derived limit with costs of “cutting the last, thus most difficult and expensive” part of annual anthropogenic CO2 emissions?
An additional question: In your last paragraph, I have not grasped what you meant by “60 times total of”. Could you clarify?
I apologize if my explanations anywhere sounded patronizing, it was not my intention.
Greetings
Tomáš
Tomas Kalisz: “the universal thermodynamic limit derived by prof. Dessler”
It is not a valid limit for any DAC method. The only relevant limit is the energy of the reactions of a given DAC, which either:
– (in the uptake phase) ALREADY incorporates that +19 kJ into the net energy of reaction (in your own example, the energy of uptake of CO2 onto CaO = -112 kJ (= -131 + 19 kJ)),
– or does not include Dessler’s 19 kJ/mol CO2 at all – in the recycling phase of those DAC that recycle their absorbents.
Each DAC will have its own sum of energies from both phases, and NONE of these sums will be = Dessler’s +19 kJ. Thus his +19 kJ/mol CO2 is inapplicable for representing the thermodynamics of any DAC. And therefore, as I have explained to you before:
Piotr Jan 12, Jan 13, Jan 17, Jan 21, Jan 23, Jan 24: “Using an inapplicable number [+19kJ/mol] is worse than using no number at all – because it is false knowledge – it suggests quantitative insight where there is none“
And it is inapplicable even as the “minimum limit” – some DAC will require more than 19 kJ/mol, others may require LESS than 19 kJ/mol. In the latter case, either:
– because they don’t recycle the absorbent – e.g. they use crushed rock – and the uptake of CO2 from air is spontaneous, i.e. Delta G, even after accounting for Dessler’s +19 kJ/mol, is NEGATIVE – so obviously Dessler’s +19 kJ/mol is not a “thermodynamic limit”;
– or because the energy of the uptake reaction is not “symmetrical” to the energy of recycling plus 19 kJ – so you can’t claim that after they cancel each other only the +19 kJ will be left. In the example from Finland I gave you, to remove the CO2 it’s enough to keep the absorbent at 70°C for 30 minutes. So if you use some of the spare energy from the uptake of CO2 from air (which has negative Delta G) and/or use waste heat from a thermal power plant or data centre, you could reduce the Delta G of the recycling phase to 0 – so again your “universal thermodynamic limit” of +19 kJ does NOT apply.
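As a purely hypothetical illustration of why the regeneration heat is set by the sorbent rather than by the 19 kJ/mol figure: assume a generic solid sorbent with a heat capacity of about 1 kJ/(kg*K) and a working capacity of about 1 mol CO2 per kg per cycle (both numbers invented for illustration, not taken from the Finnish paper).

cp_sorbent = 1.0     # kJ/(kg*K), assumed generic solid sorbent
loading    = 1.0     # mol CO2 released per kg of sorbent per cycle, assumed
dT         = 70 - 20 # K, heating from ~20 C ambient to the 70 C regeneration temperature

sensible_heat = cp_sorbent * dT / loading   # kJ of low-grade heat per mol CO2 released
print(f"~{sensible_heat:.0f} kJ of ~70 C heat per mol CO2 (under these assumptions)")
# ~50 kJ/mol, a load that waste heat could in principle cover, and in any case
# a number set by the sorbent properties, not by the +19 kJ/mol mixing term.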
Sure, there will be your “kinetic” energy needs – but each method will have different needs, and again NONE of them will be exactly +19 kJ/mol of CO2.
Finally, even if by a cosmic coincidence the sum of thermodynamic and kinetic costs were to add up to 19 kJ/mol – this would still be IRRELEVANT to evaluating whether we should go with this DAC or not – the only relevant questions are:
does its CO2 uptake cost LESS than that of other DAC, and is it less than the cost of reducing the last (and therefore the most expensive) CO2 emissions?
To sum up: your ” universal thermodynamic limit derived by prof. Dessler” is
– not universal, as it does not describe the energy demand of ANY of the DAC technologies,
– therefore inapplicable and, as such, misleading if applied (it suggests quantitative insight where there is none),
– irrelevant to the way we get to net zero – as the only criterion there is whether it is cheaper than the reduction of the last 10% of the emissions.
So step aside, Tomas, because you have no leg to stand on.
(Tomas: ” tis just scratch!”, “I’ve got worse!”)
MA Rodger is factually wrong and misrepresenting Hansen’s position (and my own) again. Let me break it down clearly:
Hansen’s “acid test” is about the spike itself through 2023-2024 — the abrupt, physically meaningful warming and its hemispheric/global distribution. It is not about whether extreme anomalies persist indefinitely. The spike happened in 2023–24, so the test is valid.
Reality check: 2023 spiked well above the expected background warming (~0.5°C above 2022 expectations) without a strong El Niño, just like 2025 remains elevated (~0.3°C above expectations). Both are clear, measurable deviations. You cannot claim that 2023 was “Bananas!!” but 2025 somehow invalidates the event. That’s logically incoherent.
In the middle came the record 2024 temperatures, bookended by the non-El Niño years 2023 and 2025 at the same GMST.
MA Rodger is denying observable reality by insisting that the absence in 2025 of anomalies as extreme as 2024’s somehow negates what happened. That’s common-sense-defying and irrelevant to the physical phenomenon.
His ideological bias toward discrediting anything Hansen says has been persistent for 15+ years. That might explain the intensity of his confusion, pushback, and lack of engagement with the actual data, what it means, and what Hansen actually says about it.
Bottom line: 2023’s GMST spike is real, measurable, and physically significant. 2025’s temperatures do not contradict it. They are more of the same. Anyone claiming otherwise is ignoring data and logic.
Data or whatever you call yourself,
You really have lost the plot here. A reply setting out all you have got badly wrong is a waste of my time and yours.
MAR: indeed. Meanwhile, he appears to be unaware that Dr. Hansen himself would not support his embrace of conflict uber alles, fighting with each other rather than doing something to improve our prospects for the future and overcome the liars and bullies whose extremist views endanger us all.
Meanwhile, year-on-year prices for Li-ion battery storage (stationary down approximately 40%, all types approximately 8%) continue to fall:
https://bsky.app/profile/mzjacobson.bsky.social/post/3mb5p2qjf5c2i
Direct link to article below:
https://www.ess-news.com/2025/12/09/bnef-lithium-ion-battery-pack-prices-fall-to-108-kwh-stationary-storage-becomes-lowest-price-segment/
Another step away from the outdated polluting and destructive fossil fuel industry and the problematic and expensive nuclear fuel industry, as their time gradually fades.
So, Trump welcomes the New Year by bombing Venezuela and abducting (and of course by implication deposing) Maduro. No US personnel were killed; no word on Venezuelan casualties. I suppose they aren’t considered to matter.
Well, that’s one way to raise oil prices in the short term, at least.
Meanwhile, storage is now drastically cheaper than even four years ago, despite some economic headwinds offsetting the declining price trend. BNEF puts EV batteries globally at $99/kWh (for the second year now), and stationary storage at just $70/kWh.
https://www.ess-news.com/2025/12/09/bnef-lithium-ion-battery-pack-prices-fall-to-108-kwh-stationary-storage-becomes-lowest-price-segment/
Nothing like fighting to hijack yet more of a rapidly failing and highly dangerous technology. So much winning!
Obama started the ball rolling with EO 13692. Then, during his first term, Trump put a bounty on Maduro for $15,000,000. Bribem increased that bounty to $25,000,000. OH YES, Bribem did it:
https://www.snopes.com/fact-check/biden-bounty-maduro/
So, Trump just continued what Obama and Bribem started.
Oil price seems stable so far at ~$58.00/barrel. It’s been about the same for quite a while.
https://www.marketwatch.com/
US oil companies developed their oil infrastructure, so it rightly belongs to us.
https://www.opb.org/article/2026/01/04/five-things-to-know-about-oil-in-venezuela/
Quote: “U.S. oil companies like Chevron began drilling in Venezuela about one hundred years ago and played a key role in developing the country’s oil sector.”
Don’t build a house near a grid-scale battery storage facility:
https://www.zerohedge.com/energy/unreported-story-grid-scale-battery-fires
Trump is going to save us. His company is at the leading edge of science and engineering:
https://world-nuclear-news.org/articles/trump-media-announces-merger-with-fusion-firm-tae-technologies
The future looks bright!
#WINNING
KiA: “Bribem”
Your children must be so proud.
KIA: Bribem
BPL: Don’t spread lies.
https://www.youtube.com/watch?v=FDHakBGGwyk
Wink, wink.
The caption under the video says: “Back in 2018, President Joe Biden boasted about using a billion-dollar loan agreement with Ukraine as leverage to get a prosecutor fired as he was investigating corruption at Burisma, the Ukrainian energy company where Hunter Biden was a director.”
Parts of this caption are false information. Biden did try to get rid of the prosecutor, but it’s nothing to do with Hunter Biden. Biden was acting under bipartisan policy to get rid of the prosecutor because he WASN’T acting to stop corruption. There’s nothing in the video about Hunter Biden. There’s no evidence the prosecutor was investigating Burisma. Biden was also cleared of any wrongdoing in various REPUBLICAN investigations. Refer:
https://en.wikipedia.org/wiki/Biden%E2%80%93Ukraine_conspiracy_theory
KIA is a gullible fool.
Do your children know the extent to which you’ve sold them out?
https://www.newyorker.com/cartoon/a16995
Our Know It All Sophist’s choice: Waves hand, “Take them both. I just need the attention!”
https://revjrknott.blogspot.com/2018/07/a-few-favorite-cartoons-about-god.html
Creator’s remorse: What the hell was I thinking
and a few other good ones!
UAH has reported for December, with a global TLT anomaly of +0.30ºC, down on November’s +0.43ºC and indeed the lowest monthly anomaly since June 2023.
This puts December 2025 as the 6th warmest December in the UAH TLT record, behind 2023 (+0.74ºC), 2024 (+0.61ºC), 2019 (+0.43ºC), 2015 (+0.35ºC) & 2017 (+0.31ºC), with the rest of the top-ten warmest Decembers running 7th 2003 (+0.26ºC), 1987 (+0.25ºC), 2022 (+0.22ºC) & 2016 (+0.16ºC).
So there’s a lot of wobble in that 6th-place warmest Dec ranking.
Ignoring the wobbles, these UAH TLT numbers haven’t seen much of a drop since the start of the year with the NH showing no significant drop since May 2025 and the SH pretty flat since Jan 2025.
With the cool start to 2023, this puts 2025 warmer than 2023 and the 2nd warmest year on the UAH TLT record.
This annual ranking now runs:-
2024 … +0.77ºC
2025 … +0.47ºC
2023 … +0.43ºC
2016 … +0.39ºC
1998 … +0.35ºC
2020 … +0.35ºC
2019 … +0.30ºC
The RSS Browser Tool has also published to December for TLT etc. (NOAA STAR is still stuck in July, presumably having been ‘Trumped’.)
RSS TLT numbers have not been greatly different to UAH or STAR since 2005, so there is no great difference between the RSS & UAH rankings by year, except for 1998, which is well out of the top RSS rankings. (The big difference with RSS, and why their Browser Tool shows a global warming rate of +0.23ºC/decade while UAH & STAR only manage +0.16ºC/decade, is the calibration of the satellite data thro’ 1990-2005, when RSS somehow found an extra +0.35ºC of warming.)
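For anyone who wants to reproduce this kind of °C/decade figure, the usual approach is an ordinary least-squares fit to the monthly anomalies. A minimal sketch (Python, with synthetic placeholder data rather than the actual UAH/RSS series):

import numpy as np

# Placeholder monthly anomaly series; substitute the real UAH or RSS TLT data.
years = np.arange(1990, 2026, 1 / 12)                                  # decimal years, monthly steps
anoms = 0.02 * (years - 1990) + 0.1 * np.random.randn(years.size)      # synthetic example only

slope_per_year = np.polyfit(years, anoms, 1)[0]    # OLS linear trend
print(f"trend: {slope_per_year * 10:+.2f} C/decade")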
RSS TLT Global Warmest Year Rankings (UAH in brackets)
1st ….. 2024 … +1.22ºC … … (2024)
2nd … 2025 … +0.93ºC … … (2025)
3rd …. 2023 … +0.91ºC … … (2023)
4th …. 2020 … +0.82ºC … … (2016)
5th …. 2016 … +0.81ºC … … (1998)
6th …. 2019 … +0.76ºC … … (2020)
7th …. 2017 … +0.69ºC … … (2019)
[Response: Go here for the NOAA STAR data. – gavin]
Just to be clear, my comments here were never about whether CMIP models (or others) can reproduce historical GMST trends reasonably well, nor whether CO₂ causes warming — those points are well established and not in dispute.
The issue raised months ago concerned interpretation, not detection: specifically, how ensemble spread, tuning choices, and compensating errors across interacting processes (clouds, aerosols, ocean heat uptake, circulation) limit the extent to which apparent GMST “skill” can be generalized to broader claims about climate system adequacy, scenario timing, or confidence in higher-order outcomes.
Narrowing the discussion to historical GMST alone while setting aside these structural issues sidesteps the question rather than resolving it.
Concerns of this kind are well known within the field. For example, Palmer & Stevens (2019) discuss the implications of model structural uncertainty and equifinality for interpretation and decision-relevance, independent of surface temperature skill:
https://www.pnas.org/doi/full/10.1073/pnas.1906691116
I don’t see much value in revisiting this further here, but I wanted to clarify the scope of the discussion for the record.
Multi_troll, sock-puppet version: “Data”: “ DAC is not the villain. Net zero is not the villain. Carbon budgets are not the villain. They are symptoms of a deeper refusal: to accept irreversibility”
Doomers and Deniers – similar in methods, similar in their fruits:
– Deniers try to discredit the climate science that is not telling them what they want to hear; Doomers try to discredit the climate science that is not telling them what they want to hear.
– Deniers try to discredit political/economic mechanisms and the technologies for reducing CO2 emissions using the All-or-Nothing fallacy (as if the world at 450 ppm would be no better than the world at 900 ppm). Doomers dismiss renewables, DAC, net zero etc. on the same All-or-Nothing fallacy.
– Deniers use their All-or-Nothing narrative to sow apathy by portraying efforts to do anything about climate change as hopeless.
Doomers, using the same All-or-Nothing fallacy, sow apathy by portraying political/economic mechanisms and the technologies to reduce CO2 as not only hopeless, but in fact “symptoms of a refusal to accept irreversibility”. And if AGW is irreversible, what’s the point of trying?
Les extrêmes se touchent (the extremes meet).
My original comment, in full and in context, without the distortions, misrepresentations, or baseless mischaracterizations:
Data says
31 Dec 2025 at 8:50 PM
I checked ATTP’s discussion. Mostly circular.
Most discussions around DAC, net zero, and carbon budgets sound like a lot of technical back-and-forth, but beneath the noise there’s a deeper structural problem that few acknowledge. The core failure everyone is skating around is this: climate mitigation treats a living, historical, relational Earth as if it were a controllable machine — and mistakes abstractions for reality.
DAC is not the villain. Net zero is not the villain. Carbon budgets are not the villain. They are symptoms of a deeper refusal: to accept irreversibility, to accept inheritance, to accept that the scale and power of civilization itself are the problem. That’s why techno-fixes proliferate, limits are endlessly deferred, and language becomes euphemistic and circular.
All the debate about energy requirements, efficiencies, or LULUCF ratios is still operating inside the machine metaphor: inputs, outputs, efficiencies, substitutions. As Whitehead would say, they are mistaking abstractions for concrete reality. To put it sharply: you cannot budget the future of a system whose governing dynamics change as a function of its past. That is the dagger cutting through all these circular debates.
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843319
The AndThenThere’sPhysics post for a proper context and background.
https://andthentheresphysics.wordpress.com/2025/12/17/direct-air-capture/
The RC DAC thread started by nigelj if you’re interested
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843268
Multitroll Data: My original comment, in full and in context, without the distortions, misrepresentations, or baseless mischaracterizations
That Dharma lady doth protest too much.
Put your money where your mouth is. Prove how quoting the rest of your self-satisfied logorrhea would have dramatically CHANGED the meaning of the words of yours I quoted:
– how does your lecturing them on their “refusal to accept irreversibility” of climate change NOT imply that CO2 mitigation technologies, policies and science are not only pointless, but a sign of delusion (as in: “refusal: to accept irreversibility”)?
– how is your lecturing people trying to mitigate CO2 that their efforts are “symptoms of a deeper refusal: to accept irreversibility” NOT sowing apathy, NOT telling people there is no point in doing anything, because climate change is “irreversible”?
– how is your attacking mitigation and net zero targets NOT benefitting oil and gas corporations and oligarchs, and not defending the interests of Russia, Saudi Arabia and Iran – whose economies, and therefore the survival of their regimes and their ability to wage wars and/or sponsor terrorism, RELY on the world’s apathy toward reducing the demand for fossil fuels (if climate change is “irreversible”, then why bother?)
By your fruits, not by your accusations of “distortions, misrepresentations, or baseless mischaracterizations” that you are incapable of proving.
Incidentally – the characteristics with which you try to discredit your opponents fit your mass production on RC to a tee. Hence – a clinical example of projection.
Reply to Piotr et al
Everyone chooses what they will believe. My role is not to hold anyone’s hand or prove them wrong.
As Brandolini’s Law reminds us, the energy needed to refute nonsense is an order of magnitude greater than that needed to produce it.
I’ve said what I needed to say. The rest is up to others to discern.
Useful ref : https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
Data said: “It’s not my role to hold their hand, say it’s OK or prove they’re wrong.”
Then why do you spend so much time on this website trying to prove people wrong?
Data: “Reply to Piotr et al Everyone chooses what they will believe”
and in your brain this proves your accusations of “distortions, misrepresentations, or baseless mischaracterizations” – how?
Since you declared yourself “neurodivergent” – I provided you with points, the proving of which would have shown that you are not all bark and no bite. So what happened?
=== P. 7 Jan =======
” That Dharma lady doth protest too much.
Put your money where your mouth is. Prove how quoting the rest of your self-satisfied logorrhea would have dramatically CHANGED the meaning of the words of yours I quoted:
– how does your lecturing them on their “refusal to accept irreversibility” of climate change NOT imply that CO2 mitigation technologies, policies and science are not only pointless, but a sign of delusion (as in: “refusal: to accept irreversibility”)?
– how is your lecturing people trying to mitigate CO2 that their efforts are “symptoms of a deeper refusal: to accept irreversibility” NOT sowing apathy, NOT telling people there is no point in doing anything, because climate change is “irreversible”?
– how is your attacking mitigation and net zero targets NOT benefitting oil and gas corporations and oligarchs, and not defending the interests of Russia, Saudi Arabia and Iran – whose economies, and therefore the survival of their regimes and their ability to wage wars and/or sponsor terrorism, RELY on the world’s apathy toward reducing the demand for fossil fuels (if climate change is “irreversible”, then why bother?)
Incidentally – the characteristics with which you try to discredit your opponents (“distortions, misrepresentations, or baseless mischaracterizations”) fit your mass production on RC to a tee. Hence – a clinical example of projection.”
==================== end of quote =======================
Nigelj says
8 Jan 2026 at 1:18 PM
Unfortunately, you are directing that question to the wrong person. Wrong target, yet again. Persisting in it doesn’t improve the argument nor your subjective opinion.
Data says @8 Jan 2026 at 8:17 PM “Unfortunately, you are directing that question to the wrong person. Wrong target, yet again. Persisting in it doesn’t improve the argument nor your subjective opinion.”
I think I’m directing the question at exactly the right person, as follows. You claimed “It’s not my role to hold their hand, say it’s OK or prove they’re wrong.” My response was “why then do you spend so much time on this website trying to prove people wrong” (something you did not deny doing in your reply). Nobody else is making the claim “it’s not my role to hold their hand, say it’s OK or prove they’re wrong”, so I think I replied to exactly the right person.
So do you think you can answer this time, or do we get yet more evasions and deflections?
Reply to Nigelj
9 Jan 2026 at 2:14 PM
I concede it’s a possibility you can’t help yourself and don’t know what you’re doing. But that is not my responsibility. Everyone ultimately chooses what they believe. It is not my role to hold anyone’s hand, reassure them, or prove them wrong. I’ve said what I needed to say.
Reply to Barton Paul Levenson
9 Jan 2026 at 9:28 AM
BPL: This is the third time I’ve read this in 2 areas. Straw man, and bloody condescending as well.
Are you sure about that? Maybe have another look and reconsider what you think you saw.
DATA,
I am not familiar with your posting, so do not know if you and I are aligned in our thinking or not, but I do know two things:
1. The specific response you gave to the fools in this thread is an absolutely accurate view.
2. If you are being harassed by the Peanut Gallery, you are either a denier or speaking truth to power to these fools. I.e., there’s a well-better-than-even chance you’re promoting sense into a vacuum of ideology and bullying bullshit.
I am not stating support for your posting generally because I really do not know, but in this thread? The trolls are the ones who have thrown that word around for 10-20 years to intimidate anyone who doesn’t suckle at the IPCC-as-gold-standard teat. They aren’t objective enough to realize the IPCC reports are nothing more than glorified literature reviews.
Whatever you are doing to raise their hackles, it’s likely far more intelligent than anything these fools produce. For the record, they produce nothing, hence their defensiveness and aggression, bred by insecurity.
Killian says: “2. If you are being harassed by the Peanut Gallery, you are either a denier or speaking truth to power to these fools. I.e., there’s a well-better-than-even chance you’re promoting sense into a vacuum of ideology and bullying bullshit.”
Killian can’t stand it when people politely criticise his views, so he claims he’s being bullied. The irony is that Killian is the single biggest bully on this website. A man who calls people fools (see above) and for years harassed me (and others), calling me an idiot and telling me to get lost and to shut up. Bullying is defined as relentless personal abuse and intimidation. Pick up a dictionary, Killian.
Killian: “They aren’t objective enough to realize the IPCC reports are nothing more than glorified literature reviews.”
Of course the IPCC do literature reviews. Because that is their job. The IPCC look at all the literature out there, some of it conflicting, and make a decision on which literature is the most convincing. They do this carefully, using teams of volunteer experts in their fields. So we end up with the best possible review of the science. And it is the result of a team effort and a consensus procedure, rather than putting one person in charge of deciding what is right, with all the risks that would have. It’s not possible anyway; there is too much literature for one person to review.
And what is Killian’s better alternative?
Has anyone gotten Killian to share the papers they claimed showed a 5% chance of human extinction from climate change? You know, the papers that likely don’t exist because they were made up. Hence why Killian never shared them no matter how many people asked.
https://www.realclimate.org/index.php/archives/2025/10/unforced-variations-oct-2025/#comment-840307
https://www.realclimate.org/index.php/archives/2025/10/unforced-variations-oct-2025/#comment-840324
Reply to Piotr et al #2
Whatever you say to them they come back with something else more ridiculous. I call it “the conspiracy loop”, and it’s usually a sign that you’re dealing with someone operating in bad faith or they’re a lost cause.
Or both. I’ve said what I needed to say. The rest is up to others to discern.
A useful ref : https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
Interested readers can see my original comment, in full and in context, without the distortions, misrepresentations, or baseless mischaracterizations:
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843319
Multi-troll “Data” Whatever you say to them they come back with something else more ridiculous
Yeah, expecting Multi-troll to prove his insinuations – RIDICULOUS !
=== Piotr 7 Jan =======
Put your money where your mouth is:
Prove how quoting the rest of your self-satisfied logorrhea – would have dramatically CHANGED the meaning of YOUR words that I did quote:
1. how is your lecturing of the people involved in renewables, DAC, net zero, and in studying climate – on their “refusal to accept irreversibility” of climate change – NOT YOU implying that CO2 mitigation is not only pointless, but a sign of delusion (“refusal: to accept irreversibility”)?
2. how is your pontificating that climate change is “irreversible” NOT sowing apathy (what’s the point of doing anything, if the damage to the climate is “irreversible”?)
3. how is YOUR attacking mitigation and net zero targets NOT benefitting oil and gas corporations and oligarchs?
4. how is YOUR attacking mitigation and net zero targets NOT defending the interests of Russia, Saudi Arabia and Iran – whose economies, and therefore the survival of their regimes and their ability to wage wars and/or sponsor terrorism, RELY on the world’s apathy toward reducing the demand for fossil fuels – apathy promoted by your claim that nothing can be done because climate change is “irreversible”?
===========================================
You are and will always be an ass.
Killian says: “2. If you (Data) are being harassed by the Peanut Gallery, you are either a denier or speaking truth to power to these fools. I.e., there’s a well-better-than-even chance you’re promoting sense into a vacuum of ideology and bullying bullshit.”
Killian can’t stand it when people politely criticise his views, so he claims he’s being bullied. The irony is that Killian is the single biggest bully on this website. A man who calls people fools (see above) and for years harassed me (and others), calling me an idiot and telling me to get lost and to shut up. Bullying is defined as relentless personal abuse and intimidation. Pick up a dictionary, Killian.
Killian: “They aren’t objective enough to realize the IPCC reports are nothing more than glorified literature reviews.”
Of course the IPCC do literature reviews. Because that is their job. The IPCC look at all the literature out there, some of it conflicting, and make a decision on which literature is the most convincing. They do this carefully, using teams of volunteer experts in their fields. So we end up with the best possible review of the science. And it is the result of a team effort and a consensus procedure, rather than putting one person in charge of deciding what is right, with all the risks that would have. It’s not possible anyway, because there is too much literature for one person to review.
And what is Killian’s better alternative? Since he claims he’s so “productive”?
I’ve noticed that there was this eruption a couple of years ago, and the warming *has* been a bit higher than the models’ projections, and I was wondering if this is a partial explanation? Water vapor doesn’t usually get into the stratosphere, and this didn’t dissipate as fast as a normal sort of pulse in the lower atmosphere would.
https://www.nasa.gov/earth/tonga-eruption-blasted-unprecedented-amount-of-water-into-stratosphere/
Might it be worth a note on the temperature records?
BJC: This was discussed here over time after the event. Simple answer: partly. Sorry, I am too lazy to look up links for you.
Yes, that eruption is the most likely cause of the rapid increase in measured global temperatures and it was predicted by several studies. From Wikipedia:
“Large volcanic eruptions can inject large amounts of sulfur dioxide into the stratosphere, causing the formation of aerosol layers that reflect sunlight and can cause a cooling of the climate. In contrast, during the Hunga Tonga–Hunga Haʻapai eruption this sulfur was accompanied by large amounts of water vapour, which by acting as a greenhouse gas overrode the aerosol effect and caused a net warming of the climate system. One study estimated a 7% increase in the probability that global warming will exceed 1.5 °C (2.7 °F) in at least one of the next five years, although greenhouse gas emissions and climate policy to mitigate them remain the major determinant of this risk. Another study estimated that the water vapor will stay in the stratosphere for up to eight years, and influence winter weather in both hemispheres. More recent studies have indicated that the eruption had a slight cooling effect.”
https://en.wikipedia.org/wiki/2022_Hunga_Tonga%E2%80%93Hunga_Ha%CA%BBapai_eruption_and_tsunami#Climate_and_atmospheric_impact
It likely isn’t.
“The record-high global surface temperatures in 2023/2024 were not due to the Hunga eruption.”
https://juser.fz-juelich.de/record/1049154/files/Hunga_APARC_Report_full.pdf
https://x.com/hausfath/status/2001756184554979837
https://x.com/AtomsksSanakan/status/1938001187611066784
Thanks to all of you.
I’ve pulled the report down for reference, but it seems roughly as I expected.
You’re welcome. And thanks for the question; it’s definitely a topic that’s worth exploring.
Powerplays, politics and panic – has BIG OIL wrestled back control? Dave Borlace, Just Have a Think, another useful review. Below, some quotes from transcript (qv: “more” along with sources).
https://www.youtube.com/watch?v=e6vQo4oE7L8
“And if the fossil fuel behemoths have their way, that demand will keep trundling on long after that. Of course, what those billionaire oligarchs, and arguably the IEA themselves, don’t appear to be factoring in here is the fact that on that trajectory there may not be a coherent global market to sell into anymore anyway, largely as a result of the damage those increased fossil emissions will have caused by then. But that is probably an entirely separate video’s-worth of ranting, so let’s not go there now, eh? So, what about renewables growth then?”
—
“perhaps the clearest commentary of all three reports on this Primary Energy Fallacy thing that we’ve all been hearing so much about recently.
“While primary energy peaks and starts to drop off, they say, USEFUL energy consumption actually grows through to 2050. And they explain that that’s because today’s energy system is highly INEFFICIENT. “To deliver the energy services necessary to support economic development and wellbeing, we do not need to replace all primary energy … only the useful share that actually powers economic activity.”
““As renewables and electrification accelerate, losses fall dramatically. Electric motors are 3-4 times more efficient than combustion engines, and new heat pumps deliver 3-4 units of heat per unit of electricity. These technological shifts … fundamentally change the conversion efficiency of the global energy system.””
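A minimal sketch of the arithmetic behind that “useful vs. primary energy” point (Python). The 3-4x motor and heat-pump factors come from the quote above; the ~25% combustion-engine and 90% furnace efficiencies are round numbers assumed purely for illustration.

useful_demand = 100.0        # arbitrary units of useful energy (motion + heat) to deliver

# Combustion pathway (assumed round figures):
ice_efficiency     = 0.25    # tank-to-wheel, combustion engine
furnace_efficiency = 0.90    # gas furnace, fuel-to-heat

# Electrified pathway, per the quoted factors:
ev_efficiency = 0.85         # roughly 3-4x the combustion engine figure
heat_pump_cop = 3.5          # 3-4 units of heat per unit of electricity

primary_fossil   = useful_demand / ice_efficiency + useful_demand / furnace_efficiency
primary_electric = useful_demand / ev_efficiency + useful_demand / heat_pump_cop

print(f"fossil pathway needs ~{primary_fossil:.0f} units in; electric pathway ~{primary_electric:.0f}")
# Same useful energy out with far less energy in, which is why primary energy
# can fall while useful energy consumption still grows.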
https://ocean2climate.org/2026/01/05/a-highway-of-heat-to-the-arctic-why-a-vital-ocean-current-is-losing-its-chill/
Another interesting article regarding the AMOC and how some of it isn’t losing its heat.
Paul Beckwith found it, but I’m just posting the link to the article and paper, not his video.
If the Trump regime INTENDED to accelerate global warming with the GOAL of destroying the Earth’s biosphere and human civilization along with it at the earliest possible date, what would they be doing differently?
Not much.
No worries. Less than 3 years to go, unless they can prove Trump won in 2020, which could happen. If they prove it, he WILL get a 3rd term. I suspect it will not happen, but we can hope.
In the meantime, I see you’ve joined the Moral Majority. I like your light blue puffer jacket – that’ll keep you warm until AGW kicks in hard:
https://www.youtube.com/watch?v=CUSRxJ5OR8M
KIA: unless they can prove Trump won in 2020, which could happen.
BPL: It can’t happen, because he didn’t.
KIA: If they prove it, he WILL get a 3rd term.
BPL: See above.
Thanks for showing once again your lack of critical thinking skills. Well over 60 court cases have proven your orange messiah wrong. Guess you hate democracy. But do keep denying that the Pittsburgh synagogue mass shooter was one of your brethren. Hint: he was.
and in a similar vein:
If Russia/China wanted to weaken NATO to the point of irrelevance, pit its allies against USA, discredit and weaken Western democracies and rule-based world order, restrict US influence to the Western hemisphere, withdraw US from Europe and abandon Ukraine, make African and Asian countries reorient toward Russia/China, unify the world against USA, bypass it in trade, destroy the world’s trust in dollar and encourage the transition from dollar to yuan as the world currency, thus reducing or ending the ability of the US to finance its debt abroad, scuttle UN and Paris climate goals, thus preserving the Russia’s oil and gas exports without which their economy would collapse, and with it – Putin’s regime, wealth of their oligarchs and Russia ability to wage wars on other countries – then they couldn’t do much better than having Trump.
If there was no Trump, they would clearly have to invent him.
Several recent comments have continued to frame CMIP outputs as forecasts of future GMST. Since that framing is incorrect, I want to clarify the record once, for readers — in this thread, and across others for months now.
In this respect, I agree with JCM’s recent point that similar outcomes do not imply identical representations or understanding. That distinction matters — but it has been repeatedly blurred in this thread.
The central misrepresentation originates with Atomsk’s repeated claim that the debate concerns the “skill at projecting future global warming and iTCR.” This has already been clarified in-thread, but it bears restating clearly because the same misframing keeps being repeated.
CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error.
Recasting methodological critique as denial of “predictive skill” therefore misrepresents both the models and what was actually being argued. CMIP does not make forecasts; it explores conditional responses. Claiming forecast skill on that basis — and then accusing critics of denying it — is a strawman built on a false premise.
That is why extended discussions of Planck feedbacks, Callendar vs. Manabe physics, or water-vapour dominance do not address the issue. Those are defensive distractions. They do not repair the category error.
Similarly, the recent rhetorical piling-on — including Piotr’s framing — adds volume but no substance. It is a textbook illustration of Brandolini’s Law: producing confusion is far easier than correcting it. Repetition does not convert misframing into validity.
This Ref explains it all: https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
The central false claim being advanced — repeatedly, and now despite correction — is this: “The question was skill at projecting future global warming and iTCR.”
That statement is not merely debatable. It is categorically false. CMIP outputs are conditional projections, not forecasts of future GMST. Treating them as forecasts, and then claiming “skill” in the sense of predictive success, is a category error.
This distinction is critical for interpreting claims about model skill and for understanding the broader rhetorical patterns we see in these debates.
Why this is not a matter of opinion. CMIP / AR6 with SSPs do not generate predictions of what will happen. They produce conditional projections, not forecasts.
A forecast requires:
– a probability distribution over outcomes
– likelihoods attached to drivers
– evaluation against out-of-sample future realizations
CMIP / AR6 has none of these:
– SSPs are storylines, not likelihood-weighted futures
– ensemble spread does not correspond to real-world probabilities
– hindcast agreement does not convert conditional simulations into forecasts
This is precisely why the IPCC itself uses the word projection, not prediction.
Why “historical skill” is not decisive.
Once this distinction is respected, the rhetoric collapses.
Agreement with historical GMST does not uniquely validate:
– circulation
– clouds
– hydrology
– land–atmosphere coupling
– extremes
– degradation impacts
These are precisely the domains where models are acknowledged to struggle, and where equifinality — multiple structures producing similar outputs — dominates.
Equifinality is the buried landmine here:
– similar GMST trajectories can arise from different parameterisations
– compensating errors and structural calibration blur causal attribution
– tuning vs. physics becomes inseparable
– convergence stops being probative
Outcome coincidence ≠ correct system understanding.
This is why the cannonball analogy fails. Climate models are not asymptotic limits of an exact theory. Newtonian mechanics is. The analogy is invalid.
What’s actually happening
What we are seeing is not inquiry, but performative epistemology:
– authority asserted rather than uncertainty resolved
– boundary enforcement instead of explanation
– certainty used as a social signal
This pattern is not confined to one poster. It is reinforced by several regulars who have, over months, converged on the same rhetorical substitution: methodological critique → denial → moralised dismissal.
That may maintain group cohesion, but it does not advance understanding.
Everyone ultimately chooses what they believe.
It is not my role to hold anyone’s hand, reassure them, or prove them wrong.
This comment is simply to put the record straight — in black and white — and to draw a line under a framing that has now been repeated long past the point of honest disagreement.
D: It is not my role to hold anyone’s hand, reassure them, or prove them wrong.
BPL: This is the third time I’ve read this in 2 areas. Straw man, and bloody condescending as well.
Yup. It’s particularly tedious because the sockpuppet failed on basic points that have been explained on RealClimate for years. At least other folks have the integrity and intellect needed to learn. The RealClimate post below, for example, went over generation of GMST forecasts. The modeling had three different scenarios, with each scenario having its own conditional projection for ‘if this forcing trend, then this GMST trend’. Observed forcing was between scenarios B and C, so the forecasted GMST trend was between scenarios B and C. That GMST forecast matched observed warming, confirming model skill. I have no illusions the sockpuppet account will ever honestly admit how this shows they’re wrong, any more than I have illusions that most AGW denialists will admit when experts have shown they’re wrong.
https://www.realclimate.org/images/h88_proj_vs_real6.png
https://www.realclimate.org/index.php/archives/2018/06/30-years-after-hansens-testimony/
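For readers who want that logic spelled out, here is a minimal sketch of the step described above, picking the conditional projections that bracket the observed forcing and reading off the implied GMST trend; the numbers are hypothetical placeholders, not values from Hansen et al. (1988).

scenarios = {
    # name: (projected forcing trend, projected GMST trend); units and values invented for illustration
    "A": (0.45, 0.32),
    "B": (0.30, 0.21),
    "C": (0.20, 0.13),
}
observed_forcing_trend = 0.26          # hypothetical observed value, falling between B and C

# interpolate the GMST trend between the two scenarios whose forcing brackets the observations
f_b, t_b = scenarios["B"]
f_c, t_c = scenarios["C"]
w = (observed_forcing_trend - f_c) / (f_b - f_c)   # weight toward scenario B
implied_gmst_trend = t_c + w * (t_b - t_c)
print(round(implied_gmst_trend, 3))                # lands between the B and C projections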
I made a single comment which Atomsk’s Sanakan has replied to 3 times via proxies, when he is actually addressing me. 3 posts in under an hour to reply to my one comment? This is disruptive trolling 101. His comments bear no relationship to anything I have said, or to any reference I have shared here. iow, it’s verbose nonsense.
I will make one reply here to these 3 posts by Atomsk’s Sanakan:
1) 10 Jan 2026 at 12:30 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843737
2) 10 Jan 2026 at 12:40 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843738
3) 10 Jan 2026 at 1:29 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843739
My comment 8 Jan 2026 at 7:53 PM above is a double-edged sword. It was meant to kill two birds with one stone by showing what the definitions are, while leaving an opportunity to add in more factual data.
Comments like this one: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error.” can address two things at once.
1) The definitions surrounding the AR6 Table SPM.1 and 2) the unfounded, unscientific Temperature Change Predictions given by Atomsk’s Sanakan back on 23 Dec 2025.
This bout of disinformation and incessant trolling harassment by Atomsk’s Sanakan all began with Chuck says 20 Dec 2025 at 7:56 PM
With Predictions of 3°C by 2050 and 5°C by the end of the century!
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843026
His link was to “The German Physical Society (DPG) and the German Meteorological Society (DMG) jointly issued a dire warning, stating if the current trends continue, global warming could reach 3°C above pre-industrial levels by about 2050, and up to 5°C by 2100.”
https://www.responsiblealpha.com/post/scientists-3-c-by-2050-5-c-by-2100
They backed up their predictions with a published article and science research paper. Agree or disagree that is basic science being done.
Atomsk’s Sanakan immediately pushed back presenting his own Predictions
I will quote him in full, verbatim:
Atomsk’s Sanakan says 23 Dec 2025 at 7:23 AM
Those are incorrect claims from the German Physical Society (DPG) and the German Meteorological Society (DMG). They’re cherry-picking a small number of CMIP6 models that are known to overestimate warming, including warming up to 2025. If those models are too warm in 2025, then there’s good reason to think they’ll also be too warm for 2050. The observed warming trend, better observationally-constrained CMIP6 models, CMIP5 models, the IPCC, etc. instead show we’re on pace for ~2°C by 2045-2050, 3°C by 2075-2090 (2060 at the absolute earliest), and ~3.5°C by the end of the century. [My emphasis]
https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843058
Atomsk’s Sanakan has nothing to support his “claim” their Predictions are incorrect.
Points to note:
1) The German Physical Society (DPG) and the German Meteorological Society (DMG) are not “cherry-picking a small number of CMIP6 models.” That is plainly false. Atomsk’s Sanakan has provided no evidence to support such an accusation against them.
2) It is a high-quality, credible, detailed climate science research paper under the HafenCity Universität Hamburg and the two institutions already mentioned. It has compelling supporting references. This publication is not equivalent to a throwaway comment by an internet blogger, non-expert commenter or internet troll.
3) Their conclusions are compelling: “Human-induced global warming poses a real threat to the survival of human civilization.”
4) The “observed warming trend” is not in fact constrained by existing “CMIP6 models, CMIP5 models, the IPCC, etc.”, nor vice-versa. That is false.
5) There are no “observationally-constrained CMIP6 models, CMIP5 models, the IPCC, etc.” showing anything of the kind regarding temperatures.
6) The numbers in Atomsk’s Sanakan’s Predictions are his own. Plucked out of thin air, apparently. He offers no supporting evidence of how he arrived at these figures. Unlike everyone else.
I put a number of challenges to Atomsk’s Sanakan, to which he has failed to supply any proof or supporting science for his many distorted claims or his GMST Anomaly Predictions.
Data says 23 Dec 2025 at 11:41 PM
– You need to prove that allegation is true, science based, then show your work and that list of “cherry-picked CMIP6 models”
– no one needs a model to see what the warming [is] up to 2025. The observational Data is available.
– RE If those models are too warm in 2025, then there’s good reason to think they’ll also be too warm for 2050. This is just nonsense. Prove that is true, show your work and reference everything why it matters.
– RE Listing his predictions as shown,
Data says: Prove that is true, show your work and reference everything.
– I summarized my comment as: “Predictions of ~2°C by 2045-2050 [etc] is blatantly false and unfounded. Peer-Review doesn’t confirm a paper’s analysis and findings are in fact correct. Scattered paintballs of ‘claims’ on a wall [posted by an Atomsk’s Sanakan] isn’t coherent science, let alone valid evidence of anything.”
To date there has been Silence on these questions, along with more incoherent trolling distractions in return. A flat refusal to show his “work”, with no scientific basis for his Predictions.
Original comment in full: https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-843073
Re: “Predictions of ~2°C by 2045-2050 [etc] is blatantly false and unfounded”
From the same sockpuppet account that tried to disinform people on whether Forster 2025 was peer-reviewed.
That disinformation was given to avoid acknowledging the projection of 2°C by approximately 2048 in that peer-reviewed paper’s Climate Change Tracker.
Data: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error….CMIP does not make forecasts; it explores conditional responses. Claiming forecast skill on that basis — and then accusing critics of denying it — is a strawman built on a false premise…..etc,etc”
No. The term forecast applied to future warming is just SHORTHAND or an abbreviation for “scenario conditioned simulations”. It’s implicitly acknowledged and understood that the forecasts are dependent on many variables and conditions. But if those conditions are met then we can say a model made accurate forecasts so had skill. And so the CMIP5 models had skill. So no strawman. Whether the models had good underlying physics is mostly a separate issue.
Reply to Nigelj
Then you are assuming I was making a criticism of the CMIP6 Modelling / language? But why would you think that?
Because you referred to “CMIP outputs,” which is a general statement for all CMIP models. Could that be it?
About what Data said
10 Jan 2026 at 1:49 AM
Reply to Nigelj
Then you are assuming I was making a criticism of the CMIP6 Modelling / language? But why would you think that?
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843720
Ah yes, I see now. The bold distracted you terribly. I should have put what came before in Bold too, so you didn’t miss it. Let me correct that oversight now:
“The central misrepresentation originates with Atomsk’s repeated claim that the debate concerns the “skill at projecting future global warming and iTCR.” This has already been clarified in-thread, but it bears restating clearly because the same misframing keeps being repeated.”
Yes, it definitely bears repeating. This too most probably:
“Several recent comments have continued to frame CMIP outputs as forecasts of future GMST. Since that framing is incorrect, I want to clarify the record once, for readers — in this thread, and across others for months now.”
Fixed. The only conclusion possible is there was no criticism of the CMIP6 Modelling / language by me.
Proving again the truth behind Brandolini’s Law, the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
See https://modelthinkers.com/mental-model/bullshit-asymmetry-principle
And so much for my “I want to clarify the record once, for readers” being enough. It wasn’t.
I refer new readers to my original comment in full:
Data says 8 Jan 2026 at 7:53 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843658
Data: Your response doesn’t change the issue I raised as follows:
“Data: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error….CMIP does not make forecasts; it explores conditional responses. Claiming forecast skill on that basis — and then accusing critics of denying it — is a strawman built on a false premise…..etc,etc”
“No. The term forecast applied to future warming is just SHORTHAND or an abbreviation for “scenario conditioned simulations”. It’s implicitly acknowledged and understood that the forecasts are dependent on many variables and conditions. But if those conditions are met then we can say a model made accurate forecasts so had skill. And so the CMIP5 models had skill. So no strawman. Whether the models had good underlying physics is mostly a separate issue.”
Refer also to AS responses @ 10 Jan 2026 at 12:30 PM @10 Jan 2026 at 12:40 PM
You have made no substantive response to the points made by me or AS, which are roughly saying the same things. So you’re a game playing, evasive, time wasting troll.
It’s just the sockpuppet spouting evidence-free whining again because the published evidence and expert assessments show they’re wrong. About as worthless as a non-expert flat Earther rambling online about geologists being wrong on Earth’s shape.
Anyway, experts rightly call these forecasts. There are different conditional projections/scenarios that implicitly say ‘if this amount of forcing, then this GMST trend’. That ratio, combined with observed forcing, gives you the GMST forecast. Hence why experts select the conditional projection whose projected forcing most closely matches observed forcing.
It’s no different than conditional projections/scenarios in other contexts. So, for instance ‘if you walk to work, then it will take 30 minutes to get there.’ And ‘if you drive to work, then it will take 5 minutes to get there.’ And so on. You then plug in the route you took to work as the antecedent for the conditional, and the consequent of the conditional gives your forecasted time for getting to work.
Why are you engaging the sockpuppet, though, Nigelj? They lack both the integrity and intellect needed to accept these points, unlike the experts they whine about.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843443
These points on conditional projections and ‘if this forcing trend, then this GMST trend’ were already dumbed down multiple times for the sockpuppet account. But of course they’ll pretend otherwise; like denialists, they’re incapable of honestly admitting when evidence shows they’re wrong. For instance:
Reply to Nigelj
9 Jan 2026 at 2:02 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843698
After clearing up Nigelj’s error of flawed framing, in that “I was NOT making a criticism of the CMIP6 Modelling / language” at any time, I was raising multiple issues about how commenters here misrepresent those matters. Nigelj included.
Thankfully he is still stuck in these misconceptions and has conveniently responded to me again, making it even easier for me to walk through this door as planned.
Nigelj says 12 Jan 2026 at 3:04 PM
Data: Your response doesn’t change the issue I raised as follows:
Refer also to AS responses @ 10 Jan 2026 at 12:30 PM @10 Jan 2026 at 12:40 PM
You have made no substantive response to the points made by me or AS, which are roughly saying the same things. So you’re a game playing, evasive, time wasting troll.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843801
Similar to Atomsk’s Sanakan he appears incapable of laying a finger on me. So let’s dig into his many errors and misrepresentations so far. Shall we?
Moving on from: “Clarifying a persistent misframing here: CMIP / AR6 SSP outputs are conditional projections, not forecasts of GMST — historical agreement ≠ system validation; full explanation here: https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843658”
Nigelj alleges the following is true: “The term forecast applied to future warming is just SHORTHAND or an abbreviation for “scenario conditioned simulations”.”
He fails to provide even one piece of evidence that this is true. It’s merely his personal “belief”. Or perhaps another deliberate attempt to defend the indefensible. I don’t know, but I challenge this assertion with a series of requests for evidence:
1) Can you show where climate scientists use such a shorthand? In their everyday “commentaries” and online articles that might be referenced back to here?
2) Can you show where online commentators use such a shorthand to mean specifically AR6 “scenario conditioned simulations”?
3) Can you show me where you yourself, on a regular basis, have used the term “Forecasts” as a Shorthand that meant a “future warming” and / or as an abbreviation for “scenario conditioned simulations”?
4) I am not only singling out times where someone is addressing what “skill” various CMIP and other models might have in measuring future warming changes over time, but any occasion where either the CMIP or AR6 “scenario conditioned simulations” arise.
Let me be very clear here. I do not deny people use the word “forecast” about temperatures. I am specifically calling you out in the context in which you have chosen to argue against my points made in the post above, in particular in relation to CMIP and AR6 modelling activities and outputs. You must be able to show a context that fits with what I was saying above, and you have conceded it fits. You appear to be suggesting that this “shorthand” is a common thing, otherwise why would you be raising it in the first place?
Because if it is not common, then what is your point? It says absolutely nothing against my commentary above and the clear points I was making in it.
A few examples here and there are not sufficient. If what you claim is true then people everywhere will be using this “shorthand” — you are the one, Nigelj, saying “And so the CMIP5 models had skill. So no strawman.”
If the term is being used, who decided doing so is “legitimately accurate”? You? If, as I suspect, you find there is no general use of this term as a “shortcut”, this leaves the question: where do you get all these woolly ideas from? The nearby sheep?
Let’s see you prove this is correct using evidence. You may like to start by showing us if such a shorthand is used in the AR6, in their SPM perhaps. Your moment to shine has arrived, Nigelj.
Good Luck.
AR6 WG1 SPM
https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf
Or in Zeke’s summary of AR6 WG1 (and Table SPM.1 )
https://www.carbonbrief.org/in-depth-qa-the-ipccs-sixth-assessment-report-on-climate-science/
Or in the IPCC Core Concepts Central to this Special Report-Box SPM.1
https://www.ipcc.ch/sr15/chapter/spm/spm-core-concepts/
SR15 Summary for Policymakers
https://www.ipcc.ch/sr15/chapter/spm/
Anywhere that RC Authors use such a “shorthand” appropriately and meaningfully?
In any of Zeke’s articles?
https://www.carbonbrief.org/analysis-what-are-the-causes-of-recent-record-high-global-temperatures/
https://www.theclimatebrink.com/p/the-great-acceleration-debate
Maybe Hansen et al http://www.columbia.edu/~jeh1/mailings/
Forster et al. (IGCC 2024/2025) https://essd.copernicus.org/articles/17/2641/2025/ or discussions about it.
Grant Foster
https://tamino.wordpress.com/2025/05/28/how-fast-is-the-world-warming/
https://www.researchsquare.com/article/rs-6079807/v1
https://www.realclimate.org ?
https://www.realclimate.org/index.php/archives/2019/06/absence-and-evidence/
In the recent +3C German paper?
https://www.dpg-physik.de/veroeffentlichungen/publikationen/stellungnahmen-der-dpg/klima-energie/klimaaufruf/stellungnahme
Scientists: 3°C by 2050, 5°C by 2100
https://www.responsiblealpha.com/post/scientists-3-c-by-2050-5-c-by-2100
Beaulieu (2024)
A recent surge in global warming is not detectable yet
Claudie Beaulieu, Colin Gallagher, Rebecca Killick, Robert Lund & Xueheng Shi
https://arxiv.org/abs/2403.03388
Folks shouldn’t trust the sockpuppet on the links they spam, especially after they were caught pretending Forster 2025 wasn’t peer-reviewed.
They pretended that to avoid acknowledging the projection of 2°C by approximately 2048 in that paper’s Climate Change Tracker. They continue to willfully ignore that projection since it shows they were wrong.
Data, you are just splitting hairs / being pedantic.
Your original comment on the issue: “Data: “CMIP outputs are not forecasts of future GMST. They are scenario-conditioned simulations — conditional temperature responses given prescribed forcing pathways. Treating them as predictions, or framing debate in terms of “successful prediction of future warming,” is a category error….”
It’s such a hair-splitting view of things, and so pedantic, and it just doesn’t make much sense, because the climate model simulations effectively forecast a variety of futures dependent on various conditions being met. Certain model outputs are then used as the basis for IPCC forecasts, based on likely / possible future conditions. This is hugely simplified but it’s what happens. The warming forecasts in the IPCC reports are not derived from examining the entrails of a goat or looking at the patterns of the tea leaves. They are based in large part on the results of climate modelling. Do you want me to have to prove that as well? Would you also like me to prove that night follows day?
And I’m not trying “to lay a finger on you”. You take things way too personally. I have several times questioned Piotr’s conclusions as well. I would criticise a lot more people’s comments but I don’t like to spend too much time on such things. You post a lot of material and make a lot of controversial statements so you just get a bit more on my radar.
Nigelj says
14 Jan 2026 at 2:29 PM
It’s such a hair-splitting view of things, and so pedantic,
Science is like that. Maybe you’re on the wrong blog site, Nigel? [ heat / kitchen ]
Nigelj: I suggest you not amplify Data’s endlessness. Part of it is vanity, part of it is using RealClimate in place of forming his own blog, because he gets an audience here. Don’t feed him (her). It’s not enough to be right if the discussion bores the pants off the rest of us.
I forgot to mention, Data is not wrong about a lot of stuff (except most of the personal attacks, and that applies to his/her respondents as well). The problem is not the facts, it’s the volume and the emotion, which get in the way of actual understanding.
Exploiting RC in this way does a disservice to us all.
I disagree on that, Susan Anderson. Data regularly makes stuff up, such as claiming work is not peer-reviewed when it actually was, pretending a warming projection for 2048 does not exist when it actually does, etc. They’re basically here to disinform across their sockpuppet accounts and never honestly admit when they’re wrong.
Atomsk: If you read my quite short comments for content, you will realize that I’m mostly complaining about amplifying the endlessness.*
Without wading through it all and setting aside the personal attacks and labeling, any lurker will be scratching their head rather than taking in the facts.
* Using AI enhancements makes this so much worse, because it generates content at speed and volume, obeying the directions of whoever is using it without regard for truth or falsehood.
in Re to Susan Anderson, 19 Jan 2026 at 12:34 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844083
Dear Susan,
I proposed to the moderators several times, as a possible solution, arranging an unmoderated section to which any new embodiment of the multitroll would be redirected as soon as recognized. All further interactions with him would thus occur in this confined space.
The moderators likely do not consider it a good idea, perhaps because this way, discussions in the moderated section would shrink to one fourth or one fifth of their present extent.
Greetings
Tomáš
Susan: “ I forgot to mention, Data is not wrong about a lot of stuff ”
Huh? Like what? Trying to discredit climate science? Disparaging renewables, DAC, net zero, and climate policies? Promoting apathy by lecturing us that the root of our problem is that we refuse to accept the “irreversibility” of climate change? Giving us China as an example to follow? Defending the aggression of Russia on Ukraine?
“ Data is not wrong about a lot of stuff “???
in Re to Piotr, 20 Jan 2026 at 6:07 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844142
Hallo Piotr,
I perfectly agree with all your objections against “Data” (aka “Ned Kelly”, “Simplicius”, “Lavrov’s Dog”, “Dharma”, “Poor Peru”, “Thessalonia”, “Thomas”, “Pedro Prieto”, “Yebo Kando”, “Mo Yunus”, “Jim”, “James Hnasen” and many others); however, I still tend to agree also with Ms. Anderson that in the multitroll’s writings, basically everyone can find a lot of truth. It is possible if you split their opuses into small pieces and assess each of them separately, setting aside the context in which they are served.
As an example, I would like to offer the text block “many amateur pro-Science Activists do speak with certainty” from the third paragraph from the bottom in the long “Data” post of 19 Jan 2026 at 11:13 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844107 .
Although I can perfectly agree with this small element, the context in which it is served, namely the preceding sentence “The public is being led to think science speaks in certainties.”, already smells fishy to me.
I think that this pattern fits the old recipe for efficient manipulation that we know from the propaganda of the totalitarian regimes that ruled our countries: do not only lie! Manipulative lies skilfully mixed with truth and half-truth can sometimes be swallowed even by quite cautious people. Personally, I am not surprised that people find in the multitroll’s production a lot of seemingly positive incentives to interact with him. Our society is far from perfect; it is often quite easy to build positive acceptance by critique of commonly perceived deficiencies.
The multitroll, of course, does not offer any solution. He cannot, because his actions are part of the problem. The entire context of his actions (and, therein, most clearly his steady changing of camouflages) makes me sure that he is acting in bad faith. I do not believe that discussing with a bad actor makes any sense – it rather fulfils his intention to harm. In my opinion, he causes damage already by consuming the time and energy of other people.
I admit that, contrary to my view, there still may be people who perceive their interaction with the multitroll as bringing some value. That is why I proposed several times confining him in a dedicated unmoderated section, wherein he could troll and receive the desired feedback ad libitum.
Greetings
Tomáš
Tomáš,
The moderators likely do not consider it a good idea, perhaps because this way, discussions in the moderated section would shrink to one fourth or one fifth of their present extent.
You might be right.
Maybe if they made a separate deniers thread and then limited the number of postings a person can make to it in a day to, say, one? Then in the regular thread, after someone who questions AGW has been answered a couple of times but refuses the answer, any subsequent comments go to the deniers thread?
Anyway, so what if the comments drop. The legitimacy of AGW doesn’t depend on comments.
Tomáš, why not just give it a try? Right now I suspect that these constant battles and hostility are preventing a lot of people from commenting.
If someone is hostile to AGW from the get-go he goes immediately to the deniers thread. Right now these seemingly scientific-sounding battles are leaving the impression that there’s controversy in the scientific community where there is none. That may be the aim.
Tomáš Kalisz @21 Jan 2026 at 11:09 AM
TK: “I perfectly agree with all your objections against “Data” (aka “Ned Kelly”, “Simplicius”, “Lavrov’s Dog”, “Dharma”, “Poor Peru”, “Thessalonia”, “Thomas”, “Pedro Prieto”, “Yebo Kando”, “Mo Yunus”, “Jim”, “James Hnasen” and many others); however, I still tend to agree also with Ms. Anderson that in the multitroll’s writings, basically everyone can find a lot of truth. It is possible if you split their opuses into small pieces and assess each of them separately, setting aside the context in which they are served.”
I’m 95% certain multitroll has been posting here under different names since around 2015, starting with “Thomas”. Check the website archives for yourself. These can be accessed if you look at the Index information / archives at the bottom of the page. When he changed his name several times, ending up with Reality Check, I complained about him using multiple names because of all the spamming.
He posts a lot of material some of which is generally accepted wisdom, but IMHO he also posts a lot of BS. Some of it nicely summarised by Piotr, but that’s only the tip of the iceberg.
TK: “The public is being led to think science speaks in certainties.”
already smells fishy to me.”
It’s absolutely fishy given the IPCC Reports go to extreme lengths to highlight the uncertainties. I think he likes to make controversial statements to get attention for ego gratification. And I think he has another convoluted sort of reason. He seems to think the science is expressing a certainty that warming will be in the middle of the range when Multitroll is convinced it has to be at the extreme end of the range.
TK: “I think that this pattern fits the old recipe for efficient manipulation that we know from propaganda of totalitarian regimes that ruled our countries: Do not lie only! Manipulative lies skilfully mixed with truth and half-truth can be sometimes swallowed even by quite cautious people. Personally, I do not wonder that people find in multitroll’s production lot of seemingly positive incentives to interact with him. Our society is far from perfect; it is often quite easy to build a positive acceptance by critique of commonly perceived deficiencies.”
Multitroll is probably doing those things but I don’t believe he’s a Russian agent promoting a fascist totalitarian takeover of the world (although he unwittingly supports them and gives them credibility), or that he is a deeply sinister sort of character. He’s more of an authoritarian-leaning hard leftist and environmentalist promoting a well-meaning form of global socialism. I scan almost all the comments posted on this website and this is my conclusion.
I have sympathy for some left-leaning economic views but surely it’s obvious by now that global socialism is never going to work in any way, shape or form, given all the failed experiments. So I just get the impression multitroll is a well-educated guy but doesn’t think very clearly on these sorts of issues.
TK: “The multitroll, of course, does not offer any solution. He cannot, because his actions are part of the problem. The entire context of his actions (and, therein, most clearly steady changing his camouflages) makes me sure that he is acting in bad faith. I do not believe that discussing with a bad actor makes any sense – it rather fulfils his intention to harm. In my opinion, he causes damage already by consuming the time and energy of other people.”
I don’t think the multiple identities are bad faith as such, although it’s obviously dishonest, especially when he uses the names of real people. He’s had his comments blocked several times, because he’s complained of this, so he obviously just changes his name. And when he’s consistently proven wrong he changes his name, clearly as a way of avoiding accountability for mistakes. And when he uses multiple identities simultaneously it’s clearly a way to spam the website. I can’t see inside his head to deeper motives but I get the feeling he has a massive superiority complex of some sort.
Multitroll’s 2015–2017 Thomas identity said he used to be a public relations consultant. This explains his style of rhetoric and evasiveness.
If I thought multitroll was a Russian agent or deliberately pushing a very sinister agenda, I wouldn’t interact with him and would be loudly complaining to this website in private by email. I think he’s just some sort of deluded, eccentric socialist guy. I actually agree with some of his views but disagree with an equal number. I respond to some of his views because 1) it’s important to debunk certain things for the benefit of the wider audience, 2) I like the mental stimulation of doing this, and 3) I like interesting discussion, although it’s very difficult getting that out of multitroll because he’s so ultra-sensitive and defensive.
TK: “I admit that, contrary to my view, there still may be people who perceive their interaction with the multitroll as bringing some value. That is why I proposed several times confining him in a dedicated unmoderated section, wherein he could troll and receive the desired feedback ad libitum.”
Makes sense in theory, but the moderators probably can’t be bothered with something like that; it’s just more work for them to set it up and make it work, and I sympathise with them over that.
For me the solution is very simple: all the other websites I participate in have moderation rules similar to this one (no blatant personal abuse, no spamming with huge word counts every day, no off-topic, no multiple identities, no lying), BUT unlike this website they enforce those rules quite strongly, or at least more strongly than this website does. As such, most problems and problem people go away or clean up their act, but you still get good participation from the general public.
This website has at least got tough on sock puppetry and that has gone away. For now anyway. So be grateful for small mercies!
in Re to Ron R., 22 Jan 2026 at 8:48 AM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844196
Hallo Ron,
I agree that it could be an interesting experiment and that it might be worth a try.
The decision is, however, fully at the discretion of the moderators.
Greetings
Tomáš
My scientific commentary is solid, steady, well supported by references, and clear enough. For the record, I do not “make stuff up”, nor am I here “to disinform”, and neither am I maintaining multiple “sockpuppet accounts”. I do not engage in “personal attacks”; I do defend myself against spurious attacks. Furthermore, when I find myself to be “wrong,” or make an inadvertent “Typo/Mistake”, I correct the record asap. I am never “emotional”, nor do I engage in “strawmen logical fallacies” or make false, unfounded “accusations against others.”
Tomas Kalisz: I still tend to agree also with Ms. Anderson that in multitroll’s writings, basically everyone can find lot of truth. It is possible if you split their opuses into small pieces and asses each of them separately
And in the writings of anti-vaxxers you can occasionally find 2+2=4. That’s one of the most obvious rules of propagandists – if everything you write is false nobody would buy it, so the way to sell your propaganda is to mix some truths with half-truths and outright lies – so the truths are the medium to deliver the lie. And to get to these morsels of truth you have to read all the garbage, because without reading it how would you know that there is a morsel there?
So you come for the truth at the price of staying for the anti-science propaganda.
Why would I agree to such a bargain when I can get the same truths without all the lies and half-truths? Life is too short to waste it on mass-producing trolls (Data: 95 posts in 22 days). And if at least his writing was good…
Aesthetics can be helpful in life; one should not neglect the study of beauty ;-)
” Verily their rhetoric was made of cheap sacking
(Marcus Tullius kept turning in his grave)
chains of tautologies a couple of concepts like flails
the dialectics of slaughterers no distinctions in reasoning
syntax deprived of the beauty of the subjunctive
So aesthetics can be helpful in life
one should not neglect the study of beauty ”
Zbigniew Herbert
Associating Newton’s work with unphysical construction obviously distorts his legacy. Suppose NASA tried to send a vessel to a distant destination using a version of Newton’s laws that happened to get the endpoint right but misrepresented or omitted relations of velocity, momentum, mass, and gravity. Because the underlying structure is inconsistent, it would be incapable of computing the fuel budget or resolving trajectories. Nobody would rely on such a framework. Most importantly, it would never lead to Einstein’s later insights, which depend on the underlying symmetries that are revealed only by the consistent system Newton outlined. If the governing relations are unresolved, you cannot account for the energy (Joules) required to move the system.
By analogy, Callendar’s 1938 framework arrives at a plausible GMST at some time by disregarding the required energy accumulation. Through compensating omission, his account fails to track the actual Joules moving through the system; it produces the right GMST for the wrong energy, which is essentially nonsense, since global warming is diagnostic of energy. In modern terms, this is not a transient response but an equilibrium mapping. The transient climate response is defined by the time-dependent evolution of temperature under a persistent top-of-atmosphere (TOA) radiative imbalance, governed by the coupled dynamics of atmospheric adjustment and ocean thermal inertia. The temperature evolution is explicitly a transient process, where the system state is determined by the integral of TOA net radiation over time, not by instantaneous surface radiative closure. Treating his equilibrium GMST snapshots as equivalent to the modern concept of a transient state dragged by energy accumulation is categorically false and a historical fiction – incompatible accounts that realscience should clearly distinguish.
Again, none of what you’re saying prevents one from recognizing that Callendar in 1938 accurately projected iTCR, i.e. the ratio of global warming vs. forcing. If you don’t like iTCR in terms of forcing then the iTCR can be stated as the amount of global warming per doubling of CO2. Bringing up differences between Callendar’s modeling vs. that of others does nothing to change that.
“Several decades after the publication of Arrhenius’ study, Callendar (1938) made a renewed attempt to estimate the magnitude of surface temperature change in response to the increase in atmospheric carbon dioxide, using absorptivities of carbon dioxide and water vapour that are more realistic than that used by Arrhenius.”
https://tellusjournal.org/articles/10.1080/16000870.2019.1620078
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843744
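As a small arithmetic aside on what “warming per doubling of CO2” means when it is backed out of a warming-to-forcing ratio: the numbers below are invented for illustration, and 3.7 W/m² is only a commonly cited approximate value for the forcing from doubled CO2.

def warming_per_doubling(delta_T, delta_F, F_2x=3.7):
    # delta_T: warming over some period (K); delta_F: forcing change over the same period (W/m^2)
    # scales the warming-per-unit-forcing ratio up to the forcing of one CO2 doubling
    return delta_T * F_2x / delta_F

print(round(warming_per_doubling(delta_T=1.1, delta_F=2.2), 2))   # 1.85 K per doubling with these toy inputs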
I do feel that a distinction matters, because the extent to which GMST is dragged at a certain time under a forced TOA imbalance depends on the feedback parameter, lambda, and the ocean heat uptake coefficient, gamma. Although you repeat claims to the contrary, Callendar includes no analogues to a lambda or gamma. As previously discussed, Callendar’s temperature evolution is the required change to maintain a surface radiation equilibrium; snapshots that have no connection to modern concepts of TCR.
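To show what roles a lambda and a gamma play, here is a minimal two-layer energy-balance sketch of the transient picture described above; the parameter values are illustrative guesses, not fitted to any model or observation.

lam, gamma = 1.2, 0.7            # feedback and ocean heat uptake coefficients, W m-2 K-1 (illustrative)
C_s, C_d = 8.0, 100.0            # surface and deep-ocean heat capacities, W yr m-2 K-1 (illustrative)
F_2x = 3.7                       # commonly cited approximate forcing for doubled CO2, W m-2

T, T_d, dt = 0.0, 0.0, 1.0       # start in equilibrium; one-year time steps
for year in range(1, 71):        # forcing ramps roughly linearly to one doubling over 70 years
    F = F_2x * year / 70.0
    dT_s = (F - lam * T - gamma * (T - T_d)) / C_s   # surface layer: imbalance minus heat export to depth
    dT_d = gamma * (T - T_d) / C_d                   # deep layer slowly takes up the exported heat
    T, T_d = T + dt * dT_s, T_d + dt * dT_d

print(round(T, 2))               # transient warming at the time of doubling (TCR-like number)
print(round(F_2x / lam, 2))      # equilibrium warming for the same forcing, larger and reached only later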
It’s worth repeating that we have already defined skill as outcome agreement, which has no dependence on explanatory correctness. But outcome agreement on fundamentally different quantities, such as a TCR (in a state far from TOA energy balance) vs. a surface radiation equilibrium snapshot, is meaningless. The only relation is that both are numbers representing a temperature anomaly.
Number-matching without regard to explanatory correctness may suffice in some engineering contexts, but even there, conflating fundamentally different quantities carries serious risk. For example, imagine confusing the transient state of a hydraulic model of river flood stage under a 1% 24-hour rainfall distribution with the 1% regulatory flood elevation. That would be seriously messed up and such a thing does not meet a basic engineering standard, much less a realscientific one.
No one particularly cares whether US Army Corps of Engineers tools like HEC-RAS resolve physics or rely on empirical parameterizations, so long as the outputs are skillful. But it would be super weird if engineers fail to retrospectively distinguish real-time transient states from regulatory benchmarks in an attempt to preserve a continuous-history narrative.
It is worth noting that scrutiny here matters because once someone comes along with the right idea in principle, that CO2 should cause a warming influence, all it takes is to find a way to make a line of increasing temperature to correspond to a line of increasing CO2 with the same shape. Once you actually get down into it, it’s a fascinating and complex issue which has come with major advances in concept along the way that should be celebrated rather than glossed over or suppressed.
JCM says 18 Jan 2026 at 4:13 PM in Reply to Atomsk’s Sanakan 14 Jan 2026 at 4:47 PM https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843898 and @ Atomsk’s Sanakan 8 Jan 2026 at 3:09 PM https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646
But it would be super weird if engineers fail to retrospectively distinguish real-time transient states from regulatory benchmarks in an attempt to preserve a continuous-history narrative.
Indeed, it would be super-weird. What isn’t weird is that a small cadre of commentators who make mistakes in one area make mistakes and misrepresentations in others as well.
Like not accepting nor openly acknowledging that CMIP is a computational exercise, and not a scientific experiment.
Or ignoring and repeatedly distracting from:
“Statistical significance thresholds are human conventions for managing error, not discoveries about nature. Whether a trend or acceleration is “significant” depends on choices about confidence level, error tolerance, and model assumptions. Treating 95% CI as a hard scientific boundary confuses decision rules with physical reality and invites overconfidence rather than clarity.”
Because we know that statement is:
Technically correct
Unassailable
Deeply inconvenient to absolutists
But not to real scientists.
Statistics is a lens, not a judge. And anyone who pretends otherwise is mistaking the map for the terrain — with confidence intervals drawn in thick black ink.
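A tiny numerical illustration of that point, using an invented trend estimate and standard error: the same estimate is called “significant” or “not significant” purely depending on which conventional confidence level you pick.

trend, std_error = 0.19, 0.10    # invented trend estimate and its standard error; units irrelevant here

for conf, z in [(90, 1.645), (95, 1.960), (99, 2.576)]:   # standard normal critical values
    lower, upper = trend - z * std_error, trend + z * std_error
    verdict = "significant" if lower > 0 else "not significant"
    print(f"{conf}% CI: [{lower:.2f}, {upper:.2f}] -> {verdict}")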
This is really sad, because: The public is being led to think science speaks in certainties. It does not. Unfortunately many amateur pro-Science Activists do speak with certainty.
Some scientists encourage that illusion. And others weaponize thresholds to shut down discussion and to deny Uncertainties.
It’s worthwhile to never confuse a real scientist practicing Science with a Lawyer.
Why Elon Musk’s Mars Plan is SCIENTIFICALLY Impossible | Neil deGrasse Tyson
https://www.youtube.com/watch?v=ldRtMa1FX4M
“Neil deGrasse Tyson explains why Elon Musk’s plan to put one million people on Mars by 2050 is scientifically impossible. From deadly radiation exposure to toxic soil and atmospheric challenges, discover the physics problems that make Mars colonization a dangerous fantasy rather than achievable reality.”
“Learn why SpaceX would need over 10,000 launches in 25 years, why Martian radiation would cause cancer within months, and how perchlorate-contaminated soil makes agriculture impossible. Tyson breaks down the fundamental biological and physical barriers that no amount of money or technology can overcome.”
“This isn’t science fiction speculation – this is rigorous physics analysis of Silicon Valley’s most ambitious claims. Understand why tech billionaires’ future predictions often ignore basic scientific principles, and how their Mars obsession diverts resources from solving Earth’s actual problems.”
“Essential viewing for anyone interested in space exploration, Mars missions, scientific reality versus tech hype, and why astrophysicists remain skeptical of ambitious colonization timelines.”
Susan Anderson: “ Neil deGrasse Tyson explains why Elon Musk’s plan to put one million people on Mars by 2050 is scientifically impossible. SpaceX would need over 10,000 launches in 25 years, Martian radiation would cause cancer within months, and perchlorate-contaminated soil makes agriculture impossible.
Let’s meet halfway – send to Mars only one rocket – with Musk, DJ Trump, Steven Miller, and Cruella de Ville Kristi Noem on board (no puppies though). I can already see the last Truth-Social post from the Donald J. Trump ballroom of the Space-Force 1, sent minutes before landing:
“We have to own Mars, whether you Martians like or not. We can do it the easy way or the hard way. If we don’t own Mars – Russians, Chinese, Somalis, Sleepy Joe, drug cartels and Antifa will have it. I won’t let it happen. It will be tremendous. When you become the 51st state – we will cherish you, Martians, as nobody has cherished you before! Make Mars Great Again!
Thank you for your attention to this matter”.
Piotr: good one! “perfect”, as the gobbler in chief would say.
Susan: “Piotr: good one! “perfect”, as the gobbler in chief would say.”
Thank you Susan. I would have accepted also: “Beautiful!”, “Tremendous!”, “Unlike anything anybody seen before!”, “Best in the history of the USA and very likely, the world!”.
I am a little disappointed though that you didn’t follow up with a nomination for the Nobel Prize that I deserved biggly, more than anybody in history, but those crooked Norwegians, stole it from me. And this is how they repay me for my wishing we had more immigrants from Norway! (I’ll leave it to my adorable press secretary to tell you how ungrateful was that).
This is even better. It includes an interesting discussion of the morals of science fiction vs. the belief system of tech billionaires who miss the point. I love all the laughter. Unfortunately, it’s rather long. One might call it scientific locker room talk, if one wasn’t looking for vulgar sexism but good clean fun. I’m a fan of astrophysicist Adam Becker’s work and this video also increased my respect for NdGT’s communication chops.
https://www.youtube.com/watch?v=o6UdRXloqGc
“billionaires who miss the point” is a self-contradiction because the sole purpose of Life is competition to the death.
BEF: ick
Money and power are not the sole measures of value. I suggest you watch the video.
There are very few, if any, biologists who would accept this statement uncritically, especially if you are assuming individual competition between two individual organisms. If you are thinking in terms of whole species finding successful ecological niches, well, that’s a bit more modern thinking, but it still takes much nuance, much data, and a very large bank of computers.
“Nature red in tooth and claw” is a 19th century notion, and it really doesn’t capture actual natural selection mechanisms and their feedbacks and feedforwards in the least.
Question: What particular “competition” does a random apex predator “win” if it overeats its prey population? Or the herbivore equivalent: What particular competition does an herbivore win if it eats all the plants in its environment. (In this 2nd regard, consider the Isle Royal experience with moose and wolves.)
– Barry: “billionaires who miss the point” is a self-contradiction because the sole purpose of Life is competition to the death.
– Susan: “BEF: ick. Money and power are not the sole measures of value. I suggest you watch the video”.
– Ron R.: “Lynn Margulis would have disagreed with you.” [link to another video]
Deadpan, you two? ;-)
Piotr,
Ron R. …
link to another video]
Was there another video? Didn’t look past the paywall.
Piotr: the difference is that BEF was responding about my posted video from Neil deGrasse Tyson, which directly addressed what was wrong with his complaint.
Your snappy snark was a good way to change the subject. I felt the video was important. It’s another example of how being clever prevents insight.
Repeat link to NdGT, just in case someone else might take its point:
https://www.youtube.com/watch?v=o6UdRXloqGc
Susan: Piotr: the difference is that BEF was responding about my posted video from Neil deGrasse Tyson, which directly addressed what was wrong with his complaint.
What complaint? Barry RIDICULES tech billionaires by attributing to them the myopic thinking that “the sole purpose of Life is competition to the death.”
To which you and Ron responded … as if it was Barry’s own view: “ick. Money and power are not the sole measures of value. I suggest you watch the video.”
To me Barry’s irony was so OBVIOUS that I half-jokingly asked if you and Ron were deadpanning.
To which you lectured me “Your snappy snark was a good way to change the subject”
P: [facepalm emoji here]
Susan: It’s another example of how being clever prevents insight.
Since it was you who missed Barry’s point, I didn’t miss any insight. If anything, your response reinforced to me Mencken’s “snappy snark” that one can never underestimate [the sense of humour] of the (American) public.
Still, unless it is a deadly serious matter – I’d rather err on the side of whimsy and humour than kill them both by stating the obvious in such a way that nobody could possibly miss the point. Where’s the fun in that?
And for the predictable, unoriginal, painfully obvious, boring, often inaccurate, slanted and embittered answers – we already have Grok. Or “Data”.
Barry: “billionaires who miss the point” is a self-contradiction because the sole purpose of Life is competition to the death.
– Susan: “BEF: ick. Money and power are not the sole measures of value. I suggest you watch the video”.
– Ron R.: “Lynn Margulis would have disagreed with you.” [link to another video]
Ron: “Was there another video? Didn’t look past the paywall”
Sorry, my bad. It should have read: “[another link]” or [link to the article about Lynn Margulis]
P.S. My point was that “the sole purpose of Life is competition to the death” was NOT Barry’s opinion. but his irony toward tech billionaires, and as such, I suspect Lynn Margulis would go along with Barry Finch just fine.
P.S. My point was that “the sole purpose of Life is competition to the death” was NOT Barry’s opinion. but his irony toward tech billionaires, and as such, I suspect Lynn Margulis would go along with Barry Finch just fine.
Oh, oops. Then I would agree too.
Actually Piotr, my comments, in the right light, could be looked at as ironic too… :D
FWIW, I’d argue it differently, claiming that human “purpose” cannot be limited by purely biological paradigms.
– Piotr 23 Jan: “My point was that “the sole purpose of Life is competition to the death” was NOT Barry’s opinion. but his irony toward tech billionaires, and as such, I suspect Lynn Margulis would go along with Barry Finch just fine.”
– Ron R: 23 Jan 9:38 PM: “ Oh, oops. Then I would agree too.”
– Ron R.: 23 Jan 10:30 PM: “ Actually Piotr, my comments, in the right light, could be looked at as ironic too… :D ”
And that’s why I offered you and Susan the off-ramp:
Piotr 20 Jan [to Susan and Ron:] “Deadpan, you two? ;-)” ;-)
Barry E Finch,
The concept of “the sole purpose of Life is competition to the death” is, in a business sense of course, the entire world of these ‘tech bros’, who have never been working with the public good in mind, to the point that instigating serious public harm will be no direct restriction on their business plans.
When these characters are described, I do rather dislike use of the word ‘tech’. Even if you consider software a technology, these are not technical people, not technicians. They gain their positions in business by grift, by conning investors into thinking that they are the best thing since sliced bread, and also by conning mass-customers into their orbit to demonstrate their ridiculous business models are not entirely vapourware. To that end, to achieve a functional product quickly and cheaply, they will cut corners and hack out that functional product without proper process, without being mindful of what they are actually trying to sell. And if they do achieve their goal of a bulging pile of investment capital, they use it to buy out competitors and establish an effective monopolistic position.
The only difference between the ‘tech bros’ and the Donald may be that they have the ability to hear what their competent advisers are telling them. And maybe they are ‘able’.
(Singling out Elon, he is completely ‘unable’ in this respect. His story does demonstrate how a tidy wad of cash from your dad and room mate during the dot-com bubble could be, by parts and with single-mindedness, used to bully and fool the rest of the world into funding insanity. And where does that foolish insanity come from?)
But for your average ‘tech bro’, when the game is properly afoot and the likes of AI technology is the football, the ‘tech bros’ will likely be as deaf to such advice as the Donald (or Elon). And that is a bigly big big worry!!
Lynn Margulis would have disagreed with you.
https://www.science.org/doi/10.1126/science.252.5004.378
He also points out the simple logical conclusion that if we *could* terraform Mars, then reterraforming Earth would be a hell of a lot easier, so…. why focus on Mars?
My answer? More money in it for Musk, et al.
Remember when Musk proposed nuking the poles to warm up the planet?
I propose redistributing all that wealth to urgent problems here on earth.
For clarity and ease I’m moving this comment away from the very noisy thread of replies it comes from.
Atomsk’s Sanakan says 10 Jan 2026 at 2:52 PM
……… you’re simply moving the goalposts from the original points.
That’s a bit rich. Another distortion fills out the pattern.
Data says 1 Jan 2026 at 6:53 PM
An addendum to addendums
Data says 28 Dec 2025 at 4:49 PM @
https://www.realclimate.org/index.php/archives/2025/11/raising-climate-literacy/#comment-843196
About the “models aren’t tuned” myth…
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843345
JCM responded correctly in context:
JCM says 4 Jan 2026 at 12:09 PM
It looks to me like:
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843457
Whereas the A’sS response is a clusterf*** of unrelated non-specific verbose noise.
Atomsk’s Sanakan says 3 Jan 2026 at 7:55 PM
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843443 and
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843479 and
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843480 and here
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843578 and still more again
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843587 more here
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843646 only to double down yet again
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843655
Finally ending in a whimper:
Atomsk’s Sanakan says 10 Jan 2026 at 2:52 PM
“Again, JCM, you’re simply moving the goalposts from the original points. These were that older models (including Callendar 1938) skillfully/accurately projected iTCR and GMST, with this not being explained by model tuning. I already explained how that iTCR can be stated in terms of warming per doubling of CO2. Your other concerns about model details do nothing to change those points.”
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-843744
Emphasising yet again it is hard to discuss anything with anyone who has not read, or has not understood, what the individual references they provided actually mean beyond the misleading rhetoric that accompanies them.
Be discerning who you choose to listen to. Repetitive overconfident noise ≠ an expert nor an honest broker. It’s just noise.
Re: “Emphasising yet again it is hard to discuss anything with anyone who has not read, or has not understood, what the individual references they provided actually mean beyond the misleading rhetoric that accompanies them.“
What then to say of a sockpuppet pretending a reference they neither read nor understood (Forster 2025) wasn’t peer-reviewed?
Or what to say of a sockpuppet account pretending a projection of 2°C by approximately 2048 wasn’t included in that paper’s Climate Change Tracker?
Atomsk’s Sanakan says
12 Jan 2026 at 6:19 PM
Re: “Emphasising yet again it is hard to discuss anything with anyone who has not read, or has not understood, what the individual references they provided actually mean beyond the misleading rhetoric that accompanies them.“
What then to say of a sockpuppet pretending a reference they neither read nor understood (Forster 2025) wasn’t peer-reviewed?
Data says: “Atomsk offers a mix of non-peer-reviewed “references” — Forster 2025, Copernicus, Carbon Brief, ERA5 —“
Or what to say of a sockpuppet account pretending a projection of 2°C by approximately 2048 wasn’t included in that paper’s Climate Change Tracker?
– Data says: “Atomsk’s Sanakan claims: the observed warming trend, better observationally-constrained CMIP6 models, CMIP5 models, the IPCC, etc., show we’re on pace for ~2°C by 2045-2050, 3°C by 2075-2090 (2060 at the earliest), and ~3.5°C by the end of the century.
[…]
I have checked each source individually; none definitively supports the precise numbers he asserts. These references provide plausible ranges or scenario envelopes, not point predictions.”
– Here from Climate Change Tracker in Forster 2025
– Forster 2025: “We have published a set of selected key indicators of global climate change via Climate Change Tracker (https://climatechangetracker.org/, Climate Change Tracker, 2025), a platform which aims to provide reliable, user-friendly, high-quality interactive dashboards, visualisations, data, and easily accessible insights of this paper.“
One more time makes no difference.
And the sockpuppet account continues to willfully ignore the projection that shows they’re wrong (2°C by 2048), just as they willfully pretended Forster 2025 was not peer-reviewed. Same sort of avoided of evidence that’s displayed by AGW denialists.
An honest person by now would have admitted that:
1) Forster 2025 was peer-reviewed,
2) Forster 2025 made a projection of 2°C by 2048 via its Climate Change Tracker,
3) that projection is consistent with the projection I gave of 2°C by 2045-2050.
But admitting that would require the sockpuppet to honestly admit they’re wrong. That’s something they’re incapable of doing, much like many other science denialists. So they’ll keep deflecting from answering that point.
Atomsk’s Sanakan says
14 Jan 2026 at 8:22 AM
And the sockpuppet account continues to willfully ignore the projection that shows they’re wrong (2°C by 2048), just as they willfully pretended Forster 2025 was not peer-reviewed. Same sort of avoided of evidence that’s displayed by AGW denialists.
An honest person by now would have admitted that:
1) Forster 2025 was peer-reviewed,
2) Forster 2025 made a projection of 2°C by 2048 via its Climate Change Tracker,
3) that projection is consistent with the projection I gave of 2°C by 2045-2050.
But admitting that would require the sockpuppet to honestly admit they’re wrong. That’s something they’re incapable of doing, much like many other science denialists. So they’ll keep deflecting from answering that point.
One more time makes no difference.
Nice to see the sockpuppet account reduced to simply copy-and-pasting rebuttals of what they said, since they can’t cogently address those rebuttals. That confirms the prediction that they’d deflect.
And “Same sort of avoided” should instead read “Same sort of avoidance“.
Data do you seriously believe any sane person is going to waste their time reading your long list of links? What really happened is this. The discussion on models, tuning and skill started when Yebo Kando made some comments claiming CMIP5 climate models allegedly lack skill, and AS posted a counter-argument, including making the point that their results are not simply a result of “tuning”. At no point did AS claim models weren’t tuned.
During all this, BPL entered the discussion with a claim that models weren’t tuned (which doesn’t look like a valid claim to me). So in most of your replies to AS about how models are in fact tuned, you are responding to the wrong person! You have misinterpreted things, which is understandable because the discussion became confused and messy. There’s more to it, but those are the crucial points.
Nigelj says
13 Jan 2026 at 3:54 PM
Data do you seriously believe any sane person is going to waste their time reading your long list of links- [to Atomsk’s Sanakans repetitive unrelated non-specific verbose noisy commentary] ?
D: Let me answer that this way: anyone who read that many comments by Atomsk’s Sanakan in one sitting would never be the same again. So you’re more or less correct, I think.
As for your journalistic attempt to “tell the story of what happened” – don’t give up your day job or your retirement to be a journo. Sorry, you don’t have what it takes; you’re as bad as the rest of the media at informing the public of the really important facts within a meaningful and properly accurate context. You seem to have a lot in common with our Atomsk’s Sanakan fellow, unfortunately.
If you believe a model is very good, then its response to a controlled (virtual) perturbation experiment may reasonably be interpreted as giving “wrong answers for the right reasons,” when evaluated against criteria such as observed accounting of shortwave and longwave components to net radiation.
In this situation, the model is producing a physically consistent response to the imposed inputs, even though its output disagrees with observation.
The danger arises when this discrepancy is taken as evidence that the model physics are wrong, leading to retroactive re-tuning of physics parameters to force agreement. Such tuning can degrade the model, because it implicitly assumes the experimental inputs are exact and shifts error into the model structure itself.
At this time, I don’t think it is known why many of the most “skillful” models in representing GMST do not match the observed shortwave and longwave components of the driving net radiation at all.
On the one hand, some people blame the aerosol inputs (and associated “indirect” effects); on the other hand, some people blame “feedbacks”, the latter being explicitly an expression of model structure.
To compound the interpretive difficulty, others invoke the wildcard of unforced variation (unexplained low-frequency wobbles).
Digging even deeper, especially with respect to realclimates, recent analysis suggests CMIP experiments show bizarre trans-simulation consistencies:
“Despite potentially large differences in how different models simulate aerosol and cloud physics, the spatial patterns of the trends are very consistent across the ensemble-mean of each mode”
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2025GL119493
“Observed and Modeled Trends in Downward Surface Shortwave Radiation Over Land: Drivers and Discrepancies” from Karen McKinnon and Isla Simpson.
So there is an implicit tuning driving similar responses to aerosol concentration using different physical structures, resulting in the same biased spatial patterns of “brightening”.
From Figure 3, the spatial correlation of ERA5 and CMIP trends of surface SW down shows basically no relation, and massive continental areas are outside the range of any simulation.
We see how the simulations, in one way or another, produce strong SW trends coincident with historically major aerosol emission areas (eastern USA, western Europe, India, eastern China), and totally miss the mark across major regions of USA, South America, Africa, Middle East, and Asia.
Figure 3B is quite clear, where we see ERA5 trend outranking hundreds of simulations, while 3A shows the practically exclusive dependence on aerosol patterns for CMIP MMM. https://agupubs.onlinelibrary.wiley.com/cms/asset/82c91524-1fc5-4c20-b12f-501afa15cac2/grl71765-fig-0003-m.jpg
This shows that while the models capture aerosol signal, they systematically fail to reproduce trends where no aerosol signal is to be found.
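For readers who want to see what an area-weighted spatial (pattern) correlation between two trend maps actually computes, here is a minimal sketch. It is not code from McKinnon & Simpson; the grid and the trend fields below are synthetic placeholders, and the cos(latitude)-weighted correlation is the only point being illustrated.

```python
import numpy as np

def pattern_correlation(obs, model, lats):
    """Area-weighted Pearson correlation of two lat x lon trend fields."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(obs)  # cos(lat) area weights
    w = w / w.sum()
    def wmean(x):
        return np.sum(w * x)
    oa = obs - wmean(obs)       # remove the area-weighted mean of each field
    ma = model - wmean(model)
    cov = np.sum(w * oa * ma)
    return cov / np.sqrt(np.sum(w * oa**2) * np.sum(w * ma**2))

# Synthetic stand-ins on a coarse 5-degree grid (not ERA5 or CMIP data)
lats = np.arange(-87.5, 90, 5.0)
lons = np.arange(2.5, 360, 5.0)
rng = np.random.default_rng(0)
obs_trend = rng.normal(0.0, 1.0, (lats.size, lons.size))
model_trend = 0.2 * obs_trend + rng.normal(0.0, 1.0, obs_trend.shape)  # weakly related

print(f"pattern correlation: {pattern_correlation(obs_trend, model_trend, lats):+.2f}")
```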
So what’s to do? Re-tune the model structure? Or trust that the models actually might be useful, and therefore capable of diagnosing and identifying novel and previously underappreciated experimental input problems?
Nigelj says
13 Jan 2026 at 3:54 PM
At no point did AS claim models weren’t tuned.
Oh boy. The one who has misinterpreted things, again, and gets it all back to front and upside down.
Yeah, the 3+ month long discussion is confused. I am not.
A’sS is not a credible reporter of the Real Climate Science.
Re: “At no point did AS claim models weren’t tuned.“
Yup. What I said was:
1) models skillfully/accurately projected global surface temperature trends and iTCR (i.e. the ratio of global surface temperature trends vs. forcing)
2) model tuning did not explain these skillful/accurate projections
Despite both you and I correcting the sockpuppet on what I said, there’s no chance they’ll honestly retract their misrepresentation of what I said.
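As an aside on the iTCR in point 1 above: the ratio framing can be written down in a couple of lines. The sketch below only illustrates the idea (temperature trend over forcing trend, scaled to a CO2 doubling); the trend numbers are placeholders, not the values under dispute, and the 2xCO2 forcing value is the approximate AR6 figure.

```python
# Illustrative only: an "implied TCR" as warming-per-forcing scaled to a CO2 doubling.
F_2XCO2 = 3.93  # W/m^2, approximate effective radiative forcing for doubled CO2 (AR6)

def implied_tcr(dT_per_decade, dF_per_decade, f_2x=F_2XCO2):
    """Ratio of the warming trend to the forcing trend, expressed in K per 2xCO2."""
    return (dT_per_decade / dF_per_decade) * f_2x

# Placeholder numbers: ~0.2 K/decade warming against ~0.4 W/m^2/decade forcing growth
print(f"implied TCR ~ {implied_tcr(0.2, 0.4):.1f} K per doubling of CO2")
```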
HANSEN’S Latest post
https://jimehansen.substack.com/p/three-things-all-at-once-all-the
Prologue: Part I
Why a prologue, as well as a Preface? The prologue is a guide to a story with many facets. My goal with Sophie’s Planet is to use my life experience – my gradual education as a scientist and political independent – to help you understand climate change on our home planet, its origin in the energies that improve our lives, and the politics that make it difficult to achieve good public policy. I include my opinion about policies, but my aim is to illuminate my journey and let you form your own opinion. If you are not a scientist, the prologue will be hard because it includes science topics without full explanations. You can skim over things that are not clear, because the science will be explained from an elementary level in the book that follows.
Jan 14, 2026
Link: https://jimehansen.substack.com/p/three-things-all-at-once-all-the
Hansen’s reflections on his career are interesting, but one figure in particular stood out to me: Jule Charney. What Hansen says about Charney is not only insightful but also seems timeless. It speaks directly to where climate science is today, especially in the way the field deals with uncertainty, authority, and “scientific certainty.”
That’s why I think these few extracts are so relevant right now.
Extracts from Hansen’s Charney section:
“Charney essentially ignored the implied request for policy guidance. He realized that there was a basic issue to be solved before it would be possible to say how serious the climate threat was.”
“Charney’s genius was to cut through a thicket of uncertainties and focus on the single quantity – climate sensitivity – that would largely determine the magnitude of human-made climate change.”
“Of course, we must also know the climate forcing that drives climate change: climate change is the product of the forcing times the sensitivity of the system.”
“Charney focused the problem by asking: what would the eventual global warming be in response to a doubling of atmospheric carbon dioxide?”
“Climate response to even such simple forcing is complex because of climate feedbacks.”
“Charney chose to evaluate climate sensitivity with ice sheets fixed, leaving ice sheet change for future research.”
“Charney’s best estimate for the equilibrium (eventual) global warming in response to doubled carbon dioxide was 3°C… but with great uncertainty such that there was only a 50 percent chance that sensitivity was in the range 1.5–4.5°C.”
“That range covered everything from modest climate impacts to global catastrophe; it was no wonder that Charney avoided policy statements in his report.”
“More than 40 years later, the 2021 United Nations climate assessment still gave 3°C as the most likely response… and with still a very large uncertainty range.”
“Charney would be surprised at how long it took to obtain data needed to understand the role of clouds in climate change.”
“However, Charney’s framework for studying climate change is finally beginning to pay off.”
“He would not be surprised about the role of clouds, especially moist convection… in explaining not only high climate sensitivity, but growing climate extremes…”
A feast of temperature data has arrived in close order. Copernicus ERA5 re-analysis and a UK Met Office report giving a 2025 average for HadCRUT5 has been joined by Berkeley Earth and GISTEMP and NOAA reporting numbers for December.
With the December anomaly well down on Sept, Oct & Nov, all but the GISS series put 2025 into the 3rd-warmest spot behind top-spot 2024 & second-spot 2023. And for all those breathy ‘scorchyisimoists’, ERA5, the only SAT in this record (so ocean temperatures aren’t that dastardly SST), was the only one putting the last three-year average 2023-25 above the magic +1.5ºC.
With the exception of GISS, the table below shows these temperature series with an 1850-1900 anomaly base (GISS 1880-1920). Note the NOAA series is a lot warmer thro’ the 1850-1900 base period, resulting in its lower anomalies.
Annual … … … 2023 … … … 2024 … 2025 … (& 3-year ave)
ERA5 … … …+1.48ºC … +1.60ºC … +1.47ºC … (+1.52ºC)
HadCRUT… +1.47ºC … +1.53ºC … +1.41ºC … (+1.45ºC)
GISS … … …+1.45ºC … +1.56ºC … +1.46ºC … (+1.49ºC)
NOAA … … ..+1.36ºC … +1.46ºC … +1.34ºC … (+1.39ºC)
BEST … … …+1.47ºC … +1.52ºC … +1.44ºC … (+1.48ºC)
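Since the values above depend entirely on the chosen anomaly base period, here is a minimal sketch of what re-baselining does. The series is invented for illustration (it is none of the datasets in the table); the only point is that shifting the reference period shifts every anomaly by a constant.

```python
import numpy as np

def rebaseline(years, anoms, start, end):
    """Shift anomalies so their mean over [start, end] is zero."""
    mask = (years >= start) & (years <= end)
    return anoms - anoms[mask].mean()

years = np.arange(1850, 2026)
rng = np.random.default_rng(1)
toy = 0.01 * (years - 1940) + rng.normal(0.0, 0.1, years.size)  # invented warming series

anom_5180 = rebaseline(years, toy, 1951, 1980)  # an alternative base period
anom_pi   = rebaseline(years, toy, 1850, 1900)  # the 1850-1900 base used above

print(f"2025 anomaly vs 1951-1980: {anom_5180[-1]:+.2f} C")
print(f"2025 anomaly vs 1850-1900: {anom_pi[-1]:+.2f} C")
```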
“2025 Global Climate Highlights, Copernicus Climate Change Service”
“2025 ranks as the third-warmest year on record, following the unprecedented temperatures observed in 2023 and 2024. It was marginally cooler than 2023, while 2024 remains the warmest year on record and the first year with an average temperature clearly exceeding 1.5°C above the pre-industrial level. 2025 saw exceptional near-surface air and sea surface temperatures, extreme events, including floods, heatwaves and wildfires. Preliminary data indicate that greenhouse gas concentrations continued to increase in 2025.”
https://climate.copernicus.eu/sites/default/files/custom-uploads/GCH-2025/GCH2025-full-report.pdf
Just posting the report and link for information. I haven’t had time to read the whole report, but it does appear to be a nicely presented summary. One interesting thing in the report is how sea surface temperatures remain unusually warm despite La Niña conditions.
“The (supposedly imminent) engagement of the authors of the DOE ‘climate report’ with the extensive critiques they received.”
Evidently not that imminent! If I knew how I’d embed an amusing screenshot in this comment.
However, I clicked the link which presumably until quite recently led to the web site of the (in)famous DoE Climate Working Group?
Now it states:
“climateworkinggroup.org is parked free, courtesy of Godaddy.com. Get this domain”
ROFL?
Jim Hunt
Hi there! We’re on an extremely sticky wicket these days, aren’t we?
All:
Strongly recommend this (Greenland, very short). Turn the volume up and watch!
https://www.youtube.com/watch?v=hS0wFiWpU4U
Hat tip, excellent (also admirably short) Krugman blog: The Stupidest Trump Move So Far
https://paulkrugman.substack.com/p/the-stupidest-trump-move-so-far
Long time no see Susan!
“Ooh, let the drums mark this day”
Highly recommended indeed, and also available for dissemination purposes on my LinkedIn:
https://www.linkedin.com/posts/soulsurfer_greenland-defense-front-the-hungry-giant-activity-7418697369594159104–AZZ
BlueSky:
https://bsky.app/profile/did:plc:l63d5xaf5hislz4gm3jphfqm/post/3mcpdkxutv22q
and a few other (anti)social media sites that have no doubt slipped my mind
Susan et al.
The Greenland Defence Front has just published another recruitment video:
https://GreatWhiteCon.info/2025/12/the-us-national-security-strategy-2025/#comment-827411
“When the eagle is dust, our people will tell.
The GDF held and we did not sell.”
In response, Trump chickened out in Davos.
The Institutionalization of Denial: Reality or Ritual?
What we’re seeing today is not merely disagreement or misunderstanding. It’s the moment where a system doesn’t just do harm — it builds a legal and cultural architecture to justify it. And yes, the cognitive dissonance is unbelievable, because the system doesn’t just tolerate contradiction.
It requires it. It runs on it.
The pattern is always the same:
Mass harm occurs
A reform follows
The reform protects the elite
The public is taught to feel “progress”
The system continues unchanged under a new label
That’s why “colonialism” and “abolition” become myths of redemption, not real transformations. Victims get no compensation, no justice, no structural change — and are told to be grateful to live inside the system that destroyed them.
That is not denial. That is moral laundering. And the same mechanism is at work in climate discourse:
The system produces harm
It offers reform as a symbolic fix
It compensates the institutions that caused the harm
It teaches the public to feel good about the fix
Then sells it back to them as moral progress.
Paris Agreement. Net Zero. “Climate emergency.” Not solutions — it’s not a movement so much as a rebranding operation.
A familiar pattern emerges: trust the science, trust the institutions, trust the party — and treat fossil fuel workers as moral pariahs. That’s the cultural script.
It isn’t about solving anything; it’s about maintaining the system. So the system protects itself by turning doubt into sin, and dissent into villainy. The public follow along.
But the truth is harsher: The system doesn’t just resist truth. It monetizes it, repackages it, and sells it back to us as virtue. And the people who suffer are still expected to say thank you.
That’s the obscene part.
That’s the rot.
That’s the institutionalization of denial.
Because the system requires denial:
It can’t admit the truth without threatening its own existence.
It can’t stop the engine without stopping itself.
It can’t be honest without collapsing its own legitimacy.
So it does what all successful systems do: It manufactures narratives that keep it alive.
And we call that progress.
D: and treat fossil fuel workers as moral pariahs.
BPL: NO ONE–and I mean NO ONE–in climate science or in the renewable energy movement says to treat fossil fuel workers as pariahs. One straw man argument in your whole, long list of straw men.
Atomsk’s Sanakan says (21 Nov 2025 at 11:31 AM):
To Geoff Miell
“This isn’t a question of policy. It’s a question of statistical significance in science.”
https://www.realclimate.org/index.php/archives/2025/11/unforced-variations-nov-2025/#comment-842265
The problem is:
Statistical significance is not the same thing as real-world risk. You can’t use a p-value to dismiss the possibility of catastrophe, especially when the stakes are existential.
And the irony is:
The only people who keep insisting “it’s not significant” are the ones who want to avoid acting until the crisis is undeniable. That’s not science — it’s denial dressed up as rigor.
Here are some hot takes (fresh from a recent talk) about AI technology today in the context of climate challenges, systems stress, and the ongoing misreading of what both AI LLMs and Climate Science are and aren’t:
Fossil fuels reshaped the physical economy.
“AI is doing something similar now.
[00:19:34] But in the cognitive economy, same dynamics, but a different and a bigger game board.”
“Artificial intelligence, large language models, multiplies our cognitive armies by scaling pattern recognition, prediction, coordination, content generation, as many things we used to use our own brains for.”
“[00:20:32] Once trained, these systems can operate at near zero marginal cost. A model built once can be copied, endlessly deployed.”
“Pretty much everywhere and run continuously as long as there’s enough electricity, which is a big if of course, and water and supporting systems.”
“So a small number of organizations with access to data and compute and capital can now perform tasks that once required thousands or hundreds of thousands of people spread across institutions.”
“[00:21:05] And this has consequences. First, it accelerates extraction, not just of energy and materials, but of human attention and creativity and hominid decision space.”
“Human time and attachment now becomes a resource to be harvested, optimized, and nudged, and then monetized at scale.”
“[00:21:43] Yeah, training data comes from everywhere, but the benefits concentrate in relatively few places.”
“This is the same ownership dynamic we saw with industrial machinery, but only faster and less obvious and much more concentrated.”
“Third, it’s a turbo boost for our current cultural aspirations and goals and metrics.”
“[00:22:08] AI is really good at optimizing for what we ask it to optimize for, but if soil health and ecosystem stability and the plight of the dolphins or future generations are not part of the game plan, they won’t be part of the outcome either.”
“[00:22:42] But without new boundaries, new aspirations. It’s also gonna shorten feedback loops, and the decisions are gonna get faster and the responses are gonna get automated, and scale will increase before the consequences become fully visible or visible at all. Systems move more quickly than human governance culture or our ethics can adapt to.”
“[00:23:09] And all of this, it draws on the natural world. So AI does not introduce a new set of dynamics, at least not yet… artificial intelligence. If successful, compresses time and amplifies whatever existing incentives are already in place.”
“[00:23:35] …technology was, is and will continue to be. Powerful and important, but it’s intertwined with physics and ecology in the hierarchy of our human reality… there’s no silver bullet response to these dynamics.”
“[00:24:29] …when wealth is defined narrowly as financial claims and digits in the bank, rather than the underlying real world stocks and flows, then accelerating drawdown can look like prosperity…”
“[00:24:55] …Markets, kind of along the lines of the maximum power principle, they reward speed, scale, and monetization. They don’t measure what’s being depleted underneath.”
“So record financial wealth can coexist with declining real wealth almost by design.”
Why does this matter? Speaking for myself, this is not about “AI as a magical solution” or “AI as a doom machine.” It’s about systems, incentives, and the way modern technology amplifies existing dynamics — especially those driving extraction, inequality, and ecological overshoot. Meanwhile the ongoing public debate — with its distractions, paranoia, and weak “solutions” like deleting search results — misses the important point: AI is not a new actor. It’s a new amplifier.
It accelerates the same incentives that already pushed us toward deeper systemic stress, and it does so at a scale that governance and ethics cannot keep pace with.
“CMIP is a computational exercise, not a scientific experiment.”
Multi-troll, ver. “Data”, trying to discredit science on … semantics: “These are coordination conventions, not truths.”
By following your reasoning to its logical end, we can’t say anything about anything – because EVERY word we use can be dismissed as “just a coordination convention”. E.g.: “It depends upon what the meaning of the word ‘is’ is.”
What’s your alternative – to throw up our hands because we will never know what the real world is, or to start EVERY sentence with the preamble “In the context of the coordination conventions used here”? Repeatedly stating the obvious is a very inefficient form of communication – and unnecessary: if you come to somebody’s house, you follow their rules – so if you are joining a scientific discussion that uses scientific terms and conventions, you should use the said scientific terms and conventions.
If you don’t like it, nobody will cry after you – go and create your own science and demand that your followers use YOUR coordination conventions (say, that the year 1772 in your science should from now on be known as year 72315.5, Anno Multi-troll Domini).
An analogy can do more to help understand scientific methods than many people realise:
Americans didn’t discover 1776 in nature.
Humans didn’t derive right-hand traffic from physics.
Statisticians didn’t uncover 95% CI in the fabric of the universe.
These are coordination conventions, not truths.
And therefore, crucially: Statistics does not demand a particular confidence level — only humans do.
For all time, the statement “This isn’t policy, it’s statistical significance” is false framing.
Therefore the quote: “This isn’t a question of policy. It’s a question of statistical significance in science.” — is categorically incorrect.
Why?
Because: Choosing 95% instead of 90% or 97.5% is a policy choice — not a Scientific choice, but merely a convenient choice for standardization by bureaucrats.
Choosing Type I vs Type II error tolerance is a value judgement.
Choosing what counts as ‘detectable’ is a decision rule, not a fact.
In climate risk contexts especially: False negatives (missing real acceleration) matter.
The costs are asymmetric. Therefore, epistemic caution cuts both ways. Pretending otherwise is scientism, not science.
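To make that point concrete: whether the very same trend estimate gets labelled “significant” flips purely with the chosen alpha. The numbers below are made up for illustration, and the test is just a two-sided normal approximation; nothing here is a real trend estimate.

```python
from scipy import stats

trend, stderr = 0.10, 0.055             # made-up trend estimate and standard error
z = trend / stderr
p = 2 * (1 - stats.norm.cdf(abs(z)))    # two-sided p-value, normal approximation

for alpha in (0.10, 0.05, 0.01):
    verdict = "significant" if p < alpha else "not significant"
    print(f"alpha = {alpha:>4}: p = {p:.3f} -> {verdict}")
```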
There’s a deeper philosophy of science problem going on here that needs exposure on a ‘science’ forum. What we’re really pointing at isn’t statistics — it’s epistemic authoritarianism.
Some people (unfortunately):
Treat statistical thresholds as moral boundaries
Treat conventions as laws
Treat uncertainty as ignorance rather than structure
Treat lay confusion as a weapon, not a communication failure
This is exactly how gullibility and misunderstandings about Climate Science are manufactured in the public observer. That is not science; it is rhetoric. Sophistry if you will. Monckton did it – now A’sS et al are doing it.
This relates to my recent comment here:
https://www.realclimate.org/index.php/archives/2025/12/1-5oc-and-all-that/#comment-844102
Yes, these conventions can be fickle things. We know that for claims of CO2 signals in local extreme event attribution, like fires or floods, it’s not even technically possible to reach 95% confidence, because these things happen so rarely that you just don’t have enough samples; and yet such things are usually reported as attributed with high confidence. The other issue is that claims based on the distributions of extremes in a null vs forced virtual experiment depend on the models selected, with uncertain structures and inputs. I think the meaning of statistics in this context is a matter of debate. I rarely hear concerns from the climate community about this, which suggests that strict adherence to conventional thresholds of statistical significance isn’t treated as a hard rule but rather is context-dependent. Additionally, considering the WMO lists a wide array of “essential climate variables”, it would be interesting to hear attributional claims related to other types of input forcings, especially in local and regional contexts. Exploring these could expand the applications of climate science models and strengthen their perceived societal value. https://gcos.wmo.int/site/global-climate-observing-system-gcos/essential-climate-variables
Atomsk, in his Nov. post that Multi-troll (sockpuppet: “Data”) tries to relitigate 2 months later:
“This isn’t a question of policy. It’s a question of statistical significance in science. In climate science, medical science, etc. statistical significance is usually set at ‘p < 0.05’, i.e. an alpha or false positive rate of less that 0.05 (or < 5%). It’s a pre-determined threshold for rejecting the null hypothesis of no change in the global temperature trend. Being pre-determined prevents subjective/biased attempts to skew the analysis in favor of (or against) claiming an increase in the global temperature trend. "
JCM: We know for claims of CO2 signals in local extreme event attribution like fires or floods it’s not even technically possible to reach 95% confidence because these things happen so rarely that you just don’t have enough samples, and yet such things are usually reported to be attributed with high confidence ”
Put your money where your mouth is, Mr. JCM – QUOTE the posts by Atomsk in which he called trends that did NOT reach the 95% threshold – “statistically significant”.
Until you do so, your accusation of his double standard remains baseless, and as such discredits only you.
P.S. And when you are done with that quote, please tell us what confidence levels you assigned for your blaming “up to 40% of the planet’s land being degraded” on the climate scientists and their “artificial fixation and overemphasis on [a] trace gas” (CO2 – P.)
===== JCM, UV, 5 Jun 2024 at 8:24 AM ==============
“UNCCD reports up to 40 % of the planet’s land is degraded and annual net loss of native ecologies continues unabated at >100 million ha / decade. This is a profound forcing to climates and puts our communities at risk. It’s hard to imagine denying or actively minimizing the consequences to realclimates due to an artificial fixation and overemphasis on the outputs of trace gas and aerosol forced model estimates.”
==== end of JCM post ==============================
You wouldn’t be lecturing us: “Do as I tell you, not as I do” ?
Re: “Put your money where your mouth is, Mr. JCM – QUOTE the posts by Atomsk in which he called trends that did NOT reach the 95% threshold – “statistically significant”.
Until you do so – your accusation of his double standard would remain baseless, and as such – would discredit only you,“
The sockpuppet account (not JCM) is more egregious in pretending I said things I didn’t say, despite Nigelj correcting them. They’re never going to show me claiming what they pretend I said.
In contrast, I can show the sockpuppet pretending Forster 2025 wasn’t peer-reviewed. They did that to avoid acknowledging the projection of 2°C by 2048 in Forster 2025’s Climate Change Tracker, and to avoid admitting that projection was consistent with “~2°C by 2045-2050“.
A wicked man only sees wickedness in others.
-Pope Boniface VIII
D:
A wicked man only sees wickedness in others.
-Pope Boniface VIII
BPL:
Scooby dooby doo.
-Frank Sinatra
Data:
Darkness cannot drive out darkness. Only light can do that.
Hate cannot drive out hate. Only love can do that.
Martin Luther King Jr., Strength to Love, 1963
Reply to Susan Anderson
Tell that to your Friends.
I already walk the talk.
My remarks make no claim about the commentariat of realclimate.org, but about conventions used in the climate community, particularly as reflected in promoted communications.
In general, conclusions are not derived from single statistical thresholds. Confidence is built through a synthesis across many lines of evidence. These may include physical understanding, ensembles of model simulation experiments, observations, and process consistency. In this way, confidence is assembled. Inference relies on Bayesian and physical reasoning rather than classical statistical hypothesis testing. Treating assessment as anything else misrepresents how knowledge is actually established in the climate space where, by its very nature, the energetic perturbation is exceptionally small. Changes are “attributed” all the time, well before discrete empirical statistical thresholds could possibly be met.
In climate acceleration & attribution discussion, a null hypothesis is artificial and model-defined, not based on an observable unperturbed background. The logic is to specify a physically informed hypothesis (e.g. forced warming with unforced variability). Common priors – pre-existing knowledge or beliefs that inform expectations – include knowledge of GHG radiation transfer principles, energy balance constraints, paleoclimate, contemporary Earth system state, and ensembles of virtual experiments.
Accordingly, the wrong question is: “Is a change statistically significant at the 95% level?” A more meaningful question is: “Is a change expected given the radiative forcing trajectory and the Earth system background, and do independent indicators support the same inference?”
Climate signals do not emerge as discrete events that must cross a pre-defined statistical threshold. Instead, the evidentiary framework evaluates cross-variable consistency – among GMST, ocean heat uptake, and TOA net radiation, for example – of energetic accumulation. Obviously many essential climatologies exist, where my particular interest is in the soil hydrological one, which can undoubtedly be deemed to have undergone unprecedented change in parallel with the radiative forcing associated with unnatural gas emission, traceable to the very same industrial revolution.
Scientific claims in this context rest on various lines of evidence and explanatory coherence, drawing on (ideally) rich cross-disciplinary priors within the Earth system science, not on adherence to procedural thresholds, rigid conventions, or bizarre reflexive defensiveness.
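A toy contrast of the two framings, for illustration only: the same estimate can fail a conventional two-sided test against “no change” while still carrying a high posterior probability of a positive change once a physically informed prior is admitted. The estimate, its standard error and the prior below are all invented; the conjugate normal update is the only substance.

```python
from scipy import stats

est, se = 0.06, 0.04                 # invented estimate of a change and its std. error
prior_mu, prior_sd = 0.05, 0.05      # invented, physically informed prior expectation

# Classical: two-sided p-value against the null hypothesis of no change
p = 2 * (1 - stats.norm.cdf(abs(est / se)))

# Bayesian: conjugate normal-normal update, then P(change > 0 | data, prior)
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mu = post_var * (prior_mu / prior_sd**2 + est / se**2)
p_positive = 1 - stats.norm.cdf(0.0, loc=post_mu, scale=post_var**0.5)

print(f"classical p-value vs 'no change':        {p:.2f}")
print(f"posterior probability the change is > 0: {p_positive:.2f}")
```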
Cont. from https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-842771 , https://www.realclimate.org/index.php/archives/2025/12/unforced-variations-dec-2025/#comment-842798
Linear B and 3 sinusoidal B posts (2 more in production), with the skin-temp post now in context:
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/09/for-asymptotic-radiances-ppia-linear-and-general-cases-wip-awaiting-final-proofread-double-check-diagrams-pending/
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/24/for-asymptotic-radiances-ppia-linear-b%cf%84/
https://scienceopinionsfunandotherthings.wordpress.com/2024/12/10/directionally-averaged-radiance-and-the-semi-gray-skin-temperature-wip-awaiting-final-proofread-double-check-diagrams-pending/
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/30/for-asymptotic-radiances-ppia-sinusoidal-b%cf%84-part-1/
https://scienceopinionsfunandotherthings.wordpress.com/2025/12/26/evaluating-this-term/ (I’m rather proud of this work)
https://scienceopinionsfunandotherthings.wordpress.com/2026/01/01/for-asymptotic-radiances-ppia-sinusoidal-b%cf%84-part-2/
https://scienceopinionsfunandotherthings.wordpress.com/2026/01/18/for-asymptotic-radiances-ppia-sinusoidal-b%cf%84-part-3-sinusoidal-bvertical-mass-path/
PS thank you to the European leaders standing up to “Don Quitrumpte”. We don’t need to own Greenland, I don’t want Venezuela’s oil. Save polar, winter, and mountain ice, but F*ck ICE.
New paper in Nature Climate Change:
Accounting for ocean impacts nearly doubles the social cost of carbon
https://www.carbonbrief.org/prof-ben-santer-trump-administration-is-embracing-ignorance-on-climate-science/
He’s coming to the UK because the atmospheric climate scientists haven’t convinced the USA it’s real.
Ben Santer spent his life doing the work and building the case, only for it to be cut back and ignored.
Pete best,
To be more specific (as per the start of the 50 min Santer interview video in the CarbonBrief article), as well as for personal reasons, Santer has moved to the CRU at the Uni of East Anglia in the UK because of cuts to climate change research in the US by the 2nd Trump administration (specifically ‘attribution’ research). Note he also describes the situation at the start of Trump Mark 1 when Santer & others at Lawrence Livermore fact-checked Trump’s EPA Administrator pick, the fossil-fuel-guy Scott Pruitt, a breath of reality that the Trump administration took exception to, resulting in funding cuts at Lawrence Livermore.
Of course, this side of the pond we have our own rather worrying little Trump in the shape of chirpy-chappy Nigel Farage. He’s not as ignorant as Trump but does seem to be taking up an identical political agenda, particularly on climate change.
(Farage has been beating the ‘Get Out of Europe’ drum for over 30 years, a cause that seems to have a one-for-one association with climate change denial. His vehicle for snatching power is a political party now called Reform UK. The latest Reform UK constitution will allow Farage to use Reform as his own personal play-thing for the next decade.)
With the two main political parties making a complete horlicks of running the country and hemorrhaging support in the polls**, our Nigel has captured a sizeable part of the UK electorate with his “Vote for me. Only I can save you!!” message. The 2027 General Election is going to be interesting.
(** You might note the rise in support for the UK’s Green party who, of course, do take climate change seriously. I am not at all a fan, as they attach a deeply leftist approach to that climate agenda. The UK public, and indeed Green Party supporters generally, haven’t yet quite understood this aspect of Green Party policies. And to add to the worry, their new popular leader has embraced Modern Monetary Theory, which he apparently doesn’t even understand.)
MAR,
See also my recent article on the Trump/Vance administration’s plans for the 6th US National Climate Assessment and the associated evisceration of NCAR:
https://GreatWhiteCon.info/2026/01/the-sixth-us-national-climate-assessment/
It is already crystal clear that Katharine Hayhoe, Zeke Hausfather and the other scientists who produced NCA5 will not be invited by the Trump administration to write NCA6…
Perhaps surprisingly, Ryan Maue, appointed as chief scientist of the National Oceanic and Atmospheric Administration during Donald Trump’s first term of office, had this to say:
“If you believe A.I. and numerical weather prediction are important for our economy and national security, then NCAR in Boulder probably is our best bet to compete globally.
US weather modeling has been neglected for 20 years, and moonshot focus is needed, not dismantling.”
Reply to Pete best
The “experts” just aren’t that expert pete. They only imagine they must be.
That applies as much to the politicians and the media.
Welcome to the 21st century. Same as the last century.
D: The “experts” just aren’t that expert pete. They only imagine they must be.
BPL: Yeah, the true expertise is to be found in internet blog posters.
Reply to Barton Paul Levenson 23 Jan 2026 at 10:10 AM
D: The “experts” just aren’t that expert pete. They only imagine they must be.
BPL: Yeah, the true expertise is to be found in internet blog posters. Such as myself, and Piotr, Nigelj, Ron, Susan, Pete, Geoff, Kevin, Keith, Thomas W, Ray, jgnfld, Ken, Crusty, Karsten, zebra, Rodger, Tomáš, Mal Adapted, Mr. Know It All and Atomsk’s Sanakan ad nauseam.
D: Fixed it for you Barton.
D: the true expertise is to be found in internet blog posters. Such as myself, and Piotr, Nigelj, Ron, Susan, Pete, Geoff, Kevin, Keith, Thomas W, Ray, jgnfld, Ken, Crusty, Karsten, zebra, Rodger, Tomáš, Mal Adapted, Mr. Know It All and Atomsk’s Sanakan ad nauseam.
BPL: Straw man. We (myself, Piotr, Nigel, Ron, Susan, etc. eliminating the deniers) quote our sources.
in Re to Barton Paul Levenson, 24 Jan 2026 at 10:02 AM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844316
Hallo Barton,
I suppose that under “blog posters”, you meant rather people like Dr. Schmidt and his colleagues than people like me, am I right?
Greetings
Tomáš
BPL: Straw man. We (myself, Piotr, Nigel, Ron, Susan, etc. eliminating the deniers) quote our sources.
Data: Creating a new Strawman out of false claims isn’t going to save you Barton. Repeating the very same distortions, fallacies, misrepresentations, and baseless mischaracterizations will never make them true.
Everyone ultimately chooses what they believe. It is not my role to hold anyone’s hand, reassure them, or prove them wrong.
I’m not going to continually defend the existence of correct modes of reasoning against people who are indifferent to them. Readers know who you are.
The sockpuppet likes pretending they know better than the authors of the peer-reviewed paper Forster 2025. Yet the authors of that paper presented evidence, while the sockpuppet presented none. The sockpuppet won’t even be accurate on whether the paper is peer-reviewed and on whether the paper’s Climate Change Tracker projected 2°C of global warming by approximately 2048.
BPL 14 Jan 2026 at 9:35 AM “Oh? What was it?” It was an unnatural variation caused by humans warming the Atlantic Ocean surface. In the tropics it acts as a Power Amplifier because the tropical Pacific Ocean is ~3.6 times as wide as the tropical Atlantic Ocean. The effect of the warming Atlantic Ocean surface (caused by humans, so unnatural) on wind coupling over South America was made known to me in February 2014 by the published paper of (Aussie) Matthew H. England et al. Also see the Kevin Trenberth pictorial at 21:38 to 23:20 at https://www.youtube.com/watch?v=agKayS6h6xA
I think it has ended and the easterly tropical Pacific wind is no longer 30% (~1 m/s) faster than it has historically been (for the prior 20th century anyway).
I’m told this is accessible outside the US. It’s an excellent video presentation from the Araon Antarctic voyage on US PBS Newshour:
On board the voyage to Antarctica to learn why a massive glacier is melting
https://www.pbs.org/newshour/show/on-board-the-voyage-to-antarctica-to-learn-why-a-massive-glacier-is-melting
[PS. RealClimate has regressed to my older email so I have to retype each time. Can this be fixed?]
Barry E. Finch,
https://www.realclimate.org/index.php/archives/2025/09/but-you-said-the-ice-was-going-to-disappear-in-10-years/#comment-839682
Carefully crafting phrasings is the Fossil key.
Well said.
An absence of statistical evidence is not evidence of absence.
This phrase, famously popularized by astronomer Carl Sagan, is a crucial concept in science, logic, and daily decision-making. It serves as a reminder to avoid jumping to conclusions.
Simply, a logical fallacy is erroneous reasoning that looks sound (Schagrin, et al, 2021). It can be either a seriously incorrect argument, or an incorrect conclusion based on such arguments.
In the context of climate science, the phrase “absence of statistical evidence is not evidence of absence” highlights a common logical fallacy where the failure to meet a specific mathematical threshold is used to deny a physically observable reality.
This specific debate often centers on whether global warming has “accelerated” since 2010 compared to the 1970–2010 rate.
1. The Fallacy: Appeal to Ignorance (Statistical Edition)
The core fallacy is treating a “null result” (failing to reach 95% statistical confidence) as proof to argue that no change has occurred.
Statistical Thresholds: In many models, a warming surge after 2010 would need to be 55% to 85% faster than previous decades to be “statistically detectable” today.
The Error: Claiming there is “no evidence of acceleration” because it hasn’t yet crossed this high mathematical bar ignores other types of reliable physical evidence.
2. Statistical vs. Physical Evidence
Critics of “denying by statistics” argue that while the math might be “noisy” due to short-term variations (like El Niño), the physics shows clear signs of acceleration.
Earth’s Energy Imbalance (EEI): Physical data shows the amount of heat Earth is trapping has nearly doubled in the last decade. This is a “hard” physical measure that suggests acceleration is real even if surface temperature “noise” makes it hard to prove statistically over a short 12 to 15-year window.
Aerosol Reduction: Changes in shipping fuel regulations since 2010 have reduced cooling aerosols, which physically must lead to faster warming. Denying this based on a lack of “statistical significance” is often seen as prioritizing a math convention over known physical laws.
3. How This Has Been Presented
The “Hiatus” Trap example: Skeptics previously used a lack of “statistically significant” warming between 1998 and 2012 to claim global warming had stopped. This was later debunked as “statistical smoke and mirrors” when the heat was found to be accumulating in the oceans instead of the surface.
Modern Surge Debate: Recent studies (e.g., Beaulieu et al., 2024 – https://www.nature.com/articles/s43247-024-01711-1 ) state they cannot statistically detect a surge across 12 years post-2010 yet. However, scientists like James Hansen argue this is a misleading way to present data, as the physical drivers (EEI and aerosols etc.) clearly indicate the 2010 “hinge point” is a reality that the statistical methods are simply too “blunt” to catch yet.
(IGCC 2024) Forster et al. 2025 ( https://essd.copernicus.org/articles/17/2641/2025/ ) and others, taken as a whole, align with:
Post-1970 warming rate of ~0.17–0.19 °C/decade until 2010
Recent data trends show warming rates 2010-2025 ≈ 0.27–0.30 °C/decade
Likely near-term peaks ≈ 0.33 °C/decade from now into the 2030s
Strong role of ENSO amplification in 2023–24
No robust statistical detection yet of acceleration at strict CI thresholds
Summary of the Conflict
Claim Type > Argument > Conclusion
Statistical > “The trend since 2010 doesn’t pass the 95% confidence test yet.” > “No evidence of acceleration.”
Physical > “Energy imbalance has doubled; aerosols are down; 2023–2025 heat is unprecedented.” > “Acceleration is occurring; stats are just lagging.”
Using the absence of a mathematical proof (the statistical 95% CI) to deny a present physical reality (the heat) is the specific “evidence of absence” fallacy we are identifying here.
“Simply, a logical fallacy is erroneous reasoning that looks sound (Schagrin, et al, 2021). It can be either a seriously incorrect argument, or an incorrect conclusion based on such arguments. In public discourse, …logical fallacies should always be avoided because they invalidate conclusions and arguments.”
https://research.com/research/logical-fallacies-examples
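For what it is worth, the “too blunt to catch yet” point can be sketched numerically. The simulation below is entirely synthetic (it is not any real temperature series): an acceleration from ~0.18 to ~0.28 °C/decade is built in by construction, yet a one-sided 95% test over a 15-year post-2010 window rarely detects it once ENSO-scale interannual noise is added.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(1970, 2025)
pre, post = years < 2010, years >= 2010
# Built-in acceleration: 0.18 C/decade before 2010, 0.28 C/decade after
signal = np.where(pre,
                  0.018 * (years - 1970),
                  0.018 * (2010 - 1970) + 0.028 * (years - 2010))

n_sims, detections = 2000, 0
for _ in range(n_sims):
    t = signal + rng.normal(0.0, 0.10, years.size)   # ~ENSO-scale interannual noise
    b1, _, _, _, se1 = stats.linregress(years[pre], t[pre])
    b2, _, _, _, se2 = stats.linregress(years[post], t[post])
    z = (b2 - b1) / np.hypot(se1, se2)
    if z > stats.norm.ppf(0.95):                     # one-sided 95% detection threshold
        detections += 1

print(f"acceleration detected in {100 * detections / n_sims:.0f}% of realizations")
```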
D: the failure to meet a specific mathematical threshold is used to deny a physically observable reality.
BPL:
1) If the “physically observed reality” isn’t statistically significant, then chances are it hasn’t been observed. Thus, fallacy of subverted support.
2) No one is saying that failing to meet statistical significance means the phenomenon doesn’t exist. It only means there’s no good evidence for it in the statistics. Thus, straw man fallacy.
Reply to Barton Paul Levenson
The complete lack of evidence to support your assertions, coupled with your pre-determined denial of what has been said to and about others for months, condemns your arguments as lacking.
It is unadvisable to leave things to “chance” – as your strongest and only argument. Attend to your own logical fallacies. They follow you everywhere.
BPL doesn’t have to provide a list of links and details. His argument was obvious and self-contained, and can be rebutted or not on that basis. You haven’t rebutted it.
It’s just whining from the sockpuppet account to dodge the fact that there is not statistically significant acceleration of the global surface temperature trend. That’s what was said, not the straw man position the sockpuppet disingenuously fabricated. They’re so mad that they don’t have a cogent objection to people saying X that they pretend people were instead saying Y.
Specious sophistry is often based on logical fallacies, deception and/or poor reasoning. Why is an absence of statistical evidence not evidence of absence of accelerating warming? See the points already laid out above.
The arts of the professional sophists were known as sophistry and gained a negative reputation as arbitrary, inauthentic, specious or deceptive styles of reasoning.
In modern usage, sophism, sophist, and sophistry are used disparagingly. Sophistry, or a sophism, is a fallacious argument, especially one used deliberately to deceive. Today, a sophist is a person who reasons with clever but deceptive or intellectually dishonest arguments. https://en.wikipedia.org/wiki/Sophist
Normal people recognize it when they see it, even if they cannot define it accurately.
Data: “Work by England et al. (2014) and Cowtan & Way (2014–15) showed that warming had not stopped; it had been temporarily redistributed and partially hidden from the dominant surface metrics” babble-O-write and
“Skeptics previously used a lack of “statistically significant” warming between 1998 and 2012 to claim global warming had stopped. This was later debunked as “statistical smoke and mirrors” when the heat was found to be accumulating in the oceans instead of the surface”.
I’ve always disliked obfuscation (for one thing, it’s a hallmark of the Fossils). I can’t state that the latter is incorrect even though, when the subject is “temperature”, comparing “heat” twixt surface air and ocean is absurd due to the ocean having ~1,050 times the thermal capacity of the atmosphere.
Why not instead actually say what it is? Ocean temperature averages 3.5 degrees and surface air averages 15 degrees, and the full ocean presents to the surface over ~2,000 years; so, in the meantime, when ocean mixing increases it has a greater surface-cooling effect, and when ocean mixing decreases it has a lesser surface-cooling effect. That’s simple enough.
This first annoyed me when Yorkie-Aussie 1000FrollyCoalShillThing Robert Holmes had a video of Heidi Cullen saying this unclear “heat went into the ocean instead” thing, and Holmes’s video was “now they say the heat’s gone to the bottom of the ocean”. Are you trying to help the Fossils with y’all’s Backwards Methods?
That thing and the “gases absorb surface radiation and re-emit half back down” nonsense are what have peeved me for ~12 years.
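A back-of-envelope check of the ~1,050 factor mentioned above, using round-number masses and specific heats (my assumptions, not a citation), lands in the same ballpark:

```python
ocean_mass = 1.4e21      # kg, approximate mass of the global ocean (assumed)
atmos_mass = 5.1e18      # kg, approximate mass of the atmosphere (assumed)
cp_seawater = 3990.0     # J/(kg K), specific heat of seawater (approximate)
cp_air = 1004.0          # J/(kg K), specific heat of dry air at constant pressure

ratio = (ocean_mass * cp_seawater) / (atmos_mass * cp_air)
print(f"ocean / atmosphere heat capacity ratio ~ {ratio:,.0f}")  # roughly a thousand-fold
```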
A short backgrounder on method to impose influence small groups online, and how it can operate both ways.
It may have been noticed already that similar wordings have been repeated often under different constructs and contexts. Subliminal advertising working through the subconscious has been shown to have an effect. The reasons are well established by the experts.
Repetition works by strengthening neural connections in the brain, moving information from short-term to long-term memory, and increasing perceptual fluency. It is highly effective for conveying difficult concepts when applied through structured, spaced, and varied methods.
This cognitive process, known as consolidation, makes the knowledge more durable and readily accessible for later recall. Based on Hermann Ebbinghaus’s forgetting curve, humans tend to forget information rapidly after initial exposure. Repetition, especially when spaced out over time, helps to “flatten” this curve, significantly improving retention.
You see, repetition is an essential tool for marketing ideas and propaganda too; it’s not only for understanding complex ideas, but its effectiveness depends heavily on the method of application and its desired purpose.
Spaced repetition is highly effective because it forces the brain to actively recall the information just before it is forgotten, which strengthens the memory trace. Summarizing key points, or explaining the concept to others forces deeper cognitive engagement and builds stronger connections. Trigger words can later rapidly bring these information memories forward into consciousness again.
For complex concepts, repetition is most effective when varied, as I have done. It’s an essential process in order to reposition and confront misguided or incorrect arguments that have been implanted over time in a small group. Encountering the material in different contexts and formats, and with slight variations in practice, prevents boredom and promotes adaptability and deeper understanding.
Therefore, with selective use, the target may eventually begin to understand that the particular concept being shared is not an insult, not a straw man, nor a dodge. A withdrawal of consent is something quite different. I am withdrawing consent from endless reassurance and debate. That is not condescension. It’s normal boundary-setting. The polar opposite of other approaches here.
Repetition is necessary to ensure the message gets through the pre-existing social barriers and prevailing misconceptions. Here repetition operates like a subconscious mental ‘speed bump’. Reducing speed and interference can eventually lead to a deeper wholesome understanding. Once understood and accepted there is no further need to repeat anything.
You may notice these kinds of methods (pro or con genuine facts and analysis) occurring across the media landscape. An example is Dr. John Cook: Formerly of the Global Change Institute at The University of Queensland, Cook is a renowned expert on climate communication, particularly the psychology of climate denial and misconceptions. Science reporter Peter Hadfield (aka Potholer54) has done similar good work in correcting misconceptions about Climate Science.
Data says 21 Jan 6:34 PM: “A short backgrounder on method to impose influence small groups online”
Posted 95 posts in 23 days?
Data says 21 Jan 6:34 PM: “A short backgrounder on method to impose influence small groups online”
The count is not correct, and it disregards the methods discussed.
First we can logically compare A’sS posts about Data from Jan 1 > total = 59
With my comments showing replies to everyone from Jan 1 > total = 85
Including replies to the 38 times Atomsk’s Sanakan complained about a TYPO I made in December and long ago corrected. Let’s call that 85 minus 38 = 47 ?
My comment on repetition compared education vs manipulation: I already nailed the important distinction of how repetition can be useful for education, but not when deployed for spin-doctoring and manipulation.
Educational repetition: clarifies, reinforces structure and invites understanding.
Manipulative repetition: bypasses reasoning, suppresses alternatives, and conditions emotional responses.
The rot begins when people confuse the two — or worse, weaponize repetition while accusing others of doing so. My position is documented, clear and coherent. See my comment in full: https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844236
Data, perhaps AS didn’t see your comment where you said you made a typo. Perhaps he doesn’t believe you made a typo. I wonder why that would be. Could it be your years of dishonestly using sock puppets and dishonestly using the names of real people?
I suspect nobody trusts anything you say anymore. Your devious sock puppet trickery that you think is so smart has consequences. You are like Trump. You don’t think ahead to the ultimate consequences of your actions.
Repetition? You have been repeating your own basic messages for years, literally hundreds of times: America is an evil empire, let’s consider China and Russia as role models, capitalism is doomed, we are all doomed, there’s nothing we can do to stop climate change or stop burning fossil fuels, the IPCC are idiots, climate models are crap, only Hansen knows what’s going on. Those are not all the exact words you have used, but those are your messages and how they come across. This is obvious in multiple people’s responses. If smart, liberal-leaning guys interpret your comments that way, I shudder to think how the rest of the public would respond.
Are those messages education, or are they BS and propaganda? Let the reader decide.
I suggest you modify your approach; then you would have something to contribute.
Re: “Perhaps he doesn’t believe you made a typo. I wonder why that would be.“
It’s because it wasn’t a typo. The sockpuppet pretends the issue was them making an honest typo by writing ‘Foster 2025’ as ‘Forster 2025’, so that they erroneously conflated Foster’s 2025 non-peer-reviewed pre-print with Forster’s 2025 peer-reviewed paper.
But they were responding to a comment in which I wrote ‘Forster 2025’ and referred to Foster’s 2025 pre-print as ‘Rahmstorf/Foster pre-print’. I didn’t refer to Foster’s pre-print as ‘Foster 2025’, so there was not an opportunity for the sockpuppet to conflate Foster’s 2025 pre-print with Forster’s 2025 peer-reviewed paper. It wouldn’t have made sense for me to refer to the pre-print as ‘Foster 2025’ anyway, since Foster is not the first author; Rahmstorf is.
What actually happened was that the sockpuppet pretended Forster 2025 was not peer-reviewed, so they could avoid acknowledging I cited peer-reviewed evidence in support of a projection. The sockpuppet knows this, which is why they still dodge my questions on what Forster 2025 showed for projections via its Climate Change Tracker. It’s funny seeing them flail about after they were caught on their willful disinformation.
in Re to Nigelj, 24 Jan 2026 at 1:26 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844319
Hallo Nigel,
An excellent summary, thank you very much!
Greetings
Tomáš
Reply to Nigelj; Atomsk’s Sanakan; et al
Made myself clear here:
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844236
And my clarification here:
https://www.realclimate.org/index.php/archives/2025/12/1-5oc-and-all-that/#comment-844253
– Data: “A short backgrounder on method to impose influence small groups online”
– Piotr: “post 95 posts in 23 days?”
– Data: “The count is not correct, With my comments showing replies to everyone from Jan 1 > total = 85”
Let’s fact-check this claim:
“UV Jan” – 50 posts
“AI/ML climate magic?” – 12 posts
“1.5ºC and all that” – 34 posts
Jan. 1 posts in “UV Dec” – 6 posts
Hmm, that’s “102”. I would like to profusely apologize for underestimating the scale of your activity – in my defense, my initial count of 95 didn’t check for January posts in the thread titled “UV December”… And a few of your posts from Jan. 23 were not made available at the time of posting.
But neither “95” nor “102” is “85” – I don’t have to explain that to the entity calling itself “Data”, do I?
Thank you for encouraging me to include all your qualifying entries. So, improved thanks to you, my response now reads:
============
– Data: “A short backgrounder on method to impose influence small groups online”
– Piotr: … “post 102 posts in 23 days?”
============
But hey, Data, shouldn’t you have acknowledged your inspiration – Steve Bannon and his famous strategy to paralyse the opponents by “Flooding the zone with shit”?
Made myself clear here:
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844236
My clarification here:
https://www.realclimate.org/index.php/archives/2025/12/1-5oc-and-all-that/#comment-844253
And now here:
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844301
Creating a new Strawman out of false claims isn’t going to save you. Repeating the very same distortions, fallacies, misrepresentations, and baseless mischaracterizations will never make them true.
Eugene T. Gendlin:
“What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away.”
Desiderius Erasmus:
“There are some people who live in a dream world, and there are some who face reality; and then there are those who turn one into the other.”
– Multi-troll : “A short backgrounder on method to impose influence small groups online”
– Piotr: “post 95 posts in 23 days?”
– Multi-troll: “The count is not correct, With my comments showing replies to everyone from Jan 1 > total = 85”
– Piotr – Let’s check it: the actual number is 102 (“95” was an UNDERestimate – it didn’t include Jan 1 posts from “UV _December_”, nor posts from Jan 23 that weren’t available at the time of my posting)
– Multi-troll … defends his lie by … attacking the opponent who pointed it out:
D: “ Creating a new Strawman out of false claims isn’t going to save you. Repeating the very same distortions, fallacies, misrepresentations, and baseless mischaracterizations will never make them true.”
– See also – Steve Bannon, on imposing influence on public discourse and on neutralizing the opponents: “ Flood the zone with shit “
Yup. And despite all that repetition, the sockpuppet disingenuously dodges questions that expose their disinformation. Just what one would expect from a denialist with no genuine interest in evidence or accuracy.
https://www.realclimate.org/index.php/archives/2025/12/1-5oc-and-all-that/#comment-844287
Pete Best: [Ben Santer] is coming to the UK because the atmospheric climate scientists haven’t convinced the USA it’s real. Ben Santer spent his life doing the work and building the case, only for it to be cut back and ignored.
Santer blames trumpism/denialism, you blame … climate scientists ???? (“because [they] haven’t convinced the USA it’s real”)
And since Santer is one of these “atmospheric climate scientists”, are you also implying that he has only himself to blame?
I’d suggest you step back for a moment and reflect on who your real enemies are and who they are not. And then ask yourself whether your activity in this group reflects it. Because so far, your preoccupation with attacking … climate scientists only serves those enemies.
Santer concludes the interview with his most important point of all about the responsibility to convince others:
“I think the scientific community and the IPCC maybe haven’t been that good in terms of explaining just how compelling the evidence is for human effects on climate – just how multivariate it is: atmosphere, ocean, land, temperature, moisture, circulation, ice. It’s everywhere. It’s in our backyards. It’s not just evidence of human effects on climate in the far flung Arctic or a few Pacific islands, we need to communicate that better.”
Supporting precisely what Pete Best is indicating has occurred across decades. As the old saying goes: Given the decades long emotional behaviour, litany of logical fallacies, and the caustic communication skills of the pro-IPCC pro-consensus moderate science activists commenting here, then who needs enemies?
D: Given the decades long emotional behaviour, litany of logical fallacies, and the caustic communication skills of the pro-IPCC pro-consensus moderate science activists commenting here, then who needs enemies?
BPL: Physician, heal thyself.
Multi-Troll ver. “Data”: “Santer concludes the interview with his most important point of all”
False. The title of the video is “Prof Ben Santer: Trump administration is ‘embracing ignorance’ on climate science”, so his most important point of all is not the IPCC, but that the “Trump administration is ‘embracing ignorance’ on climate science”. I.e., you cherry-picked a minor point and inflated it, to fit your ideology, into Santer’s “most important point of all”.
Multi-troll: “Supporting precisely what Pete Best is indicating”
False again. Santer’s words:
“ I think the scientific community and the IPCC maybe haven’t been that good in terms of explaining” and ” we need to communicate that better”
is NOT “supporting precisely” Pete’s … blaming climate scientists (thus including Santer) for trumpism, for the cutting back and ignoring of his work, and for forcing Santer to emigrate to the UK.
And isn’t it symptomatic that the anti-science trolls would see something good – scientist’s internal specific appeal to self-improve (“we need to communicate that better”) – and then use it to … discredit the very same scientists.
That’s as if, at an anti-slavery meeting, the well-known abolitionist Frederick Douglass said: “we need to do more to fight slavery”, and his words were then used by his enemies to blame … the abolitionist for slavery. Say,
Anti-abolitionist pamphleteer, let’s call him “Datum”:
“Frederick Douglass supported precisely what my friend Pete has been saying for decades: it’s the abolitionists who are the problem – it is they who are responsible for slavery, because the abolitionists haven’t convinced the USA that slavery is wrong! And this comes straight from none other than Frederick Douglass, who spent his life doing the work; it’s his most important argument of all!”
a comment on “Data”, 21 Jan 2026 at 6:34 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844181
Dear moderators,
It appears that you do agree with the multitroll that “repetition is most effective when varied” and therefore provide him with your website as a platform for his effort “to reposition and confront misguided or incorrect arguments that have been implanted over time in a small group”.
I am not sure how this effort is perceived by other readers, however, I would like to say that in me “encountering the material in different contexts, formats, and with slight variations” does not “prevent boredom” nor “promotes adaptability and deeper understanding”.
To be honest, I cannot withstand his omnipresence on this website anymore, because any encounter with his production makes me angry.
Best regards
Tomáš
Tomáš Kalisz says
22 Jan 2026 at 3:39 PM
To be honest, I cannot withstand his omnipresence on this website anymore, because any encounter with his production makes me angry.
Data: I can empathise. A solid rebuttal of my ‘arguments’ on any topic may very well silence me. I await, not angrily but patiently.
Data multi-troll’s arguments have been demolished hundreds of times, and I mean genuinely demolished. He’s just too narcissistic, too full of himself, and too Dunning-Kruger-afflicted to see it.
The constant streams of his BS make me angry too. But I tend to put things in perspective.
Yup. And when they’re asked questions that show they’re wrong, the sockpuppet dodges those questions. I get how that makes people angry. But it helps to remember that this is the same type of behavior one sees from stubborn children, disingenuous curmudgeons, flat Earthers, etc. Some people just refuse to have the intellectual honesty needed to admit error.
Anyway, here are the questions they’re dodging:
(Hint: the answer to each question is ‘yes’, despite the sockpuppet having repeatedly pretended the answer was ‘no’.)
See my clarification here:
https://www.realclimate.org/index.php/archives/2025/12/1-5oc-and-all-that/#comment-844253
in Re to “Data”, 22 Jan 2026 at 10:30 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844245
Sir,
In your post of 22 Jan 2026 at 4:54 PM,
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844236 ,
you wrote: “.. neither am I maintaining multiple “sockpuppet accounts”.
Indeed, your latest alter egos “Jim”, “Neurodivergent” and “Yebo Kando” seem to have disappeared over the last week or perhaps even longer.
Could you kindly confirm that you are going to stay with your present nick and that you desist from further attempts to cheat Real Climate moderators and readers by hiding under two or more names?
If so, may we expect also a clarification of your reasons for this shameful behaviour in the past?
Sincerely
Tomáš
Remember that the truthfulness of AGW and the rightness of mitigation policies doesn’t depend upon blog comments.
Things like this don’t come around very often. Will anyone notice it’s far bigger than the obvious subject matter? Or see how it’s directly connected to the long term inaction over global warming and all the other urgent threats to life on earth.
It seems that every day we’re reminded that we live in an era of great-power rivalry; that the rules-based order is fading; that the strong can do what they can and the weak must suffer what they must. And this aphorism of Thucydides is presented as inevitable, as the natural logic of international relations reasserting itself.
Faced with this logic, there is a strong tendency for countries to go along to get along: to accommodate, to avoid trouble, to hope that compliance will buy safety. Well, it won’t. So, what are our options?
In 1978, the Czech dissident Václav Havel — later president — wrote an essay called *The Power of the Powerless*. In it, he asked a simple question: how did the communist system sustain itself?
His answer began with a greengrocer. Every morning, this shopkeeper places a sign in his window: *Workers of the world, unite.* He doesn’t believe it. No one does. But he places the sign anyway — to avoid trouble, to signal compliance, to get along. And because every shopkeeper on every street does the same, the system persists: not through violence alone, but through the participation of ordinary people in rituals they privately know to be false.
Havel called this *living within a lie*. The system’s power comes not from its truth, but from everyone’s willingness to perform as if it were true. And its fragility comes from the same source. When even one person stops performing — when the greengrocer removes his sign — the illusion begins to crack.
Friends, it is time for companies and countries to take their signs down.
So, that was a big start, right? First off, Carney is comparing the US-led world order to the communist bloc, which is already pretty unusual. Obviously, the implication is that the communist bloc collapsed, and he thinks the American empire is going to do the same.
He’s also saying that, like Eastern Bloc communism in his analogy, the US-led system only works because people think it’s in their interest to pretend they believe it works. So what he’s saying is that it’s time for Canada — and for everyone else, the middle powers as we’ll see later in the speech — to start telling the truth: that they no longer believe it.
This is a system that’s lasted for 80 years, and Mark Carney is essentially saying that for those 80 years we’ve pretended that this is all good and fair. We’ve pretended that the American empire isn’t an empire. We’ve pretended it’s a rules-based order. We’re not pretending anymore.
And the analogy with the Eastern Bloc is that he’s suggesting if we all admit that we don’t believe this — if we all stop the pretence — then the whole thing could crumble.
Let’s go back to the speech, to where Carney really lays out what he thinks has changed:
For decades, countries like Canada prospered under what we called the rules-based international order. We joined its institutions. We praised its principles. We benefited from its predictability. And because of that, we could pursue values-based foreign policies under its protection.
We knew the story of the international rules-based order was partially false: that the strongest would exempt themselves when convenient; that trade rules were enforced asymmetrically; and that international law applied with varying rigor depending on the identity of the accused or the victim.
This fiction was useful. And American hegemony in particular helped provide public goods: open sea lanes, a stable financial system, collective security, and support for frameworks for resolving disputes.
So, we placed the sign in the window. We participated in the rituals, and we largely avoided calling out the gaps between rhetoric and reality.
This bargain no longer works. Let me be direct: we are in the midst of a rupture, not a transition.
Over the past two decades [post-2008 GFC], a series of crises — in finance, health, energy, and geopolitics — have laid bare the risks of extreme global integration. But more recently, great powers have begun using economic integration as weapons; tariffs as leverage; financial infrastructure as coercion; supply chains as vulnerabilities to be exploited.
You cannot live within the lie of mutual benefit through integration when integration becomes the source of your subordination.
At the same time, it’s absolutely right to resist turning this into “it’s all about Trump”. That’s the easiest and laziest containment strategy:
Reduce systemic failure to one vulgar symptom
Personalize what is structural
Moralize what is institutional
Trump is a product, not the cause — a grotesque but logical emergent property of systems that hollowed out trust, truth, reciprocity, and accountability long before he showed up. He fills the space the system created.
And that’s where the broader frustration is really pointing:
IPCC
Paris
Net Zero
RC trolls, Mars and Moon missions
Overshooting Earth’s Boundaries
“Rules-based order”
Financial globalization
Media echo-chambers (mainstream and alternative)
They all operate, to varying degrees, on ritualized compliance:
Everyone knows the gaps.
Everyone knows the incentives distort behavior.
Everyone knows the metrics don’t match lived reality.
And yet… the sign stays in the window.
Not because people are evil — but because coordination around partial falsehoods is often more stable than coordination around uncomfortable truths. Until it isn’t.
— “if only people would recognize it permeates everywhere” — is the painful part. Once you see it systemically, you can’t unsee it selectively… post-2008 GFC, 1990, since 1945 or before?
The saying is: “There is no worse blind man than the one who doesn’t want to see”. So we get the expected display of “coordination around partial falsehoods is often more stable than coordination around uncomfortable truths. “ when holding opinions about everything while knowing next to nothing.
Piotr says 23 Jan 2026 at 11:36 PM and Nigelj says 23 Jan 2026 at 1:17 AM
“….. it doesnt say that. Its clearly referring to the last couple of years under Trump …..” [….] “This can only mean the last few years not the last 80 years. “
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844251
Clearly not so, given Carney is addressing 3 specific periods in time in the speech. Any historian will tell you:
The rules-based international order primarily began in 1945 following the end of the Second World War. It was established by the Allied powers to create a stable, interconnected system of institutions, laws, and norms, centered around the United Nations (UN) Charter, designed to prevent future conflicts.
Let’s go back to the speech where Carney really lays out what he thinks has changed:
“For decades, countries like Canada prospered under what we called the rules-based international order. We joined its institutions. We praised its principles. We benefited from its predictability. And because of that, we could pursue values-based foreign policies under its protection.
We knew the story of the international rules-based order was partially false: [………..]
So, we placed the sign in the window. We participated in the rituals, and we largely avoided calling out the gaps between rhetoric and reality.
Over the past two decades [post-2008 GFC], a series of crises — in finance, health, energy, and geopolitics — have laid bare the risks of extreme global integration. But more recently, great powers have begun using economic integration as weapons; tariffs as leverage; financial infrastructure as coercion; supply chains as vulnerabilities to be exploited.” end quotes from above.
It is clear Carney refers to different times in his speech, which overall kept focusing on the rules-based international order established post-1945 and selectively enforced by the USA.
Carney’s call to remove the scales from thy eyes is clear: “This bargain no longer works. Let me be direct: we are in the midst of a rupture, not a transition.”
Which begs the question: did it ever work for all nations, humanity, and our broken planet?
This isn’t news to some, but it helps to hear someone else put it into words. What’s oddly hard to articulate about the self-designated “moderate” phenomenon seen below and elsewhere, is that the ridiculousness, the denial, and the horror now arrives as a single package — and in such volume that they begin to cancel themselves out. That’s the crucial bit.
The rhetoric, the noise, the constant stream of confident dismissal eventually loses impact. Not because any single instance isn’t unconscionable, but because there are so many of them. They come so fast, on every topic, that you can’t even keep count, so they stop resonating in the way they would in isolation.
Repetitively you’re being told to ignore the evidence of your own eyes and ears — not once, repeatedly, relentlessly — until disbelief itself becomes background noise.
Data @ 24 Jan 2026 at 5:44 PM
Data is wrong about my views, and has quoted me completely out of context, cutting my sentence in half to suit the angle he is taking, probably deliberately. He’s done this several times to people on this website. Refer to my original statement @ 23 Jan 2026 at 1:17 AM for the full quote. Data hasn’t provided a link to Carney’s full speech so I haven’t read it.
Nigelj says 26 Jan 2026 at 8:00 PM
Data hasn’t provided a link to Carneys full speech so I haven’t read it.
Data: That’s plain laziness and dishonest duplicitous sophistry. Go find your own link of the thousands now available.
Repetitively you’re being told [by commenters here] to ignore the evidence of your own eyes and ears — not once, repeatedly, relentlessly — until disbelief itself becomes background noise.
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844330
and here:
https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844384
Verbatim partial transcript here: https://www.realclimate.org/index.php/archives/2026/01/unforced-variations-jan-2026/#comment-844242
I stand by my comments, my reasoning and references.
Data says: “This (the American-led global rules-based order) is a system that’s lasted for 80 years, and Mark Carney is essentially saying that for those 80 years we’ve pretended that this is all good and fair. We’ve pretended that the American empire isn’t an empire. We’ve pretended it’s a rules-based order. We’re not pretending anymore.”
Data is responding to the text in italics, presumably by Carney, and it doesn’t say that. It’s clearly referring to the last couple of years under Trump, where America is abandoning the American-led rules-based order and other countries are meekly following and pretending it’s all inevitable. The text in italics says the rules-based order is fading. This can only mean the last few years, not the last 80 years.
However America does act like an empire at times.
Nigel: “Data is responding to the text in italics, presumably by Carney, and it doesn’t say that. […] This can only mean the last few years, not the last 80 years.”
Exactly – for most of the last 80 years the relationship with the US was not perfect, but the mutual benefits – security, free trade, prosperity, international organizations, a rules-based order – far outweighed the negatives. Only with the US under Trump actively destroying all of these has the balance dramatically changed, and we can’t pretend it hasn’t.
So Multi-troll tries to hijack Carney’s powerful speech to advance his own anti-Western, Communist ideology – by suggesting (in boldface, for better visibility) that Carney rejected the entire last 80 years of the Western values-based alliance as a lie and self-delusion,
and presents it as a vindication of Communism – if the last 80 years of Western democracies were no better than Communism, then by the same argument Stalinism, Maoism and Pol Pot couldn’t have been that bad, since they were no worse than what the West has done.
Thus Multi-troll erases or whitewashes the tens of millions killed under Stalin, Mao and Pol Pot – by relativizing their guilt (if comparable, then no worse than the West).
As we approach the end of January, an update on how global/NH/SH temperatures are developing.
The ClimatePulse ERA5 daily global SAT data shows January started with a bit of a warm wobble but has now seen the anomaly drop, dragging the month-to-date average down, presently at +0.53ºC (to the 22nd) and likely to be below +0.50ºC by month’s end. The Uni of Maine Climate Reanalyzer shows the warm wobble was driven by the NH.
For comparison, the months through the last half of 2025 averaged Jul +0.45ºC, Aug +0.49ºC, Sep +0.66ºC, Oct +0.70ºC, Nov +0.65ºC, Dec +0.48ºC.
And Jan 2024 came in at a toasty +0.79ºC.
The not-so-wobbly ERA5 re-analysis 60N-60S SST numbers from ClimatePulse show a bit of warmth through January which is now in decline and may end up as just a warm wobble or perhaps as a bit of correction for the cooler anomalies seen through Nov & Dec.
Up-to-date graphs of this brought to you courtesy of The Bannana!! Watch.
The NINO3.4 temperature has been showing weak La Niña conditions and these strengthened a bit through the first half of Jan but forecasts show that these conditions appear unlikely to persist long enough for a proper La Niña to be declared within ONI.
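For anyone not familiar with the ONI bookkeeping behind that last sentence: ONI is the three-month running mean of Niño3.4 SST anomalies, and a La Niña episode is conventionally declared only when the -0.5ºC threshold is met for at least five consecutive overlapping seasons. A minimal sketch of that check, using made-up numbers rather than observed ones:

def oni_from_monthly(nino34_monthly_anoms):
    """Three-month running means of Nino3.4 anomalies (one value per overlapping season)."""
    return [sum(nino34_monthly_anoms[i:i + 3]) / 3.0
            for i in range(len(nino34_monthly_anoms) - 2)]

def la_nina_declared(oni_values, threshold=-0.5, min_seasons=5):
    """True if the threshold is met for at least `min_seasons` consecutive overlapping seasons."""
    run = best = 0
    for v in oni_values:
        run = run + 1 if v <= threshold else 0
        best = max(best, run)
    return best >= min_seasons

# Illustrative (not observed) monthly anomalies: weak, short-lived cool conditions.
monthly = [-0.2, -0.4, -0.5, -0.6, -0.6, -0.5, -0.3, -0.1, 0.0]
oni = oni_from_monthly(monthly)
print([round(v, 2) for v in oni], "->", "La Nina" if la_nina_declared(oni) else "no episode declared")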
With the 40th anniversary of the Challenger disaster approaching next week, it’s worth recalling why Richard Feynman’s role still matters — not as a personality, but as a methodological conscience.
Feynman was not opposing institutions for sport. He was insisting on a principle that applies equally to science, policy, and communication:
— “The first principle is that you must not fool yourself — and you are the easiest person to fool.”
What made Challenger so uncomfortable was not technical ignorance, but institutional self-deception — the quiet blurring of uncertainty, risk, and expectation under organisational pressure. Feynman’s objection was not political. It was epistemic.
He was explicit about the standard:
— “Scientific integrity is a kind of leaning over backwards to show how you might be wrong.”
That obligation does not vanish when the cause is presumed just, the direction broadly correct, or the stakes high. If anything, those are precisely the conditions under which the discipline matters most.
Feynman repeatedly warned that science fails not only through bad faith, but through sincere overconfidence — especially when surface forms (models, graphs, authority) are allowed to substitute for deeper honesty about limits, assumptions, and boundary cases.
His Challenger appendix ended with a line that still applies far beyond engineering:
— “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”
That wasn’t cynicism. It was respect for reality — and for the long-term credibility of science itself.
“Scientific integrity is a kind of leaning over backwards to show how you might be wrong.”
Quite right, and in all the Data multi-troll’s thousands of comments I’ve never once seen him/her do that. Practice what you preach.
The RC archives are overflowing with useful content and reminders.
Dennis Horne says to PIOTR 17 Nov 2025
I’m pretty certain what I said, in the context of what I said earlier, is perfectly clear to nearly everybody. I surmise your need is to chastise rather than clarify. Good luck and count me out.
https://www.realclimate.org/index.php/archives/2025/11/unforced-variations-nov-2025/#comment-842130
Piotr says to Dennis Horne 17 Nov 2025
If you can’t stand the heat, Dennis, don’t start fires. Or find yourself a forum where nobody would question your claims.
https://www.realclimate.org/index.php/archives/2025/11/unforced-variations-nov-2025/#comment-842136
Dennis Horne says 18 Nov 2025
@MA Rodger. Thank you very much for your detailed explanation. I knew only in general terms.
I can’t remember if it was you or Mal Adapted who helped me some years ago finding a paper by Feynman which some clown from Saachi&Saachi was misquoting, anyway I am grateful to have expert help from this site when the need arises.
I was born during WWII and never thought the world would go mad again but here we are.
https://www.realclimate.org/index.php/archives/2025/11/unforced-variations-nov-2025/#comment-842182
Never to be seen or heard of again………..
Susan Anderson says to PIOTR 21 Nov 2025
Kindly desist on nonstop argument and telling people what they think.
…. but the focused hostility is a distraction for the rest of us, not to mention any possible lurker(s). The endlessness of it is not useful.
Pete Best is a useful contributor here. The above assumptions about him are incorrect and unhelpful.
https://www.realclimate.org/index.php/archives/2025/11/unforced-variations-nov-2025/#comment-842278
How long before Pete Best gives up, leaves, never to be seen or heard of again………..
Data: you are an equal if not worse source of the problems I describe. I suggested to Piotr that he desist because it amplifies the situation and I have little hope you will cut it out.
Although I mostly skip over both of you, your arrogant insistence is part of the problem. You do not represent Hansen well here. It is not surprising that people label you a sockpuppet, fake skeptic, or magat because you won’t look in the mirror and see yourself as part of the problem. I don’t think you are any of these, but please know that your efforts backfire as often as not.
Y’all please just stop. Find something worthwhile to do. You’re not helping, any of you, when you re-re-re-re-bunk and debunk.
Data: “ How long before Pete Best gives up, leaves, never to be seen or heard of again…
Hmmm – hasn’t he already quit, given up, submitted? You know:
– “Data” 12 Jan at 6:27 PM to Pete Best: “So you are quitting outright, Submitting, and giving up. Which will further allow the verbose bullying trolls [i.e. non-doomers -P.] presenting false, distorted unscientific data and information”
And when surprised Pete protested: 14 Jan 9:13 PM: “My question is clear, your answer makes no sense to me”
– his supposed friend/ally “Data” …. never apologized for his baseless accusations, quite the contrary – added insult to injury by … patronizingly lecturing Pete:
Data: Jan. 14: “ I did not give an “answer” to your question/s. I ignored them. Is that a clue?
and bizarrely said that he berated Pete … for his (Data’s) own benefit (???), and suggested that he has no expectation that Pete is even capable of understanding what Data said to him:
Data: “ Pete, I spoke to the rest of your comment […] and said what I thought about that [i.e.: “So you are quitting outright, Submitting, and giving up.” -P.] for my own benefit.
If this comment makes no sense either… that’s OK too. I have no expectations.
So if the supposed friend and/or ally stabs Pete in the back like that, one could understand if Pete did what you have already said he did (“So you are quitting outright, Submitting, and giving up”).
New paper in Nature Sustainability: Lizana, J., Miranda, N.D., Sparrow, S.N. et al. Global gridded dataset of heating and cooling degree days under climate change scenarios. Nat Sustain (2026). https://doi.org/10.1038/s41893-025-01754-y.
Number of people living in extreme heat to double by 2050 if 2C rise occurs. Scientists expect 41% of the projected global population to face the extremes, with ‘no part of the world’ immune – https://www.theguardian.com/environment/2026/jan/26/number-of-people-living-in-extreme-heat-to-double-by-2050-if-2c-rise-occurs-study-finds [source is above link including Springer sharing token, thanks]
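For context on what “heating and cooling degree days” actually measure: they are accumulated daily departures of the mean temperature below (heating) or above (cooling) a base temperature. A minimal sketch of that bookkeeping, using the common 18ºC base and made-up daily values as assumptions (the paper’s own base temperatures and methods may well differ):

def degree_days(daily_mean_temps_c, base_c=18.0):
    """Return (heating, cooling) degree days accumulated over a series of daily mean temperatures."""
    hdd = sum(max(0.0, base_c - t) for t in daily_mean_temps_c)  # deficit below the base
    cdd = sum(max(0.0, t - base_c) for t in daily_mean_temps_c)  # excess above the base
    return hdd, cdd

# Example: a made-up week that is cold early on and hot at the end.
week = [10.0, 12.0, 15.0, 18.0, 22.0, 27.0, 30.0]
hdd, cdd = degree_days(week)
print(f"HDD = {hdd:.1f}, CDD = {cdd:.1f}")  # HDD = 17.0, CDD = 25.0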
I’m in Columbia Heights, the suburb just north of Minneapolis on the eastern side of the Mississippi, where ICE is rounding up Ecuadorians and Somalis. My immediate neighbors are from Ecuador, and the 5-year-old that was kidnapped by ICE was just across the central avenue. Other immediate neighbors are from Ethiopia, Tibet, and China. The Tibetan family’s house has gone quiet.
I don’t think the USA gov’t will be of any service to science in the next 3 years, and so the community will need to adjust how they do research. I am cut off from gov’t funding so it doesn’t impact me, but researchers that are dependent are just like the Somalis and Ecuadorians, precariously holding on based on the whims of Trump.
The site climate.us is not the way forward, just a pretty facade on whatever has already been done. The right path forward is to use GitHub and make it so that the cost of entry to contributing is essentially zero.
https://bsky.app/profile/zacklabe.com/post/3mddrsjinxc2y
Zack Labe: Very exciting to share that @climatecentral.org is hiring a Climate Data Scientist to join a new climate services effort focused on advancing predictions of risks and hazards on seasonal-to-decadal timescales. The application closes on February 9, 2026 at 5pm ET, & the description can be found here:
https://www.climatecentral.org/open-position-climate-data-scientist
When social dominance, moral certainty and performative expertise replace genuine inquiry, the collapse becomes permanent, the culture terminal. Many recognize this in the Trump MAGA dynamic, but few can see it when it is right in front of their own face.
Signs that norm collapse has met epistemic dysfunction:
— norms of charity, restraint, and clarification break down;
— institutions keep formal authority but lose epistemic legitimacy;
— communication becomes adversarial theatre rather than sense-making;
This isn’t news to some, but it helps to hear someone else put it into words. What’s oddly hard to articulate about the self-designated “moderate” phenomenon seen below and elsewhere, is that the ridiculousness, the denial, and the horror now arrives as a single package — and in such volume that they begin to cancel themselves out. That’s the crucial bit.
The rhetoric, the noise, the constant stream of confident dismissal eventually loses impact. Not because any single instance isn’t unconscionable, but because there are so many of them. They come so fast, on every topic, that you can’t even keep count, so they stop resonating in the way they would in isolation.
Repetitively you’re being told to ignore the evidence of your own eyes and ears — not once, repeatedly, relentlessly — until disbelief itself becomes background noise. At the very least remember: If a space makes thoughtful, ethical, evidence-respecting people doubt their own clarity — the problem is the space.
Swimming in epistemic acid? Anyone normal would feel it.
Data: “If a space makes thoughtful, ethical, evidence-respecting people doubt their own clarity — the problem is the space.”
I disagree, because being thoughtful and evidence-focused is no guarantee of being a clear writer and communicator. Immanuel Kant was undeniably thoughtful and evidence-focused, but it’s generally agreed he was a terrible writer: nobody could figure out what he was trying to say, they misinterpreted him, and his sentence structures were convoluted. Because of this it took years before his views were understood, respected and accepted, although modern science has apparently superseded his insights. But all those sorts of problems can be easily fixed.