Predicted Arctic sea ice trends over time

31 May 2025 by Gavin

Over multiple generations of CMIP models Arctic sea ice trend predictions have gone from much too stable to about right. Why?

The diagnostics highlighted in our model-observations comparison page are currently all temperature based, and show overall that climate models have been doing well on these trends for decades. But there has been increasing attention to trends in non-temperature variables, and there, model performance is more mixed (Simpson et al., 2025). As we’ve discussed before, model-observation discrepancies can arise from three causes: the observations could be wrong (unrealized biases etc.), the models could be wrong (which can encompass errors in forcings as well as physics), or the comparison could be inappropriate.

One of the most high profile ‘misses’ in coupled modeling over the last few decades was the failure of the model projections in CMIP3 (circa 2003/4 vintage) to match the rapid losses in Arctic sea ice that started to become apparent in the middle of that decade (Stroeve et al., 2007), and were compounded by the summertime record losses of sea ice in 2007 and then 2012. With an additional decade, how does that look now?

Figure 1. Percent change in March (red) and September (blue) Arctic sea ice extent w.r.t. 1979-1988 in the CMIP3 ensemble and NSIDC observations. Spread is the 95% CI (with a 10-year smooth to reduce visual clutter). Solid lines are the ensemble mean. Historical forcings are used to 2000, and the SRES A1B scenario subsequently.

In a word, the CMIP3 Arctic sea ice projections were, and remain, terrible. The ensemble mean predicted rate of change of September Arctic sea ice extent is less than half that observed (-4.5 %/decade vs. -11 %/decade for 1979-2024), and only five individual model simulations (out of 46) have a loss rate greater than 10 %/decade (the 95% spread is [-12,-0.7] %/decade). The March trends are also under-predicted, but to a lesser degree. There is no real ambiguity in the observed trends, nor in the comparison (though extent is a little trickier than area to compare to), and so these discrepancies were very likely due to model failures – insufficient resolution to capture the polar sea ice dynamics, overly simple sea ice physics, biases in the Arctic ocean simulations, etc. Analyses have also shown that errors in the absolute amount of sea ice were correlated with the errors in the trends.
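
For readers who want to reproduce this kind of diagnostic, here is a minimal sketch in Python (illustrative only, not the actual processing code behind these figures): it computes the percent change of a September extent series relative to the 1979-1988 baseline and the linear trend in %/decade. For an ensemble, the same calculation would be applied to each simulation, with the quoted 95% spread taken, for example, from the 2.5th-97.5th percentiles of the per-simulation trends.

```python
import numpy as np

def pct_change_and_trend(years, series, base=(1979, 1988)):
    """Percent change w.r.t. the baseline-period mean, and its linear trend in %/decade."""
    years = np.asarray(years, dtype=float)
    series = np.asarray(series, dtype=float)
    in_base = (years >= base[0]) & (years <= base[1])
    baseline = series[in_base].mean()              # e.g. the 1979-1988 mean
    pct = 100.0 * (series - baseline) / baseline   # percent change vs. baseline
    slope_per_year = np.polyfit(years, pct, 1)[0]  # OLS slope in % per year
    return pct, 10.0 * slope_per_year              # convert to %/decade

# Illustrative numbers only (not NSIDC data): a roughly linear decline.
yrs = np.arange(1979, 2025)
sep_extent = 7.0 - 0.08 * (yrs - 1979)            # million km^2, made up
_, trend = pct_change_and_trend(yrs, sep_extent)
print(f"September trend: {trend:.1f} %/decade")   # about -12 %/decade for this synthetic series
```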

Development of the CMIP5 models was ongoing as these discrepancies were manifesting, and there were improvements in sea ice physics and dynamics, increased resolution and a reduction in the overall climate biases. The simulations in CMIP5 were conducted around 2011-2013, and used historical forcings to 2005, and scenarios subsequently. Did that make any difference?

Figure 2. Percent change in March (red) and September (blue) Arctic sea ice extent w.r.t. 1979-1988 in the CMIP5 ensemble and NSIDC observations. Spread is the 95% CI (with a 10-year smooth to reduce visual clutter). Solid lines are the ensemble mean. Historical forcings are used to 2005, and the RCP4.5 scenario subsequently.

Closer, but no cigar. The spread in the CMIP5 models is larger (a function of greater variability), and the observations now lie mostly within that spread, but the September ensemble mean trend (-8 %/decade) is still too weak. However, nearly 40% of the 107 individual simulations (95% spread is [-20,-1.4] %/decade) now have losses greater than 10 %/decade. The March trends are mostly well represented, but there are still large variations in the absolute extent.

There was a longer gap before CMIP6, but those models were developed through to 2017/18 or so, so their developers were well aware of the ongoing discrepancies (Stroeve et al., 2012). Again, there were improvements in sea ice physics, dynamical schemes, and forcings (the addition of black carbon impacts on snow and ice albedo, for instance), and again, improvements in resolution and in the base climatology.

As a minor aside, from 2007 to 2014 there was a spate of non-peer-reviewed claims from a few scientists (Peter Wadhams and Wiesław Masłowski, notably) that used non-linear statistical fits to the observed sea ice indices to predict essentially ice-free conditions by 2013, or 2016, or so. These predictions were not based on any physical insight or model, were heavily criticised by other scientists at the time (I recall a particularly spicy meeting at the Royal Society in 2014, for instance!), and (unsurprisingly) were not validated. But this kind of stuff is perhaps to be expected when the mainstream models are not providing credible projections?

Anyway, back to CMIP6. Third time’s a charm?

Figure 3. Percent change in March (red) and September (blue) Arctic sea ice area w.r.t. 1979-1988 in the CMIP6 ensemble and NSIDC observations. Spread is the 95% CI (with a 10-year smooth to reduce visual clutter). Solid lines are the ensemble mean. Historical forcings are used to 2014 (466 simulations), and the SSP2-4.5 scenario subsequently (247 simulations).

Actually, this isn’t bad. The CMIP6 ensemble mean for September area trends is now -11 %/decade (observed 13 %/decade) and the March trends are spot on. Note that the observed loss in ‘area’ is slightly larger than the trend in ‘extent’ (13 %/decade vs. 11 %/decade) and I’m using area here because that is what is available. The spread for September trends is [21,3] %/decade which is slightly tighter than in CMIP5, and 40% (again) have losses greater than 10 %/decade.
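
Since the switch from ‘extent’ to ‘area’ matters for the numbers above, it may help to spell out the conventional definitions: extent is the total area of grid cells whose ice concentration exceeds 15%, while area weights each cell by its concentration, so area is typically the smaller number and its loss trend slightly steeper. A minimal sketch (illustrative only, not the NSIDC or CMIP processing code; the 15% threshold is the conventional cutoff):

```python
import numpy as np

def extent_and_area(conc, cell_area, threshold=0.15):
    """Sea-ice extent and area from a gridded concentration field.

    conc      : 2-D array of sea-ice concentration (fraction, 0-1)
    cell_area : 2-D array of grid-cell areas (e.g. km^2)
    """
    conc = np.nan_to_num(np.asarray(conc, dtype=float), nan=0.0)
    cell_area = np.asarray(cell_area, dtype=float)
    extent = float(np.sum(cell_area * (conc > threshold)))  # cells above 15% count fully
    area = float(np.sum(cell_area * conc))                  # concentration-weighted sum
    return extent, area
```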

What lessons can be drawn here?

As we have often stated, models are always wrong, but the degree to which they can be useful needs to be assessed – by variable, by model generation, by model completeness, etc. The utility of the CMIP6 ensemble (and presumably the upcoming CMIP7 models) for Arctic sea ice is clearly higher than that of the CMIP3 ensemble, but there doesn’t appear to be a single thing that needed to be fixed for that to happen. Rather, an accumulation of improvements – in physics, resolution, completeness, forcings – has led to a gradual improvement in skill (not just in the sea ice trends!).

As Simpson et al. (2025) noted, there are a growing number of climate-quality diagnostics with long enough time series and emerging signals of change, and hence an increasing number of tests for the modeled trends. The history of Arctic sea ice comparisons shows that it might be premature to conclude that any specific discrepancy implies that something is fundamentally wrong, or that climate modeling is in a ‘crisis’ (Shaw and Stevens, 2025); it may well be that these discrepancies will resolve themselves in the course of ‘normal’ model development (and as the observed signals become clearer). Or not ;-).

Note on sources: CMIP3 (Mar, Sep) and CMIP5 (historical, rcp45) processed extent data are from Julienne Stroeve and Patricia Derepentigny, and the CMIP6 area data are from the U. of Hamburg data portal (courtesy of Dirk Notz). Ensemble means are over the whole ensemble with one simulation = one vote. Also, I haven’t screened the CMIP6 models by climate sensitivity (as I’ve done for the temperatures). These choices might make small differences, but should not affect the main conclusions.
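
To make the ‘one simulation = one vote’ choice concrete, here is a minimal sketch contrasting it with the common alternative of averaging within each model first (the data structure and numbers are purely illustrative); screening by climate sensitivity would simply amount to dropping models from the dictionary before averaging.

```python
import numpy as np

def ensemble_means(trends_by_model):
    """trends_by_model: dict mapping model name -> list of per-simulation trends (%/decade)."""
    all_runs = [t for runs in trends_by_model.values() for t in runs]
    one_run_one_vote = float(np.mean(all_runs))  # every simulation counts equally
    one_model_one_vote = float(np.mean([np.mean(runs) for runs in trends_by_model.values()]))
    return one_run_one_vote, one_model_one_vote

# Illustrative numbers only:
demo = {"ModelA": [-12.0, -11.5, -13.0], "ModelB": [-7.0], "ModelC": [-9.5, -10.0]}
print(ensemble_means(demo))  # models with many runs pull the one-run-one-vote mean further
```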

References

  1. I.R. Simpson, T.A. Shaw, P. Ceppi, A.C. Clement, E. Fischer, K.M. Grise, A.G. Pendergrass, J.A. Screen, R.C.J. Wills, T. Woollings, R. Blackport, J.M. Kang, and S. Po-Chedley, "Confronting Earth System Model trends with observations", Science Advances, vol. 11, 2025. http://dx.doi.org/10.1126/sciadv.adt8035
  2. J. Stroeve, M.M. Holland, W. Meier, T. Scambos, and M. Serreze, "Arctic sea ice decline: Faster than forecast", Geophysical Research Letters, vol. 34, 2007. http://dx.doi.org/10.1029/2007GL029703
  3. J.C. Stroeve, V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W.N. Meier, "Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations", Geophysical Research Letters, vol. 39, 2012. http://dx.doi.org/10.1029/2012GL052676
  4. T.A. Shaw, and B. Stevens, "The other climate crisis", Nature, vol. 639, pp. 877-887, 2025. http://dx.doi.org/10.1038/s41586-025-08680-1


21 Responses to "Predicted Arctic sea ice trends over time"

  1. Kevin McKinney says

    31 May 2025 at 6:11 PM

    Thanks, Gavin! This reminds me of conversations past, in which I tried to explain 1) that more than one climate variable matters; 2) that yes, climate science was and is cognizant of errors; 3) that it is willing to admit that errors happen; and 4) that it works to learn from them. Also that, as Ray has said in the past, errors are not always overpredictions, and by extension that “uncertainty is not a friend.”

  2. William says

    31 May 2025 at 7:34 PM

    GAVIN SAYS:
    The utility of the CMIP6 ensemble (and presumably the upcoming CMIP7 models) for Arctic sea ice is clearly higher than the CMIP3 ensemble, but there doesn’t appear to be a single thing that needed to be fixed for that to happen. Rather, an accumulation of improvements – in physics, resolution, completeness, forcings – have led to a gradual improvement in skill (not just in the sea ice trends!).

    Title:
    Have Climate Models Earned Their Arctic Sea Ice “Improvement” — Or Are We Just Smoothing Over 23 Years of Failure?

    The recent RealClimate article reviewing predicted Arctic sea ice trends across CMIP ensembles (CMIP3 through CMIP6) raises some troubling questions about how the modeling community is now reframing past model failures as part of a “gradual improvement” narrative.

    Let’s be clear: for over two decades, CMIP3, CMIP4, and CMIP5 generated Arctic sea ice projections that were not just slightly off — they were deeply, persistently, and systematically wrong. Most of those models predicted summer sea ice persisting well into the late 21st century, yet we now face plausible scenarios of an ice-free Arctic September in the 2030s or earlier. That’s not a minor deviation — that’s a massive forecasting failure for a major climate system component.

    And now, suddenly, we’re told that CMIP6 is doing better — as though this were the natural result of steady scientific progress. But this glosses over some vital issues:

    1. Where Is the Post-Mortem on CMIP3–5?
    There is zero transparency in most public-facing articles about why CMIP3–5 failed so badly on sea ice. What specific physics, parameterizations, forcings, or feedbacks were missing or mishandled? Without a detailed diagnosis, how can we be sure CMIP6 isn’t just accidentally “right” — or worse, tuned to appear so?

    Science is supposed to be about falsifiability and explanation. Yet there’s been no real accounting for how those older ensembles went so wrong, just vague talk of “improvements in resolution and physics.”

    2. Improvement… or Post-Hoc Tuning?
    The fact that CMIP6 now better aligns with observations after years of criticism about underestimation naturally raises the question: are models now being subtly calibrated or post-tuned to fit the observed data more closely? That’s not inherently unscientific, but it is problematic if:

    — It’s not disclosed.
    — It gives a false sense of predictive skill.
    — It masks ongoing weaknesses within individual models.

    3. The Ensemble Mean Hides the Outliers
    The RealClimate article relies heavily on smoothed ensemble means, which — while useful for broad comparison — can obscure the fact that many individual model runs still perform poorly. This statistical smoothing flattens out the actual spread and makes the results look more robust than they are.

    Even Gavin Schmidt and Roger Pielke Jr. have, in other contexts, pointed out that over-reliance on ensemble means can hide critical flaws. The question isn’t whether the average is better — it’s whether the individual models have learned to capture key dynamics, or whether we’re just cherry-picking those that happen to now align with observation.

    4. Coincidence or Competence?
    If CMIP6 now “gets it right,” we must ask: is this a real validation of the physical models — or just a statistical coincidence? After 23 years of flawed outputs, we’re owed more than hand-waving and retrospective optimism.

    Where’s the evidence that these improvements stem from first-principles physics, and not just smarter curve-fitting or scenario tweaking?

    Final Thought
    We’re talking about one of the most sensitive climate indicators on Earth — Arctic sea ice — and the narrative now seems to be: “We underestimated it for two decades, but trust us, CMIP6 is better.”

    Fine. Then prove it — with transparency, with detailed analysis of past errors, and with testable physical justifications for current model success.

    Until then, it’s not denialism to question whether this “success story” is being oversold. It’s just responsible skepticism — the foundation of good science and high public communication standards.

  3. William says

    1 Jun 2025 at 12:14 AM

    Addendum – Following Up on Arctic Sea Ice Model Skill

    Gavin writes:
    “One of the most high profile ‘misses’ in coupled modeling over the last few decades was the failure of the model projections in CMIP3 […] to match the rapid losses in Arctic sea ice.”

    Yes — and that’s putting it mildly. The ensemble mean and most individual CMIP3–5 model runs not only failed to capture the magnitude of sea ice loss, they fundamentally misunderstood its timing, trajectory, and sensitivity. This was not a minor calibration error — this was a systemic failure, sustained over multiple generations of models.

    CMIP6: A Step Forward?
    Figure 3 now presents percent change in sea ice area, not extent — yet the CMIP6 ensemble still fails to match the observed trajectory.

    The 2007–2012 collapse remains a clear outlier, still not captured by the mean, nor bounded by plausible confidence intervals.

    The spread of model projections is disturbingly wide. For 2014, the 95% CI for September ice area ranges from ~3% to over 90% loss. By 2038, it ranges from 15% to well over 100%. These are not scientifically defensible bounds — they suggest fundamental incoherence, not physical realism.

    Why is there no work done on Arctic sea ice volume (PIOMAS), surely the most critical component for forecasting a blue ocean event and everything else?

    Why Invoke Wadhams?
    Introducing extreme lowball forecasts (e.g. from Wadhams) serves only to make flawed model ensembles appear “reasonable” by comparison. But this rhetorical tactic distracts from the central issue: the models meant to inform policy, the IPCC, and public understanding have not performed well — and still don’t.

    If mainstream models lack credibility, say that plainly. Don’t frame it as a hypothetical:
    “Perhaps to be expected when the mainstream models are not providing credible projections?”

    That should have been the subtitle:
    “Mainstream CMIP models still do not provide credible projections for Arctic Sea Ice.”

    Backward-Averaged Success?
    You cite -11%/decade September sea ice area trend in CMIP6 (vs -13% observed) from 1979–2025 as evidence of “not bad” performance. But that’s a post hoc average, folding in newer data into older models. It tells us nothing about the forecasting skill of these models in real time. And it masks critical errors during the key period (2007–2012) when the models were most needed.

    The issue isn’t whether CMIP6 looks better when smoothed over 45 years — it’s whether it can tell us anything reliable about 2025–2040. From the September % change spread you cite — clearly, it can’t.

    Usefulness?
    You quote the classic line:
    “All models are wrong, but some are useful.”

    Then please — quantify the usefulness. How useful were CMIP3–5 for Arctic sea ice? What impact did they have on IPCC projections, or on the scientific literature? What decisions were based on them?

    More importantly: how is CMIP6 useful now — when the projected 2025 Arctic September ice loss ranges from ~5% to >100%? That isn’t predictive power — it’s noise. It suggests no practical understanding of how Arctic sea ice will behave in the next 12 months, let alone decades.

    Discrepancies “Resolving Themselves”?
    Gavin concludes:
    “…these discrepancies will resolve themselves in the course of ‘normal’ model development… Or not ;-)”

    This sounds flippant. Discrepancies don’t “resolve themselves.” Either:
    — The physics improves,
    — The parameters are constrained,
    — The resolution is refined,
    — Or the tuning is made more explicit.

    Or else, the model remains wrong.

    Worse, your statement seems to simultaneously defend the current ensemble and anticipate its future irrelevance. That’s not scientific humility — that’s hedging. And it weakens public trust.

    Final Thoughts
    You ask us to continue treating CMIP as the foundation of climate projection — yet when major failures persist for over 20 years, the response is: “They’re improving. Probably. Maybe. Let’s wait.”

    That’s not good enough. We deserve:
    — A clear technical post-mortem on CMIP3–5 sea ice errors;
    — A transparent discussion of how CMIP6 was corrected (or tuned) in response;
    — And a candid assessment of what confidence we should place in CMIP7.

    Until then, it’s hard to see this as anything more than an exercise in statistical cosmetics.
    It doesn’t matter how much lipstick you put on a pig — it’s still a pig.
    The ensemble mean might look smoother now, but the model’s skill remains unresolved.
    Accuracy is not the same as aesthetic.

  4. William says

    1 Jun 2025 at 12:38 AM

    Follow-up: Persistent Divergence Between Observations and CMIP6 Projections

    One detail that still doesn’t get the attention it deserves: the last 18 years of observed September Arctic Sea Ice minimum trends (2007–2024) remain consistently out of step with the CMIP6 ensemble mean — not just as isolated years, but in the overall trajectory, both in magnitude and rate of change.

    The observed decline is more stepwise and abrupt, especially around the 2007 and 2012 minima.

    CMIP6 continues to show a smoother, more linear descent that fails to capture the inflection points of real-world losses.

    Even today, the ensemble mean lags behind, while the observed data have flattened somewhat in recent years — a nuance CMIP6 doesn’t reflect either.

    This persistent mismatch raises two key concerns:

    Is CMIP6 tuned only to capture long-term averages, rather than decadal-scale dynamics, tipping points, or variability?

    If so, how can it be considered useful for real-world policy, where near-term changes (like the prospect of ice-free Septembers before 2040) carry enormous implications?

    We are not talking about modest noise here. The divergence is systematic and enduring, and yet rarely addressed in detail. Why? Until that gap is explained or reconciled, confidence in CMIP’s ASI projections seems… aspirational at best.

    For instance, Stroeve et al. (2012) highlighted that earlier models underestimated the rate of sea ice loss, a trend that continues with CMIP6. Furthermore, Notz and SIMIP Community (2020) found that while CMIP6 models offer improved sensitivity estimates, they still fail to simulate a plausible evolution of sea-ice area alongside global mean surface temperature.

    This persistent mismatch raises questions about the models’ ability to accurately represent key processes affecting sea ice dynamics. Until these discrepancies are addressed, reliance on CMIP6 projections for policy-making and climate forecasting remains problematic.

    Source Refs
    https://eesm.science.energy.gov/publications/arctic-sea-ice-cmip6
    https://repository.library.noaa.gov/view/noaa/29934
    https://www.osti.gov/pages/biblio/1618526
    https://epic.awi.de/id/eprint/51815/
    https://link.springer.com/article/10.1007/s00376-022-1460-4
    https://scispace.com/papers/arctic-sea-ice-in-cmip6-1t3idhfbxw
    https://www.sciencedirect.com/science/article/pii/S1674927824000844
    https://pure.mpg.de/rest/items/item_3221097_3/component/file_3231260/content

    An assessment of the CMIP6 performance in simulating Arctic sea ice volume flux via Fram Strait
    Evaluating the simulation capabilities of the most recent CMIP6 models in sea ice volume flux provides references for model applications and improvements. Meanwhile, reliable long-term simulation results of the ice volume flux contribute to a deeper understanding of the sea ice response to global climate change.
    And note Fig.1
    https://www.sciencedirect.com/science/article/pii/S1674927824000844

    • Ken Towe says

      1 Jun 2025 at 12:04 PM

      “Until these discrepancies are addressed, reliance on CMIP6 projections for policy-making and climate forecasting remains problematic.”

      A question. What sort of policy-making might result from better sea ice model projections and improvements…a deeper understanding of global climate change?

      • William says

        1 Jun 2025 at 6:29 PM

        Reply to Ken Towe
        Great question — and you’re right to sense there’s a deeper conversation that should be occurring here. Here’s a response off the top of my head to:
        > “What sort of policy-making might result from better sea ice model projections and improvements… a deeper understanding of global climate change?”

        —

        Improved sea ice projections wouldn’t just refine academic understanding — they could significantly influence a range of policy decisions, especially those tied to regional risk management, climate adaptation, and strategic planning. Specifically:

        1. **Shipping and Arctic Navigation**
        Reliable sea ice forecasts are crucial for commercial and military navigation through the Arctic (e.g., the Northern Sea Route or Northwest Passage). Better projections inform infrastructure investments, safety protocols, and insurance risk calculations for Arctic shipping.

        2. **National Security and Geopolitics**
        Nations with Arctic interests (Russia, Canada, the U.S., China, etc.) rely on long-term projections to shape defense postures and territorial claims. Sea ice decline forecasts influence everything from military base placement to submarine patrol routes and sovereignty disputes.

        3. **Indigenous and Coastal Communities**
        Accurate modeling affects community planning for northern populations that depend on sea ice for transportation, hunting, and cultural survival. It also influences relocation policies and climate resilience funding.

        4. **Climate Feedbacks and Carbon Budgets**
        Arctic sea ice loss affects albedo (reflectivity), regional amplification, and atmospheric circulation patterns. Getting those dynamics right is key to projecting downstream climate impacts elsewhere — which in turn affects global carbon budget calculations, timelines for net-zero targets, and urgency behind emission cuts.

        5. **Biodiversity and Ecosystem Protections**
        Sea ice governs key marine ecosystems. Policy on fisheries management, marine protected areas, and species conservation — from polar bears to krill — depends on reliable predictions of habitat change.

        6. **Credibility and Communication of Climate Risk**
        When models repeatedly under- or over-predict key features like sea ice, it erodes public trust in climate science. More accurate, verifiable sea ice projections help rebuild that trust and improve how risks are communicated to policymakers and the public.

        Over and above all that, ASI changes influence global temperatures and our ability to provide short- to medium-term global temperature projections. Such knowledge should be feeding into everything from the IPCC work to understanding why the Paris Agreement and actions by the UNFCCC and COP system are deeply broken. The ASI and Antarctic SI projections of this CMIP6 remain so wrong that they are not fit for any purpose.

        So yes — better projections wouldn’t just be about “understanding” climate change. They would refine *practical, real-world decision-making* in domains that touch energy policy, defense, trade, indigenous rights, environmental protection. Along with global climate diplomacy and potentially practical action plans to address global warming itself.

        Would you like to follow with your own points? I’d love to compare.

    • Steven Emmerson says

      1 Jun 2025 at 4:29 PM

      William: “This persistent mismatch raises questions about the models’ ability to accurately represent key processes affecting sea ice dynamics. Until these discrepancies are addressed, reliance on CMIP6 projections for policy-making and climate forecasting remains problematic.”

      I can’t quite decide if this argument is an example of an ignoratio elenchi fallacy (irrelevant conclusion, missing the point), a straw man fallacy, or a logic-chopping fallacy (nit-picking, trivial objections). In any case, the conclusion is unsupported by the given evidence.

  5. Jim Hunt says

    1 Jun 2025 at 5:45 AM

    Thanks for an article on my favourite topic Gavin!

    I heartily recommend an additional reference, which perhaps goes some way to explaining the significant excursions below the September trend in 2007 and 2012?

    0. C. M. Bitz and G. H. Roe, “A Mechanism for the High Rate of Sea Ice Thinning in the Arctic Ocean”, Journal of Climate 2004

    https://www.atmos.washington.edu/~bitz/Bitz_and_Roe_2004.pdf

    “A general theory is developed to describe the thinning of sea ice subjected to climate perturbations, and it is found that the leading component of the thickness dependence of the thinning is due to the basic thermodynamics of sea ice. When perturbed, sea ice returns to its equilibrium thickness by adjusting its growth rate. The growth–thickness relationship is stabilizing and hence can be reckoned as a negative feedback. The feedback is stronger for thinner ice, which is known to adjust more quickly to perturbations than thicker ice. In addition, thinner ice need not thin much to increase its growth rate a great deal, thereby establishing a new equilibrium with relatively little change in thickness. In contrast, thicker ice must thin much more. An analysis of a series of models, with physics ranging from very simple to highly complex, indicates that this growth–thickness feedback is the key to explaining the models’ relatively high rate of thinning in the central Arctic compared to thinner ice in the subpolar seas.”
    Perhaps it also helps explain the alleged “pause” in Arctic sea ice decline, so popular in cryodenialistic echo chambers at the moment?

  6. Pedro Prieto says

    1 Jun 2025 at 6:31 AM

    Link: https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2020GL087965

    Key findings from the paper (Notz et al., 2020):

    No significant improvement in Arctic sea ice projections between CMIP5 and CMIP6.

    “Models that participate in CMIP6 do not show a clear improvement in simulating observed sea‐ice trends over recent decades compared to models from earlier phases of CMIP.”

    Still too much Arctic sea ice in CMIP6 simulations for recent decades.

    The models systematically overestimate September sea ice extent compared to observations. Same problem as before.

    Antarctic sea ice is still mishandled:

    The observed increase from 1979 to ~2015 is not captured in the models — which instead show a decrease. That’s a complete miss.

    So… What Is Going On? What is the purpose of this article here now?

    It’s a little suspicious:

    Damage control / reputation management: CMIP3–5 have been (rightly) criticized for years — particularly in terms of Arctic sea ice projections. CMIP6 was supposed to be better. It’s not — at least not clearly — so some scientists may now be trying to massage that narrative.

    Preemptive narrative setting: With CMIP7 now being discussed, this may be a way to gently sweep past failures under the rug and frame them as “normal model evolution” — a soft reset, rather than an admission of serious structural flaws or poor tuning.

    Defensive posture disguised as openness: Gavin’s tone seems reflective and modest — but the actual content is evasive, hand-wavy, and deflects scrutiny. Mentioning Peter Wadhams as a foil is a distraction tactic, not a serious engagement with past model failure.

    Audience management: RC serves both policymakers and laypeople, many of whom are not model specialists. A vaguely reassuring tone — “things are improving” — reduces anxiety and protects institutional credibility, even if it glosses over fundamental problems.

    Combine that with vague language like:
    “…discrepancies will resolve themselves… Or not ;-)”
    and
    “…models are always wrong, but still useful…”

    — and you’re left with a kind of rhetorical fog machine: soften the failures, spotlight only the ensemble mean, and dodge the hard accounting.

    The framing of this post — and the timing — doesn’t quite add up. Compared to dozens of other pressing climate concerns, why this topic, and why now? Five years on from CMIP6, its performance in simulating Arctic and Antarctic sea ice (extent, concentration, and trends) remains poor. It has shown little demonstrable skill in predicting critical inflection points like blue ocean events — and the record hasn’t meaningfully improved since CMIP5.

    That the 2020 paper linked here flatly contradicts any claim of serious CMIP6 improvement is telling. The issues have all been covered before in dozens of similar papers.

    One of the quieter scandals in climate science communication — particularly around modeling — is how little post-hoc accountability or rigorous performance review takes place. For all the resources poured into CMIP modeling, almost no one in the mainstream is willing to step back and plainly ask:

    “How well did these projections actually match reality over the past 10 to 20 years?”

    Even serious published critiques (like Notz et al. 2020) are rarely engaged with in public-facing commentary. The pressure to preserve institutional credibility — and to avoid feeding denialist talking points — often leads to a kind of professional omertà: silence, spin, or deflection.

    But here’s the truth:

    Scrutiny is not denial.

    Critiquing poor model performance is scientific due diligence, not heresy.

    And holding public scientists to higher standards of clarity, transparency, and engagement only strengthens public trust — it doesn’t erode it.

    One day it might begin to happen. Until then there will be no looking under the climate models’ hood.

    • Piotr says

      1 Jun 2025 at 12:01 PM

      Multi-troll: “Key findings from the paper (Notz et al., 2020): No significant improvement in Arctic sea ice projections between CMIP5 and CMIP6.”

      and … what statistical criteria have you, or your source, used to determine the “insignificance” of the improvement? Because if you/your source didn’t – then you are using this word in the unscientific, colloquial meaning, of how it LOOKS to you. Which would make it a SUBJECTIVE (i.e. untestable) OPINION. Something along the lines of Chico Marx chiding a woman surprised to see him in her bedroom:

      – But I saw you [leaving the bedroom] with my own eyes!
      – Well, who ya gonna believe, me or your own eyes [Fig. 2 vs Fig. 3]?”

      Multi-troll: “But here’s the truth: Scrutiny is not denial. Critiquing poor model performance is scientific due diligence, not heresy.”

      But using the imperfect model representation of one of the most difficult to replicate outcomes of AGW (change in the sea-ice cover area) to question the credibility of ALL AGW modelling, in order to DISMISS the sense of reductions in GHG emissions using existing technologies and implementable mechanisms (price on GHG emissions) – is a MAINSTAY of DENIALISM – deniers focus on some not necessarily crucial aspect – to throw the entire AGW modelling, and the need to reduce GHGs, out with the bathwater.

      But it has already been discussed on RC to death, e.g.:

      ==== UV, May 2025 =============
      – William: “ I’m not a denier”

      – me: “We don’t have to rely on your self-serving declarations, your posts would do: you have just tried to discredit the only feasible way to mitigate AGW (by calling the reductions in GHGs NEITHER “ feasible [nor] wise“, and justified it by saying that they merely “treat symptoms, not causes“) in favour of an alternative [rapid deindustrialization and reduction of population by many billions] THAT YOU KNOW cannot be realistically implemented on the necessary time-scale [next 1-2 decades].
      Which is a very definition of an “anything-but-GHGs denier“.
      =================================

      • Ken Towe says

        1 Jun 2025 at 7:25 PM

        “to DISMISS the sense of reductions in GHG emissions using existing technologies” …and to throw the entire AGW modelling, and the need to reduce GHGs, [out] with the bathwater.”

        Surely, you are aware of the fact that rapid reductions in CO2 emissions will take none of the CO2 already added (420 ppm) out of the atmosphere to lower global temperatures. It does leave carbon in the ground, but that makes it more expensive and difficult for transportation to continue the energy transition to renewables and EVs.

      • Pedro Prieto says

        1 Jun 2025 at 8:05 PM

        Reply to readers
        Catastrophic climate change impacts are always local, not global.
        a reminder from https://climateandeconomy.com

        Mindful that nothing that the above cheap troll does has any effect on anything. He is just another self-infatuated bloviating bullshit artist who loves to hear himself talk and has done nothing but praise himself and insult everyone else since the day he arrived at RC. He is a boor and a bore, and his comments are worthless and empty.

        Hat tip to secular animist for the above content he posted. Imitation is the most sincere form of praise.

    • John N-G says

      1 Jun 2025 at 1:24 PM

      Pedro –

      The link you provided is to Shu et al. (2020), which does not include your quoted key finding but instead says, “The observed Arctic September SIE declining trend (−0.82 ± 0.18 million km2 per decade) between 1979 and 2014 is slightly underestimated in CMIP6 models (−0.70 ± 0.06 million km2 per decade),” which sounds pretty good to me.

      The most plausible Notz et al. (2020), titled “Arctic Sea Ice in CMIP6” (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL086749) doesn’t include your quoted key finding either but instead says, “In particular, the latest generation of models performs better than models from previous generations at simulating the sea-ice loss for a given amount of CO2 emissions and for a given amount of global warming.”

      • Kevin McKinney says

        1 Jun 2025 at 7:18 PM

        A shocker… Or not.

        Thanks, Dr. N-G.

      • Pedro Prieto says

        1 Jun 2025 at 7:54 PM

        Reply to John N-G
        Apologies, that seems to be a sticky copy-paste URL. Your URL is correct: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL086749 or
        https://repository.library.noaa.gov/view/noaa/29934 or
        https://par.nsf.gov/servlets/purl/10173113
        And I didn’t convey my meaning or sources accurately either, I was in a rush, sorry.

        I covered broader ground and cannot detail every point I’ve seen related to or in Notz et al accurately here, sorry. It’s old news anyway, long ignored. Feel free to draw your own conclusions based on what you choose to read and check yourself.

        Try for example the above quote you mention came from: Arctic Sea Ice in CMIP6
        https://scispace.com/papers/arctic-sea-ice-in-cmip6-1t3idhfbxw
        TL;DR: In this article, the authors examined CMIP6 simulations of Arctic sea ice area and volume and found that most models fail to simulate at the same time a plausible evolution of sea-ice area and of global mean surface temperature.

        Challenges in Simulating Sea Ice and Temperature: Despite the advancements, most CMIP6 models struggle to simultaneously simulate a realistic evolution of both sea-ice area and global mean surface temperature. This discrepancy highlights ongoing challenges in climate modeling.
        Sensitivity Metrics Comparison: When comparing sensitivity metrics between CMIP6 and previous CMIP phases, it is generally difficult to distinguish CMIP6 models from those in CMIP5, except for a few highly sensitive simulations. This suggests that while some models have improved, the overall sensitivity landscape remains similar.

        This: “Models that participate in CMIP6 do not show a clear improvement in simulating observed sea‐ice trends over recent decades compared to models from earlier phases of CMIP.” is out there somewhere. No time to chase it, sorry. The questions and info provided by William may be useful to balance the positive-only professional omertà. As he points out, the 95% CI range is implausible, and the last 18 years’ results (mini mean trend line) remain far below the ensemble mean, even if not as bad as CMIP3 and 5 were.

        Still, most CMIP6 models fail to simulate at the same time a plausible evolution of sea-ice area and of global mean surface temperature.
        see https://eesm.science.energy.gov/publications/arctic-sea-ice-cmip6

        but some simulate implausible historical mean states compared to satellite observations, leading to large intermodel spread. Summer SIA is consistently biased low across the ensemble. Compared to the previous model generation (CMIP5), the intermodel spread in winter and summer SIA has reduced, and the regional distribution of sea ice concentration has improved.
        https://www.bas.ac.uk/data/our-data/publication/antarctic-sea-ice-area-in-cmip6/

        As a whole, the models successfully capture some elements of the observed seasonal cycle of sea ice but underestimate the summer minimum sea ice area. – Models project sea ice loss over the 21st century in all scenarios, but confidence in the rate of loss is limited, as most models show stronger global warming trends than observed over the recent historical period.
        https://www.x-mol.net/paper/article/1251324725277122560

        There are more of course but it’s unlikely anyone commenting would be interested here anyway.

        There remain more important issues than messy ref URLs, or than expecting any independent person on a forum, without the resources of NASA-GISS et al., to perfectly convey over 5 years of critical discussions of CMIP6 and sea ice extent at the poles.

        I repeat the issue is more about _ “— and you’re left with a kind of rhetorical fog machine: soften the failures, spotlight only the ensemble mean, and dodge the hard accounting.
        The framing of this post — and the timing — doesn’t quite add up. Compared to dozens of other pressing climate concerns, why this topic, and why now?”

        It’s old news and everyone in the field already knew these CMIP6 sea ice data were still not useful or reliably accurate. The same as Gavin’s 2023-2024 global mean temperature projections were not accurate. Or rather a dismal failure and like CMIP6 (5 and 3) they still cannot work out why.

        “But trust me, the cheque is in the mail.”

    • jgnfld says

      1 Jun 2025 at 3:38 PM

      Re. “scrutiny is not denial”… In ANY area of science, when we are talking bleeding edge, it takes a _qualified expert_ to do the “scrutinizing”. And, as you may or may not know, it takes being in possession of a huge knowledge and skill base that takes years and even decades to master.

      Did you do your homework?

  7. JCM says

    1 Jun 2025 at 8:42 AM

    The IPCC AR6 (WGI, Chapter 8.5.2.3) emphasizes that changes in land surface properties, especially soil moisture, can significantly influence atmospheric circulation patterns, including stationary waves, and contribute to polar climate anomalies. Specifically, it states:

    “Changes in land surface properties, including soil moisture, vegetation, and snow cover, can alter the surface energy and water balance, which in turn influences atmospheric circulation patterns, including stationary waves and monsoon systems.”
    “Such land–atmosphere interactions have the potential to modulate energy and moisture transport into the Arctic and thereby contribute to polar climate anomalies such as sea ice extent and surface temperature patterns.” (AR6 WGI, Ch. 8.5.2.3)

    These dynamics are conceptually supported by available data depicted in Seo and co. 2025 “Abrupt sea level rise and Earth’s gradual pole shift reveal permanent hydrological regime changes in the 21st century” https://www.science.org/doi/10.1126/science.adq6529.

    ERA5-Land reanalysis (Fig. 4) shows that since around 2000:

    https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd09e3f0a-2498-489f-9893-3af2064dcb1c_998x431.png

    Potential evapotranspiration (PET) has sharply increased due to warming,

    Actual evapotranspiration (ET) has flattened or declined, suggesting soil moisture limitation,

    Global soil moisture has steadily declined, and

    The water balance (P − ET) has become more negative, reflecting increased evaporation demand not met by precipitation.

    This would imply that drier soils are reducing latent heat flux and enhancing sensible heating, which in turn could intensify land–ocean thermal contrasts and modify midlatitude circulation, enabling poleward energy transport into the Arctic.

    Arctic warming events are often associated with persistent mid-latitude circulation anomalies.

    Needless to say, the shape, timing, and character of the ERA5 land moisture reanalysis product shows striking similarity to the Arctic sea ice extent.

    However, previously in response to Tomas https://www.realclimate.org/index.php/archives/2025/05/unforced-variations-may-2025/#comment-832708

    Gavin Schmidt rightly cautions:

    “The whole discussion rests on an assumption that the ERA5 reanalysis is truth, but for variables like soil moisture, the reanalysis trends are going to be highly influenced by whatever datastreams are available – which change over time. I am not persuaded (as yet) that this is not just a data source switch as opposed to a real phenomenon.”

    The consistency between theory, observed circulation anomalies, and modeled feedbacks continues to support the plausibility of continental desiccation as a contributor to Arctic anomalies, even as the magnitude and reliability of the soil moisture signal itself remains an active research question. We see how the CMIP suites are only capturing the monotonous creep of the prescribed radiative forcing, and appear to be limited in the scope of tuning.

    It could be useful to see if improved model atmospheres arise from prescribing the questionable ERA5L reanalysis moisture product, especially considering these phenomena in ERA5L have striking resemblance to the more reliable observation of Arctic sea-ice extent variation.

  8. Susan Anderson says

    1 Jun 2025 at 10:19 AM

    Recommendation to lurkers: avoid those using this platform to see themselves in print and attack the authors who have shared their knowledge and expertise with us all. And to those correcting the corrections, who will then correct the corrections of the corrections … you get the picture!

  9. zebra says

    1 Jun 2025 at 10:36 AM

    Gavin, three questions:

    1. What is your evaluation of this?

    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024EF004961

    2. Why is there so much focus on September, rather than March? March going below a certain level is certainly far more serious a “tipping point”, and that’s obvious without any fancy modelling.

    3. Is there any fancy modelling that gives an indication of what that level might be?

    (I was hoping there might be an actual scientific discussion on this topic, although it seems like the denial-of-service sock-puppet spam-bots are already at work.)

  10. Piotr says

    1 Jun 2025 at 11:20 AM

    Of the 6 responses to Gavin’s article as of Jun 1, I see:
    – one paragraph from Kevin,
    – half a page post from Jim Hunt (most of it a quote from a source)
    – 8? screenfuls from doomer troll(s) “William”/”Pedro Prieto”

    That’s a known paid-troll/bot technique – downgrading forums that are not conducive to the interests of the troll/bot owners – by saturating such forums with its own troll/bot drivel, and thus drowning the discussions on topics they want to suppress. A version of the “denial of service” hacker attack.

    • Tomáš Kalisz says

      1 Jun 2025 at 5:59 PM

      In Re to Piotr, 1 Jun 2025 at 11:20 AM,

      https://www.realclimate.org/index.php/archives/2025/05/predicted-arctic-sea-ice-trends-over-time/#comment-834024

      Hallo Piotr,

      Let me, first, shortly return to your last May post of 31 May 2025 at 11:09 PM,

      https://www.realclimate.org/index.php/archives/2025/05/unforced-variations-may-2025/comment-page-2/#comment-834009

      I agree almost fully with your points, except for your opinion that it is necessary to (directly) engage the trolls. Here I rather tend to agree with Ms. Anderson, who thinks that it may rather amplify their influence and, in a certain sense, endorse their presence on the website.

      It appears that various suspicious subjects share a common love for the activities of the Russian government. In the past, I tried several times to ask them if they know that Russia strives to subjugate a neighbouring nation in an unprovoked aggressive war. They have never admitted any fault, and eventually disappeared from the website. I therefore see these repeated attempts by their successors under various names as a strong hint that this website indeed faces a systematic, organized attack.

      I decided to avoid further attempts to interact with such subjects directly, and since then I restrict myself to a warning to others that there has been a further shameful attempt to turn Real Climate into another platform for Russian hybrid war. I suppose that any such attempt can be seen as sufficient evidence that the respective subject has zero credibility and does not come in good will.

      Although this approach has not found followers yet, I still hope that putting such subjects spreading the “Russian mir” infection into a “quarantine” could represent a more effective response than speaking with them gently or shouting at them loudly.

      Greetings
      Tomáš

      P.S.
      In the flood of contributions from entities that we both suspect to be professional troll(s), you might have missed my post 27 May 2025 at 5:51 PM,

      https://www.realclimate.org/index.php/archives/2025/05/unforced-variations-may-2025/comment-page-2/#comment-833799

      wherein I tried to explain why I still do not understand why you so strictly refuse the idea of the proposed modelling experiment, which aims to resolve the question of whether climate sensitivity may or may not depend on water availability for evaporation from the land.

      If so, could you do me the favour and, as the May thread is already closed, reply herein?
      Many thanks in advance!
