
RealClimate

Climate science from climate scientists...


Model-Obs Comparisons

Update day 2021

22 Jan 2021 by Gavin

As is now traditional, every year around this time we update the model-observation comparison page with an additional annual observational point, and upgrade any observational products to their latest versions.

A couple of notable issues this year. HadCRUT has now been updated to version 5, which includes polar infilling, making the Cowtan and Way dataset (which was designed to address that issue in HadCRUT4) a little superfluous. Going forward it is unlikely to be maintained, so in a couple of figures I have replaced it with the new HadCRUT5. The GISTEMP version is now v4.
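For readers who script these updates themselves, here is a minimal sketch of pulling the latest complete annual mean from GISTEMP v4. It assumes the publicly posted Land-Ocean Temperature Index CSV and its current column layout; the full update also involves re-baselining and merging with the model archives, which isn't shown.

```python
# Minimal sketch: fetch the most recent complete annual-mean anomaly from GISTEMP v4.
# Assumes the public Land-Ocean Temperature Index CSV (title row, then Year/Jan..Dec/J-D
# columns, anomalies in degC relative to 1951-1980, "***" marking incomplete years).
import pandas as pd

GISTEMP_CSV = "https://data.giss.nasa.gov/gistemp/tabledata_v4/GLB.Ts+dSST.csv"

def latest_annual_anomaly(url: str = GISTEMP_CSV) -> tuple[int, float]:
    """Return (year, annual-mean anomaly in degC) for the last complete year."""
    df = pd.read_csv(url, skiprows=1, na_values="***")
    df = df.dropna(subset=["J-D"])  # drop the current, incomplete year
    last = df.iloc[-1]
    return int(last["Year"]), float(last["J-D"])

if __name__ == "__main__":
    year, anom = latest_annual_anomaly()
    print(f"GISTEMP v4 annual mean for {year}: {anom:+.2f} degC (vs 1951-1980)")
```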

For the comparison with Hansen et al. (1988), we previously only had the projected output up to 2019 (taken from fig. 3a in the original paper). However, it turns out that fuller results were archived at NCAR, and they have now been added to our data file (and yes, I realise this is ironic). This extends Scenario B to 2030 and Scenario A to 2060.

Nothing substantive has changed with respect to the satellite data products, so the only change is the addition of 2020 in the figures and trends.

So what do we see? The early Hansen models have done very well considering the uncertainty in total forcings (as we’ve discussed (Hausfather et al., 2019)). The CMIP3 models’ estimates of SAT, forecast from ~2000, continue to be astoundingly on point. This must be due (in part) to luck, since the spread in forcings and sensitivity across the GCMs is somewhat ad hoc (the CMIP simulations are ensembles of opportunity), but it is nonetheless impressive.

CMIP3 (circa 2004) model hindcast and forecast estimates of SAT.
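As a rough illustration of what 'on point' means quantitatively, the sketch below compares an observed trend to the distribution of trends in a model ensemble over the forecast window. The arrays are placeholders standing in for properly baselined annual means from the observational products and the CMIP3 archive.

```python
# Sketch: compare an observed SAT trend to an ensemble of model trends over a
# forecast window. All series here are synthetic placeholders (degC anomalies).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2021)
obs = 0.020 * (years - 2000) + rng.normal(0.0, 0.10, years.size)             # placeholder obs
ensemble = 0.020 * (years - 2000) + rng.normal(0.0, 0.15, (20, years.size))  # placeholder: 20 runs

def trend_per_decade(series, yrs):
    """OLS linear trend in degC per decade."""
    return 10.0 * np.polyfit(yrs, series, 1)[0]

obs_trend = trend_per_decade(obs, years)
model_trends = np.array([trend_per_decade(run, years) for run in ensemble])

print(f"Observed trend:           {obs_trend:+.2f} degC/decade")
print(f"Ensemble mean +/- 2 s.d.: {model_trends.mean():+.2f} +/- "
      f"{2 * model_trends.std(ddof=1):.2f} degC/decade")
```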

The forcings spread in CMIP5 was more constrained, but had some small systematic biases, as we’ve discussed (Schmidt et al., 2014). The systematic issue associated with the forcings and the more general issue of the target diagnostic (whether we use SAT or a blended SST/SAT product from the models) give rise to small effects (roughly 0.1ºC and 0.05ºC respectively), which are independent and additive.
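Since the two effects are independent and simply additive, the bookkeeping is trivial; the sketch below just applies the approximate magnitudes quoted above to a hypothetical ensemble-mean warming.

```python
# Toy arithmetic: apply the two (independent, additive) corrections discussed above.
# The ensemble-mean warming value is hypothetical; the corrections are the rough
# magnitudes quoted in the text, signed to reduce the modeled warming.
raw_ensemble_warming = 0.90   # degC, placeholder CMIP5 ensemble-mean warming over the period
forcing_effect = -0.10        # degC, roughly, from the forcing biases
blending_effect = -0.05       # degC, roughly, from using a blended SST/SAT diagnostic

adjusted = raw_ensemble_warming + forcing_effect + blending_effect
print(f"Adjusted ensemble-mean warming: {adjusted:.2f} degC "
      f"(total correction {forcing_effect + blending_effect:+.2f} degC)")
```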

The discrepancies between the CMIP5 ensemble and the lower-atmospheric MSU/AMSU products are still noticeable, but remember that we do not yet have a ‘forcings-adjusted’ estimate of the CMIP5 simulations for TMT, though work with the CMIP6 models and forcings to address this is ongoing. Nonetheless, the observed TMT trends are very much on the low side of what the models projected, even while stratospheric and surface trends are much closer to the ensemble mean. There is still more to be done here. Stay tuned!

The results from CMIP6 (which are still being rolled out) are too recent to be usefully added to this assessment of forecasts right now, though some compilations have now appeared:

CMIP6 model SAT (observed forcings to 2014, SSP2-4.5 scenario subsequently) (Zeke Hausfather)

The issues in CMIP6 related to the excessive spread in climate sensitivity will need to be looked at in more detail moving forward. In my opinion, ‘official’ projections will need to weight the models to screen out those with ECS values outside the constrained range. We’ll see if others agree when the IPCC report is released later this year.

Please let us know in the comments if you have suggestions for improvements to these figures/analyses, or suggestions for additions.

References

  1. Z. Hausfather, H.F. Drake, T. Abbott, and G.A. Schmidt, "Evaluating the Performance of Past Climate Model Projections", Geophysical Research Letters, vol. 47, 2020. http://dx.doi.org/10.1029/2019GL085378
  2. G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geoscience, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105

Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons

Update day 2020!

26 Jan 2020 by Gavin

Following more than a decade of tradition (at least), I’ve now updated the model-observation comparison page to include observed data through to the end of 2019.

As we discussed a couple of weeks ago, 2019 was the second warmest year in the surface datasets (with the exception of HadCRUT4), and 1st, 2nd or 3rd in the satellite datasets (depending on which one). Since 2019 came in slightly above the linear trends through 2018, it slightly increases the trends through 2019. There is a growing difference in trend among the surface datasets because of how they treat the polar regions. A slightly longer trend period also reduces the uncertainty in the linear trend in the climate models.
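For those curious how a single extra year moves a trend and its formal uncertainty, here is a toy example using ordinary least squares on a placeholder anomaly series; note that OLS standard errors understate the true uncertainty when residuals are autocorrelated.

```python
# Sketch: OLS trend and 2-sigma standard error through 2018 vs through 2019.
# 'anoms' is a synthetic placeholder; load a real annual-mean series in practice.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
years = np.arange(1979, 2020)
anoms = 0.018 * (years - 1979) + rng.normal(0.0, 0.10, years.size)

for end in (2018, 2019):
    sel = years <= end
    fit = linregress(years[sel], anoms[sel])
    # note: OLS errors ignore autocorrelation, so treat them as a lower bound
    print(f"1979-{end}: {10 * fit.slope:+.3f} +/- {10 * 2 * fit.stderr:.3f} degC/decade (2 sigma)")
```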

To summarize, the prediction from Hansen et al. (1981) continues to underpredict the temperature trends due to an underestimate of the transient climate response. The projections in Hansen et al. (1988) bracket the actual changes, with the slight overestimate in Scenario B due to anticipated growth rates of CFCs and CH4 that did not materialize. The CMIP3 simulations continue to be spot on (remarkably), with the trend in the multi-model ensemble mean effectively indistinguishable from the trends in the observations. Note that this doesn’t mean that CMIP3 ensemble means are perfect – far from it. For Arctic trends (incl. sea ice) they grossly underestimated the changes, and they overestimated them in the tropics.

CMIP3 for the win!

The CMIP5 ensemble mean global surface temperature trends slightly overestimate the observed trend, mainly because of a short-term overestimate of solar and volcanic forcings that was built into the design of the simulations around 2009/2010 (see Schmidt et al., 2014). This is also apparent in the MSU TMT trends, where the observed trends (which themselves have a large spread) are at the edge of the modeled histogram.
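One simple way to quantify 'at the edge of the modeled histogram' is to ask where an observed trend falls within the distribution of modeled trends. The sketch below does this with placeholder numbers standing in for the CMIP5 TMT ensemble and one MSU/AMSU product.

```python
# Sketch: empirical percentile of an observed trend within a set of modeled trends.
# Both inputs are placeholders, not the actual CMIP5 or MSU/AMSU values.
import numpy as np

rng = np.random.default_rng(2)
model_trends = rng.normal(0.28, 0.06, size=100)  # placeholder modeled TMT trends, degC/decade
obs_trend = 0.17                                 # placeholder observed TMT trend, degC/decade

pct = 100.0 * np.mean(model_trends < obs_trend)
print(f"Observed trend falls at roughly the {pct:.0f}th percentile of the modeled trends")
```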

A number of people have remarked over time on the reduction of the spread in the model projections in CMIP5 compared to CMIP3 (by about 20%). This reflects the wider spread of forcings used in CMIP3 – models varied enormously in whether they included aerosol indirect effects or ozone depletion, and in what kind of land surface forcing they had. In CMIP5, most of these elements had been standardized. This reduced the spread, but at the cost of underestimating the uncertainty in the forcings. In CMIP6, there will be a more controlled exploration of the forcing uncertainty (though given the greater spread of the climate sensitivities, it might be a minor issue).

Over the years, the model-observation comparison page has regularly been among the top ten most-viewed pages on RealClimate, so it obviously fills a need. We’ll continue to keep it updated, and perhaps expand it over time. Please leave suggestions for changes in the comments below.

References

  1. J. Hansen, D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, "Climate Impact of Increasing Atmospheric Carbon Dioxide", Science, vol. 213, pp. 957-966, 1981. http://dx.doi.org/10.1126/science.213.4511.957
  2. J. Hansen, I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, "Global climate changes as forecast by Goddard Institute for Space Studies three‐dimensional model", Journal of Geophysical Research: Atmospheres, vol. 93, pp. 9341-9364, 1988. http://dx.doi.org/10.1029/JD093iD08p09341
  3. G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geoscience, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105

Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons, Scientific practice

How good have climate models been at truly predicting the future?

4 Dec 2019 by Gavin

A new paper from Hausfather and colleagues (incl. me) has just been published with the most comprehensive assessment of climate model projections since the 1970s. Bottom line? Once you correct for small errors in the projected forcings, they did remarkably well.


Filed Under: Climate modelling, Climate Science, Greenhouse gases, Instrumental Record, Model-Obs Comparisons, statistics

Update day

7 Feb 2019 by Gavin

So Wednesday was temperature series update day. The HadCRUT4, NOAA NCEI and GISTEMP time-series were all updated through to the end of 2018 (slightly delayed by the federal government shutdown). Berkeley Earth and the MSU satellite datasets were updated a couple of weeks ago. And that means that everyone gets to add a single additional annual data point to their model-observation comparison plots!


Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons

Comparing models to the satellite datasets

7 May 2016 by Gavin

How should one make graphics that appropriately compare models and observations? There are basically two key points (explored in more depth here) – comparisons should be ‘like with like’, and different sources of uncertainty should be clear, whether they relate to ‘weather’ and/or structural uncertainty in either the observations or the models. There are unfortunately many graphics going around that fail to do this properly, and some prominent ones, based on the satellite temperature records, are made by John Christy. This post explains exactly why those graphs are misleading and how more honest presentations of the comparison allow for more informed discussions of why and how these records are changing and differ from models.
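As a concrete (if simplified) illustration of the 'like with like' principle, the sketch below puts a model ensemble and an observational series on a common baseline and shows the ensemble as a mean plus a 5-95% envelope rather than as individual spaghetti; all arrays are placeholders.

```python
# Sketch: a like-with-like comparison plot with a common baseline and an explicit
# ensemble envelope. The data are synthetic placeholders (degC anomalies).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
years = np.arange(1979, 2021)
ensemble = 0.020 * (years - 1979) + rng.normal(0.0, 0.12, (30, years.size))  # 30 placeholder runs
obs = 0.018 * (years - 1979) + rng.normal(0.0, 0.08, years.size)             # placeholder product

def rebaseline(series, yrs, period=(1981, 2010)):
    """Express anomalies relative to a common reference period."""
    sel = (yrs >= period[0]) & (yrs <= period[1])
    return series - series[..., sel].mean(axis=-1, keepdims=True)

ens = rebaseline(ensemble, years)
lo, hi = np.percentile(ens, [5, 95], axis=0)

fig, ax = plt.subplots()
ax.fill_between(years, lo, hi, alpha=0.3, label="Model ensemble 5-95%")
ax.plot(years, ens.mean(axis=0), label="Ensemble mean")
ax.plot(years, rebaseline(obs, years), lw=1.5, label="Observations (placeholder)")
ax.set_xlabel("Year")
ax.set_ylabel("Anomaly (°C, 1981-2010 baseline)")
ax.legend()
plt.show()
```

Showing the envelope rather than only the ensemble mean makes clear that an observation lying away from the mean is not necessarily inconsistent with the models.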

Filed Under: Climate modelling, Climate Science, El Nino, Greenhouse gases, Instrumental Record, IPCC, Model-Obs Comparisons, statistics

NOAA temperature record updates and the ‘hiatus’

4 Jun 2015 by Gavin

In a new paper in Science Express, Karl et al. describe the impacts of two significant updates to the NOAA NCEI (née NCDC) global temperature series. The two updates are: 1) the adoption of ERSST v4 for the ocean temperatures (incorporating a number of corrections for biases for different methods), and 2) the use of the larger International Surface Temperature Initiative (ISTI) weather station database, instead of GHCN. This kind of update happens all the time as datasets expand through data-recovery efforts and increasing digitization, and as biases in the raw measurements are better understood. However, this update is going to be bigger news than normal because of the claim that the ‘hiatus’ is no more. To understand why this is perhaps less dramatic than it might seem, it’s worth stepping back to see a little context…


References

  1. T.R. Karl, A. Arguez, B. Huang, J.H. Lawrimore, J.R. McMahon, M.J. Menne, T.C. Peterson, R.S. Vose, and H. Zhang, "Possible artifacts of data biases in the recent global surface warming hiatus", Science, vol. 348, pp. 1469-1472, 2015. http://dx.doi.org/10.1126/science.aaa5632
  2. B. Huang, V.F. Banzon, E. Freeman, J. Lawrimore, W. Liu, T.C. Peterson, T.M. Smith, P.W. Thorne, S.D. Woodruff, and H. Zhang, "Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons", Journal of Climate, vol. 28, pp. 911-930, 2015. http://dx.doi.org/10.1175/JCLI-D-14-00006.1

Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons

2012 Updates to model-observation comparisons

7 Feb 2013 by Gavin

Time for the 2012 updates!

As has become a habit (2009, 2010, 2011), here is a brief overview and update of some of the most discussed model/observation comparisons, updated to include 2012. I include comparisons of surface temperatures, sea ice and ocean heat content to the CMIP3 and Hansen et al (1988) simulations.

Filed Under: Aerosols, Arctic and Antarctic, Climate modelling, Climate Science, El Nino, Greenhouse gases, Instrumental Record, Model-Obs Comparisons

2011 Updates to model-data comparisons

8 Feb 2012 by Gavin

And so it goes – another year, another annual data point. As has become a habit (2009, 2010), here is a brief overview and update of some of the most relevant model/data comparisons. We include the standard comparisons of surface temperatures, sea ice and ocean heat content to the AR4 and 1988 Hansen et al simulations.

Filed Under: Climate modelling, Climate Science, El Nino, Greenhouse gases, Instrumental Record, Model-Obs Comparisons

2010 updates to model-data comparisons

21 Jan 2011 by Gavin

As we did roughly a year ago (and as we will probably do every year around this time), we can add another data point to a set of reasonably standard model-data comparisons that have proven interesting over the years.

Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons

Updates to model-data comparisons

28 Dec 2009 by Gavin


It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?


Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons
