{"id":25443,"date":"2024-01-28T19:33:16","date_gmt":"2024-01-29T00:33:16","guid":{"rendered":"https:\/\/www.realclimate.org\/?p=25443"},"modified":"2024-01-31T23:26:33","modified_gmt":"2024-02-01T04:26:33","slug":"spencers-shenanigans","status":"publish","type":"post","link":"https:\/\/www.realclimate.org\/index.php\/archives\/2024\/01\/spencers-shenanigans\/","title":{"rendered":"Spencer&#8217;s Shenanigans"},"content":{"rendered":"<div class=\"kcite-section\" kcite-section-id=\"25443\">\n\n<p><strong>A <a href=\"https:\/\/www.heritage.org\/environment\/report\/global-warming-observations-vs-climate-models\" title=\"recent sensible-sounding piece\">recent sensible-sounding piece<\/a> by Roy Spencer for the Heritage foundation is full of misrepresentations. Let&#8217;s play spot the fallacy.<\/strong><\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Comparing climate models to observations is usually a great idea, but there are some obvious pitfalls to avoid if you want to be taken seriously. The most obvious one is to neglect the impacts of internal variability &#8211; which is not synchronized across the models or with the observations. The second is to avoid cherry picking your comparison &#8211; there is always a spread of results by just looking at one small region, in one season, in one metric, so it&#8217;s pretty easy to fool yourself (and others!) if you find something that doesn&#8217;t match. The third is to ignore what the rest of the community has already done to deal with what may be real issues. 
Spencer fails to avoid each one of these.<\/p>\n\n\n\n<p><strong>Where&#8217;s the model spread, Roy?<\/strong><\/p>\n\n\n\n<p>The first figure in Spencer&#8217;s article is the following &#8211; which I have annotated.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" width=\"600\" height=\"475\" data-src=\"https:\/\/www.realclimate.org\/images\/\/spencer_fig1_annotated-1-600x475.png\" alt=\"Annotated version of Spencer's Fig 1, commenting on the incorrect description of the y axis for the models, the lack of model spread, and lack of screening. \" class=\"wp-image-25445 lazyload\" data-srcset=\"https:\/\/www.realclimate.org\/images\/spencer_fig1_annotated-1-600x475.png 600w, https:\/\/www.realclimate.org\/images\/spencer_fig1_annotated-1-300x238.png 300w, https:\/\/www.realclimate.org\/images\/spencer_fig1_annotated-1.png 884w\" data-sizes=\"(max-width: 600px) 100vw, 600px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/475;\" \/><\/figure>\n<\/div>\n\n\n<p>You can see the impact of his choices by comparing to this similar figure from <a href=\"https:\/\/www.realclimate.org\/index.php\/archives\/2024\/01\/not-just-another-dot-on-the-graph-part-ii\/\" title=\"Not just another dot on the graph? Part II\">our annual update<\/a>:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" width=\"600\" height=\"417\" data-src=\"https:\/\/www.realclimate.org\/images\/\/cmp_cmip6_nice-600x417.png\" alt=\"Comparison of global mean SAT from GISTEMP and the CMIP6 models, means and spreads, including a subselection based on TCR. 
\" class=\"wp-image-25405 lazyload\" data-srcset=\"https:\/\/www.realclimate.org\/images\/cmp_cmip6_nice-600x417.png 600w, https:\/\/www.realclimate.org\/images\/cmp_cmip6_nice-300x208.png 300w, https:\/\/www.realclimate.org\/images\/cmp_cmip6_nice-1536x1066.png 1536w, https:\/\/www.realclimate.org\/images\/cmp_cmip6_nice-2048x1422.png 2048w\" data-sizes=\"(max-width: 600px) 100vw, 600px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/417;\" \/><\/figure>\n<\/div>\n\n\n<p>Our figure is using annual mean data rather than monthly (which is less noisy). First, the baseline is what it says on the box &#8211; there isn&#8217;t an extra adjustment to exaggerate the difference in trends. Second, you can see the spread of the models and see that the observations are well within it. Third, the impact of <a href=\"https:\/\/www.realclimate.org\/index.php\/archives\/2021\/08\/notallmodels\/\" title=\"model selection\">model selection<\/a> &#8211; that screens the models by their transient climate sensitivity <span id=\"cite_ITEM-25443-0\" name=\"citation\"><a href=\"#ITEM-25443-0\">Hausfather et al., 2022<\/a><\/span> &#8211; is also clear (the difference between the pink and grey bands). To be quantitative, the observed trend from 1980 0.20\u00b10.02\u00baC\/dec (95% CI on the OLS trend). The full multi-model mean and spread is 0.26\u00baC\/dec [0.16,0.46], while for the screened subset it&#8217;s 0.23\u00baC\/dec [0.16,0.31]. Note that the SAT\/SST blend in the observations makes a small difference, as would a different recipe for creating the mean from the individual simulations. <\/p>\n\n\n\n<p>To conclude, the observations lie completely within the spread of the models, and if you screen them based on an independently constrained sensitivity, the fit is very close. 
<strong>Reality 1: Spencer 0.<\/strong><\/p>\n\n\n\n<p><strong>Cherry-picking season<\/strong><\/p>\n\n\n\n<p>Spencer&#8217;s second figure reflects a more classic fallacy. The cherry pick.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-large is-resized\"><img decoding=\"async\" width=\"337\" height=\"600\" data-src=\"https:\/\/www.realclimate.org\/images\/\/spencer_fig2_annotated-337x600.png\" alt=\"Annotated version of Spencer's Fig 2, commenting on the small area and single season cherry pick. \" class=\"wp-image-25446 lazyload\" style=\"--smush-placeholder-width: 337px; --smush-placeholder-aspect-ratio: 337\/600;width:414px;height:auto\" data-srcset=\"https:\/\/www.realclimate.org\/images\/spencer_fig2_annotated-337x600.png 337w, https:\/\/www.realclimate.org\/images\/spencer_fig2_annotated-169x300.png 169w, https:\/\/www.realclimate.org\/images\/spencer_fig2_annotated.png 734w\" data-sizes=\"(max-width: 337px) 100vw, 337px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><\/figure>\n<\/div>\n\n\n<p>In this comparison, it suits Spencer&#8217;s purpose to include individual models, basically because he&#8217;s skewed the playing field. Why is this only showing summer data, for 12 US states (I think Iowa, Illinois, Indiana, Michigan, Ohio, Nebraska, Kansas, Minnesota, Missouri, South Dakota, North Dakota, and Wisconsin) and for the odd time period of 1973-2022? What about other seasons and regions? [Curiously, 14 out of the 36 models shown would have been screened out by the approach discussed in our Nature commentary]. We can perhaps gain some insight by plotting the global summer trends from GISTEMP (though it doesn&#8217;t really matter which observational data set you use). In that figure, you can see that there is a minimum in the warming just to the south and west of the Great Lakes &#8211; corresponding pretty exactly to the region Spencer selected. 
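Regional numbers like this are easy to check yourself given any gridded trend field: average over the box, weighting each cell by the cosine of its latitude. A sketch, with a uniform made-up field standing in for the real GISTEMP trends and an illustrative box roughly covering those states:

```python
import numpy as np

def box_mean(field, lats, lons, lat_bounds, lon_bounds):
    """Area-weighted mean of a gridded field over a lat/lon box.

    Cells are weighted by cos(latitude), the standard
    approximation for an equal-angle grid.
    """
    lat_mask = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    lon_mask = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    sub = field[np.ix_(lat_mask, lon_mask)]
    w = np.cos(np.deg2rad(lats[lat_mask]))[:, None] * np.ones(lon_mask.sum())
    return float(np.sum(sub * w) / np.sum(w))

# Toy 2-degree grid with a uniform 0.2 C/dec trend everywhere
lats = np.arange(-89, 90, 2.0)
lons = np.arange(-179, 180, 2.0)
trend = np.full((lats.size, lons.size), 0.2)
# Illustrative Corn Belt-ish box: 37-49N, 104-80W
corn_belt = box_mean(trend, lats, lons, (37, 49), (-104, -80))
print(corn_belt)  # ~0.2 for this uniform field
```

With the actual trend field, the same box average recovers the local minimum discussed here.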
The warming rate there (around 0.12\u00baC\/dec) is close to the minimum trend for northern mid-latitudes and about half of what you would have got for the Pacific Northwest, or the Southwest, let alone anywhere in Europe! Therefore it&#8217;s the spot most conducive to showing the models overstating warming &#8211; anywhere else would not have had the same impact. <strong>Reality 2: Spencer 0.<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" width=\"600\" height=\"396\" data-src=\"https:\/\/www.realclimate.org\/images\/\/gistemp_jja_1973-2022-2-600x396.png\" alt=\"Spatial pattern of GISTEMP trends from 1973-2022 showing a minimum warming in the US Corn Belt region. \" class=\"wp-image-25450 lazyload\" data-srcset=\"https:\/\/www.realclimate.org\/images\/gistemp_jja_1973-2022-2-600x396.png 600w, https:\/\/www.realclimate.org\/images\/gistemp_jja_1973-2022-2-300x198.png 300w, https:\/\/www.realclimate.org\/images\/gistemp_jja_1973-2022-2.png 896w\" data-sizes=\"(max-width: 600px) 100vw, 600px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/396;\" \/><\/figure>\n<\/div>\n\n\n<p><strong>Back to the future<\/strong><\/p>\n\n\n\n<p>Spencer&#8217;s third figure is a variation on an <a href=\"https:\/\/www.realclimate.org\/index.php\/archives\/2016\/05\/comparing-models-to-the-satellite-datasets\/\" title=\"Comparing models to the satellite datasets\">old theme<\/a>. Again, there is no indication that there is a spread in the models, only limited spread in the observations, and no indication that there is an appropriate selection to be made. 
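The selection in question is the TCR-based screening used earlier. A toy sketch of how such screening narrows an ensemble spread: the TCR/trend pairs below are invented for illustration (not actual CMIP6 values), and the 1.4&#8211;2.2\u00baC window is my reading of the IPCC AR6 'likely' TCR range used for this kind of screening:

```python
import numpy as np

# Hypothetical ensemble: (model TCR in C, model trend in C/dec).
# All values are invented for illustration only.
ensemble = [
    (1.6, 0.21), (1.8, 0.24), (2.1, 0.27), (2.6, 0.35),
    (1.5, 0.19), (3.0, 0.42), (2.0, 0.25), (2.4, 0.31),
]

TCR_RANGE = (1.4, 2.2)  # assumed 'likely' range used for screening

def spread(trends):
    """Mean and 2.5-97.5 percentile spread of a list of trends."""
    t = np.asarray(trends)
    lo, hi = np.percentile(t, [2.5, 97.5])
    return t.mean(), lo, hi

all_trends = [t for _, t in ensemble]
screened = [t for tcr, t in ensemble if TCR_RANGE[0] <= tcr <= TCR_RANGE[1]]

print("all:      mean %.2f [%.2f, %.2f]" % spread(all_trends))
print("screened: mean %.2f [%.2f, %.2f]" % spread(screened))
```

Dropping the high-TCR models pulls both the mean and the upper bound down, which is the qualitative effect seen in the pink versus grey bands above.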
<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img decoding=\"async\" width=\"600\" height=\"514\" data-src=\"https:\/\/www.realclimate.org\/images\/\/spencer_fig3_annotated-600x514.png\" alt=\"Annotated version of Spencer's Fig 3, commenting on the incorrect description of the y axis for the models, the lack of model and observational spread, and lack of screening. \" class=\"wp-image-25451 lazyload\" data-srcset=\"https:\/\/www.realclimate.org\/images\/spencer_fig3_annotated-600x514.png 600w, https:\/\/www.realclimate.org\/images\/spencer_fig3_annotated-300x257.png 300w, https:\/\/www.realclimate.org\/images\/spencer_fig3_annotated.png 884w\" data-sizes=\"(max-width: 600px) 100vw, 600px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/514;\" \/><\/figure>\n<\/div>\n\n\n<p>A better comparison would show the model spread, have a less distorting baseline, and show the separate TLT datasets. Something like this perhaps:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img decoding=\"async\" width=\"600\" height=\"419\" data-src=\"https:\/\/www.realclimate.org\/images\/\/cmp_global_tlt-600x419.png\" alt=\"Comparison of global mean TLT from multiple observations and the CMIP6 models, means and spreads, including a subselection based on TCR. 
\" class=\"wp-image-25453 lazyload\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/419;width:712px;height:auto\" data-srcset=\"https:\/\/www.realclimate.org\/images\/cmp_global_tlt-600x419.png 600w, https:\/\/www.realclimate.org\/images\/cmp_global_tlt-300x209.png 300w, https:\/\/www.realclimate.org\/images\/cmp_global_tlt-1536x1072.png 1536w, https:\/\/www.realclimate.org\/images\/cmp_global_tlt-2048x1429.png 2048w\" data-sizes=\"(max-width: 600px) 100vw, 600px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><figcaption class=\"wp-element-caption\">Observed trends: 0.21, 0.14, and 0.14\u00baC\/dec. Model trends: 0.29\u00baC (all models) [0.20,0.46] 95% spread, 0.27\u00baC\/dec (screened) [0.20,0.34].<\/figcaption><\/figure>\n<\/div>\n\n\n<p>Now, this is the exact same model data that Spencer is using (from <span id=\"cite_ITEM-25443-1\" name=\"citation\"><a href=\"#ITEM-25443-1\">McKitrick and Christy (2020)<\/a><\/span> (though the screening uses the TCR from our paper), and updated TLT satellite data. This does show a larger discrepancy than at the surface (and only a minor improvement from the screening) suggesting that there is something a bit different about the TLT metric &#8211; but far less than Spencer implies. So, <strong>Reality 3: Spencer 0.<\/strong><\/p>\n\n\n\n<p><strong>Bottom lines<\/strong><\/p>\n\n\n\n<p>One final point. I don&#8217;t criticize Spencer (and Christy before him) because of any tribal or personal animosity, but rather it is because appropriate comparisons between models and observations are the <em>only <\/em>way to see what we need to work on and where there are remaining problems. The key word is &#8216;appropriate&#8217; &#8211; if that isn&#8217;t done we risk overfitting on poorly constrained observations, or looking in the wrong places for where the issues may lie. 
Readers may recall that <a href=\"https:\/\/www.realclimate.org\/index.php\/archives\/2023\/02\/2022-updates-to-model-observation-comparisons\/\" title=\"we showed\">we showed<\/a> that a broader exploration of the structural variations in the models (including better representations of the stratosphere and ozone effects, not included in the McKitrick and Christy selection) can make a big difference to these metrics <span id=\"cite_ITEM-25443-2\" name=\"citation\"><a href=\"#ITEM-25443-2\">(Casas et al., 2022)<\/a><\/span>. <\/p>\n\n\n\n<p>Spencer&#8217;s shenanigans are designed to mislead readers about the likely sources of any discrepancies and to imply that climate modelers are uninterested in such comparisons &#8211; and he is wrong on both counts.<\/p>\n\n\n\n<p><strong>Postscript [1\/31\/2024]<\/strong> Spencer has <a href=\"https:\/\/www.drroyspencer.com\/2024\/01\/spencer-vs-schmidt-my-response-to-realclimate-org-criticisms\/\" title=\"responded\">responded<\/a> on his blog and seems disappointed that I didn&#8217;t criticize every single claim that he made, but only focused on the figures. What can I say? Time is precious! But lest someone claim that these points are implicitly correct because I didn&#8217;t refute them, here&#8217;s a quick rundown of why the ones he now highlights are wrong as well. (Note that there is far more that is wrong in his article, but <a href=\"https:\/\/en.wikipedia.org\/wiki\/Brandolini%27s_law\" title=\"Brandolini's law\">Brandolini&#8217;s law<\/a> applies, and I just don&#8217;t have the energy). Here goes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>1.1 He agrees with me.<\/li>\n\n\n\n<li>1.2 Spencer&#8217;s new graph shows that the observations are not distinguishable from the screened model ensemble. Which is what I said.<\/li>\n\n\n\n<li>1.3 Spencer is backtracking from his original claim that models overpredict warming to now saying only that the SAT observations are near the lower end of the model spread. 
Sure. But the <a href=\"https:\/\/www.realclimate.org\/images\/cmip6_sst_trends-1536x1536.png\" title=\"SST observations \">SST observations <\/a>are nearer the higher end. Does that mean that the models underpredict? Or does it mean that there is noise in the comparisons from multiple sources and that expecting free-running models with their own internal variability to perfectly match all observations would be overfitting?<\/li>\n\n\n\n<li>1.4 Quantitative trends don&#8217;t depend on baselines of course, but aligning the curves so that the trends all have the same starting point in 1979 <a href=\"https:\/\/www.realclimate.org\/images\/christy_baseline.png\" title=\"maximises the visual discrepancy.\">maximises the visual discrepancy.<\/a> This leads to an incoherent y-axis (bet you can&#8217;t describe it succinctly!) and errors like in point 1.1. If Spencer just wanted to show the trends, he should just show the trends (and their uncertainty)!<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img decoding=\"async\" data-src=\"https:\/\/www.realclimate.org\/images\/\/sat_cmip6_trends-600x468.png\" alt=\"Histogram of SAT trends in CMIP6 models, showing the full ensemble and the TCR-screened subset, along with the trends from GISTEMP. \" class=\"wp-image-25464 lazyload\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/468;width:490px;height:auto\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><\/figure>\n<\/div>\n\n\n<ul class=\"wp-block-list\">\n<li>2.1 Spencer is pretending here to be uniquely concerned about agriculture to justify his cherry-picking. I&#8217;ll happily withdraw my suggestion that this is just a cover for finding somewhere with lower warming when he does a weighted average of all soy or corn growing regions worldwide. 
I&#8217;ll wait.<\/li>\n\n\n\n<li>3.1 Despite claiming in 1.4 that baselines don&#8217;t matter, Spencer now calls my choice in fig 3 above &#8216;untrustworthy&#8217; because it contextualizes the discrepancy he wants to exaggerate. But again, if you want to just show trends, just show trends.<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img decoding=\"async\" width=\"600\" height=\"468\" data-src=\"https:\/\/www.realclimate.org\/images\/\/tlt_cmip6_trends-600x468.png\" alt=\"Histogram of TLT trends in CMIP6 models, showing the full ensemble and the TCR-screened subset, along with the trends from RSS, UAH and NOAA STAR.\" class=\"wp-image-25463 lazyload\" style=\"--smush-placeholder-width: 600px; --smush-placeholder-aspect-ratio: 600\/468;width:490px;height:auto\" data-srcset=\"https:\/\/www.realclimate.org\/images\/tlt_cmip6_trends-600x468.png 600w, https:\/\/www.realclimate.org\/images\/tlt_cmip6_trends-300x234.png 300w, https:\/\/www.realclimate.org\/images\/tlt_cmip6_trends-1536x1198.png 1536w, https:\/\/www.realclimate.org\/images\/tlt_cmip6_trends-2048x1598.png 2048w\" data-sizes=\"(max-width: 600px) 100vw, 600px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" \/><\/figure>\n<\/div>\n\n\n<ul class=\"wp-block-list\">\n<li>4. A claim that the observed EEI could be natural (without any actual evidence) is just nonsense on stilts. The <a href=\"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00376-024-3378-5.pdf\" title=\"current energy imbalance\">current energy imbalance<\/a> is clear (via the increases in ocean heat content) and accelerating, and is totally incompatible with internal variability. 
It additionally cannot be due to solar or other natural forcings because of the <a href=\"https:\/\/svs.gsfc.nasa.gov\/4908\/\" title=\"fingerprint of changes\">fingerprint of changes<\/a> in the <a href=\"https:\/\/www.realclimate.org\/images\/cmp_cmip6_ssu-1536x1536.png\" title=\"stratosphere\">stratosphere<\/a>.<\/li>\n\n\n\n<li>5. Constraints on climate sensitivity are not determined from what the models do, but rather from multiple independent lines of observational evidence (historical, process-based and via paleo-climate). We even <a href=\"https:\/\/agupubs.onlinelibrary.wiley.com\/doi\/10.1029\/2019RG000678\" title=\"wrote a paper\">wrote a paper<\/a> about it.<\/li>\n\n\n\n<li>6. Do climate models conserve mass and energy? Yes. I know this to be a fact for the GISS model since I personally spent a lot of time making sure of it. I can&#8217;t vouch for every single other model, but I will note that the CMIP diagnostics are often not sufficient to test this to a suitable precision &#8211; due to slight misspecifications, incompleteness, interpolation, etc. Additionally, people often confuse non-conservation with the <em>drift<\/em> in, say, the deep ocean or soil carbon (because of the very long timescales involved), but these things are not the same. Drift can occur even with perfect conservation since full equilibrium takes thousands of years of runtime and sometimes pre-industrial control runs are not that long. The claim in the <a href=\"https:\/\/journals.ametsoc.org\/view\/journals\/clim\/34\/8\/JCLI-D-20-0281.1.xml\" title=\"paper Spencer cited \">paper Spencer cited <\/a>that no model has a closed water cycle in the atmosphere is simply unbelievable (and it might be worth exploring why they get this result). 
To be fair, energy conservation is actually <a href=\"https:\/\/agupubs.onlinelibrary.wiley.com\/doi\/10.1029\/2022MS003117\" title=\"quite complicated \">quite complicated <\/a>and there are multiple efforts to improve the specification of the thermodynamics so that the models&#8217; conserved quantities can get closer to those in the real world, but these are all second order or smaller effects.<\/li>\n<\/ul>\n\n\n\n<p>Hopefully Roy is happy now. <\/p>\n\n\n\n<p>   <\/p>\n<h2>References<\/h2>\n    <ol>\n    <li><a name='ITEM-25443-0'><\/a>\nZ. Hausfather, K. Marvel, G.A. Schmidt, J.W. Nielsen-Gammon, and M. Zelinka, \"Climate simulations: recognize the \u2018hot model\u2019 problem\", <i>Nature<\/i>, vol. 605, pp. 26-29, 2022. <a href=\"http:\/\/dx.doi.org\/10.1038\/d41586-022-01192-2\">http:\/\/dx.doi.org\/10.1038\/d41586-022-01192-2<\/a>\n\n\n<\/li>\n<li><a name='ITEM-25443-1'><\/a>\nR. McKitrick, and J. Christy, \"Pervasive Warming Bias in CMIP6 Tropospheric Layers\", <i>Earth and Space Science<\/i>, vol. 7, 2020. <a href=\"http:\/\/dx.doi.org\/10.1029\/2020EA001281\">http:\/\/dx.doi.org\/10.1029\/2020EA001281<\/a>\n\n\n<\/li>\n<li><a name='ITEM-25443-2'><\/a>\nM.C. Casas, G.A. Schmidt, R.L. Miller, C. Orbe, K. Tsigaridis, L.S. Nazarenko, S.E. Bauer, and D.T. Shindell, \"Understanding Model\u2010Observation Discrepancies in Satellite Retrievals of Atmospheric Temperature Using GISS ModelE\", <i>Journal of Geophysical Research: Atmospheres<\/i>, vol. 128, 2022. <a href=\"http:\/\/dx.doi.org\/10.1029\/2022JD037523\">http:\/\/dx.doi.org\/10.1029\/2022JD037523<\/a>\n\n\n<\/li>\n<\/ol>\n\n<\/div> <!-- kcite-section 25443 -->","protected":false},"excerpt":{"rendered":"<p>A recent sensible-sounding piece by Roy Spencer for the Heritage foundation is full of misrepresentations. 
Let&#8217;s play spot the fallacy.<\/p>\n","protected":false},"author":2,"featured_media":25445,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[5,1,75,9,23],"tags":[90,122,140],"class_list":{"0":"post-25443","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-climate-modelling","8":"category-climate-science","9":"category-featured-story","10":"category-instrumental-record","11":"category-ipcc","12":"tag-cmip6","13":"tag-msu","14":"tag-roy-spencer","15":"entry"},"aioseo_notices":[],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/25443","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/comments?post=25443"}],"version-history":[{"count":4,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/25443\/revisions"}],"predecessor-version":[{"id":25465,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/25443\/revisions\/25465"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/media\/25445"}],"wp:attachment":[{"href":"https:\/\/www.realclimate.
org\/index.php\/wp-json\/wp\/v2\/media?parent=25443"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/categories?post=25443"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/tags?post=25443"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}