{"id":19675,"date":"2016-10-30T07:38:10","date_gmt":"2016-10-30T12:38:10","guid":{"rendered":"http:\/\/www.realclimate.org\/?p=19675"},"modified":"2016-10-30T07:38:10","modified_gmt":"2016-10-30T12:38:10","slug":"tuning-in-to-climate-models","status":"publish","type":"post","link":"https:\/\/www.realclimate.org\/index.php\/archives\/2016\/10\/tuning-in-to-climate-models\/","title":{"rendered":"Tuning in to climate models"},"content":{"rendered":"<div class=\"kcite-section\" kcite-section-id=\"19675\">\n<p>There is an interesting <a href=\"http:\/\/science.sciencemag.org\/content\/354\/6311\/401\">news article ($)<\/a> in <em>Science<\/em> this week by Paul Voosen on the increasing amount of transparency on climate model tuning. (Full disclosure: I spoke to him a couple of times for this article, and I&#8217;m working on a tuning description paper for the US climate modeling centers). The main points of the article are worth highlighting here, even if a few of the characterizations are slightly off. <\/p>\n<p><!--more--><\/p>\n<p>The basic thrust of the article is that climate modeling groups are making significant efforts to increase the transparency and availability of model tuning processes for the next round of intercomparisons (<a href=\"https:\/\/www.wcrp-climate.org\/wgcm-cmip\/wgcm-cmip6\">CMIP6<\/a>). This partly stems from a paper from the MPI-Hamburg group (<span id=\"cite_ITEM-19675-0\" name=\"citation\"><a href=\"#ITEM-19675-0\">Mauritsen et al., 2012<\/a><\/span>), which was perhaps the first article to concentrate solely on the tuning process and the impact it has on important behaviour of the model (such as its sensitivity to increasing CO<sub>2<\/sub>). That isn&#8217;t to say that details of tunings were not discussed previously, but the tendency was to describe them briefly in the model description papers (such as <span id=\"cite_ITEM-19675-1\" name=\"citation\"><a href=\"#ITEM-19675-1\">Schmidt et al. 
(2006)<\/a><\/span> for the GISS model). Some discussion has appeared in <a href=\"https:\/\/www.ipcc.ch\/publications_and_data\/ar4\/wg1\/en\/ch8s8-1-3.html\">IPCC reports<\/a> too (h\/t Gareth Jones), but not in much depth. Thus useful information was hard to collate and compare across all model groups, and it turns out that this matters. <\/p>\n<p>For instance, if an analysis of the model ensemble tries to weight models based on their skill compared to observations, it is obviously important to know whether a model group tuned their model to achieve a good result or whether it arose naturally from the basic physics. In a more general sense this relates to whether &#8220;data accommodation&#8221; improves a model&#8217;s predictive skill or not. This is quite subtle though &#8211; weather forecast models obviously do better if they have initial conditions that are closer to the observations, and one might argue that for particular climate model predictions that are strongly dependent on the base climatology (such as for Arctic sea ice) tuning to the climatology will be worthwhile. The nature of the tuning also matters: allowing an uncertain parameter to vary within reasonable bounds and picking the value that gives the best result is quite different to inserting completely artificial fluxes to correct for biases. Both have been done historically, but the latter is now much rarer. <\/p>\n<p>A recent summary paper in BAMS <span id=\"cite_ITEM-19675-2\" name=\"citation\"><a href=\"#ITEM-19675-2\">(Hourdin et al., 2016)<\/a><\/span> discussed current practices and gave results from a survey of the modeling groups. In that survey, it was almost universal that groups tuned for radiation balance at the top of the atmosphere (usually by adjusting uncertain cloud parameters), but there is a split on practices like using flux corrections (2\/3rds of groups disagreed with that). 
This figure gives some more details: <\/p>\n<p><center><br \/>\n<a href=\"\/images\/hourdin_S7.png\"><img src=\"\/images\/hourdin_S7.png\" width=\"80%\" \/><\/a><br \/>\n<small><i>Summary results on tuning practices from the survey of CMIP5 modeling groups published in Hourdin et al. (2016).<\/i><\/small><br \/>\n<\/center><\/p>\n<p>The <em>Science<\/em> article does, though, make some claims that I don&#8217;t think are correct. I assume these statements are paraphrases from scientists the writer talked to, but they would have been better as quotes rather than generalisations. For instance, the article claims that <\/p>\n<blockquote><p>&#8220;&#8230; climate modelers [will now] openly discuss and document tuning in ways that they had long avoided, fearing criticism by climate skeptics.<br \/>\n&#8230;<br \/>\nThe taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models\u2014 and, by extension, the reality of human driven warming. \u201cThe community became defensive,\u201d [Bjorn] Stevens says. \u201cIt was afraid of talking about things that they thought could be unfairly used against them.\u201d\n<\/p><\/blockquote>\n<p>This is, I think, demonstrably untrue, since tuning has been discussed widely in papers, including here on <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2008\/11\/faq-on-climate-models\/\">RealClimate<\/a>. Perhaps it does reflect some people&#8217;s opinion, but it is not true generally. <\/p>\n<p>The targets for tuning vary across groups, and again, it matters which you pick. 
Tuning to the seasonal cycle, or to the climatological average, or to the variance of some field &#8211; which can be well characterised from observations &#8211; is different to tuning to a transient change over time &#8211; which is often less well known. Indeed, many groups specifically leave transient changes out of their tuning procedures in order to maintain those trends for out-of-sample evaluation of the model (approximately half the groups according to the Hourdin et al. survey). <\/p>\n<p>The article says something a little ambiguous on this:<\/p>\n<blockquote><p>\nIndeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records\u2014otherwise it would have ended up in the trash. \u201cIt\u2019s fair to say all models have tuned it,\u201d says Isaac Held.\n<\/p><\/blockquote>\n<p>Does that mean the global mean surface temperature trends over the 20th Century, or just that some 20th Century data is used? And what does &#8216;precisely&#8217; mean in this context? The spread of 20th Century trends (1900-1999) in the CMIP5 simulations, [0.25,1.17]\u00baC, is clearly too broad to be the result of precisely tuning anything! On a similar issue, the article contains an example of the MPI-Hamburg model being tuned to avoid a 7\u00baC sensitivity. That is probably justified since there is plenty of evidence to rule out such a high value, but tuning to a specific value (albeit within the nominal range of 2 to 4.5\u00baC) is not justified. My experience is that most groups do not &#8216;precisely&#8217; tune their models to 20th Century trends or climate sensitivity, but given this example and the Hourdin results, more clarity on exactly what is done (whether explicitly or implicitly) is needed. <\/p>\n<p>One odd comment relates to the UK Met Office\/Hadley Centre models:<\/p>\n<blockquote><p>\nProprietary concerns also get in the way. 
For example, the United Kingdom\u2019s Met Office sells weather forecasts driven by its climate model. Disclosing too much about its code could encourage copycats and jeopardize its business.\n<\/p><\/blockquote>\n<p>It would be worrying if the centers didn&#8217;t discuss tuning in the science literature through fear of commercial rivals, and I don&#8217;t think this really characterises the Hadley Centre position. Some groups&#8217; code (incl. the Hadley Centre&#8217;s) is, however, restricted for various reasons, though I personally see that as an unsustainable position in the long term if groups want to partake in international model intercomparisons that will be used for public policy. <\/p>\n<p>The article ends on an interesting note:<\/p>\n<blockquote><p>Daniel Williamson, a statistician at the University of Exeter in the United Kingdom, says that centers should submit multiple versions of their models for comparison, each representing a different tuning strategy. The current method obscures uncertainty and inhibits improvement, he says. \u201cOnce people start being open, we can do it better.\u201d\n<\/p><\/blockquote>\n<p>I think this is exactly right. We should be using alternate tunings to expand the representation of structural uncertainty in the ensemble, and I hope many of the groups will take this opportunity to do so.<\/p>\n<h2>References<\/h2>\n    <ol>\n    <li><a name='ITEM-19675-0'><\/a>\nT. Mauritsen, B. Stevens, E. Roeckner, T. Crueger, M. Esch, M. Giorgetta, H. Haak, J. Jungclaus, D. Klocke, D. Matei, U. Mikolajewicz, D. Notz, R. Pincus, H. Schmidt, and L. Tomassini, \"Tuning the climate of a global model\", <i>Journal of Advances in Modeling Earth Systems<\/i>, vol. 4, 2012. <a href=\"http:\/\/dx.doi.org\/10.1029\/2012MS000154\">http:\/\/dx.doi.org\/10.1029\/2012MS000154<\/a>\n\n\n<\/li>\n<li><a name='ITEM-19675-1'><\/a>\nG.A. Schmidt, R. Ruedy, J.E. Hansen, I. Aleinov, N. Bell, M. Bauer, S. Bauer, B. Cairns, V. Canuto, Y. Cheng, A. 
Del Genio, G. Faluvegi, A.D. Friend, T.M. Hall, Y. Hu, M. Kelley, N.Y. Kiang, D. Koch, A.A. Lacis, J. Lerner, K.K. Lo, R.L. Miller, L. Nazarenko, V. Oinas, J. Perlwitz, J. Perlwitz, D. Rind, A. Romanou, G.L. Russell, M. Sato, D.T. Shindell, P.H. Stone, S. Sun, N. Tausnev, D. Thresher, and M. Yao, \"Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data\", <i>Journal of Climate<\/i>, vol. 19, pp. 153-192, 2006. <a href=\"http:\/\/dx.doi.org\/10.1175\/JCLI3612.1\">http:\/\/dx.doi.org\/10.1175\/JCLI3612.1<\/a>\n\n\n<\/li>\n<li><a name='ITEM-19675-2'><\/a>\nF. Hourdin, T. Mauritsen, A. Gettelman, J. Golaz, V. Balaji, Q. Duan, D. Folini, D. Ji, D. Klocke, Y. Qian, F. Rauser, C. Rio, L. Tomassini, M. Watanabe, and D. Williamson, \"The Art and Science of Climate Model Tuning\", <i>Bulletin of the American Meteorological Society<\/i>, vol. 98, pp. 589-602, 2017. <a href=\"http:\/\/dx.doi.org\/10.1175\/BAMS-D-15-00135.1\">http:\/\/dx.doi.org\/10.1175\/BAMS-D-15-00135.1<\/a>\n\n\n<\/li>\n<\/ol>\n\n<\/div> <!-- kcite-section 19675 -->","protected":false},"excerpt":{"rendered":"<p>There is an interesting news article ($) in Science this week by Paul Voosen on the increasing amount of transparency on climate model tuning. (Full disclosure: I spoke to him a couple of times for this article, and I&#8217;m working on a tuning description paper for the US climate modeling centers). 
The main points of the [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[5,1,24],"tags":[],"class_list":{"0":"post-19675","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-climate-modelling","7":"category-climate-science","8":"category-reporting-on-climate","9":"entry"},"aioseo_notices":[],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/19675","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/comments?post=19675"}],"version-history":[{"count":11,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/19675\/revisions"}],"predecessor-version":[{"id":19687,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/19675\/revisions\/19687"}],"wp:attachment":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/media?parent=19675"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/categories?post=19675"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.realclimate.
org\/index.php\/wp-json\/wp\/v2\/tags?post=19675"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}