{"id":587,"date":"2008-08-10T09:29:36","date_gmt":"2008-08-10T14:29:36","guid":{"rendered":"http:\/\/www.realclimate.org\/index.php\/archives\/2008\/08\/hypothesis-testing-and-long-term-memory\/langswitch_lang\/in"},"modified":"2008-12-01T10:20:37","modified_gmt":"2008-12-01T15:20:37","slug":"hypothesis-testing-and-long-term-memory","status":"publish","type":"post","link":"https:\/\/www.realclimate.org\/index.php\/archives\/2008\/08\/hypothesis-testing-and-long-term-memory\/","title":{"rendered":"Hypothesis testing and long range memory"},"content":{"rendered":"<div class=\"kcite-section\" kcite-section-id=\"587\">\n<p>What is the actual hypothesis you are testing when you compare a model to an observation? It is not a simple as &#8216;is the model any good&#8217; &#8211; though many casual readers might assume so. Instead, it is a test of a whole set of assumptions that went into building the model, the forces driving it, and the assumptions that went in to what is presented as the observations. A mismatch between them can arise from a mis-specification of any of these components and climate science is full of examples where reported mismatches ended up being due to problems in the observations or forcing functions rather than the models (ice age tropical ocean temperatures, the MSU records etc.). Conversely of course, there are clear cases where the models are wrong (the double ITCZ problem) and where the search for which assumptions in the model are responsible is ongoing.<\/p>\n<p><!--more--><\/p>\n<p>As we have <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2008\/01\/uncertainty-noise-and-the-art-of-model-data-comparison\/\">discussed<\/a>, there is a skill required in comparing models to observations in ways that are most productive, and that requires a certain familiarity with the history of climate and weather models. For instance, it is well known that the individual trajectory of the weather is chaotic (in models this is provable; in the real world, just very likely) and unpredictable after a couple of weeks. So comparing the real weather at a point with a model simulation outside of a weather forecast context is not going to be useful. You can see this by specifying exactly what the hypothesis is you are testing in performing such a comparison in a climate model &#8211; i.e. &#8220;is a model&#8217;s individual weather correlated to the weather in the real world (given the assumptions of the model and no input of actual weather data)&#8221;. There will be a mismatch between model and observation, but nothing of interest will have been learnt because we already know that the weather in the model is chaotic. <\/p>\n<p>Hypotheses are much more useful if you expect that there will be a match; a mismatch is then much more surprising. Your expectations are driven by past experience and are informed by a basic understanding of the physics. For instance, given the physics of sulphate aerosols in the stratosphere (short wave reflectors, long wave absorbers), it would be surprising if putting in the aerosols seen during the Pinatubo eruption did not reduce the planetary temperature while warming the stratosphere in the model. Which it does. 
Doing such an experiment is then much more a test of the quantitative impacts than of the qualitative response.

With that in mind, I now turn to the latest paper by Demetris Koutsoyiannis and colleagues (http://www.itia.ntua.gr/getfile/864/2/documents/2008HSJClimPredictions.pdf), which is getting the inactivists excited (see e.g. http://www.google.com/search?q=Koutsoyiannis+2008+climate). There are very clearly two parts to this paper. The first is a poor summary of the practice of climate modelling, touching all the recent contrarian talking points (global cooling, Douglass et al, Karl Popper, etc.), and is not worth dealing with in detail (the reviewers of the paper include Willie Soon, Pat Frank and Larry Gould, of Monckton/APS fame (http://bigcitylib.blogspot.com/2008/07/how-aps-was-infiltrated-by-deniers.html), so no guessing is needed as to where they got their misconceptions). This is just a distraction, though I'd recommend the authors leave out this kind of nonsense in future if they want to be taken seriously in the wider field. The second part is their actual analysis, the results of which lead them to conclude that "models perform poorly". It is more interesting in conception, if not in execution.

Koutsoyiannis and his colleagues are hydrologists by background and have an interest in what is called long-term persistence (LTP, or long-term memory) in time series (discussed previously here: http://www.realclimate.org/index.php/archives/2005/12/naturally-trendy). This is often quantified by the Hurst parameter (nicely explained by tamino recently: http://tamino.wordpress.com/2008/06/10/hurst/). A Hurst value greater than 0.5 is indicative of 'long range persistence' and complicates the calculation of trend uncertainties and the like. Many natural time series do show more persistent 'memory' than a simple auto-regressive (AR) process; the classic example is river outflow. This makes physical sense, because a watershed is much more complicated than a simple damper of higher-frequency inputs: soil moisture can have an impact from year to year, as can various groundwater reservoirs and their interactions.

It's important to realise that there is nothing magic about processes with long-term persistence. This is simply a property that complex systems, like the climate, will exhibit in certain circumstances. However, like all statistical models that do not reflect the real underlying physics of a situation, assuming a particular form of LTP (a constant Hurst parameter, for instance) is simply an assumption that may or may not be useful. Much more interesting is whether there is a match between the kinds of statistical properties seen in the real world and those seen in the models (see below).
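For readers who want a concrete sense of what the Hurst parameter measures, here is a minimal sketch of the classical rescaled-range (R/S) estimator applied to a synthetic series. This is purely illustrative: the function name, window choices and white-noise test case are my own assumptions, not anything from the papers discussed here, and published analyses generally use more careful estimators (e.g. detrended fluctuation analysis) because simple R/S is biased for short records.

```python
import numpy as np

def hurst_rs(x, min_window=8, n_scales=10):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S) analysis.

    H near 0.5 for short-memory noise; H > 0.5 indicates long-term persistence.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # window lengths, roughly logarithmically spaced between min_window and n/2
    windows = np.unique(np.logspace(np.log10(min_window), np.log10(n // 2),
                                    n_scales).astype(int))
    log_w, log_rs = [], []
    for w in windows:
        rs_vals = []
        for start in range(0, n - w + 1, w):          # non-overlapping blocks
            block = x[start:start + w]
            dev = np.cumsum(block - block.mean())     # cumulative deviations from the block mean
            r = dev.max() - dev.min()                 # range of the cumulative deviations
            s = block.std(ddof=1)                     # block standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    # H is the slope of log(R/S) against log(window length)
    return np.polyfit(log_w, log_rs, 1)[0]

# sanity check on white noise: expect a value near 0.5 (biased slightly high for short series)
rng = np.random.default_rng(0)
print(round(hurst_rs(rng.standard_normal(2000)), 2))
```

Applied to a long annual station record, an exponent that stays well above 0.5 at the largest windows is the signature of LTP that the discussion below is concerned with.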
So what did Koutsoyiannis et al do? They took a small number of long station records, compared them to co-located grid points in single realisations of a few models, and correlated their annual and longer-term means. Returning to the question we asked at the top, what hypothesis is being tested here? They are using single realisations of model runs, and so they are not testing the forced component of the response (which can only be determined using ensembles or very long simulations). By correlating at annual and other short periods, they are effectively comparing the weather in the real world with the weather in a model. Even without looking at their results, it is obvious that this is not going to match, since weather is uncorrelated from one realisation to another, let alone between a model and the real world. Furthermore, by using only one to four grid boxes for their comparisons, even the longer-term (30-year) forced trends are not going to come out of the noise.
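The point about single realisations can be made with a toy calculation (my own illustration with made-up numbers, not output from any climate model): two synthetic grid-point series share an identical forced trend, but each has its own independent red-noise 'weather' on top.

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS, TREND = 100, 0.01    # illustrative: roughly 1 degree per century at a single grid point

def realisation(phi=0.6, sigma=0.6):
    """One synthetic grid-point series: common forced trend plus independent AR(1) 'weather'."""
    noise = np.zeros(N_YEARS)
    for t in range(1, N_YEARS):
        noise[t] = phi * noise[t - 1] + rng.normal(0, sigma)
    return TREND * np.arange(N_YEARS) + noise

forced = TREND * np.arange(N_YEARS)                   # the known forced component of the toy
run_a, run_b = realisation(), realisation()           # two single realisations, same forcing
ensemble_mean = np.mean([realisation() for _ in range(20)], axis=0)

print("one realisation vs another:", round(np.corrcoef(run_a, run_b)[0, 1], 2))
print("ensemble mean vs forcing:  ", round(np.corrcoef(ensemble_mean, forced)[0, 1], 2))
```

The first correlation is small because the uncorrelated interannual 'weather' dominates at a single point; the forced signal only emerges once that weather is averaged away in an ensemble mean (or over a very long simulation). This is exactly the mismatch that is guaranteed in advance in the K et al comparison.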
Remember that the magnitude of annual, interannual and decadal variability increases substantially as the spatial scale shrinks from global, to hemispheric, continental, regional and local. The IPCC report, for instance, is very clear in stating that the detection and attribution of climate change is only clearly possible at continental scales and above. Note also that K et al compare absolute temperatures rather than anomalies. This isn't a terrible idea, but single grid points have offsets relative to a co-located station for any number of reasons (mean altitude, unresolved micro-climate effects, systematic but stable biases in planetary wave patterns, etc.), and anomaly comparisons are generally preferred since they correct for these often irrelevant effects. Finally (and surprisingly, given the attention being paid to it in various circles), K et al do not consider whether any of their selected stations might contain artifacts that could affect their statistical properties.

Therefore, it comes as no surprise at all that K and colleagues find poor matches in their comparisons. The answer to their effective question (are very local single realisations of weather coherent between observations and models?) is no, as anyone would have concluded from reading the IPCC report or the existing literature. This is why no one uses, or should use, single grid points from single models in any kind of future impact study. Indeed, it is the reason why regional downscaling approaches exist at all (http://www.realclimate.org/index.php/archives/2007/08/regional-climate-projections/). The most effective downscaling approaches use the statistical correlations of local weather with larger-scale patterns, and use model projections of those patterns to estimate changes in local weather regimes. Alternatively, one can use a regional model embedded within a global model. Either way, no one uses single grid boxes.

What might K et al have done that would have been more interesting and still relevant to their stated concerns? Well, as stated above, comparing statistical properties in the models to those in the real world is very relevant. Do the models exhibit LTP? Is there spatial structure to the derived Hurst coefficients? What is the predictability of Hurst values at single grid boxes, even within models? Of course, some work has already been done on this.

[Figure: /images/kiraly.jpg. Hurst exponents estimated from the global weather-station database, from Kiraly et al (2006).]

For instance, Kiraly et al (2006, Tellus) calculated Hurst exponents for the entire database of weather stations and showed that there is indeed significant structure (and some uncertainty in the estimates) across different climate regimes. In the US, there is a clear difference between the West Coast, the Mountain States and the eastern half of the country. Areas downstream of the North Atlantic appear to have particularly high Hurst values.

[Figure: /images/FB03_fig2.jpg. Hurst exponents from gridded datasets from 1900 onwards, from Fraedrich and Blender (2003).]

Other analyses show similar patterns (in this case, from Fraedrich and Blender (2003), who used the gridded datasets from 1900 onwards), though there are enough differences from the first picture that it is probably worth investigating methodological issues in these calculations. What do you get in models? Well, in very long simulations that provide enough data to estimate Hurst exponents quite accurately, the answer is mostly something similar.

[Figure: /images/B06_fig2a.jpg. Hurst exponents in a long model simulation, from Blender et al (2006).]

The precise patterns do vary as a function of the frequency range considered (i.e. the exponents in the interannual to multi-decadal band differ from those over longer periods), and there are differences between models, but this example from Blender et al (2006, GRL) shows the basic pattern: very high Hurst exponents over the parts of the ocean with known multi-decadal variability (the North Atlantic, for instance), and smaller values over land.

However, I'm not aware of any analyses of these issues for the models in the AR4 database, so that would certainly be an interesting study. Given the short period of the records, are the observational estimates of the Hurst exponents stable enough to be used as a test for the models? Do the models suggest that 100-year estimates of these parameters are robust? (This is testable using different realisations in an ensemble.) Are there sufficient differences between the models to allow us to say something about the realism of their multi-decadal variability?
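On the robustness question, the simplest check is the spread of estimates across many independent realisations of the same process. Below is a minimal sketch of that idea using a short-memory AR(1) surrogate and a simple aggregated-variance estimator; both choices are my own assumptions made for brevity, not an analysis of the AR4 archive, and with real models one would use the different ensemble members themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi=0.7, sigma=1.0):
    """Short-memory AR(1) surrogate standing in for a century of annual grid-point data."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def hurst_aggvar(x, block_sizes=(2, 4, 5, 10, 20)):
    """Aggregated-variance estimate of H, using Var(block means) ~ m**(2H - 2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var(ddof=1)))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1 + slope / 2

# spread of century-length Hurst estimates across 50 independent realisations
estimates = [hurst_aggvar(ar1(100)) for _ in range(50)]
print("mean H:", round(float(np.mean(estimates)), 2),
      " std:", round(float(np.std(estimates)), 2))
```

A wide spread in this kind of exercise would say that a single century of data constrains the Hurst exponent only loosely, which is exactly what one would want to know before using station-based estimates as a stringent test of the models.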
Answering any of these questions would have moved the science forward; it's a shame Koutsoyiannis et al instead addressed a question whose answer was obvious and well known ahead of time.