{"id":520,"date":"2008-02-04T11:30:45","date_gmt":"2008-02-04T16:30:45","guid":{"rendered":"http:\/\/www.realclimate.org\/index.php\/archives\/2008\/02\/ipcc-archive\/"},"modified":"2008-05-04T16:39:10","modified_gmt":"2008-05-04T21:39:10","slug":"ipcc-archive","status":"publish","type":"post","link":"https:\/\/www.realclimate.org\/index.php\/archives\/2008\/02\/ipcc-archive\/","title":{"rendered":"The IPCC model simulation archive"},"content":{"rendered":"<div class=\"kcite-section\" kcite-section-id=\"520\">\n<p>In the lead up to the 4th Assessment Report, all the main climate modelling groups (17 of them at last count) made a series of coordinated simulations for the 20th Century and various scenarios for the future. All of this output is publicly available in the <a href=\"http:\/\/www-pcmdi.llnl.gov\/ipcc\/about_ipcc.php\">PCMDI IPCC AR4<\/a> archive (now officially called the CMIP3 archive, in recognition of the two previous, though less comprehensive, collections). We&#8217;ve mentioned this archive before in <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2007\/12\/tropical-troposphere-trends\/\">passing<\/a>, but we&#8217;ve never really discussed what it is, how it came to be, how it is being used and how it is (or should be) radically transforming the comparisons of model output and observational data.<br \/>\n<!--more--><br \/>\nFirst off, it&#8217;s important to note that this effort was not organised by IPCC itself. Instead, it was coordinated by the Working Group on Coupled Modelling (<a href=\"http:\/\/www.clivar.org\/organization\/wgcm\/wgcm.php\">WGCM<\/a>), an unpaid committee that is part of an alphabet soup of committees, nominally run by the WMO, that try to coordinate all aspects of climate-related research. In the lead up to AR4, WGCM took up the task of deciding what the key experiments would be, what would be requested from the modelling groups and how the data archive would be organised. 
This was highly non-trivial, and adjustments to the data requirements were still being made right up until the last minute. While this may seem arcane, or even boring, the point I&#8217;d like to leave you with is that just &#8216;making data available&#8217; is the least of the problems in making data useful. There was a <a href=\"http:\/\/www.gfdl.noaa.gov\/reference\/bibliography\/2007\/gam0701.pdf\">good summary<\/a> of the process in the Bulletin of the American Meteorological Society last month.<\/p>\n<p>Previous efforts to coordinate model simulations had come up against two main barriers: getting the modelling groups to participate and making sure enough data was saved that useful work could be done. <\/p>\n<p>Modelling groups tend to work in cycles. That is, there will be a period of a few years of development of a new model, then a year or two of analysis and use of that model, until there is enough momentum and new ideas to upgrade the model and start a new round of development. These cycles can be driven by purchasing policies for new computers, staff turnover, general enthusiasm, developmental delays etc. and until recently were unique to each modelling group. When new initiatives are announced (and they come roughly once every six months), the decision of a modelling group to participate depends on where they are in their cycle. If they are in the middle of the development phase, they will likely not want to use their last model (because the new one will almost certainly be better), but they might not be able to use the new one either because it just isn&#8217;t ready. These phasing issues definitely impacted earlier attempts to produce model output archives. 
<\/p>\n<p>What was different this time round is that the IPCC timetable has, after almost 20 years, managed to synchronise development cycles such that, with only a couple of notable exceptions, most groups were ready with their new models early in 2004 &#8211; which is when these simulations needed to start if the analysis was going to be available for the AR4 report being written in 2005\/6. (It&#8217;s interesting to compare this with nonlinear phase synchronisation in, for instance, <a href=\"http:\/\/en.wikipedia.org\/wiki\/Phase_synchronization\">fireflies<\/a>).<\/p>\n<p>The other big change this time around was the amount of data requested. The diagnostics in previous archives had been relatively sparse &#8211; the main atmospheric variables (temperature, precipitation, winds etc.) but not much extra, and generally only at monthly resolution. This had limited the usefulness of the previous archives because if something interesting was seen, it was almost impossible to diagnose why it had happened without having access to more information. This time, the diagnostic requests for the atmosphere, ocean, land and ice were much more extensive and a significant amount of high-frequency data was asked for as well (i.e. 6-hourly fields). For the first time, this meant that outsiders could really look at the &#8216;weather&#8217; regimes of the climate models.<\/p>\n<p>The work involved in these experiments was significant and unfunded. At GISS, the simulations took about a year to do. That includes a few partial do-overs to fix small problems (like an inadvertent mis-specification of the ozone depletion trend), the processing of the data, the transfer to PCMDI and the ongoing checking to make sure that the data was what it was supposed to be. 
The amount of data was so large &#8211; about a dozen different experiments, a few ensemble members for most experiments, large amounts of high-frequency data &#8211; that transferring it to PCMDI over the internet would have taken years. Thus, all the data was shipped on terabyte hard drives. <\/p>\n<p>Once the data was available from all the modelling groups (all in consistent netcdf files with standardised names and formatting), a few groups were given some seed money from NSF\/NOAA\/NASA to get cracking on various important comparisons. However, the number of people who have registered to use the data (more than 1000) far exceeds the number of people who were actually being paid to look at it. Although some of the people who were looking at the data were from the modelling groups, the vast majority were from the wider academic community and for many it was the first time that they&#8217;d had direct access to raw GCM output.<\/p>\n<p>With that influx of new talent, many innovative diagnostics were examined. Many, indeed, that hadn&#8217;t been looked at by the modelling groups themselves, even internally. It is possibly under-appreciated that the number of possible model-data comparisons far exceeds the capacity of any one modelling center to examine them.<\/p>\n<p>The advantage of the database is the ability to address a number of different kinds of uncertainty, not everything of course, but certainly more than was available before. Specifically, the uncertainty in distinguishing forced and unforced variability and the uncertainty due to model imperfections.<\/p>\n<p>When comparing climate models to reality, the first problem to confront is the &#8216;weather&#8217;, defined loosely as the unforced variability (that exists on multiple timescales). 
Any particular realisation of a climate model simulation, say of the 20th Century, will have a different sequence of weather &#8211; that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing (increases in greenhouse gases, volcanoes etc.). There is no expectation that the weather in any one model will be correlated to that in the real world either. So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer). However, in the model archive it is relatively easy to distinguish.<\/p>\n<p>The standard trick is to look at the ensemble of model runs. If each run has different, uncorrelated weather, then averaging over the different simulations (the ensemble mean) gives an estimate of the underlying forced change. Normally this is done for a single model, and for metrics like the global mean temperature only a few ensemble members are needed to reduce the noise. For other metrics &#8211; like regional diagnostics &#8211; more ensemble members are required. There is another standard way to reduce weather noise, and that is to average over time, or over specific events. If you are interested in the impact of volcanic eruptions, it is basically equivalent to running the same eruption 20 times with different starting points, or to collecting together the responses to 20 different eruptions. The same can be done with the response to El Ni\u00f1o, for instance.<\/p>\n<p>With the new archive though, people have tried something new &#8211; averaging the results of all the different models. This is termed a meta-ensemble, and at first thought it doesn&#8217;t seem very sensible. 
Unlike the weather noise, the differences between models are not drawn from a nicely behaved distribution; the models are not independent in any solid statistical sense; and no-one really thinks they are all equally valid. Thus many of the prerequisites for making this mathematically sound are missing, or at best, unquantified. Expectations from a meta-ensemble are therefore low. But, and this is a curious thing, it turns out that the meta-ensemble of all the IPCC simulations actually outperforms any single model when compared to the real world. That implies that at least some part of the model differences is in fact random and can be cancelled out. Of course, many systematic problems remain even in a meta-ensemble. <\/p>\n<p>There are lots of ongoing attempts to refine this. What happens if you try to exclude some models that don&#8217;t pass an initial screening? Can you weight the models in an optimum way to improve forecasts? Unfortunately, there doesn&#8217;t seem to be any universal way to do this despite a few successful attempts. More research on this question is definitely needed.<\/p>\n<p>Note, however, that the ensemble or meta-ensemble only gives a measure of the central tendency or forced component. It does not help answer the question of whether the models are consistent with any observed change. For that, one needs to look at the spread of the model simulations, noting that each simulation is a potential realisation of the underlying assumptions in the models. Do not, for instance, <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2007\/12\/tropical-troposphere-trends\/\">confuse<\/a> the uncertainty in the estimate of the ensemble mean with the spread!<\/p>\n<p>Particularly important simulations for model-data comparisons are the forced coupled-model runs for the 20th Century, and &#8216;AMIP&#8217;-style runs for the late 20th Century. 
&#8216;AMIP&#8217; runs are atmospheric model runs that impose the observed sea surface temperature conditions instead of calculating them with an ocean model (optionally using other forcings as well). They are particularly useful if it matters that you get the timing and amplitude of El Ni\u00f1o correct in a comparison. No more need the question be asked &#8216;what do the models say?&#8217; &#8211; you can ask them directly.<\/p>\n<p>The usefulness of any comparison depends on whether it really provides a constraint on the models, and there are plenty of good examples of this. The ideal diagnostics are those that are robust in the models, not too affected by weather, and can be estimated in the real world, e.g. Ben Santer&#8217;s <a href=\"http:\/\/pubs.giss.nasa.gov\/abstracts\/2005\/Santer_etal.html\">paper<\/a> on tropospheric trends or the discussion we had on <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2007\/11\/global-dimming-and-global-warming\/\">global dimming trends<\/a>; the AR4 report is full of more examples. What isn&#8217;t useful are short-period and\/or limited-area diagnostics for which the ensemble spread is enormous. <\/p>\n<p><strong>CMIP3 2.0?<\/strong><\/p>\n<p>In such a large endeavor, it&#8217;s inevitable that not everything is done to everyone&#8217;s satisfaction and that in hindsight some opportunities were missed. The following items should therefore be read as suggestions for next time around, and not as criticisms of the organisation this time. <\/p>\n<p>Initially the model output was only accessible to people who had registered and had a specific proposal to study the data. While this makes some sense in discouraging needless duplication of effort, it isn&#8217;t necessary and discourages the kind of casual browsing that is useful for getting a feel for the output or spotting something unexpected. 
However, the archive will soon be available with no restrictions and hopefully that setup can be maintained for other archives in future.<\/p>\n<p>Another issue with access is the sheer amount of data and the relative slowness of downloading data over the internet. Here some lessons could be taken from more popular high-bandwidth applications. Reducing time-to-download for videos or music has relied on distributed access to the data. Applications like BitTorrent manage download speeds that are hugely faster than direct downloads because you end up getting data from dozens of locations at the same time, from people who&#8217;d downloaded the same thing as you. Therefore the more popular an item, the quicker it is to download. There is much that could be learned from this distribution model. <\/p>\n<p>The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn&#8217;t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time. <\/p>\n<p>Finally, the essence of the <a href=\"http:\/\/en.wikipedia.org\/wiki\/Web_2\">Web 2.0<\/a> movement is interactivity &#8211; consumers can also be producers. In the current CMIP3 setup, the modelling groups are the producers but the return flow of information is rather limited. People who analyse the data have published <a href=\"http:\/\/www-pcmdi.llnl.gov\/ipcc\/subproject_publications.php\">many interesting papers<\/a> (over 380 and counting) but their analyses have not been &#8216;mainstreamed&#8217; into model development efforts. For instance, there is a great paper by <a href=\"http:\/\/pubs.giss.nasa.gov\/docs\/2006\/2006_Lin_etal.pdf\">Lin et al.<\/a> on tropical intra-seasonal variability (such as the Madden-Julian Oscillation) in the models. 
Their analysis was quite complex and would be a useful addition to the suite of diagnostics regularly tested in model development, but it is impractical to expect Dr. Lin to just redo his analysis every time the models change. A better approach would be for the archive to host the analysis scripts as well so that they could be accessed as easily as the data. There are of course issues of citation with such an idea, but they needn&#8217;t be insuperable. In a similar way, how many times did different people calculate the NAO or Ni\u00f1o 3.4 indices in the models? Having some organised user-generated content could have saved a lot of time there.<\/p>\n<p>Maybe some of these ideas (and any others readers might care to suggest) could even be tried out relatively soon&#8230;<\/p>\n<p><strong>Conclusion<\/strong><\/p>\n<p>The diagnoses of the archive done so far are really only the tip of the iceberg compared to what could be done, and it is very likely that the archive will be providing an invaluable resource for researchers for years. It is beyond question that the organisers deserve a great deal of gratitude from the community for having spearheaded this.<\/p>\n<!-- kcite active, but no citations found -->\n<\/div> <!-- kcite-section 520 -->","protected":false},"excerpt":{"rendered":"<p>In the lead up to the 4th Assessment Report, all the main climate modelling groups (17 of them at last count) made a series of coordinated simulations for the 20th Century and various scenarios for the future. 
All of this output is publicly available in the PCMDI IPCC AR4 archive (now officially called the CMIP3 [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[5,1,23],"tags":[],"class_list":{"0":"post-520","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-climate-modelling","7":"category-climate-science","8":"category-ipcc","9":"entry"},"aioseo_notices":[],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/520","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/comments?post=520"}],"version-history":[{"count":0,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/520\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/media?parent=520"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/categories?post=520"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/tags?post=520"}],"curies":[{"name":"wp","href":"htt
ps:\/\/api.w.org\/{rel}","templated":true}]}}