

The IPCC model simulation archive

Filed under: — gavin @ 4 February 2008

In the lead up to the 4th Assessment Report, all the main climate modelling groups (17 of them at last count) made a series of coordinated simulations for the 20th Century and various scenarios for the future. All of this output is publicly available in the PCMDI IPCC AR4 archive (now officially called the CMIP3 archive, in recognition of the two previous, though less comprehensive, collections). We’ve mentioned this archive before in passing, but we’ve never really discussed what it is, how it came to be, how it is being used and how it is (or should be) radically transforming the comparisons of model output and observational data.

First off, it’s important to note that this effort was not organised by IPCC itself. Instead, it was coordinated by the Working Group on Coupled Modelling (WGCM), an unpaid committee that is part of an alphabet soup of committees, nominally run by the WMO, that try to coordinate all aspects of climate-related research. In the lead up to AR4, WGCM took up the task of deciding what the key experiments would be, what would be requested from the modelling groups and how the data archive would be organised. This was highly non-trivial, and adjustments to the data requirements were still being made right up until the last minute. While this may seem arcane, or even boring, the point I’d like to leave you with is that just ‘making data available’ is the least of the problems in making data useful. There was a good summary of the process in the Bulletin of the American Meteorological Society last month.

Previous efforts to coordinate model simulations had come up against two main barriers: getting the modelling groups to participate and making sure enough data was saved that useful work could be done.

Modelling groups tend to work in cycles. That is, there will be a period of a few years of development of a new model, then a year or two of analysis and use of that model, until there is enough momentum and new ideas to upgrade the model and start a new round of development. These cycles can be driven by purchasing policies for new computers, staff turnover, general enthusiasm, developmental delays etc. and until recently were unique to each modelling group. When new initiatives are announced (and they come roughly once every six months), the decision of the modelling group to participate depends on where they are in their cycle. If they are in the middle of the development phase, they will likely not want to use their last model (because the new one will almost certainly be better), but they might not be able to use the new one either because it just isn’t ready. These phasing issues definitely impacted earlier attempts to produce model output archives.

What was different this time round is that the IPCC timetable has, after almost 20 years, managed to synchronise development cycles such that, with only a couple of notable exceptions, most groups were ready with their new models early in 2004 – which is when these simulations needed to start if the analysis was going to be available for the AR4 report being written in 2005/6. (It’s interesting to compare this with nonlinear phase synchronisation in, for instance, fireflies).

The other big change this time around was the amount of data requested. The diagnostics in previous archives had been relatively sparse – the main atmospheric variables (temperature, precipitation, winds etc.) but not huge amounts extra, and generally only at monthly resolution. This had limited the usefulness of the previous archives because if something interesting was seen, it was almost impossible to diagnose why it had happened without having access to more information. This time, the diagnostic requests for the atmosphere, ocean, land and ice were much more extensive and a significant amount of high-frequency data was asked for as well (i.e. 6 hourly fields). For the first time, this meant that outsiders could really look at the ‘weather’ regimes of the climate models.

The work involved in these experiments was significant and unfunded. At GISS, the simulations took about a year to do. That includes a few partial do-overs to fix small problems (like an inadvertent mis-specification of the ozone depletion trend), the processing of the data, the transfer to PCMDI and the ongoing checking to make sure that the data was what it was supposed to be. The amount of data was so large – about a dozen different experiments, a few ensemble members for most experiments, large amounts of high-frequency data – that transferring it to PCMDI over the internet would have taken years. Thus, all the data was shipped on terabyte hard drives.

Once the data was available from all the modelling groups (all in consistent netcdf files with standardised names and formatting), a few groups were given some seed money from NSF/NOAA/NASA to get cracking on various important comparisons. However, the number of people who have registered to use the data (more than 1000) far exceeded the number of people who were actually being paid to look at it. Although some of the people who were looking at the data were from the modelling groups, the vast majority were from the wider academic community and for many it was the first time that they’d had direct access to raw GCM output.

With that influx of new talent, many innovative diagnostics were examined. Many, indeed, that hadn’t been looked at by the modelling groups themselves, even internally. It is possibly under-appreciated that the number of possible model-data comparisons far exceeds the capacity of any one modelling center to examine them.

The advantage of the database is the ability to address a number of different kinds of uncertainty, not everything of course, but certainly more than was available before. Specifically, the uncertainty in distinguishing forced and unforced variability and the uncertainty due to model imperfections.

When comparing climate models to reality the first problem to confront is the ‘weather’, defined loosely as the unforced variability (that exists on multiple timescales). Any particular realisation of a climate model simulation, say of the 20th Century, will have a different sequence of weather – that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing (increases in greenhouse gases, volcanoes etc.). There is no expectation that the weather in any one model will be correlated to that in the real world either. So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer). However, in the model archive it is relatively easy to distinguish.

The standard trick is to look at the ensemble of model runs. If each run has different, uncorrelated weather, then averaging over the different simulations (the ensemble mean) gives an estimate of the underlying forced change. Normally this is done for one single model and for metrics like the global mean temperature, only a few ensemble members are needed to reduce the noise. For other metrics – like regional diagnostics – more ensemble members are required. There is another standard way to reduce weather noise, and that is to average over time, or over specific events. If you are interested in the impact of volcanic eruptions, it is basically equivalent to running the same eruption 20 times with different starting points, or collecting together the response of 20 different eruptions. The same can be done with the response to El Niño for instance.
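A minimal sketch of the ensemble-mean trick, with entirely made-up numbers (the trend, noise level and ensemble size here are illustrative, not taken from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

n_years, n_runs = 100, 5
years = np.arange(n_years)

# Hypothetical forced signal: a slow warming trend (deg C)
forced = 0.01 * years

# Each realisation adds its own uncorrelated "weather" noise
runs = forced + rng.normal(0.0, 0.2, size=(n_runs, n_years))

# Averaging over realisations suppresses the noise by ~1/sqrt(n_runs)
ens_mean = runs.mean(axis=0)

rmse_single = np.sqrt(np.mean((runs[0] - forced) ** 2))
rmse_mean = np.sqrt(np.mean((ens_mean - forced) ** 2))
print(rmse_single, rmse_mean)
```

The ensemble mean ends up much closer to the underlying forced signal than any single run, which is the whole point of requesting multiple ensemble members.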

With the new archive though, people have tried something new – averaging the results of all the different models. This is termed a meta-ensemble, and at first thought it doesn’t seem very sensible. Unlike the weather noise, the difference between models is not drawn from a nicely behaved distribution, the models are not independent in any solidly statistical sense, and no-one really thinks they are all equally valid. Thus many of the pre-requisites for making this mathematically sound are missing, or at best, unquantified. Expectations from a meta-ensemble are therefore low. But, and this is a curious thing, it turns out that the meta-ensemble of all the IPCC simulations actually outperforms any single model when compared to the real world. That implies that at least some part of the model differences is in fact random and can be cancelled out. Of course, many systematic problems remain even in a meta-ensemble.
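A toy illustration of why the random part of inter-model differences cancels (the error structure here is an assumption for the sketch – real models also share systematic errors, which is exactly the part that does not cancel):

```python
import numpy as np

rng = np.random.default_rng(1)

n_models, n_points = 17, 1000   # illustrative sizes

# "Truth": some observed field at 1000 grid points
truth = rng.normal(0.0, 1.0, size=n_points)

# Each model reproduces the truth plus its own pattern of errors;
# here the error patterns are purely random across models
model_fields = truth + rng.normal(0.0, 0.5, size=(n_models, n_points))

rmse_each = np.sqrt(np.mean((model_fields - truth) ** 2, axis=1))
rmse_meta = np.sqrt(np.mean((model_fields.mean(axis=0) - truth) ** 2))

# The multi-model mean beats every individual model
print(rmse_meta < rmse_each.min())
```

Under these (generous) assumptions the meta-ensemble wins easily; the interesting empirical result is that it still wins, if less dramatically, with the real CMIP3 models.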

There are lots of ongoing attempts to refine this. What happens if you try and exclude some models that don’t pass an initial screening? Can you weight the models in an optimum way to improve forecasts? Unfortunately, there doesn’t seem to be any universal way to do this despite a few successful attempts. More research on this question is definitely needed.

Note however that the ensemble or meta-ensemble mean only gives a measure of the central tendency, i.e. the forced component. It does not help answer the question of whether the models are consistent with any observed change. For that, one needs to look at the spread of the model simulations, noting that each simulation is a potential realisation of the underlying assumptions in the models. Do not – for instance – confuse the uncertainty in the estimate of the ensemble mean with the spread!
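The distinction is easy to state numerically (the trend values below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# 20 hypothetical simulated trends (deg C/decade); numbers are illustrative
trends = rng.normal(0.2, 0.1, size=20)

spread = trends.std(ddof=1)           # compare observations against this
sem = spread / np.sqrt(len(trends))   # uncertainty of the ensemble-mean estimate only

print(spread, sem)
```

The standard error of the mean shrinks as more runs are added, but the spread does not – so testing an observation against the standard error of the mean makes consistency look far harder to achieve than it really is.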

Particularly important simulations for model-data comparisons are the forced coupled-model runs for the 20th Century, and ‘AMIP’-style runs for the late 20th Century. ‘AMIP’ runs are atmospheric model runs that impose the observed sea surface temperature conditions instead of calculating them with an ocean model, optionally using other forcings as well and are particularly useful if it matters that you get the timing and amplitude of El Niño correct in a comparison. No more need the question be asked ‘what do the models say?’ – you can ask them directly.

The usefulness of any comparison depends on whether it really provides a constraint on the models, and there are plenty of good examples of this. What is ideal are diagnostics that are robust in the models, not too affected by weather, and can be estimated in the real world, e.g. Ben Santer’s paper on tropospheric trends, the discussion we had on global dimming trends, and the AR4 report is full of more examples. What isn’t useful are short period and/or limited area diagnostics for which the ensemble spread is enormous.

CMIP3 2.0?

In such a large endeavor, it’s inevitable that not everything is done to everyone’s satisfaction and that in hindsight some opportunities were missed. The following items should therefore be read as suggestions for next time around, and not as criticisms of the organisation this time.

Initially the model output was only accessible to people who had registered and had a specific proposal to study the data. While this makes some sense in discouraging needless duplication of effort, it isn’t necessary and discourages the kind of casual browsing that is useful for getting a feel for the output or spotting something unexpected. However, the archive will soon be available with no restrictions and hopefully that setup can be maintained for other archives in future.

Another issue with access is the sheer amount of data and the relative slowness of downloading data over the internet. Here some lessons could be taken from more popular high-bandwidth applications. Reducing time-to-download for videos or music has relied on distributed access to the data. Applications like BitTorrent manage download speeds that are hugely faster than direct downloads because you end up getting data from dozens of locations at the same time, from people who’d downloaded the same thing as you. Therefore the more popular an item, the quicker it is to download. There is much that could be learned from this data model.

The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.
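Even the humble global mean illustrates why server-side averaging would help: done properly it needs area weighting, which every user currently reimplements after downloading the full 2-D field. A sketch (the field here is synthetic):

```python
import numpy as np

# Hypothetical 2-D temperature field on a regular lat-lon grid
nlat, nlon = 90, 180
lats = np.linspace(-89, 89, nlat)
field = 30.0 * np.cos(np.deg2rad(lats))[:, None] * np.ones((nlat, nlon))

# Grid boxes shrink toward the poles, so weight by cos(latitude)
weights = np.cos(np.deg2rad(lats))
global_mean = np.average(field.mean(axis=1), weights=weights)

# An unweighted mean over-counts the poles
naive_mean = field.mean()
print(global_mean, naive_mean)
```

For a field that is warm at the equator and cold at the poles, the naive mean comes out too cold – a small trap that a centrally computed time series would spare everyone.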

Finally, the essence of the Web 2.0 movement is interactivity – consumers can also be producers. In the current CMIP3 setup, the modelling groups are the producers but the return flow of information is rather limited. People who analyse the data have published many interesting papers (over 380 and counting) but their analyses have not been ‘mainstreamed’ into model development efforts. For instance, there is a great paper by Lin et al on tropical intra-seasonal variability (such as the Madden-Julian Oscillation) in the models. Their analysis was quite complex and would be a useful addition to the suite of diagnostics regularly tested in model development, but it is impractical to expect Dr. Lin to just redo his analysis every time the models change. A better model would be for the archive to host the analysis scripts as well so that they could be accessed as easily as the data. There are of course issues of citation with such an idea, but it needn’t be insuperable. In a similar way, how many times did different people calculate the NAO or Niño 3.4 indices in the models? Having some organised user-generated content could have saved a lot of time there.
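The Niño 3.4 case makes the duplication point concrete: the index is just an area mean of SST anomalies over 5S-5N, 170W-120W, yet everyone codes it up themselves. A sketch with a synthetic anomaly field standing in for real model output:

```python
import numpy as np

# Hypothetical monthly SST anomaly field on a 1-degree global grid
rng = np.random.default_rng(3)
lats = np.arange(-89.5, 90.5, 1.0)
lons = np.arange(0.5, 360.5, 1.0)
sst_anom = rng.normal(0.0, 0.5, size=(lats.size, lons.size))

# Niño 3.4 region: 5S-5N, 170W-120W (i.e. 190E-240E)
lat_mask = (lats >= -5) & (lats <= 5)
lon_mask = (lons >= 190) & (lons <= 240)

# The index is the mean anomaly over the box
# (cos-latitude weighting matters little this close to the equator)
nino34 = sst_anom[np.ix_(lat_mask, lon_mask)].mean()
print(nino34)
```

Hosting one vetted script like this alongside the data would save every user from re-deriving the box and the bookkeeping.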

Maybe some of these ideas (and any others readers might care to suggest), could even be tried out relatively soon…


The diagnoses of the archive done so far are really only the tip of the iceberg compared to what could be done and it is very likely that the archive will be providing an invaluable resource for researchers for years. It is beyond question that the organisers deserve a great deal of gratitude from the community for having spearheaded this.

169 Responses to “The IPCC model simulation archive”

  1. 51
    Jim Eager says:

    Re Steve Case @ 39 “If the production of CO2 is going to change the climate, then we ought to prepare for it. It makes more sense to get off the tracks then it does trying to stop the train.”

    You “it’s inevitable and we can adapt” guys keep forgetting that a warmer climate is only one result of increased CO2. It will also result in a more acidic ocean and a greatly altered marine food chain, so “adapting” also means that part of the Human population that relies on seafood for its protein source will have to find something else or do without.

  2. 52
    Ray Ladbury says:

Gaelan, I will take the best guess of the climate scientists–since it is based on evidence–over, say, yours, which is not. The fact of the matter is that if you have a climate sensitivity much less than 3 degrees per doubling, a lot of paleoclimate and other results (e.g. effects of volcanos, the “cooling” of the mid 40s to mid 70s) become quite difficult to explain. Perhaps you are unfamiliar with the way a scientist “guesses”.

  3. 53
    John Mashey says:

    Regarding train analogies, I suggest that the old movie “Runaway Train” might be appropriately cautionary.

    But, given this is about archiving, I applaud those who are doing it: it’s hard work, and is a (mostly) thankless task.

  4. 54
    Steve L says:

    Trying not to be too off-topic, but I’ve been wondering this for awhile and this most recent effort by Gavin includes some potentially relevant material:
    “that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing …. … there is certainly a degree of unforced variability even at decadal scales (and possibly longer).”
    This makes sense to me. But relating it to a nagging question about initial conditions, etc, after we obtain enough observations subsequent to Jan 31, 1967, is it possible to infer the initial conditions? That is, is it possible to get enough historical data such that the unforced variability can be incorporated and all models of future climate change can have the same starting point?
    Now you know why I haven’t asked this question before. Perhaps I’ll find a way to ask it intelligibly in the future.

  5. 55
    Ray Ladbury says:

    Hi Nick, Overfitted is probably a better way to say it as “overfit” conjures a picture of a model on steroids.

    The weighting procedure works for “statistical models” more than dynamical models, but you can infer parameters in dynamical models. Here’s an interesting reference:

    I’ve been looking at some of this stuff in assessing reliability of microelectronics in high-radiation environments of space.

  6. 56
    Rafael Gomez-Sjoberg says:

    Re: #49

    Russell says: “… ice coverage for the Arctic Ocean is back to normal. …”

Russell, the ice area has obviously increased during the winter, but it’s still 1 million square kilometers lower than the 1979-2000 average. And, more importantly, the new ice is much thinner than the old ice (the thick ice that had been there for many years), so that next summer the melting will be faster and deeper. We are nowhere near back to normal, even if the area had gone back to the 1979-2000 average, and you shouldn’t feel so good about it. Even if we had an average area of very thin ice, that would still not be good enough, because this young ice would melt very quickly next summer. You can only feel good about the polar bears if we go back to 79-00 average coverages for many years in a row to guarantee that the thick, multi-year ice covers the same area it did before.

  7. 57
    dhogaza says:

    I thought I would let you know that the ice coverage for the Arctic Ocean is back to normal. I knew you were concerned about it last month, but now we can all sleep soundly, knowing the polar bears will last a few more months, at least.

    Someone else has discovered winter. It’s amazing how many people were unaware of it until this year. Even more amazing is how so many denialists seem convinced that CLIMATE SCIENTISTS are unaware of winter.

  8. 58

    #49, It is not “normal” , one word demonstrates this: volume. Another is plainly visible to anyone with remote sensing knowledge, there are more leads amongst a vastly wider area of first year ice. “First year” ice is vulnerable to re-melting. Much more so than ice twice or thrice thicker. The big question is whether immediately under ice biological and inorganic gaseous feedback is unleashed right now with little effect in darkness, or whether there will be a massive release this spring when sunshine triggers photochemical cloud and fog coverage, reducing the impact of sunlight. The latter scenario will be the only saving grace from a greater melt.

High Arctic sunrise from the long night just happened, in very cold surface air, but aloft there is a thick layer of warm air, giving a weak refraction boost, from over the ice of Barrow Strait. It’s too early to say how strong and pervasive this warming is, but similar weak refraction starts were 2005 and 2007.

  9. 59
    tharanga says:

    Thank you; Meehl, et al (Science 307, 1769) gives exactly the comparison of transient responses I was looking for.

    Along similar lines, could somebody refer me to a good paper on the observed radiation imbalance? I’m seeing that it’s on the order of 1 W/m^2; this is relatively small. Can it be measured with confidence?

  10. 60
    Nick Barnes says:

    wayne, how does it look from your high vantage point? The recent satellite photos of ice breaking up in the Beaufort sea were quite unsettling.

  11. 61
    pete best says:

    Re #49

I thought that climate models predicted the loss of summer sea ice and not winter sea ice for many years to come. Any loss of winter sea ice would be worrying; after all, it is dark for 3-6 months, and ice is bound to form in open water.

  12. 62
    David Warkentin says:

    Re #57

Careful where you point that sarcasm! The recent improvement in the NH sea ice anomaly is not just typical winter re-growth, because the 1978-2000 baseline for the anomaly includes seasonal variation – it’s not just a constant value. That’s why, if you look at the “tale of the tape” plot, you don’t see the huge seasonal cycle of ~10 million square kilometers.

    (That said, the arguments about thin ice being at risk next summer seem quite plausible to me. I’ve noticed river basins can appear to have completely frozen over in a 24 hr span, but that ice can be lost a lot faster than cover that’s a result of many days or weeks of cold.)

  13. 63
    Hans Kiesewetter says:

And another comment on ice extent in winter. Of course the extent is back to ‘normal’. There is another reason why the anomaly in winter extent is less than in summer. The extent is limited by land. Without Siberia and Canada maybe the extent would have been 19 mln km2 in the old days, and now 15 mln in a warmer climate. But due to the land mass these extents are cut off to less than 14. Have a look at the ‘regional graphs’ at cryosphere.

  14. 64
    Cobblyworlds says:

    #61 Pete Best,

    “I thought that climate models predicted the loss of summer sea ice and not winter sea ice for many years to come.”

You’re right, Michael Winton of GFDL has done an interesting paper on the stability of a seasonal Arctic ice cap in models. From his homepage: it’s the pdf under “New Publication” – Sea ice-albedo feedback and nonlinear Arctic climate change.

    As an aside…
    Cecilia Bitz has some conference slides here:
    Bearing in mind I’m working from the pdf – I have not heard the lecture…

    Bitz argues that next year’s minima will be around the established linear trend for 1979-2006 (page 12), in other words last year (2007) will not be the start of a new trend.

    On page 11, she shows a negative correlation for a 1 year lag.
    What’s being done here is that the data series for detrended September extent minima is correlated (checked for matches) against itself, with a lag applied. So where there’s no lag (lag=0) you get a correlation of 1 because you’re correlating the sequence with itself, but for a 1 year lag you get almost -0.2, i.e. no match.
    This can be seen on page 9 where successive record minima (circled in blue) since 1979 have always had more than 1 year between them.

    Dr Bitz concludes that:
    1) “PDF of anomalies becomes wider as ice thins, so expect bigger anomalies in future.”
    2) “Zero one-lag autocorrelation, so no reason to expect big anomaly in 2008 (in my opinion)”
    Which seems reasonable given her argument.

    However as I’m looking at perennial ice as a damping factor on ice albedo feedback, I find myself wondering if the perennial ice is now so reduced that such reference to past behaviour is no guide to the future. The QuikScat images seem to show a significant reduction in thick ice between this year and the previous 2. e.g. bottom of this page, 4 Jan 2006, 2007 & 2008.
    If I had to bet, I’d go for a summer minima of below 5 million km^2, either way we’ll know by October 2008.

    That said,
    Cecilia Bitz is an accomplished physicist, expert in modelling in the polar environment.
    I’m just a bloke with a degree in electronics…
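The lag correlation described above takes only a few lines to reproduce; a sketch, with random numbers standing in for the real detrended September minima:

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up detrended September extent anomalies (million km^2), 1979-2006
anoms = rng.normal(0.0, 0.5, size=28)

def lag_autocorr(x, lag):
    """Correlate the series with a lagged copy of itself."""
    if lag == 0:
        return 1.0
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print(lag_autocorr(anoms, 0))   # 1 by construction
print(lag_autocorr(anoms, 1))   # near zero for uncorrelated anomalies
```

A near-zero value at lag 1 is what underlies the “no reason to expect a big anomaly in 2008” conclusion.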

  15. 65
    JCH says:

    “I thought that climate models predicted the loss of summer sea ice and not winter sea ice for many years to come. Any loss of winter sea ice would be worrying, after all it is dark for 3/6 months, ice is boudn to form in open water. …”

    It’s called winter for a reason. I guess I’m not getting it, but pointing out how much ice reforms in the Arctic in the wintertime seems just plain silliness.

    Want a man-sized bet, bet on whether perennial ice recovers or declines.

  16. 66
    Lawrence Brown says:

    In the subject post, Gavin states:

    “……… So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer).”

Should the daily, seasonal, or annual weather be divorced from the forcing in this way? After all, the daily, monthly and annual averages and extremes of weather data, like temperature, precipitation, ice and snow accrual or lessening, taken over many tens of thousands of days, are what climate is composed of.
Many of the weather events we’ve been experiencing, especially in the last few decades, have likely been so intertwined with forcing that it appears it’s not only difficult to separate natural variability from forcing but something beyond our present grasp.

  17. 67
    Russell says:

    re: #57 & #58

    My understanding of an average means that some examples will be less and some more. If an element is less than average, yet still well within the bounds of “normal”, it is not a cause for concern.
    My point to gavin was that we have a poor understanding of what constitutes an “abnormal situation” and what constitutes “less than average, yet still within historical norms”. If we had 5000 years of arctic sea ice records, it would be pretty obvious. We have about 40 years, with no records pre-AGW.
    A reduction in the average over time, is more where the truth lies, but that will require some patience. Something in very short supply.

  18. 68
    tamino says:

    Re: Sea Ice

In fact we do have data on sea ice prior to the satellite era; it supports the hypothesis that sea ice has been in serious decline in the northern hemisphere, and in decline in the southern hemisphere as well. I posted on the topic here.

  19. 69
    Mike Donald says:

    Sorry Gavin,

    Multitasking day at the office. Could you expand on your reply to the doubting Thomas? Thanks.

    Re: Only if your definition of normal is 1 million km2 less than climatology.

    I’m thinking of coining a few more bon mots to Chris Booker.

    Best regards


  20. 70
    lucia says:


    I think the information on radiative forcings you want is here:

I used those, modified to include monthly data on stratospheric aerosols after volcanic eruptions, to fit a simple zero-dimensional, transient energy balance model to the data. I get plots like the one shown here.

    It’s not entirely clear what my fits mean since the zero-dimensional model is a huge simplification. (It’s the same model Schwartz 2007 used in his paper.)

Still, assuming something can be learned from this fit (and I don’t find any blunders), my current results suggest the climate time constant, based on temperature measurements at the planet’s surface, is about 8-10 years. (No error analysis done, yet. I’m also doing some consistency checks.)

    If correct, this would suggest that if we stopped adding CO2 or other GHG’s to the planet now, and there were no major volcanic eruptions or other dramatic events, the surface would reach 80% of the equilibrium value in roughly 16 years. (I can’t remember off hand how high it climbs; it’s enough we’d notice. )
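The “roughly 16 years” figure is just the exponential-relaxation arithmetic of the zero-dimensional model; a quick check, taking the quoted time constant of 10 years:

```python
import math

# Zero-dimensional relaxation toward equilibrium: T(t)/T_eq = 1 - exp(-t/tau)
tau = 10.0                  # years, the fitted time constant quoted above
t_80 = tau * math.log(5)    # solve 1 - exp(-t/tau) = 0.8
print(t_80)                 # ~16.1 years
```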

  21. 71

#67 Your understanding is flat wrong, what is there not to comprehend? Less ice volume means more open water in the summer. What is normal was dictated well beyond modern surveying days; the Bowhead whales of the Atlantic and Pacific are genetically distinct, meaning that the ice barrier between the North Atlantic and Pacific spans multiple thousands of years. A shorter time span brings us back to the 16th century, when the first known attempts to find the North East and North West passages were made, none successful till a very smart Norwegian, Amundsen, on the Fram slowly found the Northwest passage in the early 1900’s, after 2 years fraught with danger and boredom. Nowadays the same path can be done in a couple of weeks.

  22. 72
    CobblyWorlds says:

    #69 Mike Donald,

    If you’ll forgive me for butting in to take the load off someone busier than I…

    1) From Cryosphere Today:
Looking at that graph shows a preponderance around 14 million km^2 with a sag in winter maximum to around 13 million km^2 since 2003, i.e. 1 million km^2 below the previously typical extent.

    2) Seasonal trends are here:

    Chris Booker….
    After the day I’ve had at work I needed a good giggle, thanks for that link Mike. :)

    Cryosphere Today Homepage: source of the other links in this post.

  23. 73
    Jim Cripwell says:

    In 62 David writes “(That said, the arguments about thin ice being at risk next summer seem quite plausible to me. I’ve noticed river basins can appear to have completely frozen over in a 24 hr span, but that ice can be lost a lot faster than cover that’s a result of many days or weeks of cold.)” This has nothing to do with climate change, but may be of interest. If I have interpreted what David means, let me offer an explanation. When fresh water ice freezes, the ice crystals are vertical. When the ice thaws, some of these vertical crystals tend to stay intact. If you examine fresh water lake ice just before break up, the ice can be up to a foot thick, and at a distance appears to be solid. However, on close examination the ice is “rotten”; a whole series of vertical crystals with nothing much holding them together. In Canada, the lake where I have my cottage typically is completely ice covered one day. Then for some reason a small break in the ice occurs. The wind unsettles the water, and in a matter of a few hours, the entire lake is ice free, with a few chips of ice around the shore. The very rapid break up of fresh water ice in the spring is a very well known phenomenon. HTH.

  24. 74
    Dodo says:

    Re 56. Is there any reference for the assertion that “new ice” melts faster than old? One could also think that, as old ice is dirtier and more porous than new, it would melt faster. (As we know, the old ice is on top, so it is first exposed to the sun.)

    It is also possible that the age of the ice is not as important as the density of the ice, which is largely determined by the speed of freezing and wind conditions during freezing. But of course, the decisive factor is total thickness: the thicker the cover, the better it withstands summer.

But I may be quite wrong. Could somebody from the RC group tell us something about how the models handle these variables?

  25. 75
    Harold Pierce Jr says:

Re #41 There are about 15 trillion barrels of oil equivalent in heavy and extra heavy crude oils, oil shale, and tar sands. Coal can be readily converted to liquid hydrocarbons. South Africa gets about 40% of its liquid hydrocarbons from coal. Google SASOL.

    World supply of oxygen: Over every square foot there is a column of oxygen that weighs about 440 lbs. There is additional oxygen dissolved in water. We will never ever run out of oxygen, which by the way is a renewable resource.

    To all skeptics and deniers of fossil fuel facts and reality, I stand by my post (#33).

  26. 76
    Aaron Lewis says:

    Re Gavin’s comment in 23
    I consider SLR from thermal expansion to be De minimis. We can engineer our way around a meter/century SLR. It would not be cheap or pretty, but it is doable.

    The real questions are about sea level rise from permafrost melt and ice sheet dynamics. How fast and how soon could these occur?

A seasonally ice free Arctic Ocean can be expected to provide more latent heat for rain in permafrost areas which could accelerate permafrost melt. Moreover, other Arctic surface water could drain to the ocean through pseudo karst formed by melting permafrost. In a warming Arctic, this could be a rapid process. And, as late as last November, broad swaths of Greenland received rain. That astonished (and worried) me much more than the sea ice melt last summer. I have done a bit of ice climbing, and our rule of thumb was always, “Ice that gets rained on, falls apart.”

    I am starting to see Arctic storms that sit on boundaries between (warm) open water and ice, and I conclude that these storms are driven by the temperature differential between that water and the ice. This provides an effective (but local) mechanism for the transfer of heat from the ocean to the ice. It is good physics, but I do not see it in the GCM. Then that heat can be rapidly advected into the ice. Given the complex phase transitions of ice under high pressure as it approaches 273K, there is the potential for rapid, progressive structural failure of big ice by mechanisms not contemplated in the GCM.

    In terms of percentage of volume, how much of the Arctic sea ice has melted in the last decade? Show me a calculation that demonstrates that a combination of unusual weather and changing ocean currents (as in the recent Arctic) could not produce a similar percentage volume change in WAIS in the next decade. Only a fraction of a degree of warming would be required to move from “stable” to melting. A hint of the current situation is at ( ), which shows water in the area warmer than it was in 1979-2000. Such a change in WAIS volume would change sea level.

    I worry when I see models (i.e. University of Maine) that suggest that much of the base of WAIS is at 0C. I know that under pressure, ice melts at less than 0 C. What is required to raise the temperature of some fault plane in the ice some fraction of a degree and have a chunk of ice crumble, and flow into the sea? At some point, as the ice warms, the potential energy of the ice itself will provide the activation energy. It will be sudden. Will it be a surprise?
    ( )
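    The pressure-melting effect invoked above ("under pressure, ice melts at less than 0 C") can be illustrated with a standard Clausius-Clapeyron estimate. This is a textbook sketch, not a WAIS measurement: the densities are nominal and the 2 km thickness is purely illustrative.

```python
# Clausius-Clapeyron sketch of pressure melting beneath an ice sheet.
# dT/dP = T * (v_water - v_ice) / L_f is roughly -7.4e-8 K/Pa, so the
# melting point drops as overburden pressure rises.
T0 = 273.15        # K, melting point at atmospheric pressure
L_f = 3.34e5       # J/kg, latent heat of fusion
v_water = 1.0e-3   # m^3/kg, specific volume of liquid water
v_ice = 1.0 / 917  # m^3/kg, specific volume of ice
g = 9.81           # m/s^2

dT_dP = T0 * (v_water - v_ice) / L_f   # K/Pa, negative for ice

def basal_melting_point(thickness_m):
    """Melting-point depression (deg C) at the base of an ice column."""
    pressure = 917.0 * g * thickness_m  # overburden pressure, Pa
    return dT_dP * pressure             # degrees below 0 C

print(f"{basal_melting_point(2000.0):.2f} C")  # ~ -1.3 C under 2 km of ice
```

So ice at the base of a 2 km column can be "at the melting point" while still about 1.3 degrees below 0 C.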

    Gavin, would you swear by your FORTRAN manual that these issues have been investigated, and a progressive collapse of any ice sheet could not occur in the next 30 years? Would you like to show me a good model of “pseudo karst formation in permafrost structures” in your Archive? Would you say that NOW, we have a much better understanding of ice sheet dynamics than we had of Arctic Sea Ice melting in the fall of 2004? All known factors considered, I do not think we have a good estimate of how fast sea level change is likely to occur. The fact that the literature on the topic is de minimis does not give me confidence. I would love for you to remind me what an ignorant amateur I am by pointing me to the literature. Come on Hank, tell me my Google is brain dead, and that there is, in fact, a big pile of recent literature on dynamic ice sheet modeling that I am just not finding. Please tell me that I am misreading ( )

    “The period of continued warming and thinning appears to have primed these glaciers for a step-change in dynamics not included in current models. We should expect further Greenland outlet glaciers to follow suit.”

    (And, that does not even allow for the kind of serious rain that Greenland has seen in the last couple of years. My Google still finds a few weather reports.)

    In short, unless the summer rains in Greenland cease promptly, and the Arctic Ocean returns to its old levels of sea ice, and the waters around Antarctica cool rapidly, we must consider the possibility of rapid (more than 10 cm/decade) sea level rise in the near future (30 years). Frankly, I expect very rapid SLR, and sooner rather than later. Absence of prediction in the literature does not mean it cannot happen.

  27. 77
    Hank Roberts says:

    Aaron, the modelers say, quite credibly, that they can’t model this area without actual physical observations, and the International Polar Year scientists are out there on the ice and oceans right now, frantically busy collecting observations.

    The old assumption that nothing was going to change is clearly wrong, but I doubt that lets the modelers change anything in particular in a useful way.

    I’d love to be surprised; I’m sure work’s going on that amateur readers like me don’t know about. I’m sure Dr. Le Quéré et al., and others working on the ecology/biology/climate models, have a lot of input that depends on sea ice seasonal changes, for example.

  28. 78
  29. 79
    Ray Ladbury says:

    Harold Pierce,
    No one denies the usefulness of fossil fuels. In fact, one could quite easily argue that they are too valuable to burn. However, I take issue with your contention that things must always be the way they are now. Brazil has come a long way with biofuels, for instance. In the ’50s one would have argued that farming in the desert was fantasy, but Israel and other countries have made the desert bloom with desalinized water.
    If ever-increasing fossil fuel consumption equates to the end of civilization, one would expect rational people to work toward alternatives. That we may fail in that effort is not indicative of the impossibility of the task, but rather of the lack of rationality in our species.

  30. 80
    Ryan says:

    “GISS received no additional money over our standard model development grants to do simulations for AR4, and most groups were in the same boat.”

    I’m interested in some further clarification of this concept of “unfunded” work on simulations for AR4.

    Perhaps this question is totally naive, but what is US funding of GCM development for, if not for contribution to IPCC assessment reports? Don’t US science agencies who fund this activity fully expect these models to be used for AR4? And if they do not, why should they tolerate the IPCC dictating how their expensive computing facilities will be used?


    [Response: Funding is for general research into climate forcings, responses, processes etc. The US groups were not mandated to do simulations in support of AR4, but decided to do so because they felt that would further the research. I think that was definitely the right decision. If AR4 hadn’t come along, we would have probably done similar kinds of things, so adapting to their specifications wasn’t completely orthogonal to our research goals. – gavin]

  31. 81
    Lawrence Coleman says:

    Very interesting article. Scientists by nature are a very pedantic lot; that’s their strength but also their weakness. By the time we get enough data to show all and sundry, statistically and beyond doubt, that the level of forcing as opposed to natural variability is real, the party’s over: the spilt beer and broken champagne glasses have been cleaned from the floor, the last of the partiers has dozed off in the corner, and the cockroaches are out to play. It’s the same ol’ boiling-frog scenario over and over again. It’s up to the climate scientists to take an advanced course in assertiveness and scare the pants off their respective ministers and senators into immediate action. Also to keep the pressure on the world media to keep reporting climate change stories. What do you guys think?

  32. 82
    pete best says:

    re #79

    It’s not the end of civilization that matters, Ray, but the warming of the globe. In fact it is not doubted anymore that the end of cheap oil is upon us (its peak) and that we must seek out the more expensive heavy oils in order to meet world demand.

    By heavy oil I mean the Athabasca oil sands of Alberta, a colossal resource of up to 2.5 trillion barrels, of which currently some 200 billion barrels are known to be recoverable; after that it needs new technology and it gets more expensive to deliver. The same goes for the potential 1.2 trillion barrels of heavy oils in Venezuela, and there are more in Brazil and the USA. If these resources come online then expect much more warming of the planet, beyond 2C.

    Projected oil consumption of 115 million barrels per day is required by 2030 (a 2% annual increase), and the likelihood of this being met by conventional oil resources is not good, as their peak is here. Therefore only unconventional heavy oils can potentially meet this demand. Biofuels can possibly help out, but not the way they do it in Brazil by growing sugar cane; that will not scale to global proportions. Switchgrass and algae methods are the current second-generation leaders, but they will not come online any time soon in quantities great enough to stop the oil companies from attempting to dig up Alaska and Iraq, persuading OPEC to pump more, etc.

  33. 83
    Mike Donald says:

    #72 CobblyWorlds

    Ta for those excellent links. I especially like the side-by-side comparison of present and previous ice coverage. At any rate, the key parameter should be ice mass, not areal extent. I reckon a remake of Ice Station Zebra would be very, very boring. [smiley face]

  34. 84
    pete best says:

    Re #75, 15 trillion is just a figure you have pulled out of the air regarding oil sands and heavy oils. It’s 5 trillion at most: 2.5 in Athabasca, 1.2 trillion in Venezuela (the Orinoco belt), and 1.5 elsewhere in the world, unless you know of some more massive deposits unknown at the present time.

  35. 85
    Julian Flood says:

    Re ice stability: a quick and dirty calculation* seems to indicate that thinner, one-year ice will allow a larger phytoplankton population beneath it. Breaks in the thinner ice will allow DMS produced by that higher population to leak out and form more fogs and low-level cloud, increasing albedo and preventing excessive melting. Very Gaia.

    I hope someone is checking DMS levels.

    *= guess

  36. 86


    You wrote – “It’s up to the climate scientists to take an advanced course in assertiveness and scare the pants off their respective ministers and senators into immediate action. Also to keep the pressure on the world media to keep reporting climate change stories. What do you guys think?”

    I certainly agree that the climate scientists should take an advanced course in assertiveness, but trying to scare the politicians has been tried and failed. The politicians only answer to their electors. It is the public that has to be convinced. When the voters speak then the politicians will jump.

    The IPCC has been producing reports for nearly 20 years now, but what effect have they had? In 1990, when the first IPCC report was issued, oil consumption was 66.6 million barrels per day. It is estimated that in 2010 it will be 91.6. Yet AR4 still begins with a “Summary for Policymakers.” The scientists just don’t seem to realise that it is not the politicians but the public who decide how much oil is burnt, even if they are not conscious of it. The politicians cannot stop their constituents flying off on vacation, nor jet skiing when they get there, and still get re-elected.
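    For what it’s worth, the two consumption figures quoted above imply a fairly modest average growth rate. A quick compound-growth check, taking the 1990 figure and the 2010 estimate exactly as given:

```python
# Implied average annual growth rate of oil consumption between the
# quoted figures: 66.6 Mb/d in 1990 and an estimated 91.6 Mb/d in 2010.
start, end, years = 66.6, 91.6, 20
cagr = (end / start) ** (1 / years) - 1
print(f"~{cagr * 100:.1f}% per year")  # roughly 1.6% per year
```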

    As long ago as 1989 Stephen Schneider wrote:

    “On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.” (Quoted in Discover, pp. 45–48, Oct. 1989; for the original, together with Schneider’s commentary on its misrepresentation, see also American Physical Society, APS News, August/September 1996.) [my emphasis]

    In the most recent IPCC AR4 the true extent of the sea level rise has been suppressed. The rise due to melting ice has been omitted, with a note saying this was because of uncertainty of how much it would be. What they should have done is published the largest estimate, with a note saying this might be an over-estimate since the amount of melted ice was uncertain. That is being perfectly truthful, not being misleading as the AR4 version is.

    But it won’t happen. The scientists won’t admit that their strategy is incorrect. Then they would have to admit that they were wrong, and no-one will do that because it means losing face.

    They will continue to play safe, but endanger the world!

    Don’t you agree Gavin?

  37. 87

    I very much appreciate the summary of the collected models and the discussion that follows. Two issues occurred to me:
    1. Nowhere was the actual size of the data sets specified, except for ‘very large’ and ‘terabytes’. I’d like to know the size of an individual data set, of a group of sets from the same model, and of the whole ensemble of open-access sets.
    2. The reason for question one is that a number of people, myself included, have > 1T local storage and non-trivial amounts of compute capacity, and are already involved in CPDN (about 50,000 active hosts currently). What is the possibility that this cross-model analysis could be written to run under BOINC control and distributed to many individual systems for a substantial increase in run capacity?
    3. Have you considered asking Amazon or Google to host the open part of the datasets for no charge to assist in this process?
    Three issues occurred to me, that’s three issues. :-}

    Note: For those interested, I created the PacificNorthwest CPDN team in 2004 as part of my ‘spreading the word’ effort. Currently with 79 members, the team is #15 worldwide. I have given climate presentations at a number of SF conferences and am in the process of updating my out of date web site to include more info.

  38. 88
    Dodo says:

    Re 85. It looks like we are all just guessing how thick “the ice” might be. (There was no answer from the RC crew to my earlier Q74.) But it is not certain at all that “new ice” is “thinner” than old. It all depends on the severity of the winter.

    Take this Arctic winter as an example: the starting point was very little ice, but freezing in early winter was quite rapid, so one could assume that average thickness by March, when melting starts, is not exceptional, although the ice is “new”.

    Sorry about all those quotation marks – I have no idea what they are supposed to convey. But let’s return to the topic next September.

    [Response: New ice is always thinner than multi-year ice. You can get ~1 meter or so of sea ice growth in a season, which then ridges and compresses to give multi-meter ice by the end of the winter. Multi-year ice is generally thicker – up to 4m – higher in some pressure ridges, but it takes time to build that up. Most of the ice area that melts in summer is thin first year ice. – gavin]
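    The ~1 meter per season figure in the response is consistent with the classical Stefan growth law, in which conductively grown ice thickens as the square root of time. The sketch below is an idealized textbook estimate, not how any GCM treats sea ice: the 20 K air-ocean temperature difference is an assumption, and neglecting snow insulation and ocean heat flux makes it an upper bound.

```python
import math

# Idealized Stefan growth law for sea ice grown by conduction alone:
#   h(t) = sqrt(2 * k * dT * t / (rho * L))
# Neglects snow cover and ocean heat flux, both of which slow growth
# considerably, so real first-year ice ends up thinner than this.
k = 2.2      # W/(m K), thermal conductivity of sea ice (assumed)
rho = 917.0  # kg/m^3, ice density
L = 3.34e5   # J/kg, latent heat of fusion
dT = 20.0    # K, assumed air-ocean temperature difference

def stefan_thickness(days):
    t = days * 86400.0  # seconds of freezing
    return math.sqrt(2.0 * k * dT * t / (rho * L))

for days in (30, 90, 180):
    print(f"{days:4d} days: {stefan_thickness(days):.2f} m")
```

Because thickness grows as the square root of time, a second winter adds much less new ice than the first, which is part of why multi-meter multi-year ice takes years (or ridging) to build up.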

  39. 89
    Pekka Kostamo says:

    A very interesting discussions and links, thank you.

    Anyone in the know, please:

    Does the “Northern Hemisphere sea ice” statistic include areas outside of the Arctic Sea proper and the Northern Atlantic? Such as the Bering sea, coastal seas off the Asian eastern shores and the Baltic sea.

    Northern Atlantic has been warm this winter, and a constant southwesterly flow has kept the Northern Europe readings some 5 degrees warmer than usual, all winter long. In fact, one of the recent features is the complete absence of colder spells over here. The Baltic remains ice free, which is quite unusual. There is very little ground frost, so I presume our bacterial flora has been busy manufacturing more CO2 all winter long.

    My sporadic checks on a few Siberian sites were mostly warm as well, except Verkhoiansk which has shown the usual -50 – -60 degC range. More to the west, Norilsk for instance did not live up to its reputation (only -25 – -30 degC).

    Arctic seems primed for another large summer melt if the seasonal weather patterns are nearly the same as last year. Most important is the observation that winds have continued to push multi-year ice out to the Atlantic, which means still more first-year ice cover. In addition to being thinner, I believe first-year ice is rather brittle as it includes more occlusions of saturated brine than the slowly formed multi-year ice.

  40. 90
    JCH says:

    ” … Most of the ice area that melts in summer is thin first year ice.” – gavin

    No doubt that is true, but the way it reads sort of masks something.

    When was the last accrual year for perennial ice? From whatever year scientists consider its modern zenith, what percentage of perennial ice has been lost?

  41. 91
    Ray Ladbury says:

    Lawrence Coleman asks: “It’s up to the climate scientists to take an advanced course in assertiveness and scare the pants off their respective ministers and senators into immediate action. Also to keep the pressure on the world media to keep reporting climate change stories. What do you guys think?”

    Under no circumstances should the goal ever be to “scare” anyone. I am also frustrated with the inability of policy makers to respond to threats that extend beyond the expiration of their term in office, but the only weapon science has is the credibility of its practitioners. It is not inappropriate to do “how bad can it be” analyses, but our emphasis should be on preparing for credible threats rather than just getting people off their fat, complacent asses.

    As Mark Twain said, “If you tell the truth, you’ll eventually be found out.”

  42. 92
    Hank Roberts says:

    A quick thought in reply to Robert Rohde’s comment about the limits on use — seems to me the Globalwarmingart pages are ideal material for a ‘research project intended for academic publication’ specifically on how information can be presented so as to be understood.

    There are plenty of college first year students each year who have little exposure to scientific information.

    Certainly enough to divide into groups and test comprehension of basic understanding of how the world works. Start with Piaget’s basics (grin).

    Robert, I suggest circulating a memo to the psychology/sociology or even (gasp!) political science and economics departments.

    Someone must be working on — or would like to work on — studying how information can be presented graphically so as to be understood.

    To prepare that material you’d certainly be justified in drawing on the current database, and you’d want to compare new information to prior data to see if a graphic can be created that gets across what’s different.

    The very simple questions would include
    — left to right or right to left?
    — changing scales (geologic time, then “last 200 years”) on a chart — do people see it?

    Just saying. It’s the great question of the age, can people comprehend science. It’s got to be done with pictures. You’re it.

  43. 93


    You wrote ‘As Mark Twain said, “If you tell the truth, you’ll eventually be found out.”’

    No doubt you will agree that the non-sceptic scientists are telling the truth, but when they are found out it is going to be too late!

  44. 94
    Aaron Lewis says:

    Re # 64

    Traditionally, Arctic sea ice was insulated from the warmer, saltier water below it by several meters of very cold, fresh water that floated on the surface of the salt water below. (see )
    As the multiyear ice melts, this insulating layer is subject to storm mixing, i.e., the chaotic wave systems of polar cyclones driven by the temperature differential between open water and extant sea ice.

    The open water absorbs heat, and cyclones transfer this heat to the sea ice as rain. Note: last summer there were periods when research activities on the Arctic sea ice were suspended because of rain. Rain in the high Arctic? That was a 6-sigma event. It changes the rules of the game. That should have made headlines everywhere. It did not. The researchers were so busy with their “science” that they did not make a big deal of the really interesting event going on all around them. (After ice is rained on, it absorbs more sunlight and melts faster.)

    Loss of the freshwater insulating layer from the surface of the Arctic Ocean means that the maximum refreeze temperature for new sea ice is (-1.8C) rather than the (0C) when the surface fresh water is retained. Thus, it must get colder to freeze new sea ice. It also means that the first year ice is saltier and has a lower melting point. Thus, first year ice frozen from saltier water will melt much faster than first year ice frozen from less salty water.
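    The two refreeze temperatures quoted here (-1.8 C vs 0 C) can be checked against the standard UNESCO (1983) freezing-point polynomial for seawater at surface pressure. A minimal sketch; the coefficients are the published ones, and the salinities are chosen for illustration:

```python
# Freezing point of seawater vs salinity, using the UNESCO (1983)
# polynomial at surface pressure.  S is in practical salinity units (psu);
# S = 0 is fresh water, S = 35 is typical open-ocean seawater.
def freezing_point(S):
    return -0.0575 * S + 1.710523e-3 * S**1.5 - 2.154996e-4 * S**2

for S in (0.0, 33.0, 35.0):
    print(f"S = {S:4.1f} psu: Tf = {freezing_point(S):+.2f} C")
```

Fresh water gives 0 C, a surface salinity around 33 psu reproduces the quoted -1.8 C, and full-strength 35 psu seawater freezes nearer -1.9 C.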

    Since 1998 (at least), warm, salty water has been flowing into the Arctic basin from the North Atlantic, and more recently from the North Pacific. This additional heat facilitates Arctic sea ice melt. This additional salt facilitates sea ice melt. According to the Navy guys in Monterey, some of the multiyear Arctic Sea ice has been melting twice as fast from the bottom up as from the top down.

    Measures of ice area do not reflect the ice volume that has melted. Much of the extant “multiyear” ice is thinner than it ever was before, and if it gets a bit of rain water on it, it will melt fast. If chaotic wave systems “pump” the fresh water out from under this ice, so that it is floating in warm salt water, it will melt fast. Moreover, from overhead, ice floating in salt water will appear thicker than ice floating in fresh water. We may not have a good measure of the volume of the remaining ice.

    Last, but not least, we have accumulating soot on the sea ice reducing its albedo.

    Once we lose a significant amount of our Arctic sea ice, the system will tend toward the loss of all summer sea ice in a highly non-linear fashion. Many of the effects listed above are not in the standard sea ice models, ocean current models, or atmospheric circulation models. (At least I do not see them in there, but then, I do not work with these models and do not know their details.) They are small-scale, local effects that occur on boundaries, and they never made it into the GCM. (Again, I do not spend my days with these models.) However, I think that cumulatively they have a significant effect. With less sea ice and without its insulating freshwater, the Arctic sea is a new system, and statistical trends based on the old system are not likely to be valid.

    A new day dawns in the Arctic, and a lot of ice is about to melt. Unless there are ice flows from the Greenland Ice Sheet, 2×10^6 km^2 is my estimate for 2008 Arctic Sea Ice minimum with general loss of Arctic (summer) sea ice by the end of the 2010 melt season.

  45. 95
    Jim Cripwell says:

    Ref 89 Pekka writes “Does the “Northern Hemisphere sea ice” statistic include areas outside of the Arctic Sea proper and the Northern Atlantic? Such as the Bering sea, coastal seas off the Asian eastern shores and the Baltic sea.” So far as I am aware, ALL sea ice in the northern hemisphere is included. But, for example, I don’t think it includes ice on the Great Lakes in the USA and Canada. As to what will happen next September, we are all very much “staying tuned”.

  46. 96
    JCH says:

    ” As to what will happen next September, we are all very much “staying tuned”. …” – Jim Cripwell

    Tell me why I’m wrong in believing the reality of what will happen in September is going to be either a new record, significant, or a nonevent, insignificant?

  47. 97

    #85 Julian, old ice is like a house: many micro and larger beings live in or just by it. I am not sure if new ice is more fertile than old; it may be a combination of both that is ideal. The coming fog extent will tell.

    #94 Thanks Aaron, really good summary of some of the complexities involved,
    winds play an important role as well. There also remains the question of whether multi-year ice can rebuild to its former expanses, especially when the top of old ice is itself a source of fresh insulating water, and also makes excellent tea….

  48. 98
    sidd says:

    Aaron Lewis comment #76 6 February 2008 5:51 PM wrote:
    “I see models (i.e. University of Maine) that suggest that much of the base of WAIS is at 0C.”

    please may i have a link, or a citation, for these studies?

  49. 99
    scipio says:

    #89 & #95

    Indeed the sea ice situation here by the Baltic Sea is unusual this winter. I have lived my whole life on the south coast of Finland, and I can’t remember a year with this little ice. The situation varies from year to year, but what I find remarkable is that even the Gulf of Riga, the Gulf of Finland, and most parts of the Gulf of Bothnia are still open. Winter lasts another month or so, but according to the weather forecast no further freezing is expected in the coming days; it remains to be seen if there will be any at all.

    The Finnish Institute of Maritime Research updates the ice situation on their website.

    There’s also an interesting comparison to the average (or “normal”, as they call it) winter.

  50. 100
    Jim Cripwell says:

    In 96 JCH writes “Tell me why I’m wrong in believing the reality of what will happen in September is going to be either a new record, significant, or a nonevent, insignificant?” You are absolutely right that in the great scheme of things, the amount of ice in the Arctic this September is a complete non-event. However, it is important because the proponents of AGW have said it is important; something about a “tipping point”. I am afraid I don’t understand tipping points.