Over the last couple of months there has been much blog-viating about what the models used in the IPCC 4th Assessment Report (AR4) do and do not predict about natural variability in the presence of a long-term greenhouse gas related trend. Unfortunately, much of the discussion has been based on graphics, energy-balance models and descriptions of what the forced component is, rather than on the full ensemble from the coupled models. That has led to some rather excitable but ill-informed buzz about very short time scale tendencies. We have already discussed how short-term analysis of the data can be misleading, and we have previously commented on the uncertainty in the ensemble mean being confused with the envelope of possible trajectories (here). The actual model outputs have been available for a long time, and it is somewhat surprising that no one has looked at them specifically given the attention the subject has garnered. So in this post we will examine directly what the individual model simulations actually show.
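To make that distinction concrete, here is a minimal sketch (synthetic runs, not the actual AR4 output) of the difference between the uncertainty in the ensemble mean and the envelope of individual trajectories; the ensemble size, trend and noise level are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MEMBERS = 20        # hypothetical ensemble size (illustrative only)
N_YEARS = 30
FORCED_TREND = 0.02   # assumed forced warming, deg C per year (~0.2 C/decade)
NOISE_STD = 0.17      # assumed interannual "weather" noise (std), deg C

years = np.arange(N_YEARS)
# Each member shares the same forced trend but has its own realisation of the noise.
ensemble = FORCED_TREND * years + rng.normal(0.0, NOISE_STD, (N_MEMBERS, N_YEARS))

# The uncertainty in the ensemble mean shrinks as 1/sqrt(N)...
mean_stderr = ensemble.std(axis=0, ddof=1) / np.sqrt(N_MEMBERS)
# ...but the envelope of individual trajectories does not.
envelope_halfwidth = (ensemble.max(axis=0) - ensemble.min(axis=0)) / 2

print(f"typical +/- uncertainty of the ensemble mean: {mean_stderr.mean():.3f} C")
print(f"typical +/- half-width of the full envelope:  {envelope_halfwidth.mean():.3f} C")
```

The envelope, not the much narrower uncertainty in the mean, is what any single observed trajectory should be compared against.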
Back to the future
A few weeks ago I was at a meeting in Cambridge that discussed how (or whether) paleo-climate information can reduce the known uncertainties in future climate simulations.
The uncertainties in the impacts of rising greenhouse gases on multiple systems are significant: the potential impact on ENSO or the overturning circulation in the North Atlantic, probable feedbacks on atmospheric composition (CO2, CH4, N2O, aerosols), the predictability of decadal climate change, global climate sensitivity itself, and perhaps most importantly, what will happen to ice sheets and regional rainfall in a warming climate.
The reason why paleo-climate information may be key in these cases is that all of these climate components have changed in the past. If we can understand why and how those changes occurred, then that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well, because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed, and this meeting was devoted to examining them.
Target CO2
What is the long term sensitivity to increasing CO2? What, indeed, does long term sensitivity even mean? Jim Hansen and some colleagues (not including me) have a preprint available that claims that it is around 6ºC based on paleo-climate evidence. Since that is significantly larger than the ‘standard’ climate sensitivity we’ve often talked about, it’s worth looking at in more detail.
Blogs and peer-review
Nature Geoscience has two commentaries this month on science blogging – one from me and another from Myles Allen (see also these blog posts on the subject). My piece tries to make the point that most of what scientists know is “tacit” (i.e. not explicitly or often written down in the technical literature) and it is that knowledge that allows them to quickly distinguish (with reasonable accuracy) what new papers are worth looking at in detail and which are not. This context is what provides RC (and other science sites) with the confidence to comment both on new scientific papers and on the media coverage they receive.
Myles’ piece stresses that criticism of papers in the peer-reviewed literature needs to be in the peer-reviewed literature and suggests that informal criticism (such as on a blog) might undermine that.
We actually agree that there is a real tension between a quick and dirty pointing out of obvious problems in a published paper (such as the Douglass et al paper last December) and doing the much more substantial work and extra analysis that would merit a peer-reviewed response. The approaches are not, however, necessarily opposed (for instance, our response to the Schwartz paper last year, which has also led to a submitted comment). But given everyone’s limited time (and the journals’ limited space), there are fewer official rebuttals submitted and published than there are actual complaints. Furthermore, it is exceedingly rare to write a formal comment on a particularly exceptional paper, with the result that complaints are more common in the peer-reviewed literature than applause. In fact, there is much to applaud in modern science, and we like to think that RC plays a positive role in highlighting some of the more important and exciting results that appear.
Myles’ piece, while ending up on a worthwhile point of discussion, illustrates it (in my opinion) with a rather misplaced example that involves RC – a post and follow-up on the Stainforth et al (2005) paper and the media coverage it got. The original post dealt in part with how the new climateprediction.net model runs affected our existing expectation for what climate sensitivity is and whether they justified a revision of any projections into the future. The second post came in the aftermath of a rather poor piece of journalism on BBC Radio 4 that implied (completely unjustifiably) that the CPDN team were deliberately misleading the public about the importance of their work. We discussed then (as we have in many other cases) whether some of the responsibility for overheated or inaccurate press actually belongs to the press release itself and whether we (as a community) could do better at providing more context in such cases. The reason why this isn’t really germane to Myles’ point is that we didn’t criticise the paper itself at all. We thought then (and think now) that the CPDN effort is extremely worthwhile and that lessons from it will be informing model simulations some time into the future. Our criticisms (such as they were) were mainly associated instead with the perception of the paper in parts of the media and wider community – something that is not at all appropriate for a peer-reviewed comment.
This isn’t the place to rehash the climate sensitivity issue (I promise a new post on that shortly), so that will be deemed off-topic. However, we’d be very interested in any comments on the fundamental issue raised – how science blogs and traditional peer review do (or should) intersect, and whether Myles’ perception that they are in conflict is widely shared.
536 AD and all that
“during this year a most dread portent took place. For the sun gave forth its light without brightness… and it seemed exceedingly like the sun in eclipse, for the beams it shed were not clear.”
This quote from Procopius of Caesarea is matched by other sources from around the world pointing to something unusual – often described as a ‘dry fog’ – accompanied by a cold summer, crop failures and a host of other problems. A TV special, books and much newsprint have speculated on its cause – volcanoes, comets and other catastrophes have all been suggested. But this week comes a new paper in GRL (Larsen et al, 2008) which may provide a definitive answer…
The IPCC model simulation archive
In the lead up to the 4th Assessment Report, all the main climate modelling groups (17 of them at last count) made a series of coordinated simulations for the 20th Century and various scenarios for the future. All of this output is publicly available in the PCMDI IPCC AR4 archive (now officially called the CMIP3 archive, in recognition of the two previous, though less comprehensive, collections). We’ve mentioned this archive before in passing, but we’ve never really discussed what it is, how it came to be, how it is being used and how it is (or should be) radically transforming the comparisons of model output and observational data.
[Read more…] about The IPCC model simulation archive
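As a flavour of the kind of analysis the archive enables – purely an illustrative sketch, using a hypothetical file name and the standard CF variable names rather than anything specific to one model – here is how one might compute an area-weighted global-mean surface air temperature series from a single downloaded output file, assuming the Python netCDF4 and numpy packages are available:

```python
import numpy as np
from netCDF4 import Dataset  # assumes the netCDF4 package is installed

def global_mean_series(path, var="tas"):
    """Area-weighted global mean of `var` for each time step in a CF-style file."""
    with Dataset(path) as nc:
        lat = nc.variables["lat"][:]       # latitudes in degrees north
        field = nc.variables[var][:]       # assumed dimensions: (time, lat, lon)
    weights = np.cos(np.deg2rad(lat))      # grid-cell area scales with cos(latitude)
    zonal_mean = field.mean(axis=-1)       # average over longitude -> (time, lat)
    return (zonal_mean * weights).sum(axis=-1) / weights.sum()

# Hypothetical usage with a locally downloaded file (the name is a placeholder):
# tas_global = global_mean_series("tas_20c3m_run1.nc")
# print(tas_global[:12])   # first year of monthly global means
```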
Uncertainty, noise and the art of model-data comparison
Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long-term climate change to short-term weather variability.
This becomes immediately clear when looking at the following graph:
[Read more…] about Uncertainty, noise and the art of model-data comparison
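As a rough numerical illustration of the same point – using synthetic numbers with an assumed 0.02ºC/yr underlying trend and 0.17ºC of year-to-year noise, not the actual observations – a single steadily warming series can easily produce 8-year stretches with flat or negative fitted trends:

```python
import numpy as np

rng = np.random.default_rng(42)

TREND = 0.02      # assumed underlying warming, deg C per year
NOISE_STD = 0.17  # assumed interannual variability (std), deg C
N_YEARS = 58      # e.g. a 1950-2007-style record length

years = np.arange(N_YEARS)
series = TREND * years + rng.normal(0.0, NOISE_STD, N_YEARS)

def fitted_trend(y):
    """OLS slope in deg C per year."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

print(f"full-record trend: {fitted_trend(series):+.3f} C/yr")

# Trends over every 8-year window in the same steadily warming series:
eight_yr = np.array([fitted_trend(series[i:i + 8]) for i in range(N_YEARS - 7)])
print(f"8-year trends range from {eight_yr.min():+.3f} to {eight_yr.max():+.3f} C/yr; "
      f"{(eight_yr <= 0).mean():.0%} are flat or negative")
```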
New rule for high profile papers
New rule: When declaring that climate models are misleading in a high profile paper, maybe looking at some model output first would be a good idea.
[Read more…] about New rule for high profile papers
A barrier to understanding?
People don’t seem to embrace global measures of temperature rise (~0.2ºC/decade) or sea level rise (>3 mm/yr) very strongly. They much prefer more iconic signs – The National Park formerly-known-as-Glacier, the No-snows of Kilimanjaro, Frost Fairs on the Thames, etc. As has been discussed here on many occasions, any single example comes with any number of complicating factors, but seen as part of a pattern (Kilimanjaro as one of many receding tropical glaciers), such icons can be useful for making a general point. However, the use of an icon as an example of change runs into difficulty if it is then interpreted as proof of that change.
With respect to sea level, the Thames Barrier is a concrete example that has been frequently raised.
[Read more…] about A barrier to understanding?
Books ’07
We have a minor tradition of doing a climate-related book review in the lead up to the holidays and this year shouldn’t be an exception. So here is a round-up of a number of new books that have crossed our desks, some of which might be interesting to readers here.
[Read more…] about Books ’07