
Regional Climate Projections

Filed under: — rasmus @ 27 August 2007

Regional Climate Projections in the IPCC AR4

How does anthropogenic global warming (AGW) affect me? The answer to this question will perhaps be one of the most relevant concerns in the future, and is discussed in chapter 11 of the IPCC Fourth Assessment Report (AR4) Working Group 1 (WG1) (the chapter also has some supplementary material). The problem of obtaining regional information from GCMs is not trivial; it has been discussed in a previous post here at RC, and the IPCC Third Assessment Report (TAR) also provided good background on this topic.

The climate projections presented in the IPCC AR4 are from the latest set of coordinated GCM simulations, archived at the Program for Climate Model Diagnosis and Intercomparison (PCMDI). This is the most important new information that AR4 contains concerning the future projections. These climate model simulations (the multi-model data set, or just ‘MMD’) are often referred to as the AR4 simulations, but they are now officially being referred to as CMIP3.

One of the most challenging and uncertain aspects of present-day climate research is the prediction of a regional response to a global forcing. Although the science of regional climate projections has progressed significantly since the last IPCC report, slight displacements in circulation characteristics, systematic errors in energy and moisture transport, coarse representation of ocean currents and processes, crude parameterisation of sub-grid and land-surface processes, and the overly simplified topography used in present-day climate models all make accurate and detailed analysis difficult.

I think that the authors of chapter 11 have, overall, done a very thorough job, although there are a few points which I believe could be improved. Chapter 11 of the IPCC AR4 Working Group I (WGI) divides the world into different continents or types of regions (e.g. ‘Small islands’ and ‘Polar regions’), and then discusses these separately. It provides a nice overview of the key climate characteristics for each region. Each section also provides a short round-up of evaluations of the performance of the climate models, discussing their weaknesses in terms of reproducing regional and local climate characteristics.


Friday roundup

Filed under: — group @ 24 August 2007

A few items of interest this week:

Katrina Report Card:
The National Wildlife Federation (NWF, not to be confused with the ‘National Wrestling Federation’, which has no stated position on the matter) has issued a report card evaluating the U.S. government response in the wake of the Katrina disaster. We’re neither agreeing nor disagreeing with their position, but it should be grist for an interesting discussion.

An Insensitive Climate?:
A paper by Stephen Schwartz of Brookhaven National Laboratory, accepted for publication in the AGU Journal of Geophysical Research, is already getting quite a bit of attention in the blogosphere. It argues for a CO2-doubling climate sensitivity of about 1 degree C, markedly lower than just about any other published estimate, well below the low end of the range cited by recent scientific assessments (e.g. the IPCC AR4 report) and inconsistent with any number of other estimates. Why are Schwartz’s calculations wrong? The early scientific reviews suggest a few reasons: firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification; secondly, that a similar analysis applied to a GCM with a known sensitivity would likely give incorrect results; and finally, that the error bars on his estimate are very optimistic. We’ll likely have a more thorough analysis of this soon…
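A toy calculation illustrates the first objection (this is a sketch with made-up timescales and unit noise, not Schwartz’s actual data or method): when the ‘climate’ mixes a fast mixed-layer timescale and a slow deep-ocean one, the single-timescale estimate derived from the lag-1 autocorrelation of an AR(1) fit falls well short of the slow timescale, which is the one that matters for sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, tau, dt=1.0):
    """AR(1) series with e-folding timescale tau and unit innovations."""
    phi = np.exp(-dt / tau)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n = 200_000
# Toy "climate": a fast (mixed-layer-like) plus a slow (deep-ocean-like)
# component, with illustrative timescales of 5 and 30 time units.
series = ar1(n, tau=5.0) + ar1(n, tau=30.0)

# Single-timescale estimate from the lag-1 autocorrelation,
# tau_hat = -dt / ln(r1), as in a one-box AR(1) fit.
r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
tau_hat = -1.0 / np.log(r1)

print(f"estimated single timescale: {tau_hat:.1f} (true slow timescale: 30)")
```

The recovered timescale lands between the two true ones, so a sensitivity proportional to it would be biased low relative to the slow-response sensitivity.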

It’s the Sun (not) (again!):

The solar cyclists are back on the track. And, to nobody’s surprise, Fox News is doing the announcing. The Schwartz paper gets an honorable mention even though a low climate sensitivity makes it even harder to understand how solar cycle forcing can be significant. Combining the two critiques is therefore a little incoherent. No matter!

Who ya gonna call?

Filed under: — gavin @ 22 August 2007

Gavin Schmidt and Michael Mann

Scientific theories gain credence from successful predictions. Similarly, scientific commentators should gain credibility from whether their comments on new studies hold up over time. Back in 2005 we commented on the Bryden et al study on a possible ongoing slowdown in the North Atlantic overturning circulation. In our standard, scientifically cautious, way we said:

… it might be premature to assert that the circulation definitely has changed.

Our conclusion that the Bryden et al result ‘might be premature’ was based on a balance-of-evidence argument (or, since we discussed this a few days ago, our Bayesian priors) about what the consequences of such a slowdown would be (an unobserved cooling in the North Atlantic). We also reported last year on some data that would likely help assess the uncertainty.

Well, now those data have been properly published (reported here) and they confirm what we thought all along. The sampling variability in the kind of snapshot surveys that Bryden et al had used was too large for the apparent trends that they saw to be significant (a possibility the authors themselves had correctly anticipated in the original paper).

Score one for Bayesian priors.

Musings about models

Filed under: — gavin @ 20 August 2007

With the blogosphere all a-flutter with discussions of hundredths of degrees adjustments to the surface temperature record, you probably missed a couple of actually interesting stories last week.

Tipping points

Oft-discussed and frequently abused, tipping points are very rarely actually defined. Tim Lenton does a good job in this recent article. A tipping ‘element’ for climate purposes is defined as

The parameters controlling the system can be transparently combined into a single control, and there exists a critical value of this control from which a small perturbation leads to a qualitative change in a crucial feature of the system, after some observation time.

and the examples that he thinks have the potential to be large scale tipping elements are: Arctic sea-ice, a reorganisation of the Atlantic thermohaline circulation, melt of the Greenland or West Antarctic Ice Sheets, dieback of the Amazon rainforest, a greening of the Sahara, Indian summer monsoon collapse, boreal forest dieback and ocean methane hydrates.

To that list, we’d probably add any number of ecosystems where small changes can have cascading effects – such as fisheries. It’s interesting to note that most of these elements include physics that modellers are least confident about – hydrology, ice sheets and vegetation dynamics.

Prediction vs. Projections

As we discussed recently in connection with climate ‘forecasting’, the kinds of simulations used in AR4 are all ‘projections’, i.e. runs that attempt to estimate the forced response of the climate to emission changes, but that don’t attempt to estimate the trajectory of the unforced ‘weather’. As we mentioned briefly, that leads to a ‘sweet spot’ for forecasting a couple of decades into the future, where the initial condition uncertainty dies away but the uncertainty in the emission scenario is not yet so large as to be dominating. Last week there was a paper by Smith and colleagues in Science that tried to fill in those early years, using a model that initialises the heat content of the upper ocean – with the idea that the structure of those anomalies controls the ‘weather’ progression over the next few years.

They find that their initialisation makes a difference for about a decade, but that at longer timescales the results look like the standard projections (i.e. 0.2 to 0.3ºC per decade warming). One big caveat is that they aren’t able to predict El Niño events, and since those account for a great deal of the interannual global temperature anomaly, that is a limitation. Nonetheless, this is a good step forward, and people should be looking out for whether their predictions – a plateau until 2009 and then a big ramp-up – materialise over the next few years.

Model ensembles as probabilities

A rather esoteric point of discussion concerning ‘Bayesian priors’ got a mainstream outing this week in the Economist. The very narrow point in question is to what extent model ensembles are probability distributions: i.e. if only 10% of models show a particular behaviour, does this mean that the likelihood of it happening is 10%?

The answer is no. The other 90% could all be missing some key piece of physics.

However, a bit of confusion has been generated by the multi-thousand-member perturbed-parameter ensembles that, notoriously, suggested in a paper a couple of years back that climate sensitivity could be as high as 11ºC. The very specific issue is whether the histograms generated through that process can be considered a probability distribution function or not. (‘Not’ is the correct answer.)

The point in the Economist article is that one can demonstrate this very clearly by changing the variables you are perturbing (in their example, using an inverse). If you evenly sample X, or evenly sample 1/X (or any other function of X), you will get a different distribution of results. Then instead of (in one case) 10% of model runs showing behaviour X, maybe 30% will. And all this is completely independent of any change to the physics.
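This is easy to demonstrate numerically (a sketch using an invented feedback-to-sensitivity relation, S = 1.2/λ, with made-up parameter ranges, not any specific model ensemble): sampling the feedback parameter evenly versus sampling the sensitivity itself evenly over the equivalent range gives quite different fractions of ‘runs’ exceeding a threshold, with no change to the physics.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Invented relation between a feedback parameter lam and climate
# sensitivity S (values purely illustrative): S = 1.2 / lam.
def sensitivity(lam):
    return 1.2 / lam

# Prior 1: sample lam uniformly on [0.3, 1.2], so S spans [1.0, 4.0].
lam = rng.uniform(0.3, 1.2, n)
frac_lam = np.mean(sensitivity(lam) > 3.0)

# Prior 2: sample S itself uniformly on the same [1.0, 4.0] range,
# i.e. uniformly in 1/lam.
S = rng.uniform(1.0, 4.0, n)
frac_S = np.mean(S > 3.0)

# Same physics, same parameter range; the fraction of "runs" with
# sensitivity above 3 depends only on which variable was sampled evenly.
print(f"P(S > 3) sampling lam evenly: {frac_lam:.2f}")  # ~1/9
print(f"P(S > 3) sampling S evenly:   {frac_S:.2f}")    # ~1/3
```

Neither histogram is wrong as a histogram; the point is that neither is a probability distribution for the real climate.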

My only complaint about the Economist piece is the conclusion that, because of this inherent ambiguity, dealing with it becomes a ‘logistical nightmare’ – that is incorrect. What should happen is that people stop treating counts of finite model-ensemble samples as probabilities. Nothing else changes.

1934 and all that

Filed under: — gavin @ 10 August 2007

Another week, another ado over nothing.

Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
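The fix is conceptually simple. As a sketch (with an invented station and made-up numbers, not the actual GISTEMP code): estimate the constant offset between the two sources over the years they both report, then subtract it before splicing the records together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical station split across two sources (all numbers made up):
# source A covers 1990-2000, source B covers 1995-2006, and B happens
# to run 0.15 degrees warm relative to A for the same station.
years_a = np.arange(1990, 2001)
years_b = np.arange(1995, 2007)

def signal(y):
    return 10.0 + 0.02 * (y - 1990)

rec_a = signal(years_a) + rng.normal(0, 0.05, years_a.size)
rec_b = signal(years_b) + 0.15 + rng.normal(0, 0.05, years_b.size)

# Estimate the offset from the overlap period (1995-2000)...
common, ia, ib = np.intersect1d(years_a, years_b, return_indices=True)
offset = np.mean(rec_b[ib] - rec_a[ia])

# ...and splice: keep source A through 2000, then append the
# offset-corrected source B for 2001 onwards.
spliced = np.concatenate([rec_a, (rec_b - offset)[years_b > years_a.max()]])

print(f"estimated offset: {offset:.2f}")  # close to the true 0.15
```

With only a few overlap years and noisy station data, the estimated offset carries its own uncertainty, which is why the resulting ranking changes are within the error bars.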

This was duly done by Tuesday, an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update to the description of the methodology.

The net effect of the change was to reduce mean US anomalies by about 0.15ºC for the years 2000-2006. There were some very minor knock-on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).

There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:

The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.

More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).

In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.

Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).

However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaps to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements is true. Among other incorrect stories going around are that the mistake was due to a Y2K bug, or that it had something to do with photographing weather stations. Again, simply false.

But hey, maybe the Arctic will get the memo.

Arctic sea ice watch

Filed under: — group @ 10 August 2007

A few people have already remarked on some pretty surprising numbers in Arctic sea ice extent this year (the New York Times has also noticed). The minimum extent is usually reached in early to mid September, but this year, conditions by Aug 9 had already beaten all previous record minima. Given that there are at least a few more weeks of melting to go, it looks like the record set in 2005 will be unequivocally surpassed. It will be interesting to follow, especially in light of the model predictions discussed previously.

There are a number of places to go for Arctic sea ice information. Cryosphere Today has good anomaly plots. The Naval Sea Ice Center has a few different algorithms (different ways of processing the data) that give some sense of the observational uncertainty, and the National Snow and Ice Data Center gives monthly updates. All of them show pretty much the same thing.

Just to give a sense of how dramatic the changes have been over the last 28 years, the figures below show the minimum ice extent in September 1979, and the situation today (Aug 9, 2007).

[Figures: minimum sea ice extent, Sep 05 1979 (left) vs Aug 09 2007 (right)]

The reduction is around 1.2 million square km of ice, a little bit larger than the size of California and Texas combined.

Update: As noted by Andy Revkin below, some of the discussion is about ice extent and some is about ice area. The Cryosphere Today numbers are for area. The difference is whether you count ‘leads’ (the small amounts of water between ice floes) as being ice or water – for the area calculation they are not included with the ice, for the extent calculation they are.
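The extent/area distinction is easy to state in code. Here is a sketch with a toy concentration grid (the values are made up; the 15% threshold is the standard convention): extent counts every qualifying grid cell at its full area, while area weights each counted cell by its ice concentration, so the open water in leads drops out.

```python
import numpy as np

# Toy ice-concentration grid (fraction of each cell covered by ice);
# the values are made up for illustration.
conc = np.array([
    [0.95, 0.80, 0.10],
    [0.60, 0.20, 0.00],
])
cell_km2 = 625.0  # e.g. a 25 km x 25 km grid cell

ice = conc > 0.15  # conventional 15% concentration threshold

# Extent: each qualifying cell counts at its full area.
extent = np.sum(ice) * cell_km2

# Area: each qualifying cell is weighted by its concentration,
# so leads (open water between floes) are excluded.
area = np.sum(conc[ice]) * cell_km2

print(f"extent = {extent:.0f} km2, area = {area:.0f} km2")
# extent = 2500 km2, area = 1594 km2 (area is always <= extent)
```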

Update: From the comments: NSIDC will now be tracking this on a weekly basis.

Transparency of the IPCC process

Filed under: — rasmus @ 9 August 2007

Recently, a Financial Times op-ed criticised the IPCC for having contributors and peers drawn from a narrow professional circle. I don’t think this is fair, unless one regards a whole discipline as ‘narrow’. Furthermore, the recent public disclosure of both the comments and the responses suggests a different story from the FT op-ed’s allegation of ‘refusing to disclose data and methods’. The IPCC has no control over the independent publication, but the disclosure of the comments and responses at least enhances the openness of the report’s synthesis.


The CO2 problem in 6 easy steps

Filed under: — gavin @ 6 August 2007

We often get requests to provide an easy-to-understand explanation of why increasing CO2 is a significant problem without relying on climate models, and we are generally happy to oblige. The explanation has a number of separate steps which sometimes get confused, so we will try to break it down carefully.