Reanalyses ‘R’ Us

There is an interesting new wiki site, Reanalyses.org, that has been developed by a number of groups dedicated to documenting the various reanalysis products for atmosphere and ocean that are increasingly being made available.

For those who don’t know, a ‘reanalysis’ is a climate or weather model simulation of the past that includes data assimilation of historical observations. The observations can be very comprehensive (satellite, in situ, multiple variables) or relatively sparse (say, sea level pressure only), and the models themselves are quite varied. Generally these models are drawn from the weather forecasting community (at least for the atmospheric components), which explains the odd terminology. An ‘analysis’ from a weather forecasting model is the model’s estimate of the atmospheric state, produced by combining a short (say, 6 hour) forecast with the observations available at that time. Weather forecasting groups realised a decade or so ago that the time series of their operational analyses could not be used to track long term changes because their models had been updated many times over the decades. Thus the idea arose to ‘re-analyse’ the historical observations with a single, consistent model. These sets of analyses, using the data available at each point in time, are then more consistent in time (and presumably more accurate) than the original analyses were.
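The forecast/analysis cycle described above can be sketched in miniature. The snippet below is a toy scalar version of the update step (an optimal-interpolation or Kalman-style blend); the function name and all the numbers are illustrative assumptions, not taken from any actual reanalysis system, which updates millions of model variables at once.

```python
def analysis_step(forecast, obs, forecast_var, obs_var):
    """Blend a model forecast with an observation, weighting each by its
    error variance. Returns the analysis value and its error variance.
    (Hypothetical scalar sketch of a data-assimilation update.)"""
    gain = forecast_var / (forecast_var + obs_var)  # weight on the observation
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Example: the 6 hour forecast says 285.0 K but a sonde reports 284.0 K.
# With equal error variances the analysis splits the difference (284.5 K).
x, v = analysis_step(forecast=285.0, obs=284.0, forecast_var=1.0, obs_var=1.0)
print(x, v)
```

The key point for reanalyses is that this same blending rule (and the same model) is applied at every time step over the whole historical record, rather than a rule and model that change as the forecasting system is upgraded.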

The first two reanalysis projects (NCEP1 and ERA-40) were groundbreaking and allowed a lot of analysis of the historical climate (from around 1948 and 1958 onwards, respectively) that had not been possible before. Essentially, the models are being used to interpolate between observations in a (hopefully) physically consistent manner, providing a gridded and complete data set. However, there are noted problems with this approach that need to be borne in mind.

The most important issue is that the amount and quality of the assimilated data has changed enormously over time. In the pre-satellite era (before about 1979), data is relatively sparse and reliant on networks of in-situ measurements. After 1979 the amount of data being brought in increases by orders of magnitude. It is also important to consider how even continuous measurement series have changed. For instance, the response time for sensors in radiosondes (which are used to track atmospheric profiles of temperature and humidity) has steadily improved; if not corrected for in the reanalyses, this would lead to an erroneous drying in the upper troposphere that has nothing to do with any actual climate trend. In fact it is hard to correct for such problems in data coverage and accuracy, and so trend analyses based on the reanalyses have to be treated very carefully (and sometimes avoided altogether).
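A toy calculation makes the spurious-trend point concrete. All the numbers below are invented for illustration: suppose the true humidity is constant, but older sensors read moist by a bias that shrinks to zero as the network modernises. A naive straight-line fit to the uncorrected record then shows a “drying” that is purely instrumental.

```python
# Hypothetical illustration: a decaying instrument bias masquerading as a trend.
true_humidity = 50.0  # constant truth (percent relative humidity, assumed)
years = list(range(1960, 2000))
# Assumed sensor bias: +5 in 1960, decaying linearly to 0 by 1999.
biased = [true_humidity + 5.0 * (1 - (y - 1960) / 39) for y in years]

# Ordinary least-squares slope of the biased record against time.
n = len(years)
ybar = sum(years) / n
hbar = sum(biased) / n
slope = (sum((y - ybar) * (h - hbar) for y, h in zip(years, biased))
         / sum((y - ybar) ** 2 for y in years))
print(slope)  # negative slope: a spurious "drying" of about -0.13 %/yr
```

The truth never changed, yet the fitted trend is clearly negative; this is exactly the kind of artifact that makes trend analysis in reanalyses hazardous when instrument changes are not corrected for.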

A further problem is that different outputs from the reanalyses are differently constrained by observations. Where observations are plentiful and span the variability, the reanalysis field is close to what actually happened (for instance, horizontal components of the wind), but where the output field is only indirectly related to the assimilated observations (rainfall, cloudiness etc.), the changes and variability are much more of a product of the model.

The more modern products (NCEP-2, ERA-Interim, MERRA and others) are substantially improved over the first set, and new approaches are also being tried. The ‘20th Century Reanalysis’ is a new product that uses only (plentiful) surface pressure measurements to constrain the dynamics; although it assimilates less data than other products, it can go back much earlier (to the 19th Century) and still produce meaningful results. Other new products are the ocean reanalyses (ECCO, for instance) that take the same approach with ocean temperature and salinity measurements.
