
Uncertainty, noise and the art of model-data comparison

Filed under: — gavin @ 11 January 2008 - (Español) (Chinese (simplified))

Gavin Schmidt and Stefan Rahmstorf

John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.

This becomes immediately clear when looking at the following graph:

The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8 year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
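The exercise is easy to reproduce. Here is a minimal sketch using synthetic data (the actual GISTEMP series is not reproduced here, so we assume a record with the quoted 0.19ºC/decade trend plus independent year-to-year noise) that fits a least-squares line to every 8-year window:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an annual global-mean record: a 0.19 degC/decade
# trend plus independent year-to-year "weather" noise.
years = np.arange(1975, 2008)
temps = 0.019 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

# Least-squares trend over every 8-year window, converted to degC/decade.
window = 8
trends = np.array([
    np.polyfit(years[i:i + window], temps[i:i + window], 1)[0] * 10
    for i in range(years.size - window + 1)
])

print(f"mean of 8-yr trends:    {trends.mean():.2f} degC/decade")
print(f"std dev of 8-yr trends: {trends.std():.2f} degC/decade")
```

As in the figure, the mean of the short trends recovers something near the long-term trend, but the window-to-window spread is of comparable size.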

So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.

The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance etc, but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.

If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal to noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.

An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
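The averaging effect can be sketched numerically. This toy example treats monthly station noise as independent draws with the New York City standard deviation quoted above; real monthly anomalies are not strictly independent, which is one reason the observed annual figure (~0.6ºC) differs a little from the naive value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Independent monthly "weather" noise with sd 2.5 degC, for many years.
# Averaging 12 months shrinks the sd by roughly sqrt(12).
monthly = rng.normal(0.0, 2.5, size=(100_000, 12))
annual = monthly.mean(axis=1)

print(f"monthly sd:     {monthly.std():.2f} degC")  # ~2.5
print(f"annual-mean sd: {annual.std():.2f} degC")   # ~2.5/sqrt(12), i.e. ~0.72
```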

In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.

Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defensible choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.

A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.

What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:

  • Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
  • Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
  • Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).

It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
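A toy ensemble makes the point concrete. In this sketch (synthetic numbers, not output from any actual GCM) every run shares the same forced trend but carries its own realisation of uncorrelated noise, and the ensemble mean recovers the forced signal far better than any single run:

```python
import numpy as np

rng = np.random.default_rng(2)

# 20 "runs" of 30 years: identical 0.2 degC/decade forced trend,
# independent weather noise in each run.
n_runs, n_years = 20, 30
years = np.arange(n_years)
runs = 0.02 * years + rng.normal(0.0, 0.15, size=(n_runs, n_years))

# Trend of each individual run vs trend of the ensemble mean (degC/decade).
run_trends = np.array([np.polyfit(years, r, 1)[0] * 10 for r in runs])
mean_trend = np.polyfit(years, runs.mean(axis=0), 1)[0] * 10

print(f"individual runs: {run_trends.min():.2f} to {run_trends.max():.2f} degC/decade")
print(f"ensemble-mean trend: {mean_trend:.2f} degC/decade (forced value 0.20)")
```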

Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend but also a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.

This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17 which means you’ll get a pretty good estimate. To assess whether another die is loaded it is not enough to just compare one throw of that die. For instance, if you threw a 5, that is significantly outside the expected value derived from the 100 previous throws, but it is clearly within the expected distribution.
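The numbers in the dice analogy are easy to verify by simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

faces = np.arange(1, 7)
print(f"single die: mean {faces.mean()}, sd {faces.std():.3f}")  # 3.5, ~1.708

# Distribution of the mean of 100 throws: the sd shrinks by sqrt(100),
# giving ~0.17 -- so a single throw of 5 is unremarkable even though it
# lies far outside the 3.5 +/- 0.17 range of the 100-throw mean.
means = rng.integers(1, 7, size=(10_000, 100)).mean(axis=1)
print(f"sd of 100-throw means: {means.std():.3f}")  # ~0.17
```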

Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.

If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
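A rough calculation shows why ~15 years is the crossover for global surface temperature. This sketch uses the standard least-squares formula for the trend uncertainty under white noise, with sigma = 0.1ºC as an assumed stand-in for global-mean interannual variability; real climate noise is autocorrelated, so actual uncertainties are somewhat larger:

```python
import numpy as np

# 1-sigma uncertainty of a least-squares trend through n annual points of
# white noise with sd sigma: sigma * sqrt(12 / (n * (n**2 - 1))) per year.
sigma = 0.1  # assumed interannual noise, degC
for n in (8, 15, 30):
    se = sigma * np.sqrt(12.0 / (n * (n ** 2 - 1))) * 10  # degC/decade
    print(f"{n:2d}-year trend: 1-sigma uncertainty ~ {se:.3f} degC/decade")
```

Under these assumptions the 8-year trend uncertainty (~0.15ºC/decade) is comparable to the expected signal itself, while by 15 years it has fallen to ~0.06ºC/decade, small enough for a meaningful comparison.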

Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from IPCC TAR (Rahmstorf et al, 2007), and IPCC themselves did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses show that the longer term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.

Finally, this subject appears to have been raised from the expectation that some short term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.

Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:

- UK Met Office data (instead of GISS data) with 8-year trend lines
- GISS data with 7-year trend lines (instead of 8-year).
- GISS data with 15-year trend lines

These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.

And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.


624 Responses to “Uncertainty, noise and the art of model-data comparison”

  1. 51
    Jack Roesler says:

    I’m relatively new here, but I watched PBS NOVA’s “Dimming the Sun” documentary last night, for about the third time. I find it very powerful, and scientifically accurate. Regarding modeling, are the effects of pollution, aerosols, and aircraft contrails accounted for in climate models? In the comment sections of USA Today’s articles on global warming, there are many skeptics that say NASA’s models just don’t reflect reality. If that’s right, it might have to do with the “global dimming” effect.

    Dr. Hansen says if we didn’t have that pollution in the atmosphere, global temperatures would be about 2 F. higher than they are now.

    How many here have seen that documentary, and what do you think of it?

    [Response: We discussed it when it was first broadcast in the UK and later on in the US. - gavin]

  2. 52
    Figen Mekik says:

#50. Jim Cripwell, I don’t understand your argument. Can you plot your analysis on a graph and post it on the net somewhere so we can see it?

  3. 53
    Hank Roberts says:

    Jon, have you used the “Start here” link at the top of the page?

    Or read anything at http://www.globalwarmingart.com/ yet?
    Have you read the discussion on the page there along with this, which is at least half of what you’re wishing for I think:

    http://www.globalwarmingart.com/images/7/7e/Satellite_Temperatures.png

    If you will give us some idea what you’re starting from, where your questions arise, where you’ve looked, what you believe or know so far, it will help us (most like me are just fellow readers here) point to answers.

  4. 54
    Roger Pielke. Jr. says:

    Gavin- On #40 above. You want to use scenario IS92e or IS92f, rather than IS92a that you found in Figure Ax.2 (which says something about temperature change under different assumptions of climate sensitivity for IS92a). As I explained in my blog post on this, the proper figure to use is Figure Ax.3 to determine these values.

    You write “Once you include an adjustment for the too-large forcings” — sorry but in conducting a verification you are not allowed to go back and change inputs/assumptions that were made at the time based on what you learn afterwards, that is cheating. Predicting what happened after 1990 from the perspective of 2007 is easy;-)

    [Response: The IS92 scenarios do not diverge significantly until after 2010 - so assessing model response to 2007 is independent of exactly which scenario is used. I don't see that your figure uses the data from fig Ax.3 - that figure has all models on the same trajectory until 2000 and only a small amount of divergence in the years following. That is nothing like what you have used. And finally, when it comes to projection verification, ideally you would only do it (as I did for Hansen et al 1988) if the scenarios matched up with reality - that gives a test of the model. If the scenarios are off, then the model response is also off and the model-data comparison not particularly useful. If your claim is that the IS92 scenarios were high, I'm fine with that. But don't confuse that with model verification. - gavin]

  5. 55
    Roger Pielke. Jr. says:

Gavin- Good. With this exercise I am not interested in model verification and never have claimed to be, and as I’ve stated all along my interest is in forecast verification. You can find out more about the various scenarios in that same report beginning at p. 69, Figure A3.1 for instance shows how dramatically the scenarios diverge quite early. If you spend a bit more time with it you’ll also see that my 1990 IPCC prediction matches just about exactly with that used by IPCC AR4 in their figure TS.26, so if I’m wrong, so too is IPCC. More next Monday.

    [Response: The only thing that matters in those simple box models is the net forcing, which is in fig Ax.1 - which clearly shows that the different scenarios have not significantly diverged by 2010. It's not clear to me what the FAR range in fig 1.1 of AR4 represents and it isn't clearly stated. Plus they reference it to IPCC 1990, not the 1992 supplement. I invite anyone who knows what's plotted there to let me know. - gavin]

  6. 56
    VirgilM says:

    It will be interesting to see if this temperature slowdown/fall since 1999 (depending on the dataset used) continues in the future and become statistically significant.

    It is my observation that skeptics are only doing what the other side has been doing for years. That is every record high temperature recorded in the United States is promoted in the media via climate scientists as “proof” that humans are causing global warming. In Montana, recent bad fire seasons are promoted in the media via climate scientists as “proof” that humans are causing global warming. It is refreshing to read that RealClimate has taken a stand against temperature chasing, drought chasing, and fire season chasing.

    [Response: As we always have. - gavin]

  7. 57
    Jon Pemberton says:

    Hank,

    Thank you, I looked at the links but they are not current.

    What I am exactly looking for is a comparison of all satellite and GISS temp data compared on the same graph through 2007.

    Beliefs… Warming? yes Global? not convinced, CO2 as main forcing? not convinced.

    I read RC, Open Mind, Climate Audit, Accuweather, Lubus Motl, Eli Rabbet, Anthony Watts, and Pielke sites/blogs. And probably some random others.

I also like looking at sea ice extents from the Arctic and Antarctic.

I see the “forts” with “high walls” being built between various camps; usually one piece of data is posted and then the “fur flies”. Recent example, GISS data reveals 1998 and 2007 are tied as 2nd warmest year, but RSS data differs (as noted on Lubos’s site).

    Looking for a post that discusses and shows all data on one graph. I selected 1979 as the starting point as that is when satellite data became available, correct me if my assumption on this is wrong.

Another example, Eli has a running post on Arctic sea ice extent, but not Antarctic. Antarctic melting appears to be behind last year’s rate. Maybe a long term comparison of both of these.

    I can read the data and understand it, but I am in no position to validate it. For that I have to rely on others and it is pretty tough doing so when you are between the “forts”..

    Jon P

  8. 58
    David B. Benson says:

Jon Pemberton (57) — Have you read The Discovery of Global Warming, linked in the Science section of the sidebar?

  9. 59
    Hank Roberts says:

    >forecast
    >prediction
    These are used to plan picnics and political decisions.

    >scenario
    >model
    Scenarios can be run for the deep past, recent past, and near future. The better a model’s range of outcomes matches the known real climate, the more interesting its outcomes when it’s run through into the near future.

    Because when I read Dr. Pielke write
    > that is cheating
    > my interest is in forecast verification

    That’s saying the work done with an original Cray supercomputer wasn’t so good, so redoing that work today is cheating.

    Cheating? It’s showing politicians how much better work can be done now — and that runs can better match what did happen, so they may better match what _will_ happen.

    That will scare those who want to say nobody knows enough to decide.

    Remember, the original Cray supercomputer could not have run Windows 95, it didn’t have enough memory. Your doorstop Win95 machine is more powerful than the Cray was then.

    Gavin describes how one can improve a model, and run it again starting at some past point, and if the model is better, the range of outcomes when it’s run on into the future may also be better.

    That’s not cheating, for science, it’s how it’s done.

    20 years ago Dr. Hansen was saying it’d take about til now to have an idea whether the models then were useful, because the climate signal would take that long to emerge from the noisy background.

    I’m sure he was referring to statistical and measurement noise, not to political noise. Over 20 years, the statistical noise decreases.

    Political scientists might study whether statistical noise is inverse to political noise, on issues like this.

  10. 60
    VirgilM says:

    Has anyone tried to calculate the heat content of the entire atmospheric and oceanic system? Wouldn’t that be a better metric to verify instead of surface based averages of temperatures? Wouldn’t this take ENSO and lags because of ocean storage out of the equation?

    [Response: You find this analysis in the IPCC report. The heat content change is completely dominated by the change in ocean heat content, because of the large heat capacity of water this is the only component of the climate system that stores a significant amount - see Fig. 5.4 in chapter 5. So this tells you how much ocean heat storage has delayed surface warming, i.e. what portion of the anthropogenic forcing is soaked up by the ocean rather than being balanced by radiation back into space (the latter implies a surface warming, so the portion going into the ocean does not lead to immediate surface warming). - stefan]

  11. 61
    Bob Ward says:

    RE#8 Try not to misrepresent the UK Met Office. Here is a link to its most recent media release on trends in global average temperature: http://www.metoffice.gov.uk/corporate/pressoffice/2008/pr20080103.html.

I think climate scientists in the UK have given up correcting the misrepresentations and misuses of global temperature data in the media. I can understand why – it is a seemingly endless task. Unfortunately, some newspaper editorial teams are apparently not up to the task of spotting arguments based on dodgy statistics and are publishing them, misleading millions of people in the process. I am glad that RC has not given up the task of challenging attempts to mislead the public about climate change science.

  12. 62
    Hank Roberts says:

    VirgilM — are you the VirgilM from CA? You know about Triana, right?

  13. 63
    Ken Fabos says:

    One only has to read the comments for the “Global warming has stopped” article to see how pervasive the denialist take on this issue is with the public – or at least with those who chose to comment. Even though the essential flaws in Dr Whitehouse’s opinion piece were glaring even to someone without deep knowledge of the subject such as myself, pointing them out merely ended up lost within a plethora of unsubstantiated claims of biased science, misrepresentations and repetitions of repeatedly debunked denialist myths. The attempts to inform by some commenters probably changed no-one’s mind.
Has anyone with in-depth knowledge and solid scientific arguments taken Dr Whitehouse to task? I hope some of you do – he ought to have the education and intelligence to be engaged by what real science tells us, but I wouldn’t count on it. I think too much of the media are primarily about entertainment and his career is a media career. Controversy, even when it has little sound basis, attracts readers/viewers, and Dr Whitehouse’s career in media is probably strengthened by the kind of writing in his “Has Global Warming Stopped?” article.
    I for one am pleased when this blog does take people like Dr Whitehouse to task. Letting them and their organisations get away with it uncriticised leaves the public free to believe what they say is true, i.e. ill-informed on an issue of critical importance.

  14. 64
    Hank Roberts says:

    Skip the trailing period to get Bob Ward’s link to work:
    http://www.metoffice.gov.uk/corporate/pressoffice/2008/pr20080103.html

    See also (would Dr. Pielke say this is ‘cheating’ by improving a model and then showing that it better matches what happened recently?)

    http://www.metoffice.gov.uk/corporate/pressoffice/2007/pr20070810.html

    “… Dr Doug Smith said: “Occurrences of El Nino, for example, have a significant effect on shorter-term predictions. By including such internal variability, we have shown a substantial improvement in predictions of surface temperature.” Dr Smith continues: “Observed relative cooling in the Southern Ocean and tropical Pacific over the last couple of years was correctly predicted by the new system, giving us greater confidence in the model’s performance”

  15. 65
    John Wegner says:

    The December, 2007 RSS anomaly for the lower atmosphere is -0.046C.

    The December, 1979 RSS anomaly is +0.022C

    Cherrypicking for sure, but a change in temperatures of -0.068C over 28 years should be taken into account I imagine.

  16. 66

    A decade, or less, does not a climate era make. What would we call this period – the tiny little ice age? Climate eras have lasted centuries and millennia, while a dominant forcing (GHGs) governs, which is the case at present.

    It should be a given that the more data available, the more accurately projections regarding future scenarios can be made. To reduce this to an absurdity, whoever rolled a single die and came up with a 3.5? (the expected average over a large number of throws.)
    At the Tierney Lab site on the right, just under the heading About Tierney Lab, it states “John Tierney always wanted to be a scientist, but …..”.

    That says a lot. Beware of wannabes.

  17. 67
    Charles Raguse says:

    Commentary to this point (Post # 62.) seems to indicate that my suggestion (in Post # 1) of using a running mean (average) was misinterpreted, especially since other readers employed the same terminology to quite different measures. A true “running mean” of eight values begins with the first data point and is an arithmetic average of the first 8 values. This becomes the first graph point. The next graph point simply drops the first data point and increments by one. It forms a nicely smoothed, continuous regression line. It’s a much better way to present data such as those plotted in the GISTEMP index, and, I believe, is a better portrayal of reality than the “pick-up-sticks” jumble of “8-year trends”.
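    The running mean Charles describes can be sketched in a few lines (a hypothetical helper for illustration, not code from the GISTEMP analysis):

```python
import numpy as np

def running_mean(x, window=8):
    """Running mean as described above: the first output point averages
    values 1..window, then the window slides forward one point at a time."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")

print(running_mean([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # [4.5 5.5 6.5]
```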

  18. 68
    Hank Roberts says:

    Jon Pemberton –

    Seems to me you’re asking others to do an impossible amount of work for you — to get and chart for you exactly what you want, all of what you want, and nothing but what you want. It’s certainly doable. But the person behind you in line will have slightly different demands.

    You can take the chart I pointed you to, look up the next year’s numbers and chart them.

    You can charm the person who maintains the site with intelligent questions and suggestions and perhaps get the charts updated a bit sooner than they’d otherwise find time to do it. I’ve _often_ found that to be true. Sometimes I find I’m the only person all year to thank a scientist for making the time to put such charts up online; they may get lots of pointers and lots of mentions but they still perk up when someone emails a simple thank-you along with a question about how to find out a bit more, say, to extend such a chart.

    Bystanders like you and I could keep a whole lot of experts very busy, if they tried to respond to all such requests (and many people do start off by insisting that they know exactly what they need to find, to get their certain and final understanding of what puzzles them, or irrefutable proof of some claim they heard, or the like).

    That’s why people refer you to the data sources and programs that let you do your own charting. If you understand the statistics you can do your own error ranges. If you don’t, charts won’t help much.

    My suggestion is to stay away from anything that looks like a “fort” to you, and hang around in the trenches with the people who are doing the actual digging, or watch them as I and so many others do.

    The better you inform yourself, first, the better questions you can ask, and I tell you, I feel really happy if I manage to get a “Good question!” response from one of the climate scientists here once a year myself. They’re making a gift of their time to the rest of us.

    Last thought, I’ve heard this as ‘Rule One of Database Management’ — one data set, many pointers. One reason very few people make the kind of effort you find at globalwarmingart is that there is a vast collection of data sets, some easier to find than others, some very significantly revised when errors are found in work later on.

    People who make copies and then write based on their copies may be doing so based on outdated information. Look for people who give you references rather than put answers together for you — the references will lead you forward in time to better info.

  19. 69
    Walt Bennett says:

    Re: #48

    Hank,

    You misunderstood my post. Also, did you see my followup, with links to the references I made?

    My point was not: “Hey, isn’t it statistically significant that Hadley shows two consecutive years of cooling?” My point was: “Is Hadley right?”

    NASA-GISS seems to think it has kept on warming.

    I’m just trying to understand the Hadley data at face value.

  20. 70
    Hank Roberts says:

    Oh, Jon, you mentioned Eli’s notes on sea ice and said you wished he had something. Did you click his link? The comparison of the poles that you wished for is at the source Eli gives. The charts there that are pulled automagically from the databases are working now; the hand-edited one will be updated in a week or two to fill out the 2007 year, I just asked (nicely) about that yesterday myself (grin).

  21. 71
    Hank Roberts says:

    Charles, a moving average gives you a different look than a trend line.

    There’s an excellent treatment of this here, in a rather famous website: http://www.fourmilab.ch/hackdiet/e4/signalnoise.html

    “… Like most attempts to characterise a complicated system by a single number, a scale throws away a great deal of the subtlety. … The scale responds with a number that means something or other. If only we knew what…. Over time, certainly, the scale will measure the cumulative effect of too much or too little food. But from day to day, the scale gives results that seem contradictory and confusing. We must seek the meaning hidden among the numbers and learn to sift the wisdom from the weight.”

    “… The right way to think about a trend chart is to keep it simple. The trend line can do one of three things:
    * Go up
    * Go down
    * Stay about the same
    That’s it. The moving average guarantees the trend line you plot will obviously behave in one of these ways; the short term fluctuations are averaged out and have little impact on the trend.”

    He addresses the reason that the moving average you ask for may not give a clear picture, on the same page:

    http://www.fourmilab.ch/hackdiet/e4/figures/figure737.png

    “… The familiar moving average trend line is drawn as before. The dashed line is the best fit to all 90 days …. But, obviously, it misses the point. … Short term straight line trend lines … provide accurate estimates ….

    Excel workbooks are provided at the site to try out the different methods of charting. See also everything else, wonderful site.

  22. 72
    PaulM says:

    It is now raining in mid-winter where before it would be snow. I need no other proof, or to be told weather is different than climate. Again, in mid-winter, it is now rain, and if it does snow, it is a wet snow and melts a few days later. Before, there would be snow on the ground from Thanksgiving until Easter; now, it is raining in January. I am old enough to know the pattern has changed by what I have experienced and what I am experiencing now.

  23. 73
    Joseph O'Sullivan says:

    Re: #31 my comment and #34 Roger Pielke Jr’s reply:

    My tone was unduly harsh. Roger Pielke Jr does like to provoke discussion. To do this he will make controversial statements on his blog. Its common in some academic circles, but its likely to be misunderstood in a public forum like a blog. I did not like see my comment used in a way I thought was inappropriate.

    The misquote did occur, and I submitted comments, several of which were not admitted. After the posts moved on, one of my milder comments made it through. After that I had all but one of my comments admitted on other posts.

    I do not think that Roger Pielke Jr lacks respect for Dr Curry, but I do think he was trying to stir things up.

    This is the post:
    http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/000904hurricanes_and_globa.html

  24. 74
    Andre Narichi says:

    I am not convinced by much of this.

    You take a data set of 30 years and say that the initial warming observed over less than 20 years is a long-term trend and can be relied upon, but you say the past 7 years of statistically indistinguishable temperature data are a short-term climatic fluctuation and can be ignored (note this short-term ‘fluctuation’ isn’t fluctuating).

    There are no error bars in the graph – put them in and you can draw your ‘trend lines’ with much more latitude. And because the recent observed stasis is 7 years, you, ignoring errors, chose an 8-year grouping, which is bound to drag the stasis back towards the rising section of the graph because you are giving it less weight than the data in the centre of the graph.

    YOU are the denialists – denying data. Finding ways to prove it isn’t what it is and making it conform to your worldview.

  25. 75
    Walt Bennett says:

    I made a chart from the updated NASA-GISS anomaly data and I think it came out fairly well:

    http://bp1.blogger.com/_hb0jssUZaPY/R4hNSaRKHzI/AAAAAAAAABM/Gabp4uz77ag/s1600-h/anom.gif

    I plot gross annual anomalies (a score of 1200 would be a mean anomaly of +1°C per month) along with 5-, 10- and 30-year running means.

    I’d appreciate any feedback as to method and conclusions.
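    As to method, the "gross annual anomaly" score and the running means can be sketched as below; the helper names and sample inputs are my own, not taken from the linked chart:

```python
def gross_annual_anomaly(monthly_hundredths):
    """Sum of 12 monthly anomalies given in hundredths of a degree C:
    a score of 1200 corresponds to a mean anomaly of +1 C per month."""
    assert len(monthly_hundredths) == 12
    return sum(monthly_hundredths)

def running_mean(series, window):
    """Mean over each consecutive `window`-year span of annual scores."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Six invented years, each with a uniform +1 C monthly anomaly
scores = [gross_annual_anomaly([100] * 12)] * 6
print(running_mean(scores, 5))  # [1200.0, 1200.0]
```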

  26. 76
    Bob North says:

    First off, let me qualify my post by saying that I come here as a believer, a questioner, and a skeptic. By that I mean I am a believer in that I find the evidence for long-term global warming, since at least the 1880s, overwhelming and apparently irrefutable. Secondly, basic physics/thermodynamics dictates that increasing concentrations of greenhouse gases will have an effect on the overall climate. However, I am a questioner in that I am not as fully convinced of the magnitude of the impact of anthropogenic versus non-anthropogenic causes of the warming trend, though I readily concede that the current warming trend is at least in part, if not substantially, due to anthropogenic releases of CO2 and other GHGs. Finally, I am a skeptic, with some experience in much simpler modeling (mostly groundwater contaminant fate & transport), in that I have much less confidence in our current ability to accurately model future climate than is often portrayed in the popular press or even on websites such as this. In other words, I believe we can say with near certitude that if global temperatures continue to rise, there will be a rise in sea level due to the melting of glaciers and thermal expansion of the oceans, but projections of drought, hyper-intensive storms, mass extinctions, and other calamities are somewhat less certain.
    As an educated layman, my take on this is Much Ado About Nothing. I have read and re-read Tierney’s and Pielke’s posts, and what I get out of both of them is that they are saying you can read whatever you want into the recent (2001-2006) global temperature estimates. They are very clear in stating that the recent global temperatures neither prove nor disprove the overall AGW model. What Pielke did say is that the most recent numbers will provide “cherry-pickers” with ammunition to quibble about this, that, or the other thing. Tierney correctly noted that there is a wide range of variance in the estimated global temperature anomalies and that where you fall on the sociological spectrum (denier, questioner/skeptic, advocate, disciple) will help dictate which estimate you rely on most. What both Tierney and Pielke seem to be asking for is continued and further refinement of the models as we gather additional climate data. In other words, don’t become defensive and just say short-term perturbations don’t affect the validity of the “MODEL”; continue to try to make the model account for the short-term perturbations. The models are nothing more than our attempts to account for all the variables that do drive climate change.

    Bob North

  27. 77
    Thomas says:

    Gavin, a man with the patience of a saint. My question has to do with how well behaved the pseudo-climate system, as defined by GCM runs, is. I work in FEA engineering, and it is quite common for such systems to contain bifurcations, whereby a large collection of runs will demonstrate two (or more) general solutions, superimposed of course with short-term noise. Have any of the climate models shown such behavior? I.e., do you see situations where some fraction of the runs (with the same parameters and forcings, but perturbed initial conditions) show more than a single solution trend?

  28. 78
    John Mashey says:

    Another way to see that data is to take the GISTEMP data, compute 8-year regressions (via SLOPE), and put that series into a scatter plot, which gives one line that graphs the slopes of the blue lines. The only times the slopes go below zero are those around the volcanoes.
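    That procedure (a least-squares slope over each consecutive 8-year span, as with Excel’s SLOPE, then plotting the series of slopes) can be sketched as follows; the years and temperatures below are an invented noise-free linear series, not actual GISTEMP data:

```python
import numpy as np

def rolling_slopes(years, temps, window=8):
    """Least-squares slope (deg C per year) over each consecutive
    `window`-year span, like applying Excel's SLOPE to a sliding range."""
    slopes = []
    for i in range(len(years) - window + 1):
        x = np.asarray(years[i:i + window], dtype=float)
        y = np.asarray(temps[i:i + window], dtype=float)
        slope, _intercept = np.polyfit(x, y, 1)  # degree-1 (linear) fit
        slopes.append(slope)
    return slopes

# Invented series: a pure 0.02 C/yr warming trend, no weather noise
years = list(range(1990, 2008))
temps = [0.02 * (y - 1990) for y in years]
print(rolling_slopes(years, temps)[:3])  # each slope is ~0.02
```

    With real data the same code would show the slopes scattering around the long-term trend, dipping below zero near the volcanic years.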

  29. 79
    cce says:

    Re: 57

    Here is the 12 month moving average for NASA GISS, Hadley/CRU, UAH, and RSS temperature analyses from January 1979 to October 2007. NASA GISS and Hadley/CRU are land-ocean instrument data, while UAH and RSS are lower troposphere satellite data.

    http://cce.890m.com/temp-compare.jpg

    Here is the raw data for each analysis, including the linear fit. With both of the instrumental analyses, the slope is 0.17 degrees per decade. In the UAH satellite analysis, it is 0.14 degrees per decade. In the RSS satellite analysis, it is 0.18 degrees per decade.

    In all cases, the anomalies are adjusted up or down so as to give the linear regression for each analysis a y intercept of zero.

    http://cce.890m.com/giss.jpg
    http://cce.890m.com/hadcrutv.jpg
    http://cce.890m.com/uah.jpg
    http://cce.890m.com/rss.jpg

    In short, the anomalies are where you’d expect them to be, given the warming signal, plus natural variability and the fact that each analysis uses different methods.
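    The baseline adjustment described above (shifting each series so that its own linear fit has a y-intercept of zero) can be sketched as follows; the sample series is invented for illustration:

```python
import numpy as np

def zero_intercept_adjust(t, anomalies):
    """Shift a series so the y-intercept of its linear fit is zero,
    putting differently-baselined analyses on a common footing."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    _slope, intercept = np.polyfit(t, y, 1)  # linear fit
    return y - intercept  # fitted line now passes through (0, 0)

# Invented series: a 0.017 C/yr trend offset by +0.25 from zero intercept
t = np.arange(0.0, 10.0)
y = 0.017 * t + 0.25
adjusted = zero_intercept_adjust(t, y)
slope, intercept = np.polyfit(t, adjusted, 1)
print(round(intercept, 6))  # ~0.0: offset removed, slope unchanged
```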

  30. 80

    Looks like we’re about to have a big volcanic eruption in Ecuador. So temperature will go down by 0.2 K for a couple of years and we’ll have to put up with two more years of deniers saying, “See, global warming stopped!”

  31. 81
    Jim Cripwell says:

    Ref 52. If you want to see the sort of thing I am talking about, I am afraid you need to go to Yahoo Climate Skeptics and download the graphs I uploaded under the title “Rctner”. I fully realize that there are a pseudo-infinite number of such graphs, and these three are merely examples.

  32. 82
    Daniel C. Goodwin says:

    Thanks much for the crystal clear diagram and discussion. Contrary to some readers who are growing weary of it, I’m delighted to see RC giving the other side “enough rope” like this.

    Your main point (which you have now repeated 60 zillion times) is irrefutable, as is your observation that the exercise in question violates the simple principle that LIKE SHOULD BE COMPARED WITH LIKE. (You haven’t tried shouting yet, I suppose, but I doubt that would prove any more effective.) In the face of an argument which could hardly be framed more crisply, your dissenter has no more interesting stratagem than feigning deafness. Like a World Wrestling Federation spectacle, this classic thread has been a grossly unfair fight, and a lot of fun!

  33. 83
    Gautam Kalghatgi says:

    I understand that between the 1940s and 1970s, global mean temperature did not change much, and this has been ascribed to increased particulates arising from industrialisation. So if you take a 30-year average between say 1945 and 1975 and then again between 1975 and 2005, the latter average should be significantly higher. What is the proper explanation for this? What changed in the 70s? Surely particulate emissions on a world scale did not decrease in the 1970s, though they might have in some industrialised countries? What about the increasing use of coal by China and India, say, and the consequent increase in particulates in the recent past? Would this not be expected to have a dimming effect and a reduction in global temperatures?

    [Response: Not all aerosols are the same. Some, such as black carbon released by coal burning, actually have a surface warming impact. Sulphate aerosols and secondarily nitrate aerosols, which do have a surface cooling impact, increased substantially in burden from the 1940s through the 1970s, decreasing markedly with the passage of the Clean Air Acts of the 1970s and 1980s. The various issues you raise have been discussed many times before here and in links provided. Start here, here, and here. -mike]

  34. 84
    lgl says:

    “These comparisons are flawed since they basically compare long term climate change to short term weather variability”

    What is short term weather variability?

    Isn’t it obvious that the warming had to stop after a 5 W/m2 drop in the energy input to the climate system in 2002?

    http://isccp.giss.nasa.gov/zFD/an9090_TOTnet_toa.gif

    It should be equally obvious that the 90s had to get much warmer than the 80s because of a much higher energy input at TOA.

    But the explanation is that there was one type of quite stable weather between 1994 and 2000, and a totally different type of weather between 2002 and 2005 (and probably longer), capable of reducing the radiation by 5 W/m2?
    What is this assumption based on?

    [Response: The ISCCP data is great, but can't be relied on for trends due to difficulties in tying different satellite records together. The implication that albedo has suddenly increased to Pinatubo levels, without anyone else noticing and without a rapid decrease in temperatures, is .... surprising, to say the least. - gavin]

  35. 85
    Ray Ladbury says:

    cce says in #79: “In short, the anomalies are where you’d expect them to be, given the warming signal, plus natural variability and the fact that each analysis uses different methods.”

    Of course, the denialists will say it’s clear the warming stopped in 2000…and in 1998 and in 1995 and in 1991 and in 1987…” At least Pielke and Douglass et al. are sufficiently sophisticated to realize that the only way to attack the anthropogenic hypothesis is to simply deny that warming is occurring. Unfortunately, since all the science and the evidence support a warming trend, they can’t get beyond misusing statistics and saying “No it isn’t.”

  36. 86
    John Lederer says:

    A couple of small comments:

    1. Pielke is right that so long as the four major sources of “global” temperature disagree what one sees is largely a matter of which record one looks at.

    This is quite troubling, since the members of each pair (the two surface records, and the two satellite records) essentially have the same raw data. The differences between members of the pairs are thus differences in after-the-fact “adjustments”. Surely those can be ironed out and agreement reached on the “better” methods?

    The increased divergence of GISStemp and HadleyCRU, and between UAH and RSS, suggests that rather than being reconciled the differences are growing.

    Very troublesome.

    [Response: Not really. The trends for the most part are similar given the underlying uncertainty, and there are defendable reasons for the various choices made in the different analyses. Reasonable people can disagree on what is 'best' in these cases, and so rather than being troubled, one should simply acknowledge that there are some systematic uncertainties in any large scale climate metric. That uncertainty feeds into assessments of attribution and the like, but is small enough not to be decisive. Of course, assessing the reason for any divergence is important and can lead to the discovery of errors (such as with the UAH MSU record a couple of years ago), but mostly it is due to the different choices made - gavin]

    2. For assessing “global climate change” the absolute trend lines are apt. However, for assessing man caused global warming the “neutral” trend line would not be zero. Since the Little Ice Age we have naturally warmed. Should not this be taken into account? In other words a trend of some figure, say .6 C per century, should be regarded as “neutral” for purpose of assessing a man caused trend.

    [Response: There is no 'neutral trend' that can simply be regarded as 'natural' - volcanic and solar forcing changes are not any different to GHG forcing when it comes to attribution and modeling. If instead you propose that the 20th Century trends are simply some long term intrinsic change then you need to show what is going on. No such theory exists. The trends can however be explained quite adequately from the time-series of forcings (including natural and human caused effects). - gavin]

  37. 87
    Jim Cripwell says:

    In 68 Hank writes “Jon Pemberton Seems to me you’re asking others to do an impossible amount of work for you — to get and chart for you exactly what you want, all of what you want, and nothing but what you want. It’s certainly doable. But the person behind you in line will have slightly different demands.”
    I think you are being a little unkind to Jon. On my computer I have complete sets of temperature data from NASA/GISS, HAD/CRU and RSS/MSU. If anyone can direct me to a site which has similar data for NCDC/NOAA I would be grateful. I have done some simple trend analysis on all four sets of data, and I am convinced that the HAD/CRU, NCDC/NOAA and RSS/MSU sets are highly correlated and give very similar results. The NASA/GISS data set is different. Notice I say “different”; I have no idea which set is closest to the truth. I have searched, written emails, etc., but I cannot find any study which has compared and contrasted the different data sets, so I have no idea which is “best”. Gavin has, however, used the NASA/GISS set, and seems to claim that any analysis using the other data sets would give the same result. My simple-minded analyses indicate this may not be true. So I think it is perfectly legitimate to ask that Gavin either show that the NASA/GISS data set is the best one available, or show that using the other three data sets produces the same answer.

  38. 88
    PeterK says:

    I do not agree. Dr. Pielke and others have a point when comparing data and model output; at least they have a strong point in the public discussion. One can clearly argue that tackling climate change would be easier if recent years had shown significant warming. Correct, this does not invalidate the models, and the time series is too short and the error margins too big to mount a scientific argument against them; however, it is disturbing. The sceptics, on the other hand, have to come up with their own models and theories and support them with data to have a fair scientific battlefield. Criticism is easy. As long as there is nothing better out there, we had better stick with what we have.

  39. 89
    JCH says:

    In 1980 I became the national training director for a carburetor company. I would travel around America training mechanics on how to fine tune using a chassis dynamometer and an exhaust gas analyzer. As part of the demonstration I would disable the then mistrusted emissions systems so mechanics could see that the systems were in fact eliminating large quantities of CO, NOX, and hydrocarbons from the exhaust gas – especially when simulating climbing hills at high speeds (needles pegged). When I would enable the emissions systems the exhaust gas, even under very heavy loads, would be mostly CO2 and H2O vapor – plant food is what I told them.

    So the change the emissions systems made in dramatically reducing aerosols makes perfect sense to me.

    In the mid 1980s there was a widespread resurgence in the use of wood stoves for home heating. Lots of towns, even small ones, had a store that specialized in selling them. Did those make their presence known?

  40. 90
    lgl says:

    #84
    Gavin,

    The ISCCP and ERBS data are in very good agreement, are they also wrong?
    http://lasp.colorado.edu/sorce/news/2006ScienceMeeting/presentations/Day01_Wed/S2_01_Loeb.pdf page 21

    “without anyone else noticing” ?
    So it’s not true that the ocean heat content peaked in 2003 either, after a huge increase since 1994?
    http://www.iges.org/c20c/workshops/200703/ppt/Ingleby.ppt page 29

    [edit]

    [Response: I don't see any support in Loeb's data for any significant shift in TOA SW absorption in recent years. And on slide 18, he estimates that it would take 15-20 years to be able to detect such a shift, and slide 19 shows no big changes in either ISCCP or CERES. As you should know, ocean heat content data post-2003 are the subject of a lot of investigation right now because of the shifts to the ARGO network and various problems that there have been. Whether OHC peaked in 2003 is therefore very uncertain, but since SST records are rising less rapidly than SAT records, there is likely an increasing offset in air-sea temperatures, which implies that the oceans are still warming (and that OHC is increasing). As in the rest of this post, there is short term variability in all these metrics, and only significant long term trends count. - gavin]

  41. 91
    Thomas says:

    re 83: I started thinking: what if aerosol forcing is now growing fast enough to counterbalance the GHG forcing from increasing CO2? Then GW would be temporarily stopped. We know CO2 emissions have increased rapidly in the past few years, principally due to rapid growth in developing economies. These economies tend to burn coal dirtily. A second factor is that depletion of higher-grade coal is forcing more and more consumption of lower-grade product. Could these twin trends mean that the aerosol load might be rapidly increasing? Perhaps fast enough to counteract GHG forcing? A global temperature metric wouldn’t be the best way to detect such a change. Perhaps some other globally measured metrics could shed some light on this question?

    If this is indeed happening, it could make the job of obtaining consensus for mitigation more difficult.

  42. 92
    Hank Roberts says:

    Jim Cripwell, glad you’re volunteering, but I suggest you post pointers to the source data (and note the date when you downloaded the copies you’re using), and describe what you’re doing to compare them. It’s too easy to get the wrong file or outdated copies, elsewise.

  43. 93
    Hank Roberts says:

    Jim Cripwell, was it NASA or NOAA data that you’re looking for?
    NOAA is here:
    http://www.ncdc.noaa.gov/oa/climate/climateinventories.html

    “digital holdings … contain almost 300 terabytes of information, and grow almost 80 terabytes each year”

  44. 94
    Paul Klemencic says:

    I am a newbie here, and would like to pose a question that may seem a bit stupid to some.

    I read the statistical work at this site, and it claims that a standard deviation for the annual average temperature series is 0.1 deg C.
    http://tamino.wordpress.com/2007/12/16/wiggles/

    The data indicate that the standard deviation of the yearly data since 1975 is 0.1 deg C. Any data within +/- 0.2 deg C (two standard deviations) would be in the 95% confidence interval. Instead, Dr. Pielke has drawn dashed lines spanning only one fourth of that interval on his chart.

    Isn’t this standard deviation calculated from the temperature data alone, not from the IPCC models? So wouldn’t the temperature noise variation be used for the confidence intervals in a forecast verification (not model verification, as Dr. Pielke points out)? Wouldn’t any forecast have the same confidence intervals applied to it, regardless of how the forecast was arrived at?

    Is it possible the confidence levels shown on the Pielke chart are based on a plot of some kind of longer-term average, and have been overlaid on annual temperature data? Somehow the large scatter in the data, compared to the confidence intervals, seems inconsistent.

    (Disclosure… I am just a chemical engineer, who worked for an oil company once upon a time, and have no experience in climate studies.)

    [Response: The error bars on the forecast for the IPCC models are the uncertainty on the long term trend, not the envelope or distribution for annual anomalies. I think that this is misleading when comparing to real annual data, and is at the heart of my statements above. - gavin]
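    The distinction in the response (the small uncertainty of a fitted long-term trend versus the much wider spread of individual annual anomalies) can be illustrated with a quick synthetic calculation; the trend and noise values below are invented, with the annual noise set near the 0.1 deg C quoted above:

```python
import random
import statistics

random.seed(0)

# Synthetic 30-year record: 0.019 C/yr trend plus 0.1 C annual weather noise
years = list(range(30))
true_trend = 0.019
noise_sd = 0.1
temps = [true_trend * t + random.gauss(0.0, noise_sd) for t in years]

# Envelope for single annual anomalies: roughly +/- 2 standard deviations
residuals = [temps[i] - true_trend * years[i] for i in range(len(years))]
resid_sd = statistics.stdev(residuals)
annual_envelope = 2 * resid_sd  # ~0.2 C spread for individual years

# Standard error of the fitted 30-year trend: much smaller than that envelope
mean_t = statistics.mean(years)
sxx = sum((t - mean_t) ** 2 for t in years)
trend_se = resid_sd / sxx ** 0.5  # std error of the slope, C/yr

print(annual_envelope, trend_se * 10)  # annual spread vs decadal trend error
```

    Over 30 years, the trend's standard error (per decade) comes out roughly an order of magnitude smaller than the +/- 2 sigma envelope for a single year, which is why error bars on the trend cannot be used as an envelope for annual data.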

  45. 95
    Bryan S says:

    Now Gavin and Stefan, the change in upper ocean heat content is a really interesting question, and relates directly to the subject at hand. As you are aware, Roger Pielke Sr. (and others) have been pointing out the need to assess upper ocean heat content changes. Although there have been some well-publicized problems with changing from XBTs to the Argo floats, the error bars in assessing global changes in ocean heat content are decreasing dramatically. The conclusion that the upper ocean has warmed over the last 40 years is certainly robust, but the shorter-term changes are also interesting, since they are a direct proxy for the current radiative imbalance at the top of the atmosphere. After the Willis and Lyman papers showing short-term cooling, and then their correction, we are still left with the more robust conclusion that upper ocean heat content has been essentially flat the last few years. This is really more informative than a two-dimensional surface temperature analysis, since ocean heat content goes directly to the question of unrealized “heating in the pipeline”, and also to what the current sum of all the forcings and feedbacks adds up to, and how this sum changes on annual and multidecadal scales. What is also interesting is how well the SST changes map onto upper ocean heat content, and how quickly these SST changes seem to be realized in the atmospheric volume (i.e. El Niño). It has been pointed out that the so-called “time constant” to equilibrate to a change in forcing via ocean mixing processes has a direct bearing on climate sensitivity to changes in a forcing.

  46. 96
    Jim Cripwell says:

    Ref 92 and 93. Each month I download the following sites.
    http://lwf.ncdc.noaa.gov/oa/climate/research/2007/dec/global.html#Temp

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt

    ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_0.txt

    The first, ncdc/noaa site only gives data for that month. I have searched the site you quoted, but cannot find anything similar to the other three sites. If you can find it, I would be grateful. As I have noted, I downloaded the shareware CurveExpert 1.3, and I simply plug in different time scales for all four data sets, call for a specific type of analysis, and look at the results. I do this for my own education; I am sure you don’t want to read any non-peer-reviewed results.

  47. 97
    Jim Cripwell says:

    In 86 Gavin writes “That uncertainty feeds into assessments of attribution and the like, but is small enough not to be decisive.” I wish I shared your optimism. I think the uncertainty is large enough to be decisive, but have nothing to offer in the way of a reference, just the results of my playing around with simple analyses. On what do you base your opinion?

    [Response: The fact that the model match to obs doesn't depend on whether you use GISTEMP, NOAA or HADCRU. - gavin]

  48. 98
    Paul Klemencic says:

    Thanks for the quick response, Gavin. Couldn’t Dr. Pielke fix his chart by simply drawing 95% confidence interval lines 0.2 deg above and below the IPCC forecast, and then comparing them with the annual anomaly data? That interval would reflect where the annual data should fall.

    The forecast looks pretty good, if that is the expected range of variation in the actual temperature anomaly data.

    I realize this may not be entirely correct, because the standard deviation probably wasn’t calculated from just one of the temperature measurement systems, but then the chart wouldn’t be as misleading as it currently is.

    [Response: I would suggest asking him. - gavin]

  49. 99
    lgl says:

    #90

    The inconsistency between Loeb’s slide 19 and ISCCP’s own web page is strange, I agree. Which one should be the more reliable?

    I find it hard to believe that we were able to send a man to the moon 40 years ago, but now we are unable to measure the temperature of the oceans. There must be some system still in operation that was also in operation in the 90s; replacing all the floats without an overlap would be too stupid.
    So why would that system be correct pre-2003 and wrong post-2003?

  50. 100

    #42 Barton,

    ” 2001-2007] except that the data indicates no change in temperature over that period”

    And Superman might sell Mr Desjardins his fortress of solitude at the North Pole for a good price!

    I don’t think it’s possible, just yet, to have a single trend based on a couple of years without some deviation from the overall long-term path, because the resolution of the measuring techniques is not at the quantum level. In addition, here is what I don’t see: world-wide maxima and minima anomalies, say from sea level up to Bogota’s height above sea level.

    In the Arctic, just the other day, the temperature at the surface was about -27 C (no, that’s warm; it should be -35 C), yet aloft at about 900 meters it was -15.4 C. If this station were 900 meters high, the surface record would be different. These thermal heat sources aloft may be found anywhere in the world, but are particularly strong in the Arctic. If we pick one single upper-air level, we will miss a cooling or a warming above or below it. The idea that adiabatic lapse rates are constant, and that we can therefore pick a single representative height, will equally mislead.
    But if we search for a GW trend, we should find that every year the maxima (1 to 3000 m ASL) are pushing upwards relentlessly. Then again, I don’t think satellites can find profile maxima, and the world-wide radiosonde network is too sparsely located, although its data is state of the art.

    Lacking resolution, we can rely on other measurements: very long-term temperature trends, polar ice melt, world-wide glacier retreat, deep-sea temperatures for the most part, and other benchmark measurements, until the temperature resolution deficiencies have been eliminated, either by increasing present network densities or by finding a different way to measure the weighted temperature of the entire atmosphere, as we do for other planets. As a complement to present techniques, I suggest using the sun as a fixed disk of reference; it works!

