RealClimate

Comments


  1. Thank you for the review.

    Given the strength of the Hurst coefficients – something we all agree on – is it not possible that a large portion of the current warming trend is a product of internal climate variability, as mediated by complex dynamics of ocean circulation? I want to understand better how it is you decide that GHGs are responsible for a deterministic forced trend when you have this powerful but poorly understood stochastic noise process operating in the background. How do you estimate the precision of your forcings when the background noise is so poorly understood?

    How well do the GCMs perform at generating suitably high Hurst coefficients?

    Comment by Richard Sycamore — 10 Aug 2008 @ 12:33 PM

  2. Estimating the Hurst parameter from observed data is very tricky business. Clegg concludes that

    The most striking conclusion of this paper is that measuring the Hurst parameter, even in artificial data, is very hit and miss. In the artificial data with no corrupting noise, some estimators performed very poorly indeed. Confidence intervals given should certainly not be taken at face value (indeed should be considered as next to worthless).

    Corrupting noise can affect the measurements badly and different estimators are affected by different types of noise. In particular, frequency domain estimators (as might be expected) are robust to the addition of sinusoidal noise or a trend. All estimators had problems in some circumstances with the addition of a heavy degree of short-range dependence even though this, in theory, does not change the long-range dependence of the time series.

    When considering real data, researchers are advised to use extreme caution. A researcher relying on the results of any single estimator for the Hurst parameter is likely to be drawing false conclusions, no matter how sound the theoretical backing for the estimator in question. While simple filtering techniques are suggested in the literature for improving the performance of Hurst parameter estimation, they had little or no effect on the data analysed in this paper.

    Essentially, Clegg finds that it’s hard to estimate the Hurst parameter even for artificial time series where the answer is known ahead of time, even when working with 100,000 data points.

    And if the estimates are made from monthly data, then a century of observations gives us only 1200 data points. For accurately pinning down the Hurst parameter, that can only be called a pittance. Even using daily data, a century only gives 36,525 data points, which frankly is not a lot. If the process is anything other than pure LRD (e.g. corrupted with AR1 or other noise), then an already difficult task is made immensely more so. And if the time series is not stationary, then if the detrending isn’t just right — well, all bets are off.

    So even though I haven’t looked at Kiraly et al. (2006) or Fraedrich and Blender (2003), let alone given them careful study, I have to admit I’m skeptical. They would have to have been extraordinarily thorough and extraordinarily careful in order to get meaningful results. This is not to say they weren’t — just that extreme caution is in order. As for Koutsoyiannis et al. (2008), what you’ve told us of their research convinces me that it’s not worth careful study.

    It is worth noting that LRD has less impact on trend analysis than it does on, say, estimating the series mean. This is due to the impact of autocorrelation on trend analysis (which was one of the subjects of my post). Increasing the Hurst parameter always makes averages more uncertain, but beyond a certain point it actually makes trends less uncertain.
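
    A quick way to see how hit and miss this is, even in the easiest possible case: here is a minimal sketch (Python with numpy assumed; the estimator choice, block sizes, and seed are illustrative, not taken from Clegg) applying the classical aggregated-variance estimator to plain white noise, for which H = 0.5 exactly, at the 1200-point sample size mentioned above.

    ```python
    import numpy as np

    def hurst_aggvar(x, block_sizes=(10, 20, 50, 100, 200)):
        """Aggregated-variance estimate of H: Var(block means) ~ m^(2H-2)."""
        logm, logv = [], []
        for m in block_sizes:
            nblocks = len(x) // m
            means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
            logm.append(np.log(m))
            logv.append(np.log(means.var()))
        slope, _ = np.polyfit(logm, logv, 1)  # slope = 2H - 2
        return 1.0 + slope / 2.0

    rng = np.random.default_rng(0)
    # white noise has H = 0.5 by construction; watch the estimates scatter
    estimates = [hurst_aggvar(rng.standard_normal(1200)) for _ in range(20)]
    print(f"true H = 0.5; estimates range {min(estimates):.2f} to {max(estimates):.2f}")
    ```

    Even with the true answer known and no corrupting noise at all, the spread across realizations is substantial, which is Clegg’s point in miniature.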

    Comment by tamino — 10 Aug 2008 @ 12:48 PM

  3. So, you’re saying that places with a high Hurst coefficient are more likely to have lasting anomalies – e.g. multi-year droughts or multi-year seasonal flooding? While places with a low Hurst coefficient tend to regress to their “normal” weather patterns faster?

    And you’re pointing out that it’s a reasonable test of a model to ask if the spatial distribution of Hurst coefficients it predicts resembles observed values to date, if the observed data set is sufficient. If I’ve understood that right, I’d agree that it’s a clever and useful test of the models.

    You’re *not* saying that the models are good enough to predict changes in the spatial distribution of Hurst coefficients, right? I suppose a good model could do so, but wouldn’t have to because it would be possible to have all sorts of significant AGW effects without any change to Hurst coefficients over time. Is there any suspicion that Hurst coefficients will change or are changing? From the nature of the definition, I imagine you’d need a really long, detailed data set to say.

    Comment by Greg Wellman — 10 Aug 2008 @ 1:00 PM

  4. Gavin, you give a couple of parenthetical references, but don’t give the full citation at the end. Is this an oversight or an exercise for the reader?

    Comment by S. Molnar — 10 Aug 2008 @ 5:40 PM

  5. What is perfectly clear is that the average bloke with an IQ of 100 and a high school education is 100% lost in this discussion. They have no way of understanding any part of it, no idea who is telling the truth and no time or money to go to school. To rephrase the song: “Only their extinction will tell.” If we actually deserved the name “Homo Sapiens,” they would know who to trust, at least.
    It’s kind of a sinking feeling:
    I have to get off of this planet. I have to get off of this planet. I have to get off of this planet.

    Comment by Edward Greisch — 10 Aug 2008 @ 7:09 PM

  6. RE: #1
    I think that Richard Sycamore has asked some key questions – “is it not possible that a large portion of the current warming trend is a product of internal climate variability” and “how it is you decide that GHGs are responsible for a deterministic forced trend”. I suspect these are the questions that would be at the root of any doubts about the validity of climate models. It would be very helpful if Gavin or someone else could have a serious go at answering them.

    Comment by Svet — 10 Aug 2008 @ 7:15 PM

  7. Re: #1 (Richard Sycamore)

    Neither Kiraly et al. (2006) nor Fraedrich & Blender (2003) establish LRD in temperature time series. They find it in the fluctuations of daily temperature. Furthermore, the methods they use remove the long-term trend from the data, so the temperature trend is already gone by the time they find LRD in the fluctuations. Fraedrich & Blender find persistence up to decades, Kiraly et al. find persistence lasting several years, so even if their analysis applied to temperature time series (which it doesn’t) rather than fluctuations (which it does), those time scales aren’t long enough to explain the trend on a century time scale in observed temperature time series. Fraedrich & Blender did find long-range persistence on century time scales, but only for fluctuations (not for temperature), and only in the output of computer model runs.

    Comment by tamino — 10 Aug 2008 @ 7:19 PM

  8. Edward Greisch wrote in 5:

    It’s kind of a sinking feeling:

    I have to get off of this planet. I have to get off of this planet. I have to get off of this planet.

    … better than the “skeptic” mantra:

    This is not happening. This is not happening. This is not happening…

    [Best said while smoking a cigarette and rocking back and forth.]

    … I suppose.

    Comment by Timothy Chase — 10 Aug 2008 @ 9:50 PM

  9. Off on a tangent: question about the ITCZ:

    Why is there a double ITCZ (in the Western Pacific, right?)? Is it due to equatorial upwelling? Does anyone know why models have trouble with it? Actually, I vaguely recall seeing a model map that showed the double ITCZ – or maybe I was mistaken?

    Why is the ITCZ at other latitudes generally displaced northward from the equator in the annual average? (I think I read once that it was because greater water area south of the equator allows the trade winds to blow relatively more unimpeded, thus pushing the ITCZ north a bit.)

    If the Hadley cells are to expand polewards with global warming, does that mean the ITCZ will have greater seasonal shifting, and is that an opportunity for greater interannual variability?

    With the greatest warming in the tropics generally being in the mid-to-upper troposphere, does that mean the level of non-divergence in the Hadley cell will rise?

    Should the Hadley cell, monsoons, and Walker circulation be expected to increase in strength due to greater water vapor concentrations (except where aerosol emissions throw a wrench into it)? Would that amplify the low-frequency variability due to SST anomalies?

    Comment by Patrick 027 — 10 Aug 2008 @ 10:42 PM

  10. “Why is the ITCZ at other latitudes generally displaced northward”

    Sorry, I meant “at other longitudes”…

    Comment by Patrick 027 — 10 Aug 2008 @ 10:43 PM

  11. “With the greatest warming in the tropics generally being in the mid-to-upper troposphere, does that mean the level of non-divergence in the Hadley cell will rise?”

    Perhaps I should explain what I was thinking there: it isn’t just that the lapse rate decreases in the tropics, but that it decreases more – the meridional horizontal temperature gradient GENERALLY will decrease in the lower troposphere but will GENERALLY increase in the upper troposphere. One might expect the weaker gradient at lower levels to cause the Hadley cells to weaken or become less organized?? (before accounting for increased water vapor), but with the greater warming at upper levels, … etc. Hence the question.

    Comment by Patrick 027 — 10 Aug 2008 @ 10:49 PM

  12. Please see:

    1) Marković, D., and M. Koch, 2005. Sensitivity of Hurst parameter estimation to periodic signals in time series and filtering approaches. Geophys. Res. Lett., 32, L17401, doi:10.1029/2005GL024069, September 3, 2005

    “…. In summary, our results imply that the first step in a time series long-correlation study should be the separation of the deterministic components from the stochastic ones. Otherwise wrong conclusions concerning possible memory effects may be drawn.”

    2) Hamed, Khaled H., 2007. Improved finite-sample Hurst exponent estimates using rescaled range analysis. Water Resour. Res., 43, W04413, doi:10.1029/2006WR005111, April 10, 2007

    “Rescaled range analysis is one of the classical methods used for detecting and quantifying long-term dependence in time series. However, rescaled range analysis has been shown in several studies to give biased estimates of the Hurst exponent in finite samples…The application of the proposed modified rescaled range estimator to a group of temperature, rainfall, river flow, and tree-ring time series in the Midwest USA demonstrates the extent to which classical rescaled range analysis can give misleading results. ”

    Well, wrong conclusions… temperature and tree-rings included.
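
    The Marković and Koch point is easy to demonstrate numerically. A minimal sketch (Python with numpy assumed; the estimator and the trend size are illustrative choices, not taken from either paper): adding a small deterministic trend to memoryless white noise inflates the apparent Hurst exponent.

    ```python
    import numpy as np

    def hurst_aggvar(x, block_sizes=(10, 20, 50, 100, 200)):
        """Aggregated-variance estimate of H: Var(block means) ~ m^(2H-2)."""
        logm, logv = [], []
        for m in block_sizes:
            nblocks = len(x) // m
            means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
            logm.append(np.log(m))
            logv.append(np.log(means.var()))
        slope, _ = np.polyfit(logm, logv, 1)
        return 1.0 + slope / 2.0

    rng = np.random.default_rng(1)
    noise = rng.standard_normal(1200)      # no memory at all: H = 0.5
    trend = 0.002 * np.arange(1200)        # small deterministic drift
    print(f"noise alone:   H ~ {hurst_aggvar(noise):.2f}")
    print(f"noise + trend: H ~ {hurst_aggvar(noise + trend):.2f}")
    ```

    The second estimate comes out far above 0.5: a spurious “memory effect” produced entirely by the unremoved deterministic component, exactly as Marković and Koch warn.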

    Comment by Timo Hämeranta — 11 Aug 2008 @ 2:10 AM

  13. Re #4 where S. Molnar Says:

    Gavin, you give a couple of parenthetical references, but don’t give the full citation at the end. Is this an oversight or an exercise for the reader?

    Not specifying to which references you allude makes the exercise even more challenging, and I could not resist :-)

    Kiraly et al (2006, Tellus) is easily found using Google at http://www3.interscience.wiley.com/journal/118587932/abstract?CRETRY=1&SRETRY=0

    Blender et al (2006, GRL) required Altavista, giving the full reference as Blender, R., K. Fraedrich, and B. Hunt, 2006: Millennial climate variability: GCM-simulation and Greenland ice cores. Geophysical Research Letters, 33, L04710, 10.1029/2005GL024919. The PDF file can be found at http://www.mi.uni-hamburg.de/fileadmin/files/forschung/theomet/docs/pdf/BleFraHuntMill05.pdf .

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 11 Aug 2008 @ 4:10 AM

  14. The comment made by Edward Greisch is very important. It is very difficult for the average bloke (or indeed well qualified blokes if they are not climatologists) to make sense of the discussion. This is why it is vital that papers are submitted to the journal addressing the errors of papers such as this. The “sceptics” can and do ignore criticisms made on a blog; a peer reviewed comment in the journal is much more difficult to dismiss.

    I do hope that someone sends in a comment on this paper, just as I hope someone has submitted a formal criticism of Douglass et al.

    Comment by Beaker — 11 Aug 2008 @ 4:20 AM

  15. The links to Kiraly et al, 2006 and Fraedrich and Blender, 2003 are here:

    http://www3.interscience.wiley.com/journal/118587932/abstract?CRETRY=1&SRETRY=0
    http://prola.aps.org/abstract/PRL/v90/i10/e108501

    Both paywalled, but the abstracts are already informative.

    The conclusion is as tamino says: you only see Hurst behaviour in the models if you first remove the trend, including the 20th century warming. After that, you just won’t find enough power left in the residual fluctuations to explain this anthropogenic upturn.

    In a way, the models (and note that the one(s) used by Fraedrich and Blender include realistic ocean modelling, as they must to get the longest time scales right) confirm the empirical finding by Mann et al. and many others that there isn’t a whole lot of power in natural variation for these long periods that we don’t know about. The same can be seen in the famous Figure 9.5 of AR4 WG1: the coloured band capturing natural variability is not able to accommodate the anthropogenic contribution.

    Comment by Martin Vermeer — 11 Aug 2008 @ 4:54 AM

  16. What is perfectly clear is that the average bloke with an IQ of 100 and a high school education is 100% lost in this discussion.

    Generally the critiques of papers like this are less accessible, but there is plenty of other material on the site that your average bloke should be able to understand. A high school understanding of statistics and physics is probably enough to get you started on the subject, and from there you can build up enough knowledge of the basics to be able to either understand or at least follow the key points of the more complicated parts.

    Comment by Stuart — 11 Aug 2008 @ 5:54 AM

  17. Perhaps I’m missing something, but the main point of the paper seems to be that due to the inaccuracy of the models demonstrated for any particular station and time period, they will also have questionable validity for regional or global prediction at longer time scales.

    The reason for that is that weather ultimately becomes climate. The temperature in Albany has an effect on the regional weather which has an effect on the larger scale weather which, past a few weeks or so, is climate. The (IMO) overused argument of chaos does not cover errors such as figure 5. Over the long run, those can only be due to microclimate modeling errors which can be fixed.

    The authors give four explanations in the paper, which I believe is a bit of a false dichotomy. The major explanation IMO is (1) the models are poor; but (3) the comparison is invalid, also applies somewhat, though not completely, for any one station, and less so as the number of comparisons is increased. Rejecting explanation (1) is only a recipe for confusion.

    Comment by Eric (skeptic) — 11 Aug 2008 @ 7:27 AM

  18. Gavin, Thanks for this. When I first came across this, my initial reaction was “Huh?” I mean, why the hell would you concentrate on only 4 stations and ask whether climate models could do something that nobody has ever claimed they could do. Upon further reflection, I had to revise my initial reaction to “WTF?”
    Curious, I looked to see if Koutsoyiannis had published anything previously that was of note here. It appears that he did comment on some entries in 2005-2006–rather confused comments at that. Given his latest effort, it would appear that the learning curve doesn’t have a positive slope. This one, to paraphrase Pauli, doesn’t even rise to the level of being wrong.

    Comment by Ray Ladbury — 11 Aug 2008 @ 7:46 AM

  19. About long-term persistence, I have been wondering about Figures 9.7 and 9.8 in the AR4 WG1 report (pp. 686,687).

    Comparing with the similar graph in the TAR, it looks like

    1) these are plots of power spectral (i.e., per unit of frequency) density, while nevertheless being plotted against time scale (i.e., the inverse of frequency);
    2) global temperature appears to be almost a 1/f process; and
    3) the unit stated on the vertical axis, degrees^2 / yr, is wrong; it should be degrees^2 yr.

    Did I get this right?
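
    For what it’s worth, point (3) at least can be checked dimensionally with a toy spectrum. A sketch (Python with numpy/scipy assumed; the synthetic 1/f series and the slope fit are illustrative only): with yearly sampling, fs = 1 cycle/yr, so a periodogram of a temperature series has units of degrees^2 per unit frequency, i.e. degrees^2 yr.

    ```python
    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(2)
    n = 4096
    # shape white noise into an approximate 1/f ("pink") process in frequency space
    white = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0)   # sampling interval d = 1 yr
    f[0] = f[1]                     # avoid dividing by zero at DC
    pink = np.fft.irfft(white / np.sqrt(f), n=n)

    # fs = 1/yr, so psd carries units (series units)^2 / (1/yr) = degrees^2 yr
    freq, psd = periodogram(pink, fs=1.0)
    slope, _ = np.polyfit(np.log(freq[1:]), np.log(psd[1:]), 1)
    print(f"fitted spectral slope: {slope:.2f} (a 1/f process gives -1)")
    ```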

    Comment by Martin Vermeer — 11 Aug 2008 @ 9:46 AM

  20. S. Molnar:
    Kiraly: just paste what Gavin wrote into Scholar and it pops up.
    Blender: GEOPHYSICAL RESEARCH LETTERS, VOL. 33, L04710, doi:10.1029/2005GL024919, 2006

    Comment by Hank Roberts — 11 Aug 2008 @ 11:02 AM

  21. Eric, Yes, you are indeed missing something. Weather is highly dependent on local initial conditions. However this dependence damps out over time. Weather does not “become” climate. Rather climate represents long-term, global TRENDS in weather. That’s an important distinction.

    Comment by Ray Ladbury — 11 Aug 2008 @ 11:18 AM

  22. It is easier to get the public to believe something that is easy to understand but wrong, than to get them to accept a correct concept that is complex. K et al, 2008 tells an easy to understand story that will be popular with people who have doubts about global warming. It has “sound bites” that can be recited at parties, over plates of BBQ, without putting down your beer to check the details. It is the kind of story that someone like Karl Rove would concoct using focus groups. I expect this myth to spread rapidly and widely. It might be countered by a massive and sophisticated education campaign, but no organization is likely to mount such a campaign in the next few months.

    Every university should make climate science a required freshman subject. Climate science should be required in high school. It should be in every elementary science text. The citizenship test has questions about American history. It should also have questions about climate science. There should be questions about automobiles’ role in global warming on every driver’s license test.

    Unless we make this kind of effort in climate science, and take a leadership role leading others to make similar efforts, we will pass too many tipping points and lose everything that is American.

    Comment by Aaron Lewis — 11 Aug 2008 @ 12:04 PM

  23. Timo (#12), you’re the resident specialist in wrong conclusions ;-)

    Comment by Martin Vermeer — 11 Aug 2008 @ 12:23 PM

  24. Ray, The climate models may well only model “climate” as global (or regional) averages and trends. But that is in fact what the authors are comparing: a global set of stations (albeit a relatively small number) and long-term trends. The fault they find with the models does not depend on my definition of how weather becomes climate.

    Comment by Eric (skeptic) — 11 Aug 2008 @ 12:57 PM

  25. Perhaps I’m missing something, but the main point of the paper seems to be that due to the inaccuracy of the models demonstrated for any particular station and time period, they will also have questionable validity for regional or global prediction at longer time scales.

    I cannot tell you with any reliability whether the high temperature one week from today will be higher or lower than today’s high. I can tell you with high reliability that the high temperature 6 months from now will be lower than today’s high (where I live, anyway, ymmv).
    It’s true that 6 months from now is composed of 26 weekly periods, so your reasoning would be that I shouldn’t be able to do this (ie have a highly reliable estimate over that period when I can’t provide one for the short term periods that compose it). But I can: the day-to-day trend is chaotic, but the longer-term trend is not.
    This is also true spatially.

    Or, consider sports: I can say with fairly high reliability that the Los Angeles Lakers will have a winning season next year. But I cannot easily predict the outcome of any particular game. My season-long prediction is much more reliable because it’s based on the averaging together of a number of events; the noise (ie chaos) tends to average out, and we’re left with the underlying trend (ie that the Lakers are an above-average team).
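
    A toy simulation makes the same point with weather-like numbers (Python with numpy assumed; the cycle amplitude, noise level, and the summer threshold are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    days = np.arange(3650)                             # ten years, daily
    seasonal = 15 * np.cos(2 * np.pi * days / 365.25)  # day 0 = midsummer peak
    # AR(1) "weather" noise: a few days of memory, then it forgets
    noise = np.zeros(len(days))
    for t in range(1, len(days)):
        noise[t] = 0.8 * noise[t - 1] + 3 * rng.standard_normal()
    temp = seasonal + noise

    for lag, label in [(7, "one week"), (182, "six months")]:
        start = seasonal[:-lag] > 12                   # begin from high-summer days
        hit = np.mean(temp[lag:][start] < temp[:-lag][start])
        print(f"cooler {label} later: {hit:.0%} of summer start days")
    ```

    The one-week comparison comes out close to a coin flip while the six-month one is near certain, even though exactly the same noisy process generates both.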

    Comment by Carleton Wu — 11 Aug 2008 @ 1:27 PM

  26. Carleton (25), I agree with the summer/winter argument, otherwise I wouldn’t be chopping wood. But the authors are showing yearly trends in their figure 5, not short term chaotic effects. Your argument would then reduce to whether the authors evaluated enough stations when they made their reality/model comparison and whether they did it properly or not.

    Comment by Eric (skeptic) — 11 Aug 2008 @ 2:48 PM

  27. Eric, please read the OP, it is clear that either you haven’t read it, or haven’t digested what’s being said.

    If you believe the OP misrepresents the paper, then please state so clearly. If you agree with the OP’s representation of the paper, but disagree with the analysis, then please bring up points refuting those made in the OP.

    Comment by dhogaza — 11 Aug 2008 @ 3:35 PM

  28. dhogaza, IMO this sentence “The IPCC report for instance is very clear in stating that the detection and attribution of climate changes is only clearly possible at continental scales and above.” is not what the K paper is about. The authors obtain their test of falsifiability by testing a series of grid points against the same series of real world points because the real world doesn’t provide a pristine regional or worldwide temperature series.

    The paper is not about detecting and attributing climate change, it is about modeling climate. If climate change can’t be modeled correctly at some number of grid points then I wouldn’t expect accuracy at other grid points. To reiterate my previous post, my explanation is (1) the models are poor, and (3) the models and observed time series are not compatible (I would say inadequately represented due to the limited number of points used).

    Comment by Eric (skeptic) — 11 Aug 2008 @ 4:34 PM

  29. “By correlating at the annual and other short term periods they are effectively comparing the weather in the real world with that in a model. Even without looking at their results, it is obvious that this is not going to match…”

    So if the hypothesis was something you think is reasonable, are there good model-based regional projections?

    How well do the best model results for regional hydrology compare with observations?

    Comment by Steve Reynolds — 11 Aug 2008 @ 5:11 PM

  30. Steve Reynolds (29) — I’m an amateur with enough background, by now, to compare some of the ‘regional hydrology’ predictions with paleoclimate from times with warm regional temperatures. For southern South America there seems to be good agreement. I know far less about other regions, but to the extent that I am correctly understanding the paleoclimate, I’ll say that the predictions for the American mid-west and also for central East Africa appear to agree with the past.

    For at least one other region, roughly the Congo River basin, there appear to be some striking differences. But this may just be my misinterpretation.

    Comment by David B. Benson — 11 Aug 2008 @ 6:50 PM

  31. Uh, Eric, the sentence you quote is hardly the meat of the OP.

    Comment by dhogaza — 11 Aug 2008 @ 7:02 PM

  32. David B. Benson: “…there seems to be good agreement.”

    Thanks, but a more quantitative comparison would be more helpful.

    [Response: I agree. K et al would have been much better doing that. Maybe someone else will step up. – gavin]

    Comment by Steve Reynolds — 11 Aug 2008 @ 8:22 PM

  33. Eric: perhaps I misunderstand you, but let me ask you this:

    If it is well-known and expected that testing a series of grid points from one model run against the same series of grid points from _a second model run_ will show wide divergence;

    A) is there then any reason to expect that testing a series of grid points against the same series of real world points would produce a better match?

    B) does this constitute evidence, to your way of thinking, that the models are poor?

    Comment by kevin — 11 Aug 2008 @ 8:23 PM

  34. Dhogaza, how about “Furthermore, by using only one to four grid boxes for their comparisons, even the longer term (30 year) forced trends are not going to come out of the noise.” This seems to attack the meat of the results although it is not the meat of the review. I think the reasoning in the paper for 4 nearest grid points was fairly clear: to falsify or validate the model. I don’t know why the 30 year series of a localized part of a model would suffer from noise.

    Comment by Eric (skeptic) — 11 Aug 2008 @ 8:57 PM

  35. Aaron Lewis #22:

    There should be questions about automobiles role in global warming on every driver’s license test.

    There are on the Finnish test… also low-emissions driving is actively taught.

    Comment by Martin Vermeer — 12 Aug 2008 @ 1:49 AM

  36. Gavin – lovely post, especially the first three paragraphs. If “the average bloke” had a better understanding of what climate scientists actually do, that would help tremendously. I’m currently studying how climatologists use computational models. My original goal was to look just at the “software engineering” of models, i.e. how the code is developed and tested. But the more time I spend with climate scientists, the more I’m fascinated by the kind of science they do, and the role of computational models within it. The most striking observation I have is that climate scientists have a deep understanding of the fact that climate models are only approximations of earth system processes, and that most of their effort is devoted to improving our understanding of these processes (cf George Box: “all models are wrong, but some are useful”). They also intuitively understand the core ideas from general systems theory – that you can get good models of system-level processes even when many of the sub-systems are poorly understood, as long as you’re smart about choices of which approximations to use. The computational models have an interesting status in this endeavour: they seem to be used primarily for hypothesis testing, rather than for forecasting. A large part of the time, climate scientists are “tinkering” with the models, probing their weaknesses, measuring uncertainty, identifying which components contribute to errors, looking for ways to improve them, etc. But the public generally only sees the bit where the models are used to make long term IPCC-style predictions.

    I have never witnessed a scientist doing a single run of a model and comparing it against observations. The simplest use of models I have seen is a controlled experiment comparing a run with a small change to the model (e.g. a potential improvement to how it implements some piece of the physics) against a control run (typically the previous run without the latest change), and against the observational data. In other words, there is a 3-way comparison: old model vs. new model vs. observational data, where it is explicitly acknowledged that there may be errors in any of the three. I also see more and more effort put into “ensembles” of various kinds: model intercomparison projects, perturbed physics ensembles, varied initial conditions, and so on (in this respect, the science seems to have changed a lot in the last few years, but that’s hard for me to verify).

    It’s a pretty sophisticated science. I think the general public would be much better served by good explanations of how this science works, rather than with explanations of the physics and mathematics of climate systems.

    [Response: I try! – gavin]

    Comment by Steve Easterbrook — 12 Aug 2008 @ 7:20 AM

  37. Here are examples of two papers that don’t seem to have huge gaping flaws, and cover the data and the modeling of hydrologic changes:

    Regonda et al 2005 : “Seasonal Cycle Shifts in Hydroclimatology over the Western United States”
    http://civil.colorado.edu/~balajir/my-papers/regonda-etal-jclim.pdf

    Advancement in the timing of spring temperature spells over the western United States has resulted in the earlier occurrence of peak snowmelt flows in many mountain basins.

    Here, the authors used the timing of maximum spring stream flows as their main dataset. The stream flow meters are accurate and the data doesn’t involve a lot of estimation (consider estimates of total seasonal flow volume, instead – huge uncertainties would be introduced – evaporation, groundwater flow, plant evapotranspiration, stream volume estimates, etc.).

    If you look at the figures in the paper, you’ll see they have far more than 8 locations. Their paper shows that choice of location matters. If Regonda et al. had chosen one subset of their data to look at, they’d have a different result:

    Changes in the timing of snowmelt in high-elevation basins in the interior west are, for the most part, not statistically significant.

    If you look at the paper, you see the data coverage was pretty extensive. Even though the paper is based on statistical analysis, their choice of data to look at is logical – it captures the diversity of the overall western U.S. hydrology over the past 50 years.

    The authors make no specific claim about the cause of the noted spring temperature increase, other than to point to El Nino and global warming. Their paper is simply an analysis of the historical dataset.

    2) For a paper that then uses models to make hydrologic forecasts for the future for the western U.S., see:

    Amplification of streamflow impacts of El Nino by increased atmospheric greenhouse gases
    EP Maurer, S Gibbard, PB Duffy – GEOPHYSICAL RESEARCH LETTERS, 2006

    http://www.engr.scu.edu/~emaurer/papers/maurer_nino_amplification.pdf

    ..We use a high-resolution global model of the atmosphere coupled to a physically-based model of surface hydrology to investigate effects of increased atmospheric CO2 and this type of El Niño, both individually and in combination, on monthly river flows in California. Increased CO2 changes the seasonal timing of river flows and increases their interannual variability. SST anomalies typical of a strong El Niño SST increase monthly-mean flows. The two perturbations together result in increased mean flows and increased interannual variability, raising the possibility of both increased flood risk and water shortages…

    Taken together, these two papers (and several similar ones, at least) should convince anyone that “A fundamental and societally relevant conclusion from these studies is that the use of the IPCC model predictions as a basis for policy making is” a valid and reasonable approach.

    Time series analysis: http://www2.ocean.washington.edu/oc540/lec01-12/

    Comment by Ike Solem — 12 Aug 2008 @ 9:30 AM

  38. #33, Kevin, I’m not sure those are yes or no questions. Are you talking about wide divergence at many grid points on a multi-decade model run? If many points, how many? There seems to be a general belief that all climate is global and all weather is local even over large time periods.

    But IMO over large enough time scales, weather becomes climate, so Albany will likely warm over a multi-decade run (perhaps cool), but should not vary much from run to run or from reality. Repeat that comparison for enough locales and gain increasing confidence in the model.

    Comment by Eric (skeptic) — 12 Aug 2008 @ 10:21 AM

  39. Eric, you seem to have a fundamental misunderstanding of what the models do. It doesn’t make sense to look at a single model run and compare it to anything in the real world–let alone to look at results for 4 gridpoints on a single run. You don’t use models to predict the future; you use them to elucidate the physics. The physics is what tells you the likely future path of the system.

    Comment by Ray Ladbury — 12 Aug 2008 @ 10:49 AM

  40. (something went wrong with Captcha…am trying again…feel free to delete any duplicate)

    Gavin: since Koutsoyiannis et al inspired you at least three questions that could move “the science forward”, I am not sure I understand your criticism. Someone had to start the analysis somewhere, and if the first article is too simple, all the more reason for more articles to appear on the subject

    [Response: The questions aren’t inspired by a paper that adds nothing, but by the papers that already did more than K et al did. That I use the attention that K et al got to address them is not support for that publication. Of course, this isn’t the only time an uninteresting paper has been published (the literature is unfortunately full of them); it is, however, a missed opportunity. That is something to lament, not celebrate. – gavin]

    Comment by Maurizio Morabito — 12 Aug 2008 @ 10:54 AM

  41. @ Eric 38:
    I’m not sure what you mean by “long enough time scales,” and I don’t know all that much about models (so help me out, those who do), but my sense is that you’re wrong about Albany. I think that local (and possibly even regional) variability is sufficiently chaotic that it would not, in fact, be reasonable to expect good agreement at particular locations among single realizations or between single realizations and observations over the course of, say, 30 years. It is one thing to be able to say that North America will warm on average, but there will be a lot of little swirls within that, not necessarily following a stable pattern from run to run, or in reality. Albany could be in a multidecadal cool swirl in one (or have enough cool snaps to bring down the average) and be in a multidecadal warm swirl in another (or have enough heat waves to bring up the average). All this with a model that is perfectly adequate for reliably predicting the aggregate behavior of climate in continent-sized regions. People who know–how far off am I?

    Comment by kevin — 12 Aug 2008 @ 12:33 PM

  42. Eric, let’s try a thought experiment. You are an observational physicist who wants to model absolutely everything involved in boiling a liter of water. You have the best equipment and methodology. After several runs, you will be able to make many reliable macro-predictions about the process–things like time to boil under specified conditions.

    However, will your ultra-high speed high-resolution video camera detect the same pattern of bubbles each time? And should you be discouraged if none of your model runs produce the same pattern of virtual bubbles, either?

    . . . . pause for thought. . . .

    I hope you said no to both questions. That’s chaos, and it won’t go away just because you don’t like people to use its existence in arguments!

    Comment by Kevin McKinney — 12 Aug 2008 @ 1:33 PM

  43. Eric. A thought experiment:

    You are at the top of a steep hillside. There are rocks, dells, dips, drops and bushes scattered around beneath you. You have a rugby ball (a football, for the USians) sitting on its point at the top of this slope.

    Give it a little push down the slope.

    Where will it go?

    In rough terms: down. 100% certain.

    The actual path? Pff. Who knows.

    Will it hit a bush? Well, how many bushes are there and is there any pattern? Did you ask that question while the ball was partway down? Because that changes the probable answer because it isn’t going back uphill to hit a bush over there, so some are excluded.

    The Earth’s Weather: the ball.
    Gravity: climate forcings
    Path taken: Weather
    Hitting a bush: Will Arkansas be warmer/drier/wetter/whatever

    This may help you understand.

    Comment by Mark — 12 Aug 2008 @ 2:50 PM

  44. #42, KevinM: I’m not sure that’s a good analogy of what is being compared. If I were to apply heat to a kettle and take measurements of slight temperature variations within due to heated water circulation, that would indeed not match any simulation of the kettle for those locations, while the overall temperature in the simulation and real kettle would match quite nicely.

    However, the earth is not a kettle and Albany is not an indistinguishable location, it has specific climate characteristics and weather patterns which can be simulated in climate models, for example http://www.mmm.ucar.edu/mm5/workshop/ws00/Zheng.doc (although their initial conditions do not seem to be random, that fact won’t matter over the course of years or decades as in the K paper figure 5).

    Also the Albany discrepancies are repeated for 7 more locations worldwide, but is that enough locations to say the model is poor? I don’t know.

    Comment by Eric (skeptic) — 12 Aug 2008 @ 2:51 PM

  45. Steve Reynolds (32) — I don’t know about a quantitative assessment, but a more descriptive one is possible. For example, here are two quotations from page 14 of

    http://www.oecd.org/dataoecd/29/2/36448827.pdf

    “- the Patagonian region (Neuquén, Río Negro, La Pampa, Chubut, Santa Cruz and Tierra del Fuego Provinces): Temperature increase. More frequent intense precipitations; fluvial valley floods. Glacier diminution. Floods. Wood biomass fires. Desertification. Coastal erosion.”

    “For preparing climate change scenarios for Argentina, the Global Model HadCM3 (UK) on IPCC scenarios has been utilized. … A remarkable trend to decrease of precipitation is also observed for the central region of Chile, and the Argentinean Region of Cuyo, Province of Neuquén and the western part of Río Negro and Chubut. These scenarios indicate a continuity of the climatic trends observed during the last decades.”

    The flooding and desertification are observed in the geological record of Rio Neuquen and Rio Negro, I recall (without rechecking). But I don’t know how to be quantitative about comparing these observations to the model studies quoted above.

    Comment by David B. Benson — 12 Aug 2008 @ 5:26 PM

  46. Re: #5 Edward Greisch says: “What is perfectly clear is that the average bloke with an IQ of 100 and a high school education is 100% lost in this discussion. They have no way of understanding any part of it, no idea who is telling the truth…”

    This is heavy going, to be sure, but to paraphrase physicist Leon Lederman, from his book “The God Particle”: Just because I don’t understand it doesn’t mean it’s correct.

    “Dire Predictions” by Mike and Lee Kump arrived in today’s mail and I must say it’s a lot more user-friendly than Hurst coefficients and autoregression analysis. It contains some startling graphs, e.g., page 33 showing the recent spike in three of the GHGs, and (so far) contains good summaries to use as responses to the usual skeptics’ arguments.

    Comment by Lawrence Brown — 12 Aug 2008 @ 5:31 PM

  47. > Albany

    Eric writes:

    > Albany … can be simulated …

    How the heck big IS Albany, and why don’t they mention it at all in the Zheng et al. document you link to?

    That .DOC file says:
    ” In this study, we investigate the weekly to monthly predictability of clouds and precipitation over the LSA-East, defined roughly by 33 – 430N latitude and 78 – 890W longitude …”

    Is Albany really that large? Where are you getting your beliefs, Eric? Did someone claim this was a study about Albany? Did you somehow read it and decide for yourself it was about Albany?

    Comment by Hank Roberts — 12 Aug 2008 @ 7:15 PM

  48. #43, Mark: Your analogy would continue on the thesis of the paper with a real world and simulated topology in which the bushes are placed. The ball is rolled down the hill every day for 30 years, and yearly averages of how each bush is hit are compared between the simulations and reality, although only eight bushes scattered across the hill are considered (is that enough?).

    Obviously the model granularity and physical fidelity are going to make a big difference. If a rock in front of a bush is not modeled, the ball hits the bush in the model but not in reality. But there are 8 bushes under consideration, so a rock or two should not matter.

    To accurately predict climate, the basic topology of the hill must be modeled; some bushes are in gullies, some beside other bushes. Remember that by hitting one bush at a particular angle the ball will tend to hit another particular bush, just as weather affects weather elsewhere.

    I don’t doubt the ability of a climate model to predict climate given a world with no land, or one or two circular land masses with no terrain, or perhaps a more complex shape. But the climate does depend on the irregularities of the planet which must be accounted for accurately in the model. Those irregularities affect weather, which over the 30 years of the test becomes climate.

    Comment by Eric (skeptic) — 12 Aug 2008 @ 8:46 PM

  49. Re 48
    “Those irregularities affect weather, which over the 30 years of the test becomes climate.”

    Yes, in so far as the pattern/texture of weather becomes/is climate.

    But don’t you think there are some underlying patterns in the global climate that can be understood even with greatly simplified models? The Hadley cells, for example. The continents will perturb them from what they would be with a global ocean, and SST anomalies will perturb them further, but you can mathematically break this into a ‘basic state’ plus one or more ‘perturbation’ components (although those terms are more typically applied to perturbations relative to a time and/or zonal average, whereas in this case the ‘basic state’ is not necessarily the average in either time or any space dimensions).

    A single grid cell may have to characterize the average of some number of points within it.

    While colder and warmer air masses are shifting around, cold air and warm are being ‘produced’ at certain rates that globally averaged may not shift around quite so much. My impression is it will be easier to predict the changes in the ‘amount of cold and warm’ than the distribution of it for a given external forcing, though I’d like confirmation on that (although if I took the time I might be able to justify that view based on some physical arguments, maybe?)

    Likewise, there are many ways to distribute precipitation in time and space that would balance a particular global average evaporation rate.

    Comment by Patrick 027 — 12 Aug 2008 @ 10:31 PM

  50. “(although if I took the time I might be able to justify that view based on some physical arguments, maybe?)”

    Well, to start with:

    greenhouse forcing: a global average change in LW radiative forcing (with some spatial variation that can be understood from physics).

    A change in the LW radiative forcing of atmospheric gases will, if applied to the same climate, result in some disequilibrium. On the long term, a climate, with all its shorter term fluctuations, may be near equilibrium (in the sense that the patterns can be expected to recur – not exactly like a tessellation, but generally like the same texture (is climate a glass?)). On the shorter term, fluctuations occur because a state would not be in equilibrium if it were constant, but the change in external forcing means that with the same climate, the shorter term imbalances would be changed, so the weather patterns even in the shorter term would evolve differently. Thus the climate must change before a longer term equilibrium can be approached again. In the case of the positive LW forcing, equilibrium requires an increase in temperature. The distribution of this temperature is not specified by the forcing because wind can carry heat and the wind pattern has not been specified. But some basic physical arguments lead to certain expectations about the temperature change distribution, from which certain other arguments can lead to expectations of other changes, and then there are feedbacks, etc… So, for example, I expect more rain. And a greater fraction of rain in intense downpours. Do I know where or in what seasons or how this might correlate with ENSO indices? Not so much (though others might)… And I expect some changes in the midlatitude westerlies. This implies a potential for change in the quasi-stationary planetary wave patterns. From that I expect at least a potential for some change in longitudinal as well as latitudinal distribution of different kinds of weather, including precipitation. But without some number crunching I don’t know where and what. … And with a general increase in SSTs, I expect certain kinds of low-frequency variability, in particular those in which SST anomalies produce a perturbation wave train in the westerlies allowing for global teleconnections, to be more sensitive to the same SST anomalies, because of the exponential temperature dependence of water vapor concentration (involved in latent heating, enhances deep convection, etc.). And there are the implied physical arguments in my ITCZ questions from earlier. The point I’m trying to make, I guess, is that it is easier to project that there will be more change than otherwise if x,y,z… with some generalized numbers and distances, than it is to say what the change will be in, for example, Albany.

    Comment by Patrick 027 — 12 Aug 2008 @ 10:54 PM

  51. #47 Hank, they used a grid size of 9km so Albany could fit in one cell, however they only compared observed and simulated results for larger regions based on geography. The paper was meant to point out the ability to get matches between simulations and reality at least in regions.

    #49, Patrick, “But don’t you think there are some underlying patterns in the global climate that can be understood even with greatly simplified models? The Hadley cells, for example.” The issue is not developing an understanding; it is testing a model. The greatly simplified or somewhat simplified models cannot be tested because they do not faithfully simulate particular locations that can be compared to the same real world locations.

    I do not believe you will find it easier to predict the “amount” without faithfully simulating the distribution. If there’s no match between the distribution in the simulation and the distribution in reality as measured by 8 locations (is that enough, I keep asking?) then I wouldn’t believe the predicted amount either, because the amount depends on the distribution. Parameterizing rates or amounts of anything regionally or globally does not yield testability.

    Comment by Eric (skeptic) — 13 Aug 2008 @ 5:23 AM

  52. Eric, You seem to have some fundamental misunderstandings of how climate models work and even about what climate is. In a climate model, there is a lot of sensitivity to initial conditions and which particular small fluctuations occur in any particular run. You can sort of see this because if Albany starts with a certain energy density, there is no guarantee as to how much energy will stay in Albany and how much will leave that box. If you look at many runs, the importance of the fluctuations tends to diminish, and you are left with the “climate”. You seem to be contending that averaging a single run over a long time will give the same effect, but that is not clear. (Note: Even in statistical mechanics, there is still controversy over whether a time average will yield the same result as an ensemble average. Probably it will in most–but not all–cases.)
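
    For what it’s worth, the single-run point is easy to illustrate with a toy model (Python with numpy assumed; this is nothing like a GCM, just a shared forced trend plus run-specific red noise, with invented magnitudes):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(100)
    forced = 0.01 * years                     # common forced trend, deg/yr

    def one_run():
        """One toy 'model run': shared forcing plus run-specific red noise."""
        noise = np.zeros(len(years))
        for t in range(1, len(years)):
            noise[t] = 0.7 * noise[t - 1] + 0.1 * rng.standard_normal()
        return forced + noise

    runs = np.array([one_run() for _ in range(20)])

    # any two individual runs barely correlate year to year...
    r = np.corrcoef(np.diff(runs[0]), np.diff(runs[1]))[0, 1]
    # ...but the ensemble mean recovers the forced trend
    trend = np.polyfit(years, runs.mean(axis=0), 1)[0]
    print(f"year-to-year correlation between two runs: {r:+.2f}")
    print(f"ensemble-mean trend: {trend:.4f} deg/yr (forcing put in: 0.0100)")
    ```

    Correlating any one run against any other (or against an observed series) at short timescales tells you next to nothing about whether the forced trend is right.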

    Comment by Ray Ladbury — 13 Aug 2008 @ 7:40 AM

  53. If all one has are historical data, then to tell if the climate is changing you need time records that are long in comparison with the main source of weather variability: the yearly seasonal cycle due to the tilt of the Earth.

    Say you have the temperature record, hour by hour, from one point – Fairbanks Alaska – over a hundred-year period, and nothing else – that’s your data set. Can we use it to answer the following?

    1: What is the average temperature and what is the temperature change trend?

    2: What is the average length of one day and is there any change in the length of days over the hundred years? How about for years?

    First, is there autocorrelation in the data? Yes. Once a day, temperatures hit a peak, and there is stable yearly behavior – but if every consecutive year, those temperatures are a little warmer at the same times, then you have a warming trend superimposed on a cyclic trend.

    Might one see other types of periodicity? It is possible – one might see the effect of quasi-periodic phenomena like El Nino in the temperature dataset.

    Trying to use statistics to answer the second question is obviously ridiculous – one would look instead to physical calculations or observations to get an estimate of year length, not to statistical models based on temperature data – let alone statistical models based on 8 data points.

    If you used statistical analysis of temperature data from 8 points over a 100-year period to estimate changes in the length of the year, you’d come up with pretty poor predictions of the future behavior. One could still go around using the result to claim that physical models of planetary orbits are unreliable predictors of future behavior, however, due to their poor performance in the statistical test.
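
    One standard way to attack question 1, sketched here with synthetic data (Python with numpy assumed; the trend, cycle amplitude, and noise level are invented numbers): regress the record on a mean, a linear trend, and an annual harmonic, so the dominant autocorrelated component, the known seasonal cycle, is modeled explicitly instead of being left to contaminate the trend estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0, 100, 1 / 365.25)              # daily samples over 100 years
    true_trend = 0.02                              # degrees per year
    temps = (-3 + true_trend * t
             - 20 * np.cos(2 * np.pi * t)          # seasonal cycle (coldest at t = 0)
             + 5 * rng.standard_normal(len(t)))    # weather noise

    # least-squares fit: mean + trend + annual harmonic
    X = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    print(f"fitted trend: {coef[1]:.3f} deg/yr (true value: {true_trend})")
    ```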

    Comment by Ike Solem — 13 Aug 2008 @ 7:57 AM

  54. Re #43

    I think I understand now. The rock model (climate model) is 100% certain that the rock (temperature of the atmosphere) will go downhill (up) because of gravity (greenhouse gases). The other less important forces like a strong upslope wind, kid holding back the rock (PDO, AMO) are not strong enough to counteract gravity (greenhouse gases). Since we don’t care about the path the rock takes (the weather in the climate model) and we are 100% certain of the result, I guess we don’t need a rock model (climate model) anymore. Oh, but wait. The model is still useful because we want to know how far the rock (how high the temperature) goes. It seems that would depend some on the path of the rock (weather in the climate model). Thanks.

    Comment by B.D. — 13 Aug 2008 @ 8:25 AM

  55. #52, Ray, would you agree that Albany NY and Rome Italy would have distinct, reproducible characteristics of energy density changes based on local topography? When I say reproducible, I don’t mean numerically exact, but that there would be a characteristic curve from an anomalous condition.

    I could for example dump 2 feet of snow in Albany NY in January and likewise in Rome (or a little north, to be exact) and would get predictably different responses back to climatology with normal variations. That is an extreme example, I realize.

    I think my mention of averaging was somewhat mistaken. The two sources of error discussed by the authors are high frequency noise and micro-climate modeling errors. I don’t think the latter would affect all 8 sampled locations. The former still seems to be somewhat of an open question: does the HF noise integrate into greater or lesser errors over longer time scales?

    Comment by Eric (skeptic) — 13 Aug 2008 @ 9:06 AM

  56. The area in the model Eric refers to above is huge. In the original paste I got zeros where there should be degree symbols; here it is a bit clearer:

    “… LSA-East, defined roughly by 33 – 43[degrees] N latitude and 78 – 89[degrees] W longitude …”

    That’s the area they’re talking about.

    Think of it as a shotgun pattern on a target, Eric. You can check how the result looks by firing dozens of times. You get roughly the same spread, the same shape, the same accuracy.

    You can’t then conclude that any particular tiny circle on the target area will always either be hit or missed by a shotgun pellet.

    Even if you have 20 patterns and on none of them was that particular little bit of paper hit — the next time, it might be a hole there.

    This analogy brought to you by an ordinary reader, not a climate scientist, and the climatologists may well tell me it’s absurd after they climb back onto their chairs, but it seems to me this is what you’re getting confused about.

    Comment by Hank Roberts — 13 Aug 2008 @ 10:14 AM

  57. #56, Hank, I assume the chaos in your analogy is provided by the turbulence in the explosion and the air that the pellets travel through. Otherwise we would exactly duplicate the physical configuration (cartridge including gunpowder and pellets, etc) for each firing to match the determinism of climate.

    In that case I must (like above) amend the analogy to include a real and modeled topology, perhaps various fixed air currents, plus the interactions between the pellets on their way (or not) to each tiny circle on the paper. There will be a predictable probability that each tiny circle will be hit provided the model contains those details.

    The turbulence will still result in model-reality mismatches but multiple firings will take care of that. The model will have insufficient topological detail for some of the tiny circles, but we will compare a large enough number of them (probably more than 8).

    Comment by Eric (skeptic) — 13 Aug 2008 @ 11:05 AM

  58. Eric, No, the results would not be reproducible, because initial conditions when you dump the snow vary. In many successive runs (ensemble), you would have a majority of outcomes that were different in the two cases. In the majority of years, you would also have different outcomes. However, the average over the ensemble and the average over time need not yield the same result. To blithely assume it will be so is a bit risky. Actually Hank’s shotgun blast pattern is a pretty good analogy. Remember, we’re not trying to predict the weather in Albany, NY 100 years hence. We’re trying to look at how changes in the energy in the system will affect TRENDS in global climate. Regional predictions are a long way off, and long-term weather predictions may well be impossible.

    Comment by Ray Ladbury — 13 Aug 2008 @ 11:07 AM

  59. #58 Ray, but wouldn’t you agree that the two locations (same latitude but very different topography and thus climate) have distinctly different reactions (in Rome the snow would always melt in a day or two)? Wouldn’t that difference be essentially climate?

    Again the paper is trying to validate the models, not use them to determine trends. As the forcings change, will the modeled climate be sufficiently detailed to perform the validation? Will the snow continue to stick around in Albany just as long, or shorter, and how much shorter? Likewise for Rome, the local topography may (for example) create more cooling clouds in winter. If it does, the snow could actually stick around longer.

    Comment by Eric (skeptic) — 13 Aug 2008 @ 11:55 AM

  60. Re 51:
    “The issue is not developing an understanding, it is testing a model. The greatly simplified or somewhat simplified models cannot be tested because they do not faithfully simulate particular locations that can be compared to the same real world locations.”

    But one can look for the changes that occur in the Hadley cell in the model data, and then compare that to changes in the Hadley cell in the real world or in other models.

    Comment by Patrick 027 — 13 Aug 2008 @ 11:59 AM

  61. “#42, KevinM: I’m not sure that’s a good analogy of what is being compared. If I were to apply heat to a kettle and take measurements of slight temperature variations within due to heated water circulation, that would indeed not match any simulation of the kettle for those locations, while the overall temperature in the simulation and real kettle would match quite nicely.

    “However, the earth is not a kettle and Albany is not an indistinguishable location, it has specific climate characteristics and weather patterns which can be simulated in climate models. . .
    Also the Albany discrepancies are repeated for 7 more locations worldwide, but is that enough locations to say the model is poor? I don’t know.”

    Well, no analogy is perfect. But note that in the analogy I *was* asking about specific locations of bubbles–analogous to Albany (or wherever).

    As to your final question, if I understand the OP correctly (and as amplified by various posters throughout this thread), the answer would be no, 7 locations are not enough. The reason is that the GCMs are neither intended nor expected to have predictive abilities for specific locations. In my boiling water analogy their analogs would predict things such as (perhaps) the average size of bubbles, their density, curves describing how these change over time, etc., etc. A bubble model is a successful predictor if it can replicate the macro-descriptors even if NO single bubble’s trajectory matches a corresponding observed trajectory. So NO number of locations would be enough, because K et al.’s research question is inapplicable in the first place.

    To put it very crudely, but maybe usefully, K et al. are criticizing a leaf rake because it makes a very poor salad fork. It is just not true that you need to be able to predict at micro-scales in order to predict at macro-scales. A recent example of this–not highly analogous, but illustrative of the principle–would be the economic model that predicts national Olympic medal totals reasonably well without any predictive ability *at all* for any individual sport. (All the model “cares about” is 1) national population, 2) GDP, 3) recent Olympic medal totals, and 4) host team advantage.)

    BTW, one of the really interesting aspects of this discussion, to me, is how it leads into the question of damping in chaotic systems. With no damping, presumably your formula that “weather becomes climate” would be 100% correct, and individual locations would in fact have to be accurately modelled for the GCM to maintain accuracy–you’d need to know about every Brazilian butterfly flapping its wings. But this doesn’t appear to be the case, as far as this amateur can judge at least.

    Comment by Kevin McKinney — 13 Aug 2008 @ 12:11 PM

  62. Eric,
    Once you are talking about multiple shots, the analogy is multiple runs of the model. And it does not make sense to look at whether a particular point on the target gets obliterated every time, but rather more gross details–average width of the pattern, standard deviation, radius over which target is completely obliterated, etc. It makes no more sense to look at individual points on the target than it does to look at individual stations–from a climatic point of view.

    Comment by Ray Ladbury — 13 Aug 2008 @ 12:17 PM

  63. Eric.

    When you sneeze, do the bogeys always hit the same bits of your hand? Are the bits of chewy stuff always the same size and taste?

    (this is the closest I get to something like your continued badgering queries).

    Comment by Mark — 13 Aug 2008 @ 12:28 PM

  64. Eric, the early awareness that climate models could be created came from observing that change is measured at individual points where we happen to have long-term weather stations, but the actual temperature, humidity, and wind changes occur across huge areas.

    Watch the weather map — you see reports from point stations but you see the fronts shown and the measurements, wherever they happen to be, change consistently with the large scale event, as the weather front passes.

    You’re thinking that weather (and over longer time spans climate) changes in lots of little tiny areas rather than averaged across very large areas.

    False premise. You’re imagining that order exists ‘all the way down’ but that’s not true here. Look up “emergent property” — the complicated weather and climate emerge from relatively few specific physical observations in the models.

    Yes, a change in a detail will change climate. But the details are things like the Appalachians weathering down from being the biggest mountains, or the continents moving. Those are relatively slow. But if we had a way to remove mountains or move continents, we could cause climate change by doing that rapidly.

    Ditto for the background level of CO2 going on an excursion far outside its range. We’re doing that.

    Comment by Hank Roberts — 13 Aug 2008 @ 12:32 PM

  65. Eric,
    The difference in climate between Rome and Washington, DC, Denver, CO, etc. (all at about the same latitude N) has to do with a lot of factors–the Gulf Stream, topography, altitude… However, initial conditions could dramatically affect the results of any particular trial. It might be very cold in Rome. You could have a Chinook blowing in Denver. Congress could be producing an exceptional amount of hot air in DC. Only by looking at many runs and over time will the climatic TRENDS emerge. You don’t validate the models by reproducing the weather. The trends are the climate.

    Comment by Ray Ladbury — 13 Aug 2008 @ 12:35 PM

  66. #61 KevinM “So NO number of locations would be enough, because K. et al.’s research question is inapplicable in the first place.” If every point in a model is inaccurate, how would any aggregate statistic from the model have any validity?

    #62, Ray, I agree in the shotgun domain that we require multiple simulation runs at least the way that I hypothesized it. But I still believe that the 30 year time frame of the climate simulation will cause most locales to revert to their climate means. The forcing changes in the model and reality will cause predictable changes in those local climates. Over the 30 year period a single point in the locale will reflect the (changed) climate of the locale.

    Comment by Eric (skeptic) — 13 Aug 2008 @ 12:43 PM

  67. 1. Can we redo the Koutsoyiannis analysis using ensembles, rather than single runs? Can we agree that would be valuable?
    2. I wonder what percentage of individual model runs produces flat temperature trends for 1998-2008? (Using initial conditions fixed at conditions occurring Jan 1 1997.)
    3. How small a sample is too small? Good question. 8 seems small, especially given the free availability of more data. But at least these 8 were chosen at random. Why not redo the analysis with an increasing number of stations and see how the results change? What would it cost to run this analysis? Salary for one research assistant for 6 months?

    Thanks again for the OP.

    [Response: 1) no point. The spatial scale is still too small to see a forced signal come out of the noise. You would instead need to combine Kiraly’s results with an analysis of the CMIP3 20th Century runs. 2) You can see what the AR4 models gave in a previous post. However, there is no equivalent database of initialised models that all start in 1997 (neither the ocean initial conditions, nor the methodology are sufficiently mature). 3) I have no idea how those eight stations were chosen. But you would be better off looking at the gridded products, not individual stations, since you don’t want to re-invent the wheel in making a gridded product in the first place – that’s a big enough job on its own. None of this is time-consuming or expensive. A grad student could have it done in a couple of months. – gavin]

    Comment by Richard Sycamore — 13 Aug 2008 @ 12:53 PM

  68. This is what I get out of this post (without going back for several PhDs): someone can’t see the forest for the trees.

    But I still say Katrina was caused (enhanced) by global warming, until someone can prove to me at 95% confidence (in language I understand) that it was not — the null hypothesis being that GW caused Katrina and the research hypothesis being that it did not.

    Comment by Lynn Vincentnathan — 13 Aug 2008 @ 1:33 PM

  69. Thanks for the rapid response and especially the link (which I’d missed). The more clarity we can have on what the models say and how they work, the better. The longer that temperatures stay flat, the more important it is going to be to understand how this could be consistent with long-term model predictions of forced warming. LTP cuts both ways. Heat flow – and the absence of heat flow – can be surprisingly persistent due to slow and uncertain deep ocean mixing dynamics.

    Comment by Richard Sycamore — 13 Aug 2008 @ 1:49 PM

  70. #64,65 Hank and Ray, many thanks for your patience. Suppose there was a world without substantial changes in climate forcing except seasons. If we ran a simulation with random initial conditions, I would expect the model to match reality within a year or less at a majority of a selected set of points.

    This is obviously a deviation from what is being compared in the paper, so we would have to compare parameters like diurnal temperature, precipitation, etc. not long term changes. (1) Would you expect a match? (2) Would the methodology be invalid once major climate forcings were added to the world and the model?

    Comment by Eric (skeptic) — 13 Aug 2008 @ 1:54 PM

  71. > But I still believe that … Over the 30 year period
    > a single point in the locale will reflect the
    > (changed) climate of the locale.

    What evidence can you point to supporting this belief?

    What size “locale” do you believe has its “(changed) climate” different from a “locale” adjacent to it?

    Watch this:
    http://www.team-6.jp/cc-sim/english/
    (from Bryan Lawrence’s weblog, where he writes:

    Nearly twenty minutes from Seita Emori. The model he’s describing was the highest resolution model in the AR4 archive. That doesn’t make it right, and he probably ought to caveat more the results beyond temperature, but it’s all very plausible. The fact that it is even plausible should cause concern!
    http://home.badc.rl.ac.uk/lawrence/blog
    2008/07/18

    That movie shows you, as it says, “the highest resolution model” — watch Albany or Rome.

    Do you see there what you believe you should see, according to your belief in how the models work?

    Comment by Hank Roberts — 13 Aug 2008 @ 2:19 PM

  72. Lynn, Lynn, Lynn (68), demanding proof of a negative is one of the most common logical fallacies. The default is that the cause does not exist until it is proven otherwise.

    Comment by Rod B — 13 Aug 2008 @ 4:38 PM

  73. Eric, In effect, what you are asking is that if a planet had a climate so simple that it was in effect deterministic, could we predict it? Sure. Mercury comes close. Then there’s the moon. Throw in an atmosphere w/o greenhouse gasses, and things get more complicated, but still mostly tractable. Add ghgs and dust, and you get something like Mars, and that’s pretty difficult if not impossible. Add water in all 3 phases and fuggedaboudit. Now you put all these fossil-fuel burning organisms on it and it’s a wonder Mother Nature herself didn’t throw in the towel… Oh, maybe she did?

    Comment by Ray Ladbury — 13 Aug 2008 @ 5:54 PM

  74. Re #70, after posting I realized that modeling without particular forcings is a regular practice (e.g. http://www.nersc.gov/projects/gcm_data/), but not, as far as I can see, used to compare model points to observation points.

    Comment by Eric (skeptic) — 13 Aug 2008 @ 6:40 PM

  75. Eric, no single point measurement is going to show a climate change for a very long time, within the range of accuracy of the instruments.

    A thousand thermometers, each accurate to one degree, can be used to detect a trend of a tenth of a degree over a decade. No one thermometer in the group can do that.
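    A toy version of that arithmetic (a minimal sketch, not how GISTEMP or HadCRUT actually work; the station count, noise levels, and the independence assumption are all invented for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(10)          # one decade of annual means
      true_trend = 0.01              # i.e. 0.1 degree per decade
      n_stations = 1000

      # each station sees the common trend, plus local weather, plus 1-degree instrument error
      readings = (true_trend * years
                  + rng.normal(0.0, 0.5, (n_stations, years.size))   # local weather noise
                  + rng.normal(0.0, 1.0, (n_stations, years.size)))  # instrument error

      single = np.polyfit(years, readings[0], 1)[0] * 10
      network = np.polyfit(years, readings.mean(axis=0), 1)[0] * 10
      print(f"trend from one station:   {single:+.2f} deg/decade")
      print(f"trend from 1000 stations: {network:+.2f} deg/decade (true: +0.10)")

    The single-station estimate swings wildly from run to run; the network average should land close to the true tenth of a degree, because independent errors shrink roughly as the square root of the station count. (Real station errors are spatially correlated, so the real gain is smaller, but the principle is the same.)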

    Would you make clear whether you do or don’t understand how these temperature trends are being determined from the instruments?

    You seem fixated on the idea of measuring at a single point. It can be done but it will take centuries rather than decades to accumulate enough numbers to do the statistics. You understand this?

    Comment by Hank Roberts — 13 Aug 2008 @ 7:03 PM

  76. #75 Hank, the one degree accuracy of the Albany thermometer should not be an issue over the time periods being compared. I’m not fixated on a single point, but on a set of points, since the paper postulates problems with the spatial integration of temperatures.

    Comment by Eric (skeptic) — 13 Aug 2008 @ 8:03 PM

  77. RE #72 & “proving a negative is one of the top and common logic fallacies. The default is that the cause does not exist until it is proven otherwise.”

    Well, that may be the case for scientists trying to establish science. But for the rest of us trying to avoid serious problems from climate change, we have to assume GW and its serious effects, and keep mitigating, until proven otherwise at high certainty.

    And even if it could be scientifically established with high confidence that the world were not warming, and/or our GHG emissions were not contributing to the warming, and/or the warming was not causing serious harms, we still would want to reduce those activities that produce the GHGs, since they have many other negative effects — other enviro problems, inefficiencies (waste of money & resources), harm to the economy, war/military actions/costs.

    Comment by Lynn Vincentnathan — 13 Aug 2008 @ 9:45 PM

  78. The one degree accuracy of the Albany thermometer should not be an issue over the time periods being compared.

    I assume you have some statistical analysis to share to back up your handwave?

    Hmmm, why didn’t you supply the analysis in the post, since such an analysis is the only thing that could back up your assertion?

    Comment by dhogaza — 13 Aug 2008 @ 10:37 PM

  79. Eric: KevinM “So NO number of locations would be enough, because K. et al.’s research question is inapplicable in the first place.” If every point in a model is inaccurate, how would any aggregate statistic from the model have any validity?

    Just think about fair coin tosses, Eric. I cannot predict any individual toss (any predictor will be at best 50% accurate), but I can predict an aggregate statistic (the number of heads per thousand tosses) with great accuracy.
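    A few lines of Python make this concrete (a sketch; the seed is arbitrary):

      import random

      random.seed(1)
      tosses = [random.random() < 0.5 for _ in range(1000)]  # True = heads

      # the best possible per-toss predictor of a fair coin: always guess heads
      heads = sum(tosses)
      print(f"per-toss prediction accuracy: {heads / 1000:.1%}")   # ~50%

      # the aggregate statistic, by contrast, is tightly constrained
      print(f"heads per 1000 tosses: {heads} (expected 500, sd ~16)")

    No strategy beats 50% on the individual tosses, yet the count of heads is pinned down to within a few percent.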

    Comment by Patrick Caldon — 13 Aug 2008 @ 10:37 PM

  80. Eric “should not be an issue” according to what source?
    What are you relying on for these proclamations?
    You have a concept you’re repeating but I don’t see where it comes from.

    Comment by Hank Roberts — 13 Aug 2008 @ 10:42 PM

  81. And to go a little bit further, Eric, with respect to my last comment: as I’ve thought about it, this analogy is not a bad one.

    We could build a very complex simulation (a model) of a coin tossing apparatus. Imagine we build a simulator of a mechanical hand and coin using some kind of rigid-body dynamics simulation package. We model fair coin tosses. It would be very dependent on the initial conditions, slightly more force to the hand, slight changes in the positioning of the coin, etc. would result in a different result to the coin toss. Suppose in our simulation we’re careful to not put the coin in exactly the same place we put it before and we’re careful to slightly vary the force of the toss, and we simulate a great many such tosses in a “model run”. A coin tossing “model run” would be an accurate simulation of reality, measured on a toss-by-toss basis, approximately 50% of the time with respect to any real sequence of coin tosses; i.e. toss #1 of the real sequence will match toss #1 of the simulated sequence 50% of the time. Furthermore any two “model runs” will be correct for any particular toss in the run approximately 50% of the time with respect to each other.

    However all runs (and reality) will exhibit the property of the aggregate statistic of “the number of heads coming up in a run of a particular length” being nearly identical.
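    For anyone who wants to see this without building the rigid-body simulator, here is a drastically stripped-down stand-in (two “runs” are just two independent random sequences, which is the limiting case described above):

      import random

      def model_run(seed, n=10000):
          r = random.Random(seed)   # a different seed plays the role of different initial conditions
          return [r.random() < 0.5 for _ in range(n)]

      run_a, run_b = model_run(1), model_run(2)

      agree = sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)
      print(f"toss-by-toss agreement between runs: {agree:.1%}")        # ~50%
      print(f"heads in run A: {sum(run_a)}, heads in run B: {sum(run_b)}")

    Toss by toss the runs are uncorrelated, while the aggregate counts of heads agree to within about a percent.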

    Comment by Patrick Caldon — 14 Aug 2008 @ 2:26 AM

  82. #52 Ray

    “In a climate model, there is a lot of sensitivity to initial conditions and which particular small fluctuations occur in any particular run.”

    Does this mean that a model, if it were built today, would exactly replicate the earth, with no known (or unknown) unknowns?

    Does this mean we have a ToE?

    Comment by Alan K — 14 Aug 2008 @ 5:25 AM

  83. Alan K., not sure how what you wrote pertains to what I wrote.

    Comment by Ray Ladbury — 14 Aug 2008 @ 7:18 AM

  84. #78 and #80, dhogaza and Hank, my statement was not precise; I should have said the one degree “precision” of the Albany thermometer. I assume the thermometer was accurate (unbiased, in calibration, etc).

    K. and his co-authors note that they used station data with a monthly time scale and analyzed it at monthly, annual, and 30-year moving average time scales. They had no further discussion of the station measurements.

    I assumed that the monthly station reading would be an average of 30 daily readings. From analysis such as http://hadobs.metoffice.com/hadcet/ParkerHorton_CET_IJOC_2005.pdf
    I would convert the one degree (F) precision into a variance of 0.026 and divide it by the 30 readings being averaged in the month. That’s why I didn’t think it would be an issue.
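    Spelling that arithmetic out (taking the 0.026 figure at face value; treating a 1 degF reading as having uniform quantization error of +/-0.5 degF would instead give a variance of 1/12, about 0.083, which changes nothing important):

      import math

      var_single = 0.026   # stated variance for a one-degree-F reading
      n_daily = 30         # daily readings averaged into one monthly value

      se_monthly = math.sqrt(var_single / n_daily)
      print(f"standard error of the monthly mean: {se_monthly:.3f} degF")   # ~0.03 degF

    Either way the monthly-mean instrument error is a few hundredths of a degree, which is the point.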

    Comment by Eric (skeptic) — 14 Aug 2008 @ 7:50 AM

  85. Off-topic, but since there’s no “open thread” on this blog, I’m not sure where else to put this.

    I’ve detected another mistake in Miskolczi’s paper. Miskolczi’s equation (4) is:

    AA = SU·A = SU(1 − TA) = ED

    where

    AA = Amount of flux Absorbed by the Atmosphere
    SU = Upward blackbody longwave flux = sigma Ts^4
    A = “flux absorptance”
    TA = atmospheric flux transmittance
    ED = longwave flux downward

    These are simple identity definitions. I do wonder why Miskolczi used the upward blackbody longwave for the amount emitted by the ground when he should have used the upward graybody longwave — he’s allegedly doing a gray model, after all. Apparently he forgot the emissivity term, which is about 0.95 for longwave for the Earth. One more hint that he doesn’t really understand the distinction between emission and emissivity.

    Note that he seems to be saying the downward flux from the atmosphere (ED) must be the same as the total amount of longwave absorbed by the atmosphere (AA).

    The total inputs to Miskolczi’s atmosphere are AA, K, P and F, which respectively stand for the longwave input from the ground, the nonradiative input (latent and sensible heat) from the ground, the geothermal input from the ground, and the solar input. P is negligible and I don’t know why he even puts it in here unless he’s just trying to be complete. He’s saying, therefore, if you stay with conservation of energy, that

    AA + K + F = EU + ED

    Now, from Kiehl and Trenberth’s 1997 atmospheric energy balance, the values of AA, K, and F would be about 350, 102, and 67 watts per square meter, respectively, for a total of 519 watts per square meter. EU and ED would be 195 and 324, total 519, so the equation balances.

    But for Miskolczi’s equation (4) to be true, since AA = ED, we have

    K + F = EU

    That is, the sum of the nonradiative fluxes and the absorbed sunlight should equal the atmospheric longwave emitted upward. For K&T97, we have 102 + 67 = 195, or 169 = 195, which is an equation that will get you a big red X from the teacher.

    There is no reason K + F should equal EU, therefore Miskolczi’s equation (4) is wrong. Q.E.D.
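    The arithmetic is easy to check by re-keying the K&T97 numbers quoted above:

      # Kiehl & Trenberth (1997) fluxes, W/m^2
      AA, K, F = 350.0, 102.0, 67.0   # LW absorbed by atmosphere; non-radiative; solar absorbed
      EU, ED = 195.0, 324.0           # LW emitted by the atmosphere, up and down

      print(AA + K + F, "=", EU + ED)   # 519.0 = 519.0: conservation of energy holds
      print(K + F, "vs", EU)            # 169.0 vs 195.0: the relation implied by eq. (4) fails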

    Comment by Barton Paul Levenson — 14 Aug 2008 @ 8:09 AM

  86. hi Ray
    my question is: suppose you were building a climate model for, e.g., Albany. You need the correct energy density as, presumably, one of many input parameters. You then say that initial conditions are key, so you would need not only the correct energy density but the correct everything else. Do we know the correct everything else well enough to ensure that initial conditions are replicated in a climate model, so that no “loose ends” end up corrupting it? (I.e., if everything were correct except the energy density value, then surely the model would not be as accurate as if the energy density were correctly assessed.) So my question about climate models and initial conditions is: how close can we get to the real initial conditions (= our earth today, all of which has an effect on climate)? If we can replicate reality, then that is a ToE, isn’t it? If we can’t, then don’t all those loose ends end up corrupting the model?

    Comment by Alan K — 14 Aug 2008 @ 8:37 AM

  87. They can model the climate at continental scales, and that gives a good indication of what might happen should we continue BAU. But what about the biosphere per se? What model has any idea what happens there? Not many, I would imagine, and hence we rely on real-world evidence from the geological record, ice cores, organic matter, etc., to tell us about past climate and the biosphere.

    James Hansen tells us that the models are useful but are the weakest link in the earth/climate science chain; real-world evidence counts for much more of what we know about future climate change. The models seemingly just back up the real-world evidence.

    Comment by pete best — 14 Aug 2008 @ 8:51 AM

  88. Lynn (77), I respect your zeal, but asserting things that are prima facie contraindicated in support of your cause is not a good way to build your credibility vis-à-vis people you may be trying to convince, at least in a logical and scientific environment. (might work well with much of the public as a mass…)

    Comment by Rod B — 14 Aug 2008 @ 8:51 AM

  89. Rod admonishes:

    “Lynn (77), I respect your zeal, but asserting things that are prima facie contraindicated in support of your cause is not a good way to build your credibility vis-à-vis people you may be trying to convince, at least in a logical and scientific environment. (might work well with much of the public as a mass…)”

    Holy cow, Rod, your own argumentation is a bunch of contraindications, lies and wishful thinking. Please save us from your hypocrisy!

    Comment by Petro — 14 Aug 2008 @ 9:37 AM

  90. Alan K., Seeking omniscience is the wrong approach for us mere mortals. Rather, you vary the initial conditions and look at the persistent (or robust) results. Averaging over many runs or over space and time can diminish the dependence on initial conditions. Remember, what we’re interested in is climate–persistent global and regional (not local) trends over time.

    Comment by Ray Ladbury — 14 Aug 2008 @ 9:51 AM

  91. #81, PatrickC, I like your improved analogy and within a model run of successive tosses, we would probably want the coin placement and tossing force to be a complex function of the previous toss, not simply random. This would help model the analogous LTP from the paper.

    One hypothesis in the paper is that the local climate is predictable in some way. I would thus propose that the coins are not equally weighted, that they are each slightly biased but as a set they are not. The question implied by the paper is whether the local climate bias will reflect the global climate changes predictably, or whether that bias will be overwhelmed by weather.

    In your analogy it would be simply whether the real and modeled coins (real coins are never perfect) have enough bias in their weighting to make a predictable difference in the measurements or whether that bias will be overwhelmed by the hand position and tossing force changes (i.e. weather).

    Comment by Eric (skeptic) — 14 Aug 2008 @ 9:53 AM

  92. #90 thanks Ray: omniscience – a “nice to have” :)

    but hold on, however; isn’t your reasoning circular? Initial conditions are important (“there is a lot of sensitivity to initial conditions”), but then you see what results you get and vary the initial conditions until you have the “right” initial conditions to suit the results. So either model results follow from initial conditions or, if those initial conditions don’t produce the “right” results, the initial conditions are changed to suit the model results. You do not quite have a predictive model, then, do you?

    Comment by Alan K — 14 Aug 2008 @ 10:46 AM

  93. Petro (89), what on earth are you talking about? And, why?

    Comment by Rod B — 14 Aug 2008 @ 11:32 AM

  94. My 2 cents, @ Lynn and responders:
    It looks to me like Lynn’s using a risk management philosophy, which is IMO absolutely appropriate for policy decisions, but she made the mistake of expressing it in scientific terms, which as Rod B points out is inappropriate.

    Risk management: Once given a credible scenario that a risk exists with probability above some reasonable threshold, that needs to inform policy-making until/unless that risk can be demonstrated to have a low probability, beneath some reasonable threshold.

    Science: null hypothesis should usually be that the independent variable has no effect on the dependent variable.

    So what Lynn *said* was wrong, but what she seemed to *mean* (taking into account her follow-up post) was right. Sometimes you can tell what people mean even when they say it wrong. In that case, people who make too big of a deal out of what was *said*, neglecting what was pretty clearly *meant*, are rather aggravating to interact with. I feel like Rod B probably understands this in a personal way, after that whole back-and-forth about Monckton’s recent paper.

    Comment by kevin — 14 Aug 2008 @ 12:19 PM

  95. Alan K, Actually, you are more interested in the behavior that persists across various initial conditions and in spite of fluctuations. You are also interested in the RANGE of behaviors. The results of any single run are not that interesting. That’s why the paper is so baffling. I think of it as analogous to a thermodynamic system–we know a thermodynamic system will spend most of its time near equilibrium because there are so many more possible states there than far from equilibrium. Because of this, we concern ourselves with the properties of the equilibrium system. It also happens to be what we know how to calculate–nonequilibrium thermo is really “near-equilibrium” thermo. Does that make sense?

    Comment by Ray Ladbury — 14 Aug 2008 @ 12:28 PM

  96. Alan K #92,

    You can’t have perfect initial conditions, and chaos theory tells us that a small change can make a huge difference. Most people forget the “can”, and so we get the butterfly-wings meme trotted out, which doesn’t explain anything and actively hinders understanding.

    So what you do is run a model with slightly different initial values that still give an overall answer that is correct (e.g. wind speed 10kts +/- 3kts: you pick 5-15kts, not 10-90kts, because 90kts doesn’t give you a value you “read” with your instruments as 10kts).

    And you run lots of these.

    Where they all roughly agree with each other is on the things that don’t depend much on initial conditions. This gives you your model sensitivity. And you use that to improve your models.

    However, your model must miss things out because we don’t have the computing power to simulate each molecule in the air and in the oceans. Different models miss out or approximate different things. If none particularly disagree, you have a very predictable weather/climate pattern and have a strong indication that your forecast is very close to the truth.

    So you run all these models and by virtue of you not forcing the values to conform (in which case, why run the models: just make the numbers up on a spreadsheet and go home), they act slightly differently. But their average should be a lot closer to the truth.

    Rather like tossing a coin. The easiest way to see if the coin is weighted is to toss it 100 times. You use the average of all these tosses (which had different throws on them, just like different models have different forces or starting conditions) to decide whether the coin is biased or not.

    The more you toss that coin, the more certain you are that the result you get is the correct one. E.g. toss a coin 100 times and your confidence in the result is 90%. Toss it 10,000 times and it’s 99%.

    This is not a circular argument. It’s statistics. You can prove it yourself by taking a coin, weighting it so it is biased to heads or tails, and doing the experiment. The first throw tells you NOTHING. The next four don’t tell you much unless they are almost all one side or the other. After 10,000, even a very slight bias in the coin could be discovered.
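    A sketch of that experiment (the 51% bias and the seed are arbitrary; the z-score just measures how many standard errors the observed fraction sits from a fair coin):

      import math, random

      random.seed(3)
      p_true = 0.51   # a very slightly weighted coin

      for n in (10, 100, 10000, 1000000):
          heads = sum(random.random() < p_true for _ in range(n))
          se = math.sqrt(0.25 / n)            # standard error under the fair-coin null
          z = (heads / n - 0.5) / se
          print(f"n={n:>7}: fraction heads {heads / n:.4f}, z vs fair coin {z:+.1f}")

    A handful of tosses tells you nothing; by ten thousand a 1% bias is at the edge of detection, and by a million it is unmistakable.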

    (Which oddly enough is something Rod B and Monckton seem to have missed.)

    Take a look at the Laplace construction that is always shown in chaos theory books. There are areas where the lines are close together: a small change doesn’t make much change in the output. But “small changes make small changes in the output” does not make good copy.

    Comment by Mark — 14 Aug 2008 @ 12:34 PM

  97. Eric, it’s nuts to refer to just a few weather stations to try to establish anything about long-term climate. You have to RTFM.

    http://scholar.google.com/scholar?sourceid=Mozilla-search&q=how+many+surface+weather+stations+to+determine+climate+change

    First article found:
    http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf
    Citations to that article:
    http://scholar.google.com/scholar?hl=en&lr=&safe=off&cites=7874538422593985630

    Note the number of times they’ve been cited by later work — read some of that to see the science in this field. Speculation and wishes don’t change the instruments nor what you can accomplish with the data.

    Comment by Hank Roberts — 14 Aug 2008 @ 2:04 PM

  98. re #79: “If every point in a model is inaccurate, how would any aggregate statistic from the model have any validity?”

    I’m not sure how I can say this differently, but. . . if my (imaginary) “bubble prediction model” reliably replicates the larger-scale properties of the observed ensemble of bubbles in my (imaginary) liter of boiling water, then we may not feel that an inability to predict single bubble trajectories is a meaningful deficit. That is, if the model consistently captures the aggregate behaviour–the functions describing densities, sizes, durations, and such–then the modelled bubble ensembles would “look just like” the observed ones. And, more importantly, “act just like” them, too. And we would be able to use the models to make some predictions, with error bars, about what might happen should that liter of water be somehow subjected to temperature or pressure conditions that we are not physically able to set up in our lab. And we still might be utterly unable to predict what any one bubble will do.

    I think the problem you are having is in imagining that this is even a possible scenario–your repeated questions all seem to presuppose that prediction on smaller scales is necessary to prediction on larger scales. (“Weather becomes climate.”) Possibly you are imagining causality proceeding “upwards” from small causes to larger ones. But does it really work that way?

    Comment by Kevin McKinney — 14 Aug 2008 @ 2:44 PM

  99. kevin (94), I agree. It’s what I said; you just said it much better.

    Comment by Rod B — 14 Aug 2008 @ 4:01 PM

  100. RE #94, thanks, Kevin. That’s exactly what I meant — a focus on the beta error, or avoiding the false negative. I call it the medical model, AKA the precautionary principle — one would be quite nervous if the doctor were to say that they are only 94% confident that one’s lump is cancerous, so no treatment is needed until confidence gets up to that golden 95% (or a .05 p that the null is correct).

    I’d think that the level of confidence required for mitigation or prevention of a problem would be inversely related to the seriousness of the problem and positively related to the costs of mitigation and prevention. Since (1) global warming risks are so very high (throwing in everything that could go wrong, such as extreme warming for 100,000 years from positive feedbacks kicking in, and hydrogen sulfide outgassing snuffing out a huge chunk of whatever life survives the warming), and (2) we haven’t even scratched the surface of cost-effective mitigation strategies that actually save us tons of money, I’d say the standard for deciding to mitigate should be exceedingly low, like what science may have reached well before 1990 (I know it reached 95% confidence, or a .05 prob of the null being correct, in 1995).

    So basically, the world should have seriously started mitigating this problem before 1990, and we should be at least some 20% below 1990 GHG levels by now, rather than way above them.

    Which means that even if this Koutsoyiannis, et al., study in question had failed to reach 95% certainty on GW and its effects in a way acceptable to the community of climate scientists (which apparently it did not, because it confused the random fluctuating noise of weather with the statistical aggregate of climate), it doesn’t really matter from a policy standpoint. It does nothing to derail the urgency of our need to severely mitigate GW, starting immediately, if not 20 years ago as we should have done.

    Then once we’ve implemented all the cost-effective mitigation strategies (which should keep us very busy for some 20 years or so), we can then revisit whether or not our GHGs are causing GW, and GW is causing harmful effects, and whether or not we should start sacrificing to mitigate the GW disasters.

    So not knowing what the science is here, doesn’t really bother me much; the study in question would not convince me to stop mitigating, even if it did hold some water.

    Comment by Lynn Vincentnathan — 14 Aug 2008 @ 4:54 PM

  101. #98 I tried to explain this on another thread. Uncertainties, errors, small differences in initial conditions propagate, yes – but only up to a point. As unpredictable weather integrates over time to become predictable climate, the law of large numbers kicks in and all those errors in all those grid cells, all those processes, start canceling. Radiative balance severely constrains how the system may evolve, what states it may take on. That is why Eric’s (and Pat Frank’s) argument is incorrect.

    Of course the numerous local departures from equilibrium don’t cancel completely (due to all the temporal and spatial process lags) so you are left with the internal climate variability that is revealed to us as ENSO, PDO, NAO, THC, etc.

    That is why you need 1000s of data “points” (or better, grid cells) – not 8 – to reliably test model predictive power. Because with too small a time and space scale you are dealing with weather, not climate.
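    The cancellation itself is just the law of large numbers; for example (independent cell errors assumed, which is the idealized case; spatial correlation like ENSO reduces the effective number of cells, which is exactly the residual variability described above):

      import numpy as np

      rng = np.random.default_rng(4)
      for n_cells in (8, 1000, 100000):
          errors = rng.normal(0.0, 1.0, n_cells)   # per-cell departures, sd = 1
          print(f"{n_cells:>6} cells: |global-mean error| = {abs(errors.mean()):.4f} "
                f"(expected ~{1 / np.sqrt(n_cells):.4f})")

    Eight cells leave you with roughly a third of the per-cell noise; a hundred thousand leave you with a third of a percent.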

    Comment by Richard Sycamore — 14 Aug 2008 @ 5:05 PM

  102. Mark (96) — Actually, the first flip of the coin is worth about 0.03 bits of information.

    Comment by David B. Benson — 14 Aug 2008 @ 5:19 PM

  103. Ray #95, thanks for responding; it does make sense, and I appreciate it’s not a one-run linear model set up – forgive me if I don’t respond further now as it’s late in europeland! I will aggregate the various initial conditions and respond tomorrow…

    Comment by Alan K — 14 Aug 2008 @ 6:08 PM

  104. In comment #36, Steve E. says in part:” The computational models have an interesting status in this endeavour: they seem to be used primarily for hypothesis testing, rather than for forecasting. A large part of the time, climate scientists are “tinkering” with the models, probing their weaknesses, measuring uncertainty, identifying which components contribute to errors, looking for ways to improve them, etc. But the public generally only sees the bit where the models are used to make long term IPCC-style predictions.”

    Unfortunately there’s a general misconception among the public that models make predictions. My understanding is that the digital models make projections of possible future outcomes, dependent on measurable data, known physical principles, and numerous uncertain and hard-to-predict variables.

    They don’t forecast the future as much as tell what we can likely expect from given scenarios,which include human social and economic behavior. (Although Hansen’s Scenario B in his three projected climate model scenarios of surface temperatures made in 1988, sure looks as if he had a crystal ball!)

    Comment by Lawrence Brown — 14 Aug 2008 @ 6:18 PM

  105. #95 Ray Ladbury says:

    “Actually, you are more interested in the behavior that persists across various initial conditions and in spite of fluctuations. You are also interested in the RANGE of behaviors. The results of any single run are not that interesting. That’s why the paper is so baffling.”

    This viewpoint is what is baffling. The authors should have used multiple runs of FIXED initial conditions. The inline reply to #67 makes it clear why this, though desirable, is unfortunately not possible. So although I agree you are interested in the range of behaviors, and you are interested in ensemble behavior, you are NOT interested in behavior across various initial conditions. If your goal is to compare model output to reality, that is.

    Studying model behavior across a range of initial conditions is useful, but answers a different question than what this paper tried to address.

    Comment by Richard Sycamore — 14 Aug 2008 @ 7:48 PM

  106. Studying model behavior across a range of initial conditions is useful, but answers a different question than what this paper tried to address.

    Which would be what the models predict given a fixed, known-wrong, set of initial conditions?

    Wow, that’s really going to tell us a lot.

    Comment by dhogaza — 14 Aug 2008 @ 9:52 PM

  107. What Koutsoyiannis et al. seem to be upset about is the use of output from global climate models to make hydrological predictions. So, here is another example of just that: a paper on the Western U.S. snowpack that compares historical temperature and precipitation observations to hydrology. The authors use the output of global climate models to make general projections about the future of the snowpack:

    http://sciencepolicy.colorado.edu/admin/publication_files/resource-1699-2005.06.pdf

    Mote et al. (2005), DECLINING MOUNTAIN SNOWPACK IN WESTERN NORTH AMERICA, Bulletin of the American Meteorological Society, vol. 86, Issue 1, pp. 39-49

    Instead of data from eight locations, we have:

    Snow Course dataset: “A total of 1144 data records exist from the three data sources for the region west of the Continental Divide and south of 54°N. Of these, 824 snow records have 1 April records spanning the time period 1950–97 and are used in most of the analysis. For the temporal analysis, a larger subset of the 1144 snow courses was used.”

    Temperature/precipitation dataset: “Data from the nearest five stations are combined into reference time series. There is a total of 394 stations with good precipitation data and 443 with good temperature data.”

    The t/p data was fed into the VIC hydrology model, which produced estimates of snowpack, which were then compared to the observed snowpack record, with a close match. Thus, Mote et al. show that the VIC model, when forced with historical temperature and precipitation records, gives a decent match to observed snowpack records. If a global or regional climate model produces a given temperature/precipitation forecast, the VIC model should produce a realistic estimate of snowpack based on that.

    Mote et al. also discuss the influence of El Nino and PDO indices on the year-to-year variability in precipitation (the long-term memory issue):

    ..only a small fraction of the variance of precipitation is explained by any of the Pacific climate indices, and, more importantly, the widespread and fairly monotonic increases in temperature exceed what can be explained by Pacific climate variability.

    Then, they use global climate model projections to predict future trends, which is what K. et al. claim is unjustified:

    We are left, then, with the most important question: Are these trends in snow water equivalent an indication of future directions? The increases in temperature over the West are consistent with rising greenhouse gases, and will almost certainly continue.

    Estimates of future warming rates for the West are in the range of 2°–5°C over the next century, whereas projected changes in precipitation are inconsistent as to sign and the average changes are near zero.

    It is therefore likely that the losses in snowpack observed to date will continue and even accelerate, with faster losses in milder climates like the Cascades and the slowest losses in the high peaks of the northern Rockies and southern Sierra.

    So, here is what Koutsoyiannis et al. can do: take the ~400 locations used in Mote et al. and compare that historical dataset to predictions of global climate models for that region. That would allow them to at least address their stated question, which was, quote, “the credibility of the geographically distributed representation of climate by GCMs.”

    The model for doing that could be Salathe 2005, available here: http://cat.inist.fr/?aModele=afficheN&cpsidt=16653268

    The ECHAM4 simulation closely reproduces the observed statistics of temperature and precipitation for the 42 year period 1949-90.

    For more, just read Chapter 11 of the IPCC FAR, Regional Climate Predictions, which goes into great detail on the sources of regional uncertainty in climate models.

    After looking at these observations and models, what would the rational response be for denizens of the Western U.S.? Well, first, stop burning coal. Second, include the reality of water scarcity in any future growth planning. Third, start investing in solar and wind energy to replace the coal.

    Speaking of which, our wonderful Congress has failed yet again to pass the renewable energy tax credit, thereby bringing the entire industry to a grinding halt, with companies racing to finish all projects in a few months. Shamelessly, the Senate Energy committee has also been hosting a fight over what sector of the coal industry will receive billion-dollar DOE largess, and what district that largess will go to (the May 8th hearings featuring Bodman & Thompson). Must be seen to be believed… http://appropriations.senate.gov/hearings.cfm?s=erg

    Opening quote (Byron Dorgan, ND): “…with 50% of the electricity coming from coal, and with climate change legislation being enacted calling for targets and timetables and so on, how do we continue to use our coal resource? The answer to that is through technology, and through learning, and through demonstration projects and going from demonstration to commercial application of projects that will capture carbon…”

    In 2006, the U.S. dug up and burned 1.054 billion tons of coal, resulting in CO2 emissions of 2.134 billion tons (Energy Information Administration). The claim that we will be capturing and storing any meaningful fraction of the carbon dioxide produced from burning that coal (every year, no less) is just ludicrous nonsense.

    Solar and wind-based technology is the only real answer. For something more positive, see the latest major breakthrough in solar energy conversion technology: http://www.sciencedaily.com/releases/2008/07/080731143345.htm

    ‘Major Discovery’ Primed To Unleash Solar Revolution: Scientists Mimic Essence Of Plants’ Energy Storage System, Aug 01, MIT

    Comment by Ike Solem — 15 Aug 2008 @ 12:09 AM

  108. Richard, #105

    No, if we were running a physical experiment, we would use the same initial conditions. Reality and the finite ability to be accurate in the real world will do the fuzzing.

    In a mathematical model, however, the same number put through the same equation will always produce the same value.

    So running the same equation multiple times will produce the same result.

    Comment by Mark — 15 Aug 2008 @ 2:16 AM

  109. David #102, what use is 0.03 bits of information when you need a yes or no answer?

    Rather like having a half-penny when you are shopping. Technically you’re not broke. You have a half-penny. You can’t spend it, which is the point of money. So are you broke?

    And it’s really less than that, because the coin could do a lot more than “fall heads” or “fall tails”. You could drop it, it could land on its edge, someone could steal it midair, you could give up and walk away…

    Comment by Mark — 15 Aug 2008 @ 2:20 AM

  110. Richard Sycamore, Sorry, in the interest of brevity, I was probably less specific than I should have been. Actually both studies are interesting, but they tell you different things. We are certainly interested in multiple runs with the same starting conditions–that tells you how much fluctuations affect the end result. However, we never have perfect knowledge of initial conditions, so varying the initial conditions is also interesting. Frankly, I am uncertain what question K. et al. could possibly have been trying to answer. Their approach makes no sense unless you want to use GCMs for long-term weather prediction–and that’s not an enterprise I’d bet on.

    Comment by Ray Ladbury — 15 Aug 2008 @ 7:06 AM

  111. Thanks for #110, Ray. Replies directly to #106.

    You say:
    “Their approach makes no sense unless you want to use GCMs for long-term weather prediction”

    Your distinction between “weather” and “climate” is traditional and understandable, but is it verging on dogmatic? Ocean temperatures vary hugely, but on slow, “climatic” time scales. I would contend that “long-term weather prediction” is exactly what the AOGCMs do, if by “weather” you mean the chaotic dynamics of the “O” in the AOGCM.

    So maybe their approach DOES make sense, when viewed from a different perspective.

    Comment by Richard Sycamore — 15 Aug 2008 @ 8:47 AM

  112. Richard, you’re entitled to your own opinions, but when you apply your own definitions to support them, your ideas become disconnected from the subject under discussion and fly off tangentially.

    If you redefine climate as weather, then, yes, climate is the same as weather. This is a perspective, but I shudder to think of the contortions required to view their paper from that perspective. Ouch!

    “Contrariwise, if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.” — Lewis Carroll

    Comment by Hank Roberts — 15 Aug 2008 @ 12:29 PM

  113. Richard, Individual runs are not completely uninteresting. If they exhibit extreme behavior, one can try to figure out why. Those that cluster around median behavior give an idea of what sorts of perturbations the system is stable to. There is no reason to expect any single run to correspond to anything in the real world, though, so K. et al.’s approach is misguided. It’s a little like doing a Monte Carlo run. Individual runs will be more of interest in telling you about the model than about the system you are trying to model, but take enough runs and you will elucidate the physics of the system.

    Comment by Ray Ladbury — 15 Aug 2008 @ 12:33 PM

  114. #113 “take enough runs and you will elucidate the physics of the system”
    Take enough runs and you will elucidate the *conjectured* physics of the system.

    Otherwise, agreed.

    #112 Sit back, be patient, and maybe you’ll learn something.

    Comment by Richard Sycamore — 15 Aug 2008 @ 2:52 PM

  115. Richard 114

    What’s the difference?

    Physics is not equal to the reality.

    But they conform to reality as well as we can read our instruments.

    So although we “conjecture” that gravity acts as per Newtonian physics (while gravity isn’t listening: it’s too busy being gravity), this IS the “physics of the system”.

    And what the clucking bell was your “sit back” comment apart from hugely offensive and derogatory?

    Weather is NOT CLIMATE.

    We have

    Weather: what we are getting THIS INSTANT. Hugely variable.
    Seasons: Winter will generally be colder than Summer
    Climate: In an ice age it will be colder than an interglacial

    But even in a climatic ice age, we still have summer (which is still warmer than winter) and we still have weather that can, on any particular day, be warmer than a day in another season.

    If you shut your yap and THOUGHT maybe you’d learn something.

    Comment by Mark — 15 Aug 2008 @ 3:19 PM

  116. #115
    “what the clucking bell was your “sit back” comment apart from hugely offensive and derogatory?”
    I meant sit back and learn something from Gavin, not me. I was asking Hank to please stop getting in the way all the time. I want Gavin to clarify where weather stops and climate starts, how this relates to the characteristic time scales of ocean fluid dynamics, and how this might relate to the title of the opening post.

    “If you shut your yap and THOUGHT maybe you’d learn something.”
    I’m all ears, thinking cap on, ready to learn. Please, proceed.

    My captcha phrase is “Gavin suffer”. That’s probably a bad sign.

    Comment by Richard Sycamore — 15 Aug 2008 @ 5:19 PM

  117. Richard, Where climate starts and weather stops is not a particularly productive way of looking at things.
    It is a little like asking when the microworld stops being quantum, or when you can’t do physics without relativity. The answer is going to vary depending on the phenomenon being discussed. In the case of climate, it depends on how the noise diminishes over time. Gavin and Raypierre have both emphasized that there are many different timescales to climate–even with respect to the oceanic interactions.

    Comment by Ray Ladbury — 15 Aug 2008 @ 6:07 PM

  118. > stop
    Delighted.

    Comment by Hank Roberts — 15 Aug 2008 @ 6:45 PM

  119. Richard, How about this as a suggested definition of the timescale of climate? A climatic trend emerges at a confidence level CL on a timescale such that, given two series of climate model runs–one possessing the trend, the other not–the proportion of runs with the trend that clearly exhibit it is CL, and the proportion of runs without the trend that appear to exhibit it is 1-CL. Of course it likely means that different climatic trends will have different timescales, and it makes the timescale dependent on CL, the desired confidence level, but I think it makes sense.
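    That definition is directly computable. Here is a minimal sketch (the trend size, noise level, and the “clearly exhibits it” rule are all invented just to show the machinery; the noise is white, so long-range memory, which would stretch the timescale, is ignored):

      import numpy as np

      rng = np.random.default_rng(5)
      CL, trend, sigma, n_runs = 0.95, 0.02, 0.3, 500   # deg/yr and noise sd are made up

      def detect_fraction(length, b):
          t = np.arange(length)
          runs = b * t + rng.normal(0.0, sigma, (n_runs, length))
          slopes = np.polyfit(t, runs.T, 1)[0]    # one fitted slope per run
          return np.mean(slopes > trend / 2)      # rule for "clearly exhibits the trend"

      for length in (5, 10, 20, 40, 80):
          hit = detect_fraction(length, trend)    # runs that possess the trend
          false = detect_fraction(length, 0.0)    # runs that do not
          ok = hit >= CL and false <= 1 - CL
          print(f"{length:>3} yr: P(detect|trend)={hit:.2f}, "
                f"P(detect|no trend)={false:.2f}{'  <- emerges here' if ok else ''}")

    With these made-up numbers the trend emerges at around the 40-year mark; the answer shifts with CL and with the particular trend and noise in question, as noted above.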

    Comment by Ray Ladbury — 16 Aug 2008 @ 7:06 AM

  120. The topic is “hypothesis testing and long-range memory”. Where weather stops and climate starts is, in fact, the issue – insofar as “weather noise” may become “climatic noise” if it persists long enough. It is the issue that eminent hydrologists such as Dr. Koutsoyiannis are exploring in papers such as this one. Rather than dismiss the paper as being methodologically flawed, why don’t you ask yourselves what he is trying to get at? You have been trying to sweep this issue of weather vs. climate under the rug, saying “it doesn’t matter”, or “it isn’t helpful to look at it that way”, or providing inexpert definitions in terms of expectation vs. realization. But I am very glad to say that Ray has thought about it overnight and has come back with an attempt at a potentially workable definition. It shows perhaps you understand that this may be an important issue after all. I will think about Ray’s definition, whether I agree with it, and what its implications are. Thank you, Ray, for listening and trying to address my question.

    Comment by Richard Sycamore — 16 Aug 2008 @ 10:05 AM

  121. Re: #120 (Richard Sycamore)

    The only one trying to “sweep this issue of weather vs. climate under the rug” is Koutsoyiannis. That’s one of the points of this post.

    Comment by tamino — 16 Aug 2008 @ 11:23 AM

  122. Richard, remember, I am hardly an expert. I’m just thinking about this from my own experience with modeling complex systems–also not unlimited. In the case of climate you are looking at a time series that is affected by many factors, but factors such as CO2 stand out because while not as large as other factors, they are persistently positive. Over time, since there are not many such signals, particularly that are changing, they will emerge from the noise–how rapidly depends on how many other influences there are and their time dependence.

    Comment by Ray Ladbury — 16 Aug 2008 @ 11:23 AM

  123. Isn’t the difference between weather and climate much simpler than that? Like in, climate is forced, weather unforced variability. The latter being completely removable by ensemble averaging. And completely unpredictable over more than a few years (and that only for the ocean related stuff).

    When studying climate, weather is the noise. There is no ‘climate noise’ then.

    No, this doesn’t help for empirical separation of the two; for that, Ray’s approach may have a point.

    Comment by Martin Vermeer — 16 Aug 2008 @ 11:33 AM

  124. Martin, At the risk of being a pain, what do we really mean by “unforced variability”? Isn’t that effectively saying that there are many forcers acting on a nonlinear system on that timescale, so deterministic predictions are not possible? I mean, at some level, even weather events are “forced” in that they have a proximate cause, even if it is altered by a butterfly flapping its wings in the Amazon.

    Comment by Ray Ladbury — 16 Aug 2008 @ 3:41 PM

  125. Well, I can’t build models using math, but my 30-year-old pear trees were a close-run thing to establish (one died back to the ground), and they used to bear at the very end of September/early October, so I had to beat the storms to pick the fruit. Now, here in Duluth MN, I can start picking today. No matter the quibbles, my pear trees attest to the general accuracy of the models.

    Comment by John Sarette — 16 Aug 2008 @ 4:44 PM

  126. Ray:
    http://www.realclimate.org/index.php/archives/2005/09/what-is-a-first-order-climate-forcing/
    “… It is helpful to distinguish forcings that are important in the global mean, from those which might be important locally but not have much impact for ‘global warming’….”

    Comment by Hank Roberts — 16 Aug 2008 @ 7:18 PM

  127. See a statistical distribution of balls in a Pachinko machine.
    Then the entire mechanism loses its level, two legs tilt the level to one side, and if one leg was lower then the entire table would be out of level and violently tipping.

    Fortunately, the Pachinko machine sits atop another Pachinko machine.

    It’s Pachinko machines all the way down.

    Comment by Richard Pauli — 17 Aug 2008 @ 12:13 AM

  128. Ray,

    not quite. Yes, I mean variability that would happen anyway even if all external forcings were strictly constant. And yes it is the chaotic, “fading memory” part of variability.

    Whether you can say that weather events have a proximate cause, I suppose formally so. But that is not a “forcing”, it’s an initial condition. As a metaphor think of an ODE: the forcing is the F(t) on the right side, what makes the solution different from the homogeneous solution. The complication here is that there isn’t just one solution but a whole bundle of them, even for a strictly prescribed external forcing regime. Due to the chaoticness, prescribing initial conditions will help only for a limited time.
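    The metaphor in code (a linear toy, so the memory fades smoothly instead of chaotically, but the forced/unforced split is the same; tau, the forcing, and the initial spread are arbitrary):

      import numpy as np

      tau, dt, n = 5.0, 0.1, 1000
      t = np.arange(n) * dt
      F = 0.5 * np.sin(0.3 * t)            # the F(t) on the right-hand side

      def integrate(x0):                   # x' = -x/tau + F(t), forward Euler
          x = np.empty(n)
          x[0] = x0
          for i in range(n - 1):
              x[i + 1] = x[i] + dt * (-x[i] / tau + F[i])
          return x

      a, b = integrate(5.0), integrate(-5.0)   # wildly different initial conditions
      print(f"spread at t=0:   {abs(a[0] - b[0]):.1f}")
      print(f"spread at t~100: {abs(a[-1] - b[-1]):.1e}")   # initial conditions forgotten

    Both solutions end up riding the same forced trajectory; only the transient, the homogeneous solution, remembers where they started.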

    I am having great difficulty with the expression “internal forcing” used by some.

    Comment by Martin Vermeer — 17 Aug 2008 @ 4:26 AM

  129. Hank, Thanks for the vector. When we talk about climate, though, we are not necessarily talking about something “global”. Climatic effects manifest at all scales. Richard had asked when “weather” becomes “climate”, and I am trying to say that that isn’t really the best way to look at it. Weather and climate are two different attributes of the same Land-Water-Atmosphere system. On short timescales, the behavior of the system is dominated by weather, which is chaotic, while on longer timescales (how long depends on the particular climatic trend under discussion) climate trumps weather. Climate seems to not be chaotic–its behavior being determined largely by energy balance. Maybe the way to look at this is not so much that climate is “average weather” as to say that once you average out weather effects, climate manifests.

    Comment by Ray Ladbury — 17 Aug 2008 @ 6:53 AM

  130. when “weather” becomes “climate”

    When does the boy become the man?

    When you die, when does death arrive?

    When does the embryo become a baby?

    where does life begin?

    Comment by Mark — 17 Aug 2008 @ 8:38 AM

  131. Maybe the way to look at this is not so much that climate is “average weather” as to say that once you average out weather effects, climate manifests.

    Yes, precisely. With the interesting difference that with climate models you can do “ensemble averaging”. With the real thing you’re stuck with climate plus a single instance of weather. Then you can only rely on long enough time series, and even then it will never be perfect.

    Comment by Martin Vermeer — 17 Aug 2008 @ 8:47 AM

  132. Ray, just pointing out, from the original post:

    > There are very clearly two parts to this paper – the
    > first is a poor summary of the practice of climate modelling
    > …. This is however just a distraction …. The second part
    > is their actual analysis, the results of which lead them
    > to conclude that “models perform poorly” …

    Didn’t want to lose the focus or end up getting turned around.

    Comment by Hank Roberts — 17 Aug 2008 @ 10:15 AM

  133. Mark #130: not even wrong ;-)

    Comment by Martin Vermeer — 17 Aug 2008 @ 12:57 PM

  134. tamino Says: “Estimating the Hurst parameter from observed data is very tricky business.”

    Unpublished work of mine from the early 1990’s confirms this. There was a whole bunch of interest in whether financial markets exhibited long time behavior, and if so, how to estimate it.

    There are lots and lots of problems with empirical observation of fractals, although in fluid dynamics the example of the Kolmogorov-Obukhov scaling of homogeneous turbulence does provide a useful example – there are actually three regimes that can be observed – the molecular diffusion regime (Heisenberg’s Ph. D. dissertation, actually), the 3-D inertial regime (Kolmogorov-Obukhov), and then the 2-D inertial regime, where, if you have a big enough box of fluid, it stops looking like a 3-D box and more like a 2-D shell.

    If you look at a plot of the log of the energy in the flow against the log of the wavenumber (or wavelength), then in principle you can see the three straight-ish line segments, and the three slopes are pretty well explained by the scaling theory. So you have in some sense a successful fractal theory – over the right window of scales, you get the predicted scaling.

    The problem is that in fact you have three unsuccessful fractal theories – because each only holds over a range of scales before the physics of the other regimes takes over. And the limited window of scales may or may not be enough to get good estimates of the scaling. Of course, you might get around this issue if you have many replications of the experiment – with enough realizations you can do lots of things. So in the lab, it’s really just a problem of experimental design to see if you can tease out the scaling.
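
    To see why the window matters, here is a small synthetic example (my own sketch, assuming NumPy; the slopes and crossover scale are made up for illustration, not the true turbulence values): a spectrum with two scaling regimes, fit over different wavenumber windows.

    ```python
    import numpy as np

    k = np.logspace(0, 4, 400)                   # wavenumbers
    k0 = 100.0                                   # crossover scale
    E = np.where(k < k0, k**(-5/3), k0**(4/3) * k**(-3))  # two regimes, continuous at k0

    for lo, hi in [(1, 30), (300, 10000), (30, 300), (1, 10000)]:
        m = (k >= lo) & (k <= hi)
        slope = np.polyfit(np.log(k[m]), np.log(E[m]), 1)[0]
        print(f"fit over k in [{lo}, {hi}]: slope = {slope:.2f}")
    ```

    A window inside one regime recovers its slope; a window straddling the crossover, or a global fit, returns a number that corresponds to no regime at all.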

    And now, another problem. Back in the early 1980’s, Chuck Leith had found in the literature about twenty different explanations of the Kolmogorov-Obukhov scaling exponent, many of them mutually contradictory. I don’t know if in the intervening years more of these explanations have arisen, although to some extent the Yakhot-Orszag theory should have consolidated things a bit. So in the case that you are careful enough to verify the scaling exponent, the next question is which “theory” did you verify? A good idea for people who want to go around fitting fractal explanations to observed data is to have a good reason to believe that they are excluding something.

    But there is a good reason to believe that lots of scaling explanations lurk around every corner – probably the first to draw attention to the abundance of fractal approximations was Michael Barnsley, who even wrote a book, “Fractals Everywhere”, about the ubiquity of scaling representations of, well, everything. And it is not just that there is one scaling representation of some arbitrary bunch of data; there have to be many different ones (consider the fractal compression of an image, or of any slight distortion of that image – you can think of this as lots of different scaling explanations for the original image).

    An experiment that most people here could do, on whatever computer they are using to read this, would be to generate a bunch of points in a plane by their favorite fractal; now embed that plane in three dimensions on a smooth surface. Now compute the fractal dimension of those points. Topologically, the fractal dimension should be the same. Well, go measure it; is it the same? (Good luck getting the number to come out the same…)
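
    For anyone who wants to actually try it, here is a rough sketch of that experiment (my own code, assuming NumPy; the Sierpinski gasket and the particular bending surface are arbitrary choices): box-count the gasket in the plane, then bend the plane into 3-D and box-count again.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Chaos game for the Sierpinski gasket (theoretical dimension log3/log2 ~ 1.585)
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    pts = np.empty((100000, 2))
    p = np.zeros(2)
    for i in range(len(pts)):
        p = (p + verts[rng.integers(3)]) / 2.0
        pts[i] = p

    def box_dim(points, epsilons):
        """Slope of log N(eps) vs log(1/eps), N = number of occupied boxes."""
        counts = [len(np.unique(np.floor(points / eps), axis=0)) for eps in epsilons]
        return np.polyfit(np.log(1.0 / epsilons), np.log(counts), 1)[0]

    eps = np.logspace(-2.5, -1, 8)
    print("planar estimate:  ", box_dim(pts, eps))

    # Embed the same point set on a smooth curved surface; topologically the
    # dimension is unchanged, but see what the estimator says.
    x, y = pts[:, 0], pts[:, 1]
    pts3 = np.column_stack([x, y, 0.5 * np.sin(2 * x) * np.cos(2 * y)])
    print("embedded estimate:", box_dim(pts3, eps))
    ```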

    So yeah, you can stick data into an algorithm and get Hurst exponents, or any number of other scaling parameters. But as tamino points out, there are a lot of other values of these scaling parameters which fit the observed data almost as well. A surprisingly small perturbation is usually all it takes to completely derange the estimates of scaling.

    And of course we haven’t even started to bring intermittency into the question.

    Comment by Andrew — 17 Aug 2008 @ 1:37 PM

  135. Based on what’s been said here, I understand that if you look at the weather over a spatial scale which is less than, say, 1000 km, and/or over a time scale which is less than, say, 30 years, it’s extremely unlikely that you’re seeing the CO2 “signal” and not the random “noise”.

    In fact, every day in the press there are stories about how the change in weather in some small region, over the course of a couple decades or less, is due to global warming.

    Will this blog join me in denouncing such stories as inaccurate?

    [Response: depends on the case and on whether there is a larger scale and longer time period context. But we’ve said over and again, that short term weather events are difficult to impossible to attribute to climate change. – gavin]

    Comment by Steve — 18 Aug 2008 @ 11:28 AM

  136. re # 130 Mark wrote: “when “weather” becomes “climate””

    Hmmm, I don’t know what you mean. Climate is usually defined as averages over a minimum of 30-50 years, according to the IPCC and the WMO…and has been for quite some time.

    Comment by Richard Ordway — 18 Aug 2008 @ 12:06 PM

  137. Richard, #130.

    It was a response meant to elicit rational thought about why one would ask about (or even try to draw a line between) the difference between climate and weather, as per Richard Sycamore, and as alluded to by Ray’s comment #129.

    The answer, like the other analogues I used, depends on what you define as climate and what you define as weather (with the middle state, what do you define as seasonal, though that also depends on where you are on the planet). In fact, drawing a line is completely wrong, as it is with the analogues. There’s a point at which IT really is, and another point, maybe a long way off, where IT really isn’t.

    And it really isn’t “weather” when you average over 30 years.

    It really isn’t climate when you average over a year.

    But that’s what Richard is trying to do. Get us to draw a line where one side is weather and one side is climate. And if we don’t define one, he’s going to do it himself and show that the other side of the line is also weather, hence

    weather == climate

    However, if you aren’t good at setting an argument, this may have been a bit too subtle for you.

    ;-)

    Comment by Mark — 18 Aug 2008 @ 12:46 PM

  138. The time period necessary to find climate could vary with different purposes. A bacterium that divides every 20 minutes might (were it not a bacterium :) ) have the perspective that a snow storm is an ice age and each snowflake is a weather event. A rock might think of each ice age as a weather event. A craton might see ice house and hot house conditions as being weather events, as well as the individual comings and goings of supercontinents (manifestations of mantle weather), although over billions of years these may form a climate; or, if the mantle climate changes too fast to find it in a single instance, one could comb the universe for similar planets to get more data (like an ensemble of model runs), etc…. As the time scale lengthens, some factors that could be considered external forcings start to become more dependent on the system itself.

    Climate is the average of all things about the weather, not just a simple average. It’s the average of the standard deviation. It’s the average of the shape of the statistical distribution. It’s the average of the standard deviation of the shape of the statistical distribution. It’s the average of the shape of the distribution of the variations of the shape of the distribution… of every variable, which is the temperature, wind, etc., at every point in the system at every moment in time. Luckily all these things don’t vary completely independently of each other, so even a single number like average global surface temperature has at least some important meaning.

    A single glacier’s retreat over a year may be enhanced by global warming; a single glacier’s growth may be reduced by global warming, depending on expected regional effects. But it’s hard to know just how much, because global warming isn’t just hidden in shorter-term weather noise, it may have an effect on that weather noise. But when many, many glaciers, spread over much of the globe, have a longer-term trend, of a size and persistence that natural variability isn’t expected to cause, and that forced global warming can account for, then it can be evidence of that forced global warming working at least in some way as expected, and each individual glacier’s change over any small time period contributes to the statistics, and in that way is a part of it.

    I have used the term ‘internal radiative forcing’ but I use it in single quotes, as I have just done. In the forcing/feedback dichotomy it’s a feedback, an aspect of the internal variability. I think a possible source of confusion is that radiative feedbacks are sometimes named as radiative forcings – i.e., the water vapor feedback has a radiative forcing of … etc. On the time scales of ice ages, CO2 and ice sheets are feedbacks, but they can be discussed as having radiative forcings even in that context (whereas even for low-frequency variability (interannual to intraseasonal), water vapor and cloud feedbacks (PS: this includes weather-dependent aspects of how clouds would respond to aerosols, but not changes caused by changes in aerosols themselves) are so much more rapid that they are still feedbacks). In the context of change, forcing may be defined as relative to some baseline – preindustrial CO2 concentration, for example. Yet in an investigation of the total greenhouse effect (for an equilibrium climate, no time dependence), one might tally up the ‘radiative forcings’ of each agent, including clouds and water vapor (but watch out for the overlaps – the order of adding greenhouse agents affects the individual contributions but leaves the total the same). I’m not sure if such use of the term forcing (for water vapor and clouds in particular) is technically wrong or not, but it’s really not confusing at all if you know what’s being discussed; you can tell what is meant from the context. (But for the sake of the ‘average’ person who doesn’t have time to study the science in any detail, it’s obviously more important to use precise terminology or otherwise explain the context.)

    Comment by Patrick 027 — 18 Aug 2008 @ 1:30 PM

  139. Gavin (135), I appreciate and respect your comment, “…short term weather events are difficult to impossible to attribute to climate change…” and do recall your saying that many times before. But to be accurate, in deference to Steve’s comment, you should not be so loose with your term “we.” It probably applies to most if not all of the RC moderators (if that is what you meant, never mind this post), but cannot include most AGWer posters who can inundate us with localized “proofs” of GW.

    [Response: Who are these people? I find myself inundated with requests to condemn exaggerations on an almost daily basis, only to find that no-one ever said any of the things the accusers are complaining about. So, here is a new rule. Instead of demanding restatements of generalities which we have made clear over and again (and by we I mean not only the RC contributors, but also the authors of the IPCC reports and 99% of the field), how about actually linking to these people and comments you are being inundated with? I’m happy to discuss case by case assertions where there is an actual statement to be discussed, rather than, say, some nonsensical interpretation by a blogger. – gavin]

    Comment by Rod B — 18 Aug 2008 @ 3:52 PM

  140. Richard Sycamore & others may care to read W.F. Ruddiman’s “Earth’s Climate: Past and Future” for the perspective offered of climate timescales.

    Comment by David B. Benson — 18 Aug 2008 @ 4:32 PM

  141. Here’s what NOAA has to say about weather and climate:

    “Climate – The average of weather over at least a 30-year period. Note that the climate taken over different periods of time (30 years, 1000 years) may be different. The old saying is climate is what we expect and weather is what we get.”

    And about climate change:

    “Climate Change – A non-random change in climate that is measured over several decades or longer. The change may be due to natural or human-induced causes.”

    http://www.cpc.noaa.gov/products/outreach/glossary.shtml#C

    Comment by Lawrence Brown — 18 Aug 2008 @ 5:21 PM

  142. Re: myself:

    “because global warming isn’t just hidden in shorter-term weather noise, it may have an effect on that weather noise.”

    … individual bits of noise are weather but the noise overall may have a ‘texture’ that is an aspect of some climate.

    “It’s the average of the shape of the distribution of the variations of the shape of the distribution… of every variable,”

    … and any number of derived quantities, like the temperature gradient, the temporal relationships of weather patterns or low frequency variability, the average frequency of ENSO fluctuations, the correlation of A and B in time and/or space…

    “one could comb the universe for similar planets to get more data (like an ensemble of model runs)”

    … being careful to note the variation among the population and note relationships between variation in behavior to variation in the population…

    Comment by Patrick 027 — 18 Aug 2008 @ 7:23 PM

  143. Gavin (139), just to clarify: you can’t possibly read and edit all of the posts on RC, as you moderators do, and not see an abundance of posters saying (paraphrasing) ‘my plants came up early, my pond froze a month late, cherry blossoms bloomed a few weeks early, it’s hotter here than it’s been for 75 years, it’s drier there than it’s been for 50 years, it’s wetter than …, high temps in Europe a few years ago killed …. people, etc., etc., etc., all because of global warming’. I simply suggested these not be included in your common “we“.

    [Response: The issue is context. Individual events do not prove anything, but they can be examples of something that is happening on a wider scale which can be attributed. Take a different subject – foreclosures for instance. If you are foreclosed, that doesn’t imply that there is a rise in foreclosures, but if there is a rise in foreclosures, and you get foreclosed, the latter is perfectly valid as an example. It doesn’t mean that you wouldn’t have been foreclosed in any other circumstance (who can tell?), but within context it makes sense. And if you look at any newspaper, their mainstay of reporting is finding specific people who exemplify some larger trend. Why should climate be different? – gavin]

    Comment by Rod B — 19 Aug 2008 @ 7:59 AM

  144. “…Why should climate be different? – gavin.”

    Because the connections between climate change and localized weather are orders of magnitude more tenuous, and less well defined with any specificity, than the connection, say, between foreclosures going up and me getting foreclosed.

    I meant to comment only on the use of “we” in your comment (“… we’ve said over and again, that short term weather events are difficult to impossible to attribute to climate change. – gavin.”), not the main point. You now seem to be implying the opposite of your first point. But, I didn’t intend to and am not inclined to start a debate on the point, so I’ll just write it all off to my misreading your comments. Sorry.

    Comment by Rod B — 19 Aug 2008 @ 1:44 PM

  145. Rod B., It is one thing to say “Katrina proves climate change is real.” That’s BS, and every knowledgeable climate scientist would agree. It is quite another thing to say that Spring comes earlier than in the past–that’s a manifestation of a global trend that has been observed due to climate change. On the one hand, you have a single event and someone asserting that it is proof. On the other hand, you have an event and someone saying it is consistent with an observed and recognized global, long-term trend.

    Comment by Ray Ladbury — 19 Aug 2008 @ 2:25 PM

  146. Rod B, if I may interject

    Regular people are probably not very concerned about the statistical averages of events, but rather about the “here and now.” It’s not very convincing to tell people that they should be worried about global warming, but then claim that the anomalous event that just happened yesterday is unrelated. On the other side, it’s not scientifically accurate to say that the event yesterday was necessarily a result of global warming. So I have a problem with statements like “Katrina was caused by global warming” but not statements like “Here was Katrina; this is what may happen more as the climate warms.” I’m not sure there is a perfect way to communicate this issue.

    Suppose we have a fair die (with, say, two sides representing normal conditions, two for hotter than average, and two for colder than average) representing the 1951-1980 climatology. Now say we make three of those sides hotter than average, two average, and just one colder than average. If I roll the die and get a “hotter than average” you might not think much of it, but after I roll the die a few more times (enough times that we can statistically say the die is unfair), then what will you say? Do you attribute the next “hotter than average” to my unfair die, or to random chance? And will you be happy?
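
    The analogy is easy to simulate (a toy sketch of my own, assuming NumPy; the face counts follow the comment above, the roll counts are my choice): no single roll says anything, but the bias becomes undeniable as the rolls accumulate.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    p_hot_fair = 2 / 6      # fair die: 2 of 6 faces are "hotter than average"
    p_hot_loaded = 3 / 6    # loaded die: 3 of 6 faces are "hotter than average"

    for n in (10, 30, 100, 300, 1000):
        hot = rng.binomial(n, p_hot_loaded)  # n rolls of the loaded die
        # z-score of the observed hot fraction against the fair-die expectation
        z = (hot / n - p_hot_fair) / np.sqrt(p_hot_fair * (1 - p_hot_fair) / n)
        print(f"n={n:5d}  hot fraction={hot / n:.2f}  z vs fair die = {z:+.1f}")
    ```

    No individual “hot” roll is attributable to the loading, yet the loading itself is detectable; that is the distinction between a weather event and a climate statistic.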

    Comment by Chris Colose — 19 Aug 2008 @ 2:39 PM

  147. An increase in the amount of heat on the Earth predicts certain changes, globally and locally. It takes quite a drastic change for laymen to notice it. Scientific observations and reports from laymen coherently show the predicted changes happening in increasing numbers. They nicely demonstrate the validity of the predictions and the models used in the prediction.

    The climate is changing, and weather is becoming more extreme; these are facts supported by scientific theories and direct observations.

    Comment by Petro — 19 Aug 2008 @ 4:10 PM

  148. This post is against my better judgment, but, what the hey…

    Chris, I got lost in your die analogy but think I understand your point, similar to Ray’s. My initial point was that there are many AGW advocates who do not subscribe to Gavin’s first statement that “… short term weather events are difficult to impossible to attribute to climate change…,” and contrarily do cite local weather anomalies as proof of global warming. (“Many” is certainly not all, probably not even a majority, and includes very few, if any, of the professional climate scientists — my sole and simple point à la the use of “we”.) If you cite Katrina as ‘the kind of thing that global warming might cause a few decades from now’, that is probably acceptable, though just barely. Spring showing up early this year is not. If it shows up a teeny bit earlier on a trend line (remember those? ;-) ) over 50 years or so (30 bare minimum), maybe. Likewise it is not appropriate to attribute the few thousand “extra” deaths during Europe’s hot spell a few years back (a one-off anomaly) to GW, or any other of a myriad of like examples. Secondly, the folks that cite Texas’ recent hot spell as GW proof completely turn their backs on and reject any relevance of the midwest’s cold spell to anything, as an example.

    I do agree with the difficulty of convincing people of AGW that this poses (seriously). It’s true that making stuff up might make the convincing easier, but that’s orthogonal to the topic.

    Comment by Rod B — 19 Aug 2008 @ 5:05 PM

  149. RE weather v. climate, & single events v. statistical datasets.

    This might help to distinguish between weather and climate.

    Emile Durkheim, a father of sociology, claimed that “social facts [not psychological or individual level facts] cause social facts.” An example he used is suicide, a highly personal decision and event. Whereas all sorts of factors may cause a person to commit suicide, including psychological factors, suicide rates of a nation, group, or category are caused by “social facts,” such as culture and other more pervasive, larger happenings (e.g., the Great Depression).

    Whereas predicting single suicides (the behavior of a single person) is difficult, if not impossible (like predicting a tornado, its exact path, and which buildings it will demolish), Durkheim found that suicide rates for countries, genders, single/married, social classes, age groups, and religions remain about the same year to year, perhaps slightly increasing or decreasing due to shifts in the larger society/culture (note, he didn’t distinguish between the social and cultural).

    Some 100 years ago the pattern he found was that suicide rates were higher for men, the rich, singles, Protestants and Protestant countries, and in cities. And these rates held fairly constant year to year (similar to the global average temperature — which does not increase or decrease, whether under natural conditions or global warming, as much as temperature does over a 24-hour period on an autumn day in Illinois).

    So, what’s going on, I ask my students. Do they get to November and the suicide limit has been reached that year, so they say “no more for this year,” or the limit has not been reached, so they say, “we need more to fill the quota”?

    It is social facts that are causing these fairly steady rates. Durkheim came up with the theory of anomie (normlessness) to explain it, at least for Western countries — a condition where certain people (men, Protestants, the rich, singles) and social conditions of cultural change are not under rules and controls the way women, Catholics, the poor, and traditional societies are. I guess the anomie factor would sort of be like the forcings of greenhouse gases in this analogy.

    Now, what’s interesting is that when my Soc 101 students use a recent, simple “U.S. states” dataset, they find that there’s an association between ruralness and higher suicide (also between “% newcomers” and suicide, and between “Westness of state” and suicide). But we figure that anomie may still be at work, since our cities are well settled and most are old, but the suburbs and rural West are more recently settled by diverse people who don’t know each other as well, so there is more anomie in such areas, esp. areas with a higher % of newcomers.

    So I hope this helps with the notion of climate (v. weather) — it is a statistical artifact created from a vast amount of weather data. It is fairly stable and constant, esp year to year or decade to decade. It’s why I still refer to my 1980 world atlas re regional climates; it’s why people like to move to California. And then it’s raining cats and dogs on the day they arrive, but most have faith that the climate (if not that day’s weather) is warm and sunny, so they don’t turn around.

    For people who wonder when weather becomes climate, I would guess that the more the weather data, the higher the confidence — so it’s more a matter of more data being better for predicting things. Less data (or data just from one locality, even though over a very long time period) just wouldn’t increase the confidence much.

    Now if climate, esp world climate, is changing, even a tiny bit, we have to look for some very strong forcing to move that elephant of a dataset even a tiny bit. And it’s really quite amazing, nearly unbelievable to a layperson, that what we can’t even see (CO2 molecules or anomie) could have such a big impact. But that’s where science leaves common sense in the dust.

    But I still say AGW caused hurricanes Katrina & Andrew, etc. — at least these single weather events are part of the dataset that shows an increase in hurricane intensity over the past several decades.

    And all the local data that seemingly contradicts AGW (like it’s getting colder in Himodflethclseville), well, that’s also a part of the dataset which shows AGW is upon us.

    Comment by Lynn Vincentnathan — 20 Aug 2008 @ 11:33 AM

  150. Lynn (149), good description and analysis. I take it to mean that you cannot attribute weather events to climate change with anything short of clear multidecadal — maybe longer — trends, but this doesn’t mean that a one-year weather anomaly can’t be climate-change related; it just means that it cannot be substantiated and attributed as such.

    Your belief then that climate change caused Katrina is nothing more than that — your own belief or hunch or suspicion — something any person can do even if he can’t spell climate change — and not anywhere close to substantiation (proof to the commoner). This is where your credibility can suffer, depending on your audience.

    But then you nail it with your last statement, which really blasts one’s credibility IMO. That is basically attributing any and all weather anomalies to AGW: hotter and drier than usual in Texas this spring and summer? AGW! Colder and wetter than usual in the midwest? AGW! I’ll name it; you’ll attribute it to climate change. Now, in the final analysis 100 years from now you might, in a highly improbable scenario, be proven correct. But today, getting any commoner to believe that you have the faintest idea of what you’re talking about is out of the question.

    Arctic and northern Canada/Greenland, and southern US get hot, while deep southern Canada and northern US get cold — all because of GW? Is this what the models say (serious question)??

    Comment by Rod B — 20 Aug 2008 @ 1:33 PM

    That is basically attributing any and all weather anomalies to AGW: hotter and drier than usual in Texas this spring and summer? AGW! Colder and wetter than usual in the midwest? AGW!

    Well, that’s not actually what she said, though I can understand why you might interpret it as you did.

    She said it’s all part of the same DATASET. She needs to answer for herself, but I interpret her statement to mean that taken as a whole, all the data shows warming despite local cold weather phenomena.

    Read closely, she did NOT attribute the cold weather example to AGW. Her exact words:

    And all the local data that seemingly contradicts AGW (like it’s getting colder in Himodflethclseville), well, that’s also a part of the dataset which shows AGW is upon us.

    No attribution of the single datapoint to AGW there …

    Comment by dhogaza — 20 Aug 2008 @ 1:54 PM

    dhogaza, well, one really has to parse the hell out of her statement, and unless one does that the implication at least is clear. Nonetheless, as you point out, I can see how I might have misinterpreted what Lynn said; I didn’t mean to do that.

    Comment by Rod B — 20 Aug 2008 @ 5:17 PM

  153. RE 149-151, I guess it’s really not so important to me whether any part of Katrina’s intensity can be attributed to climate change.

    I’ve been reducing my GHGs since 1990 with the idea that it might help in reducing various AGW effects (assuming enough other people also join in reducing their GHGs). So when Hurricane Rita greatly damaged a friend’s home, I emailed him that I had known that GW might increase such harms, and I had been reducing my GHGs in hopes of reducing just such harms. And I’m reducing them now with the hope of reducing such harms in the future.

    Comment by Lynn Vincentnathan — 20 Aug 2008 @ 6:03 PM

  154. [edit]

    [Response: Please note that using alternate names on threads you have already commented on is extremely bad form. Feel free to repost your question under your more usual login. – gavin]

    Comment by Luke Warmer — 20 Aug 2008 @ 8:36 PM

  155. Re Rod @ 143: “my plants came up early, my pond froze a month late, cherry blossoms bloomed a few weeks early”

    As individual anecdotal events these are of course unattributable to climate change, but as a series of recorded dates of plant germinations, pond freezings, or blossoming dates they do form a record of climate change.

    Comment by Jim Eager — 20 Aug 2008 @ 8:47 PM

  156. As analogies go, consider the following…

    You start a journey on the highway, with the given common speed limits, and being an ordinary law-abiding citizen you set your car’s cruise control accordingly. Low level of alertness, thinking about matters of importance.

    Fifty miles down the road, a bull moose is considering the relative merits of grass resources on his side of the road and the other. He comes to the conclusion that it is appropriate to make a taste test.

    As a result, your car and the bull try to occupy the same spot in space-time. An ambulance will recover you, a wrecker will take your car for recycling, and the bull will provide many a feast dinner for the dogs of the local hunters’ club.

    Attribution of this unfortunate event? Was the accident caused by the particular speed limits? What was the role of the Government? Had the Government engineers’ arbitrary decision concerning the speed limit been, say, a bit higher, the accident would not have happened at all. Both you and the bull moose would have passed the spot safely. This appears quite certain.

    So, must we draw the conclusion that the Government should institute generally higher speed limits and this act would end all accidents of this kind?

    Of course not. The result of higher speed limits is marginally more accidents, with markedly heavier risk to life and property. Scientific theory and observations tell us that.

    So, can Katrina be attributed to AGW or not? Prior to that day, many of the (approximately ten) oceanic and atmospheric factors that drive and steer the start-up, development and movement of a hurricane would have been different in a non-AGW world. Different sea surface temperature distributions, jet streams, pressure fields, atmospheric temperature and humidity profiles, cloud distributions, less or more Saharan dust in the area, different easterly waves, a different width of the tropical climate band … Assuredly there is no way the Katrina event could have happened exactly as experienced, then and there.

    Which does not mean that the probability of a major tropical storm hitting a vulnerable major Gulf city has been radically modified. Single events just are not predictable with our current knowledge (data and models) and probably will never be. It is reasonable that some changes in the statistics can be predicted, though.

    Comment by Pekka Kostamo — 20 Aug 2008 @ 9:54 PM

  157. A general point: when people complain “why doesn’t it just keep getting hotter and hotter, yadda yadda year/s were actually cooler than yadda yadda,” it helps to remind them that climate is a bit like the economy. It is not a simple system under the strict influence of a single causation. For example, CO2 is a bit like interest rates in reverse: lower interest rates tend to “heat up” the economy, but of course other factors affect the economy. A decision to lower interest rates may not spruce up the economy because other factors may outweigh the rate change and bring activity down. However, low interest rates can be “expected” on average to pump up the economy. Also, the economy is going to fluctuate anyway, up and down (as will the stock market), and that puts “noise” into any attempt to make correlations.

    BTW, what’s the deal on solar cooling? I mean, what’s allegedly happening/will happen right now, re the sunspot level below expectation, e.g. Victor Manuel Velasco Herrera in Mexico predicting upcoming 60-80 year colder spell, etc?
    http://www.sott.net/articles/show/164133-60-80-year-little-ice-age-coming
    Oddly (?), headlined on Drudge but not a lot of play in the “conventional” media – for good reason?

    tyrannogenius

    Comment by Neil B — 21 Aug 2008 @ 10:32 AM

  158. Neil, you might look at

    http://www.leif.org/research/The%20Open%20Flux%20Has%20Been%20Constant%20Since%201840s%20(SHINE2007).pdf
    and
    http://www.leif.org/research/AGU%20Fall%202006%20SH21A-0313.pdf

    Leif Svalgaard has suggested there’s good reason to doubt the notion that a low solar cycle would lead to a sudden great cooling. See his posts at solarcycle24.com too.

    We know a lot more now about what volcanos were happening during the previous cool spell, which can explain some of it, for example:
    http://scholar.google.com/scholar?sourceid=Mozilla-search&q=Maunder+minimum+volcanos

    Comment by Hank Roberts — 21 Aug 2008 @ 2:04 PM

  159. There is also some reason to doubt that there is anything unusual about this solar cycle.

    http://science.nasa.gov/headlines/y2008/11jul_solarcycleupdate.htm

    Comment by Jim Cross — 21 Aug 2008 @ 5:32 PM

  160. This brings me to wonder how the big question can be answered.

    Given that we have climatic models of the Earth, and an Earth, can we tell the difference?

    That is the test: could a sentient being tell whether the models reproduce Earthly weather and climate?

    Could that being say: “This is real climate, and that realisation is not an Earthly climate”?

    Conceptually the first problem is that, if one gives the models and the Earth equal status, one must ask: “Was the particular realisation that is the history of the Earth’s climate likely?”

    Could it all have turned out very differently?

    If the actual realisation we have enjoyed turns out to have been highly unlikely then it should turn out to be highly unlikely in the models; and conversely.

    So the models have countless numbers of realisations, and the Earth may have produced but one of countless realisations; how do we tell if they are akin?

    Well, we cannot rerun the Earth’s history.

    But like monkeys and typewriters we can produce endless model runs, and presumably some of those runs will resemble the actual realisation we have experienced closely enough to give us confidence that they are good models.

    The trouble is that past successes are no guarantee of future glories. Even such a realisation is no guarantee of accurate prediction.

    For now it would be good to know that the current models are capable in principle of reproducing the real climate at least once amongst their countless runs.

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 24 Aug 2008 @ 8:59 PM

  161. Re 160 – It’s not an ‘all or nothing’ game. Also, there are basic overall physical arguments that don’t require quite so much computation to understand (arguments exist that a human mind can grasp; granted, some such arguments are informed by computer modelling, but they aren’t just the results of models).

    Comment by Patrick 027 — 25 Aug 2008 @ 1:18 PM

  162. #161
    But to figure out at what temperature negative feedbacks like clouds and moist convection (and local human interventions) will cap GHG warming requires something like a computationally intensive GCM, i.e., something spatially explicit with a great many equations, parameterizations, and a bit of tuning. Is this not true, Gavin?

    Comment by Richard Sycamore — 25 Aug 2008 @ 11:21 PM

  163. The PETM is probably a decent model for the sort of extreme case you’re wondering about, pushing the system to an extreme — at least in relatively recent paleoclimate. I’ll hope one of the real scientists corrects my attempt here:

    Abrupt reversal in ocean overturning during the Palaeocene/Eocene …
    http://www.nature.com/nature/journal/v439/n7072/full/nature04386.html

    That page points to the kind of intervention needed to reverse such a major hot spell.
    In that case, plankton species evolved that drew down the CO2.
    Local interventions won’t change a global event.

    Global change: Plankton cooled a greenhouse
    Nature News and Views (14 Sep 2000)
    http://www.nature.com/nature/journal/v407/n6801/fig_tab/407143a0_ft.html

    The illustration of the latter article rather dramatically sums it up:
    http://www.nature.com/nature/journal/v407/n6801/fig_tab/407143a0_F1.html#figure-title

    Comment by Hank Roberts — 26 Aug 2008 @ 12:26 AM

  164. Alexander Harvey,
    I’m not sure exactly what you mean by “reproduce Earth’s climate”. Certainly, climate models exhibit realistic behavior on the scales of their resolution. However, in a large physical system, there will be so many different possible realizations of climate signal + weather noise that no two will likely ever repeat. The question you probably want to ask is how often we get “close to” a particular outcome. In this sense, the situation is not unlike that with statistical thermodynamics–no two Monte Carlo runs will yield exactly the same results, but the macroscopic properties of the vast majority of runs will yield something close to the equilibrium properties of the system. Likewise, even though weather is different in every run, the climates yielded by the models are recognizable as Earthlike.

    Comment by Ray Ladbury — 26 Aug 2008 @ 7:43 AM

  165. Alexander Harvey #160 asks a valid question: what are the criteria by which model “realism” is judged? Is the double ITCZ problem a problem, or isn’t it? What kinds of errors or misrepresentations are tolerated? This is not a hard question to answer.

    Ray #164 suggests: “climate models exhibit realistic behavior”

    yet no one has answered my question in #1:

    “How well do the GCMs perform at generating suitably high Hurst coefficients?”

    Here I am talking about the stochastic internal variability in GMT in unforced control runs, not deterministic responses to external forcings and not gross qualitative aspects of circulation (which Gavin notes above are NOT always realistic, e.g. the double ITCZ problem). I want to know: do the control runs exhibit the proper kind of scale-free distributions of weather and climate phenomena that we know happen in the real world? This too is a simple question.

    [Response: No it isn’t actually. What would one compare the control run statistics with? The real world has had forcings – greenhouse gases, volcanoes, solar etc. Given the way the statistics work, they cannot extract a purely intrinsic signal from the externally forced one (that is an attribution problem that requires some kind of model). Indeed, people have already reported that including forcings changes the Hurst coefficients (i.e., Vyushin et al, 2001 for volcanoes). – gavin]

    Comment by Richard Sycamore — 26 Aug 2008 @ 9:15 AM

  166. Re 162 – yes, but – how sensitive is the climate to the tiniest of perturbations in the boundary conditions and physics? If adding a small lake completely throws off the global average surface temperature response to some change in CO2, that would imply that the climate is so unpredictable that it’s not even … (basically, as I recently wrote somewhere else, if 3/4 of an elephant is covered up by a tarp, you can probably still tell that it’s not a monkey that’s under there – you don’t need to know everything in order to know something. Figuring out the climate sensitivity to CO2 changes down to the nearest 0.1 deg may be nearly impossible and perhaps not very meaningful to the single model run that is reality – but figuring out the sensitivity with a bit less precision is more likely to be doable and still yields meaningful information, for science and for policy implications).

    Comment by Patrick 027 — 26 Aug 2008 @ 11:52 AM

  167. > tiny perturbations
    Not very, Patrick.

    You write: “If adding a small lake completely throws off the global average …. ”

    Are you making that up, as a hypothetical?
    Do you believe it’s true?
    Where do you find any support for that idea?

    Comment by Hank Roberts — 26 Aug 2008 @ 5:16 PM

  168. #165
    Thanks as always for your patient reply, Gavin, and also the reference. Unfortunately the “improved” scaling behavior discussed in that paper is merely a band-aid for a model that does not produce correct intrinsic scaling behavior when unforced. And THAT is what LTP is about – the scale-free patterning that occurs as a result of *intrinsic* maximum entropy thermodynamics. It is NOT about the quick responses that happen as a result of external forcings. These authors have misunderstood and/or misrepresented the long-term persistence phenomenon, and they have taken you along for the ride. What they are dealing with is short-term persistence, not long-term persistence.

    You will ask, “How do I know the unforced climate exhibits long-term persistence, if we’ve never seen an unforced climate?” And that is a very good question. If you look at other model thermodynamic systems that are climate-like but unforced, you will see they have scale-free patterning, aka long-term persistence. Thermodynamic theory suggests the climate system should be no different.

    Granted, arguments based on theory or, worse, analogy are weak.

    However, your citation of this paper indicates to me that you, like the authors of that paper, are only half-informed about the nature and problem of long-term persistence. That would explain your review of the Koutsoyiannis et al (2008) paper. It’s a paper that is easy to dismiss based on weak methodology. Its value lies in the last sentence of the abstract, which is not understandable by anyone who has only a weak understanding of the problem of long-term persistence.

    I am happy to be proven wrong on that. If you can do that, then feel free to delete this comment. I’m not here to try to embarrass anyone. I’m here to understand your arguments by probing their assumptions.

    Comment by Richard Sycamore — 26 Aug 2008 @ 10:02 PM

  169. Uninformed? About what? Sources would really help.

    Looking:
    http://scholar.google.com/scholar?num=100&q=%2Bclimate+%2B%22long-term+persistence%22&as_ylo=2007

    it seems there’s plenty written, and the climate models already have passed this test.

    E.g., http://w3k.gkss.de/staff/storch/pdf/rybski-etal.2007.pdf
    JOURNAL OF GEOPHYSICAL RESEARCH, VOL. ???, XXXX, DOI:10.1029/

    _________excerpt follows____________

    Abstract. We study the appearance of long-term persistence in temperature records, obtained from the global coupled general circulation model ECHO-G for two runs, using detrended fluctuation analysis. The first run is a historical simulation for the years 1000−1990 (with greenhouse gas, solar, and volcanic forcing) while the second run is a 1000 year “control-run” with constant external forcings. …

    … most continental sites have correlation exponents γ between 0.8 and 0.6. For the ocean sites the long-term correlations seem to vanish at the Equator and become non-stationary at the Arctic Circles. In the control-run the long-term correlations are less pronounced. …

    … The expressions “long-term correlated”, “long-term persistent” or “long-term memory” refer to time series, whose auto-correlation functions do not decay exponentially, as is the case with autoregressive processes, but decay much slower following a power-law. It has been suggested that the narrow spatial distribution of the exponent γ at continental and coastline stations may be used as an efficient test for the quality of climate models [Govindan et al., 2002]. Newer analysis [Vyushin et al., 2004] has revealed, that climate simulations taking into proper account the natural forcings and in particular the volcanic forcings, reflect quite well this quite “universal” feature of the observable data. …”

    ——end excerpt—-
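
    For anyone curious what detrended fluctuation analysis actually does, it is only a few lines of code. A minimal sketch follows (my own illustration, not code from the quoted paper, assuming NumPy); for a stationary long-term persistent series the DFA exponent α relates to the correlation exponent γ above roughly as γ = 2 − 2α, and white noise gives α ≈ 0.5.

    ```python
    import numpy as np

    def dfa_exponent(x, scales):
        """DFA(1): slope of log F(s) vs log s for the integrated, detrended series."""
        y = np.cumsum(x - x.mean())              # integrated profile
        F = []
        for s in scales:
            n = len(y) // s
            segs = y[: n * s].reshape(n, s)      # non-overlapping windows of length s
            t = np.arange(s)
            msq = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                   for seg in segs]              # variance about a local linear fit
            F.append(np.sqrt(np.mean(msq)))
        return np.polyfit(np.log(scales), np.log(F), 1)[0]

    scales = np.unique(np.logspace(1, 3, 12).astype(int))
    white = np.random.default_rng(1).normal(size=50000)
    print("alpha for white noise:", dfa_exponent(white, scales))  # expect ~0.5
    ```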

    So, I’m an amateur. Reading this, it seems to me that they say the simulations “reflect quite well” what was suggested. As a test of the quality of the models, they’re saying, this issue came up, the analysis was done, and the models proved to handle it okay.

    Citing references, where’s the issue going these days? Sources please?

    Comment by Hank Roberts — 27 Aug 2008 @ 12:09 AM

  170. Re 167 – Sorry, you misunderstood my intention. I don’t believe a small lake or even a large one would throw it off – indeed while the climate sensitivity could certainly vary with such things as continental drift and the mix of species present, I expect based on the physical arguments that there is some generality to it. I guess I could have been clearer though.

    Comment by Patrick 027 — 27 Aug 2008 @ 12:20 AM

  171. Re 167 – well, now I may have made another error… certainly continental drift over millions of years (and the resulting mountains and ocean current changes) would have strong effects on regional climate and regional climate sensitivity (for the same reason that not every location on earth will experience global warming in the same way with today’s geography). But the effect on climate sensitivity in so far as change global average surface temperature… well, maybe it would be more subtle, although rearranging the continents could certainly change the threshold at which an ice age starts, etc, and could affect the way glacial-interglacial positive CO2 feedback works… but anyway…

    Comment by Patrick 027 — 27 Aug 2008 @ 12:47 AM

  172. #169
    Ah, a paper so new it has not even been cited once.

    They study an unforced control run in one model, ECHO-G. (Where are the others?) About this model they state:
    “In some cases, as we will show below, semistable oscillations occur in the fluctuation functions. Consequently, there is no scaling and a power law fit is not meaningful.”

    And this is supposed to be an example of realistic scaling behavior?

    They go on to state:
    “In some areas, in particular in the equatorial Pacific, biannual cycles occur, a feature at variance with observational evidence. Unfortunately, these cycles cannot be eliminated simply by a seasonal detrending analog to equation (1), since the period of 2 years is too unstable. For the same reason, we were not able to remove the oscillations (automatically) in the Fourier spectrum. Therefore, in order to get rid of these oscillations, we additionally consider time series of biannual temperatures (i.e., temperature averaged over 2 years of daily data).”

    In other words, in order to detect correct scaling behavior it is necessary to patch over a more fundamental flaw in the model’s predicted circulation.

    But the most telling comment is the last paragraph of the conclusion:

    “Finally, in this paper, we only studied temperature records and focused exclusively on the linear correlation properties. It is an interesting question how far the global climate models are able to reproduce also the nonlinear ‘‘multifractal’’ features of the climate system [see Koscielny-Bunde et al., 1998; Weber and Talkner, 2001; Govindan et al., 2003; Ashkenazy et al., 2005; Bartos and Jánosi, 2006; Livina et al., 2007]. It is known that in particular rainfall is significantly multifractal [see, e.g., Tessier et al., 1996; Kantelhardt et al., 2006], and it will be interesting to see if this feature is also reflected in model rain fall data.”

    So it appears my question regarding multifractal scaling has not, in fact, been answered, but “will be interesting” to address in future work.

    One hopes that in the future they will examine more than just one model. That model was selected for a reason.

    [Response: All of the AR4 data is now available for people to do what they want (it’s a little mean to criticise authors who did not have access to other models at the time they were writing). I stated right up at the top that analysing that data properly would have been a much more productive use of K et al’s time and indeed, the results may be interesting. If you are in a hurry to find the answers, I suggest doing it yourself. If you prefer not to, then I’d suggest waiting for someone else to do it before deciding anything. The work done so far indicates that the forcing makes a difference to the statistics, which implies that control runs are not going to behave the same way as the real world, and indeed, that some of the ‘LTP’ defined from the observations, far from being a problem for attribution studies, may in fact be a signature of the forcing! (I don’t know that that’s the case, but it would be somewhat ironic if it was). – gavin]

    Comment by Richard Sycamore — 27 Aug 2008 @ 10:11 AM

  173. > a paper so new …

    I wrote “e.g.” — that’s one example. See their references.
    I gave you the search link for more work.
    You can look at the papers citing Vyushin’s later work for more examples.

    You quote extensively from the source I found you, but you’re blathering. What supports what you believe?
    Cite sources and quote from them please. Declaiming your superior knowledge without cites is asking us to do the homework for you.
    ___________________
    reCaptcha: models UNTANGLE

    Comment by Hank Roberts — 27 Aug 2008 @ 10:57 AM

  174. Richard Sycamore,
    OK, let me see if I’ve got this straight. You are castigating a paper that at least made a reasonable effort to gain knowledge you think is important, and using the shortcomings you see in that paper to justify the piss-poor, ill-conceived paper by K. et al., which is so confused it doesn’t really answer anything corresponding to the real world. Hey, whatever, dude.

    Comment by Ray Ladbury — 27 Aug 2008 @ 12:35 PM

  175. Re 171 – … of course changes in the nature of CO2 feedback wouldn’t have direct bearing on the climate sensitivity to CO2, just to clarify…

    Comment by Patrick 027 — 27 Aug 2008 @ 12:40 PM

  176. See also, as a further example, not as the definitive statement of the current state of the literature:

    http://coast.gkss.de/staff/storch/pdf/bunde.detection.GRL.2006.pdf

    One of the authors (I wonder which?*) has a sense of irony; note the final paragraph of that paper:

    “We conclude that the previous claim that the most recent warming, observed by quality controlled instrumental data, would be inconsistent with the hypothesis of purely natural dynamics [Hasselmann, 1993; Hegerl et al., 1996; Zwiers, 1999; Barnett et al., 2005] is supported by our long-term persistence analysis of different proxy-based reconstructions extending over many centuries and even up to two millennia. In case of the rather smooth reconstructions, the detection appears feasible even before 1985. An interesting detail is that the two fiercely arguing groups around Mann and McIntyre lead both to very early detections, while the most conservative detection result is obtained when the more ”bumpy” reconstruction by Jones and coworkers is used.”
    _______________________________________________________
    *http://coast.gkss.de/staff/storch/BILDER/donaldNEU.jpg

    Seems there’s indeed plenty of work; it’s an active area. Don’t fail to read and check the footnotes and look for citing articles. The DOI reference is more current than the journal references in Google Scholar at the bleeding edge of research publication.

    Comment by Hank Roberts — 27 Aug 2008 @ 1:28 PM

  177. #174 Castigating? Far from it. They studied one model of many, just as Koutsoyiannis studied only 8 points of many. There are good reasons why people study a part of a problem before tackling the whole of it. In both cases the follow-up papers should prove very interesting. We are learning new things all the time about how well the models compare to the real world. I agree that the Koutsoyiannis paper is not definitive. But I also think there’s a provocative line in the abstract worth thinking about seriously.

    #173 Blathering? I am merely letting the record show that the scaling behavior of the unforced GCMs was not studied prior to IPCC AR4, which is confirmed by the fact that Gavin has invited me to do the analysis myself. I had to establish that fact before I could assert it. If that’s blathering … well, whatever.

    [Response: That’s not correct. The Fraedrich and Blender papers studied it in a long control run and it’s shown graphically in the post above! I am only unaware of anyone doing it systematically with the AR4 models (but it could be ongoing). – gavin]

    Comment by Richard Sycamore — 27 Aug 2008 @ 1:49 PM

  178. #177 inline reply
    You’re right, it’s not strictly correct. I re-read the OP attentively, so allow me to correct myself.

    Regarding Blender et al. (2006) Gavin states in the OP:
    “This is one example from Blender et al (2006, GRL) which shows the basic pattern though. Very high Hurst exponents over the parts of the ocean with known multi-decadal variability (North Atlantic for instance), and smaller values over land.”

    However note the comment in Rybski et al. (2008):
    “[38] We could verify neither the claim that the long-term correlations vanish in the middle of the continents [Fraedrich and Blender, 2003; see also Bunde et al., 2004] nor that the strength of these correlations increases from the poles to the equator [Huybers and Curry, 2006]. Both claims not only differ from our findings for the model runs, but are also in remarkable contrast to the enormous number of observational data [Eichner et al., 2003; Király et al., 2006].”

    So although the question has been “studied”, it has not been answered to the point where there is an established consensus. (I dithered over whether to use the word “studied” or “established”. Make that change and my point stands.)

    This gets back to Gavin’s qualitative assertion in the OP that the results of the model-data comparisons are “mostly something similar”. One wonders what that means.

    Let the record show that a *consensus* on this issue was not reached prior to the publication of IPCC AR4.

    Comment by Richard Sycamore — 27 Aug 2008 @ 2:48 PM

  179. Attributing the warming in AGW to something other than greenhouse gases is a bit like Mark Twain’s saying that Shakespeare’s plays were either by Shakespeare or by someone else with the same name. The calculated increase in energy that falls on the planet due to the increase in CO2 in the atmosphere is sufficient to cause the warming. If it hasn’t caused the warming the energy went somewhere else and warmed something there.

    reCaptcha: Lt coarse

    Comment by Jeffrey Davis — 27 Aug 2008 @ 2:58 PM

  180. #179
    You miss the point. The choice of noise model affects the conclusions that fall out of a trend attribution analysis. In any attribution exercise an error in the estimation of one parameter has downstream consequences for the other estimated parameters. Similarly, the amount of variance attributed to unexplained factors (“noise” and other poorly understood processes, including some feedbacks, nonlinearities and synergies) is variance NOT available to be attributed to the various forcings. To presume the outcome would be “unscientific”, as someone here likes to say.

    That is why the choice of null noise model matters: you can’t simply presume the magnitude of GHG forcing effects. As with the other forcings (solar, aerosols, volcanoes, …), these effects have to be estimated objectively from data (or inferred by subjective tuning to data). Use of a linear additive model with i.i.d. noise may be a problem if it is a poor approximation to reality.

    So it is not a matter of GHG effects occurring at point A or B. It is a matter of forced trend vs. internal thermodynamic noise. A bit of Shakespeare, a bit of monkeys typing sonnets – to use your analogy.
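
    A concrete version of this point (my own toy sketch, assuming NumPy; the AR(1) coefficient is an arbitrary choice): fit a trend to pure noise with some persistence, then compute the trend’s standard error first under the i.i.d. assumption and then with the standard effective-sample-size correction for lag-1 autocorrelation. The same fitted slope looks far more “significant” under the naive noise model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, phi = 1200, 0.8                    # e.g. a century of monthly data, AR(1) memory
    noise = np.zeros(n)
    for t in range(1, n):
        noise[t] = phi * noise[t - 1] + rng.normal()  # persistent noise, zero true trend

    x = np.arange(n)
    b, a = np.polyfit(x, noise, 1)                    # fitted "trend"
    res = noise - (a + b * x)
    se_iid = np.sqrt((res @ res) / (n - 2) / ((x - x.mean()) @ (x - x.mean())))
    r1 = np.corrcoef(res[:-1], res[1:])[0, 1]         # lag-1 autocorrelation of residuals
    se_adj = se_iid * np.sqrt((1 + r1) / (1 - r1))    # effective-sample-size inflation
    print(f"slope = {b:.2e}, naive SE = {se_iid:.2e}, adjusted SE = {se_adj:.2e}")
    ```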

    Gavin is welcome to delete any of my comments, or smash them to bits, if he can.

    [Response: It’s very much a function of what you are looking at. For the global mean temperature the physics of the situation preclude large natural variability in the absence of forcings. In particular, the rise in ocean temperatures over the last 50 years implies a net and persistent radiative imbalance – not just ‘noise’. People sometimes forget that internal variability has to be a real phenomenon – heat has to come and go from somewhere. For long time scales (greater than a year or so), this can only mean the ocean, and for that case the long term trends are clear. No physically constrained noise model will match that (however strong its LTP). At local scales, or for different metrics, the issues are less clear, but #179 is correct, LTP does not provide an explanation for global warming. – gavin]

    Comment by Richard Sycamore — 27 Aug 2008 @ 6:55 PM

  181. Ray,

    You wrote:

    “I’m not sure exactly what you mean by “reproduce Earth’s climate”.”

    Nor am I. Your quote is not what I wrote.

    Which of these statements (in quotes) did you pick from?

    “That is the test, could a sentient being tell whether the models reproduce Earthly weather and climate?”

    That is a legitimate question. What are the criteria?

    “But like monkeys and typewriters we can produce endless model runs and presumably some of those runs will resemble the actual realisation we have experienced close enough to give us confidence that they are good models.”

    Nothing argumentative in that.

    “For now it would be good to know that the current models are capable in principle of reproducing the real climate at least once amongst their countless runs.”

    This is also a simple question. Do they show that they are likely ever to get really close JUST ONCE?

    **************

    Perhaps the more important question is my:

    “Conceptually the first problem is that, if one gives the models and the Earth equal status, one must ask: ‘Was the particular realisation that is the history of the Earth’s climate likely?’

    Could it all have turned out very differently?

    If the actual realisation we have enjoyed turns out to have been highly unlikely then it should turn out to be highly unlikely in the models; and conversely.”

    To suspect the stated question to be true would, I feel, lead to madness. To suspect that the climatic history of the last 100+ years was a fluke would mean that we can make little progress.

    There would be no point validating climatic models against the record if it was an outlier.

    To suspect that that is totally false, and that the climatic record is largely determined and typical of all possible realisations, means that we can make progress. But it also means that true models must share that tendency, and all the twists and turns of the last 100+ years should be commonly (but not inevitably) reproduced, and that we can make verifiable predictions. Not just of the next 10 years, but, by turning a blind eye to them, of the last 10 years too.

    *********

    Going back to what I said:

    Could a sentient being in any way tell the difference between a computer model and the Earth’s historic climate?

    This is an important question. I am not sure I could, could any of you?

    If we cannot, not for any of the models, then we have no way of judging between them.

    If we cannot tell the difference then perhaps there is nothing salient in the historic record and perhaps indeed the historic realisation is but one of a similarly varied range of possibilities. If that be so then we face madness.

    Personally I think that the climate is well constrained. I suspect that with luck we may have another decade that is free of major volcanoes and major El Ninos, and we may gain major insight into what is going on, to an unprecedented level of detail. That is my hope. It simplifies things immensely.

    Just now the 10 year CRU temperature gradient has passed a minimum in which it was essentially flat, centred on 2003 (1998-2008).

    It was also flat centred on 1992 and 1982. In between it peaked at .4C/decade (1979 and 1997). Personally I suspect this shows the balance between GHG and solar variation. If we can go another complete solar cycle without any other major interference then we might expect the 10 year gradient to peak once again at around .4C/decade around 2013 and then slow again. Also around 2019 we could expect temperatures to be around .2C greater than now. That, if you like, is my prediction. I just hope for two things: that we have no more major volcanoes in the next 11 years, and that I live to see the outcome.
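    For clarity, by the “10 year gradient” I mean nothing more exotic than a least-squares slope over a sliding 120-month window. A sketch of the calculation (the file name is a placeholder for however one loads the monthly CRU anomalies):

        import numpy as np

        anoms = np.loadtxt("cru_monthly_anomalies.txt")  # placeholder input
        window = 120                                     # ten years of months
        t = np.arange(window) / 120.0                    # time in decades

        # slope of each 120-month window, in degrees C per decade,
        # assigned to the midpoint of the window
        grads = np.array([np.polyfit(t, anoms[i:i + window], 1)[0]
                          for i in range(len(anoms) - window + 1)])

    Centring each value on the middle of its window is what gives the flat spells and the ~.4C/decade peaks described above.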

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 27 Aug 2008 @ 8:27 PM

  182. “LTP does not provide an explanation for global warming”

    I think all reasonable, rational people understand and agree with the spirit of this statement. (Yes, there are a few inactivists and deniers out there.) The issue is the *magnitude* of the GHG effect. It is extremely unhelpful to spin this into a black and white issue of LTP vs GHGs. Let’s just get the LTP and the radiative imbalance components correct and let the GHG chips fall where they may.

    The point is: there is currently little reason to believe that the models used to measure attribution exhibit realistic LTP. What this exchange has shown me is that this is still an open question.

    [Response: I don’t see how you come to this conclusion at all. The examples looked at so far all show LTP with spatial structure that makes sense. In the comparisons with widespread data the magnitudes and patterns look similar (though I haven’t quantified that). Therefore, there is no reason to believe that there is a problem here. I agree more work might profitably be done with the AR4 models (and the observations) and perhaps we can get a clearer answer about what fits and what doesn’t. But I see no reason to a priori assume that the models don’t have realistic LTP. – gavin]

    For the record, I agree with Ray, Hank, Gavin and others here that uncertainties are no excuse for inactivism. But that doesn’t mean one should deny their existence or role in downstream calculations.

    Comment by Richard Sycamore — 27 Aug 2008 @ 8:56 PM

  183. #181 Alexander Harvey’s question is quite clear. He wants to know how you judge a model to be “mostly something similar” to data. i.e. What are the criteria? It is a very simple question. A list of criteria and an objective measure of fit is probably all he is looking for.

    If no such list can be produced, or if there is no such list, there is no shame. Pattern-matching is not an easy thing. (As these captcha phrases prove.)

    Comment by Richard Sycamore — 27 Aug 2008 @ 9:07 PM

  184. Let me see if I can make sense of this. “He wants to know how you judge a model to be ‘mostly something similar’ to data.”

    At the top of the thread, the quoted bit is pulled out of this context:

    “…. calculated Hurst exponents for the entire database of weather stations and show that there is indeed significant structure (and some uncertainty in the estimates) in different climate regimes. …. What do you get in models? Well in very long simulations that provide enough data to estimate Hurst exponents quite accurately, the answer is mostly something similar. ”

    That seems clear: similar structure — illustrated by the two pictures shown along with that quoted text.
    They look similar. No?

    One wouldn’t want to, say, count pixels to try to quantify the similarity; it’d make more sense to work from the data that was used to create the pictures. A good opportunity for some grad student, if it’s not already being done or already published. Anyone else looking further into the literature? There is lots more there than I can read.
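    For anyone tempted, one of the simplest starting points is the aggregated-variance estimator (bearing in mind how unreliable any single Hurst estimator can be). A sketch, assuming a long, detrended, stationary series; the block sizes are arbitrary:

        import numpy as np

        def hurst_aggvar(x, sizes=(4, 8, 16, 32, 64, 128)):
            """Aggregated-variance estimate: the variance of block means
            scales as m**(2H - 2) for block size m."""
            v = []
            for m in sizes:
                k = len(x) // m
                block_means = x[:k * m].reshape(k, m).mean(axis=1)
                v.append(block_means.var())
            slope = np.polyfit(np.log(sizes), np.log(v), 1)[0]
            return 1.0 + slope / 2.0

        rng = np.random.default_rng(1)
        print(hurst_aggvar(rng.standard_normal(100_000)))  # ~0.5 for white noise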
    _________________
    “and experts”

    Comment by Hank Roberts — 27 Aug 2008 @ 10:15 PM

  185. Gavin #182 says:
    “I see no reason to a priori assume that the models don’t have realistic LTP”

    This I take as an invitation to explain why this is not an “a priori assumption” but a demonstrable fact. Realize that it matters quite a bit which “models” you are referring to. I was referring there to the attribution model by which the various forcings are estimated. Will you tell me what the assumptions are for the distribution of residuals (errors) in that model, or shall I go look that up?

    [Response: I don’t understand your last statement at all. The forcings are derived from first principles – radiative effects of CO2, CH4 etc, or from more complicated chemical-transport modelling (ozone, aerosols etc). This has nothing whatsoever to do with what we’ve been discussing. Attribution is done by comparing the signatures of these forcings in various fields (temperature changes by latitude, in height, in the ocean etc) and allowing for uncertainties in sensitivity and the amount of intrinsic variability. The whole of chapter 9 discusses this in IPCC AR4. However, whatever assumptions are made there have no bearing on whether the models have realistic LTP or not – that can only be demonstrated by doing the appropriate comparisons – and the ones so far (though incomplete) don’t give any reason to think there is a problem. – gavin]

    Comment by Richard Sycamore — 27 Aug 2008 @ 10:39 PM

  186. Richard,

    Thanks, yes I feel it is important to know what we are looking for not just what we are looking at.

    If I may I should just like to say that whilst taking a walk with my dogs I reminded myself of two great thinkers.

    Turin & Rogers.

    In case you haven’t guessed I mean Will Rogers, that’s right.

    He, amongst others (including Twain), is credited with the saying:

    “It’s not what we don’t know that’s the problem, it’s what we know that ain’t so.”

    It is a criterion I often judge my opinions against.

    So I reminded myself that Turin gave us the Turin test.

    And I asked myself: could I tell the difference between a climate model simulation and Earthly reality?

    Answer: I do not know.

    Which is humbling.

    Then I asked: where would I start to look?

    As it happens I have whiled away the last few weeks analysing the HadCRUT3 baseline data to extract the seasonal components: the amplitude and phase of the annual, bi-annual, etc. No mean feat, as it has to be repeated for each 5×5 grid square.
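    The extraction itself is nothing deep: for each grid square, project the twelve baseline months onto annual and semi-annual sinusoids. A sketch of the per-square step (the month-centre convention, and the looping over squares, are my own choices):

        import numpy as np

        def seasonal_harmonics(clim12):
            """Amplitude and phase lag (in years) of the annual (k=1)
            and bi-annual (k=2) cycles of a 12-month climatology."""
            t = (np.arange(12) + 0.5) / 12.0        # month centres, in years
            out = {}
            for k in (1, 2):
                a = (2.0 / 12.0) * np.sum(clim12 * np.cos(2 * np.pi * k * t))
                b = (2.0 / 12.0) * np.sum(clim12 * np.sin(2 * np.pi * k * t))
                out[k] = (np.hypot(a, b),                       # amplitude
                          np.arctan2(b, a) / (2 * np.pi * k))   # phase lag
            return out

    Repeating that over every 5×5 square and contouring the k=1 phase gives the picture I describe below.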

    So I am pretty familiar with the seasonal shape, harmonic amplitude and lag for most of the world.

    Plotted globally, the phase of the fundamental harmonic makes quite a pretty picture, and it is quite data rich, as it happens.

    So I thought to myself, well that would make a good criterion to judge between a model run and our Earthly realisation. That would be easy and I could do that.

    Unfortunately I remembered Rogers, and concluded that I have no means of knowing whether such a subtle test would be valid, as I simply do not know if our experienced realisation is indicative of anything.

    There are times when one gets the uneasy feeling that one is looking down the wrong end of a Howitzer. I may believe that the problem is soluble but I have no means of knowing.

    Going back to the phase data: it is reasonably coherent, with no large discontinuities, in the Northern extratropical region at least. It shows promise as a criterion I could comprehend and test against. As it is derived from the baseline averages it is possibly quite solid, and it does represent an aspect of climate. But is it typical? Who knows?

    So I asked myself: well, typical or untypical, it happened, and the models should be capable “at least in principle” of producing results that show they are capable of reproducing it, given enough runs.

    I think by this I mean that they are not constrained to a markedly different climate. I am sure statisticians know how to handle such things. But to my mind it comes to this: if it seems to be a duck which swims on a pond, and the model has long arms and swings in a tree, then they simply have too little in common.

    Sometimes when looking down a barrel it might be better to shout “Fire”.

    My Best Wishes to you all.

    Alexander Harvey

    Comment by Alexander Harvey — 27 Aug 2008 @ 11:35 PM

  187. Alexander Harvey #186 says
    “It’s not what we don’t know that’s the problem, it’s what we know that ain’t so.” Attributed to Will Rogers, or maybe Mark Twain.

    It appears that the most likely correct attribution is to Josh Billings.

    I don’t know what bearing this may have on any other confusion about attribution.

    Comment by Rick Brown — 28 Aug 2008 @ 12:26 AM

  188. Alexander, #186 and #181:

    “Could a sentient being in any way tell the difference between a computer model and the Earth’s historic climate?

    This is an important question. I am not sure I could, could any of you?

    If we cannot, not for any of the models, then we have no way of judging between them.”

    That first sentence is OK; the middle one is an assumption; and the last one is your “what we know that ain’t so”. Or at least that is your implication, taken from your duck-vs-man comparison in #186.

    I ask of you: is it important to be able to tell the difference? Surely if we can’t tell the difference, then as far as we can tell, they are the same.

    But the last query has no basis in the first two and so I call “wrong” on it. Judging between them? Should we? Why? And if we can’t, what does that matter?

    Now, where models may well break down is that they don’t allow movement into nearby regions of phase space unless those regions connect seamlessly to the current one. Like Ice Ages and Interglacials. But a *possible* region of phase space is a runaway Venus climate. The models don’t see that as likely because nothing we know of will push our climate that far out of the current “possible” without feedbacks pulling us back. But it IS in the phase space, and there may be a mechanism to move us over there. Such a process is narrow and therefore unlikely (if it were likely, it would fall out of what we do know more easily, because when there are more routes, some of them will fall within the realm of what we know).

    So models are true and accurate, for a given value of “true” and “accurate”; but then again, that is all the base truth Science has. QCD is “true” but not really “the truth”, but:

    a) we don’t know anything better than the standard model
    b) we don’t have any measurement that tells us what we would change
    c) it still works damn well where we need or want to apply it

    a and b are your queries in the first part of my quote. c is where I call your “duck vs man” judgment wrong.

    And if we get a better climate model it will be mostly the same as the one we have now. It will just have more possibilities covered and more accurate constraints on where it will go. That may open up new areas of possibility, but it does not change the fact that what we have is “right enough”.

    Comment by Mark — 28 Aug 2008 @ 4:03 AM

  189. Also, I think it’s “Turing test,” named for the late computer scientist Alan Turing.

    Comment by Barton Paul Levenson — 28 Aug 2008 @ 6:41 AM

  190. Dear Barton,

    Thanks, yes, I meant the mathematician, not the shroud. I am getting tired and feeling rather old, but that is no excuse, particularly as I studied his work many moons ago.

    Dear Mark,

    I will get back to you, hopefully today.

    Dear All,

    If anyone is interested in the phase lag of the seasons and either has some data they could point me to, or would be interested in what I have found please let me know. Plotted as a contour map it clearly describes the oceans and continents and even appears to have an anomaly around the Norwegian Sea. In the oceans it has features that I suspect are linked to the major gyres.

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 28 Aug 2008 @ 9:59 AM

  191. model: climate
    map: territory
    phylogenetic tree: life
    tasty bait: worthwhile pursuit

    Comment by Hank Roberts — 28 Aug 2008 @ 10:02 AM

  192. Richard Sycamore, I’m not sure what you are advocating. If the models took a best fit of sensitivity to data, I might be concerned. However, sensitivity is determined independently and constrained by many different lines of data–all of which favor a sensitivity in the 2-4.5 K/doubling range. Energy is conserved even in a system with LTP.

    Comment by Ray Ladbury — 28 Aug 2008 @ 11:43 AM

  193. This may be an OT question, but I wasn’t sure where it actually belongs: how many Cat-5 hurricanes would have to form over the next decade to prove scientifically (rather than merely lend credibility to) the notion that climate change includes increased frequency and intensity of tropical storms?

    Better than just an answer, if someone can show me where/how to look for the answer, I’d appreciate it a lot.
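    The crudest framing I can imagine (surely too crude, and the baseline rate below is a placeholder I made up, not a climatological value) would treat Cat-5 counts as a Poisson process and ask when a decade’s tally becomes implausible under the historical rate:

        from scipy.stats import poisson

        rate = 0.3                  # placeholder: assumed Cat-5s per year
        decade_mean = rate * 10

        for k in range(4, 13):      # candidate counts for the coming decade
            p = poisson.sf(k - 1, decade_mean)  # P(at least k) if no change
            print(f"{k:2d} Cat-5s in a decade: p = {p:.4f}")

    The smallest count whose p falls below a chosen significance level would be a first answer; everything interesting hides in the assumptions that placeholder rate glosses over.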

    Comment by A.C. — 30 Aug 2008 @ 7:13 PM

  194. I think that the number of parameters and interactions makes it impossible to prove conclusively by mathematical models that the climate is changing because of man-made greenhouse gases. But it is not forbidden to make some judgements based on reason. And these judgements point to one important fact: the fossil fuel is running out. So the threat will decrease on its own. What is important is finding new energy sources to replace fossil fuel.

    Comment by Knut Holt — 31 Aug 2008 @ 4:31 AM

  195. Knut Holt, let me get this straight. You are contending
    1)that climate models cannot be verified
    and
    2)that the problem will take care of itself due to the finite supply of fossil fuels.

    Where in the hell did you get either of these two ideas?

    Re:1–complicated models are not needed to demonstrate anthropogenic causation–there simply is no other viable explanation for simultaneous warming of the troposphere and cooling of the stratosphere

    Re:2–Ever hear of coal, tar sands, oil shale, methane clathrates…?
    There’s plenty of carbon-based fuel to anti-terraform Earth for whatever species will thrive in the environment we will create. It won’t be us.

    Comment by Ray Ladbury — 31 Aug 2008 @ 9:32 AM

  196. Many sites saying that “petroleum” is running out are quite visible.
    Don’t mistake “petroleum” as meaning all fossil fuels.
    That’s the confusion that led to the claims a few years ago that there wasn’t enough “petroleum” to cause global warming.
    They ignored coal.

    Comment by Hank Roberts — 31 Aug 2008 @ 10:49 AM
