RealClimate

Comments

  1. This all makes sense, but will the IPCC report, which influences investment and policy decisions, reflect your statement that the reasons for the mismatch between modeling and observation are an active research question? Will it retain the high degree of confidence regarding catastrophic anthropogenic global warming without clear answers about what is responsible for the mismatch?

    [Response: You are not following the argument. That models and observations do not match in all respects is normal and expected. It was true for TAR and AR4, and will be for AR5. There is nothing new in this general issue. If you think that policies are being made based on exact numbers coming from a climate model, I’d have to ask for some evidence. Policies are being made (or at least considered) on the strongly evidence-based premise that climate sensitivity is non-negligible, but that conclusion doesn’t depend on models as much as on paleo-climate and so is unlikely to change. PS. I have no idea what you mean by “high confidence” in “catastrophic anthropogenic global warming”. – gavin]

    Comment by Alex — 13 Sep 2013 @ 12:18 PM

  2. Good explanation. Thanks.

    Related to this, at conferences many modelers compare their results to “observations” without mentioning what those observations are. That makes it hard for a listener who is knowledgeable about observations to judge whether the deviations are due to the model or to the observations. Thus my plea to the modelers: please mention the name of the observational dataset in your legend.

    Comment by Victor Venema — 13 Sep 2013 @ 12:47 PM

  3. I have to take issue with your citing of Quine, and using it to set aside the Popper-Feynman point of view. After all, further down in the article you cite is this bit:

    “Thus, if we accept Quine’s general picture of knowledge, it becomes quite difficult to disentangle normative from descriptive issues, or questions about the psychology of human belief revision from questions about the justifiability or rational defensibility of such revisions.”

    Other criticisms are also discussed, including one that the whole issue, underdetermination, is overblown. My point, however, is that you should not use this argument unless you are willing to accept Quine’s general picture of knowledge. I, for one, am not.

    Quine is a powerful representative of an epistemological tradition of thought that currently dominates the English-speaking world, but it is a tradition that has basically reached a dead end. More detail than that would not be appropriate for a blog dedicated to climate science.

    Comment by Lichanos — 13 Sep 2013 @ 3:31 PM

  4. http://www.washingtonpost.com/rf/image_606w/2010-2019/WashingtonPost/2013/09/11/Editorial-Opinion/Graphics/Tsketch11.jpg

    Stripmining the future is its own reward, thus far.

    Comment by Hank Roberts — 13 Sep 2013 @ 4:07 PM

  5. Gavin,

    I think your example concerning neutrinos and relativity is off the mark. Implicit in Feynman’s dictum is that the observations have to be correct. That’s ‘correct’ in a scientific sense, not a philosophical one. In the latter sense anyone who’s stayed awake in their Phil 101 course knows there’s no such thing as ‘correct’ (though you keep arguing anyway); while in the former sense the new airplane design either crashes or it doesn’t.

    [Response: Don’t agree. The neutrino example is spot-on. No experiment is so clean or pure that there are no ancillary hypotheses (which is what would be required for Feynman’s dictum to be accurate). Indeed, Feynman made some of his best discoveries by challenging experimental data which proved to be dubious (so-called ‘Feynman points’). – gavin]

    By the way, I thought it was admirable when the authors of the neutrino experiment more or less said at the time, “this is what we found but we could easily be wrong”, inviting others to help figure out why.

    Comment by Watcher — 13 Sep 2013 @ 4:15 PM

  6. Hi Gavin, good to see these things set out in a good, logical manner.

    I wonder if it works better to separate the “model” from the “simulation”? i.e. don’t lump errors in the forcings (boundary conditions) and initial conditions in with the model errors.

    I prefer to think of it this way: you have a model and you want to test whether it behaves like the real world. So you try to simulate some aspect of the real world and compare the simulation with observations. This is evaluating the simulation, not the model per se. If there is a discrepancy between simulation and observations, it might be (partly) because of errors in the forcings or initial conditions or in some other aspect of the experimental design. You say all this, of course, in your item 2 (model error) — but the point is that it isn’t model error.

    Separating things out into more components like this is necessary if we want to build a useful statistical model of the data-model comparison, i.e. one that doesn’t just answer the question “is the model right?” (since if we look closely enough the answer will always be “no”) but also the more interesting question “how wrong is the model?”

    [Response: Yes. This would be a good distinction. These points are lumped together in my point 2, but it would be clearer to break it out. Thanks. – gavin]

    Comment by Tim Osborn — 13 Sep 2013 @ 4:33 PM

  7. Alex wrote: “high degree of confidence regarding catastrophic anthropogenic global warming”

    A high degree of confidence is appropriate, given that catastrophic anthropogenic global warming is already occurring right before our eyes, all over the world. Deniers have to work very hard to ignore it.

    Comment by SecularAnimist — 13 Sep 2013 @ 5:15 PM

  8. Lichanos,
    And yet, what we have is quite clearly a case of holistic underdetermination. Even if we do find a discrepancy, it may not be possible to state exactly which of the hypotheses underlying our model is at fault; sometimes we cannot even state that there is a statistically significant discrepancy.

    The Popperian paradigm only works for logically simple theories, and it works best when you have multiple theories with which to compare an observation. After all, even after the negative findings of Michelson and Morley, physicists did not reject the aether. The Lorentz equations were initially derived assuming that motion causes compression of the aether.

    Comment by Ray Ladbury — 13 Sep 2013 @ 5:24 PM

  9. > Quine … dead end

    You mean dead because “there may be little at stake” since the “fantasy of irresolubly rival systems of the world” doesn’t get anywhere useful?

    Those claiming AGW can’t be true because FREEDOM don’t think their position is a fantasy. They do seem to think their position is irresolvably at odds with the science and economics showing that we’ve been stripmining the future to make money faster today.

    Well, this ought to be looked at by the Metaphysics Research Lab.

    Comment by Hank Roberts — 13 Sep 2013 @ 5:48 PM

  10. In observational error, you’ve omitted the biggest one: the sample isn’t a representation of the population. There are statistical tools to measure that, but they also rely on assumptions about the population. There are many measures used in climate science, but limitations on accessibility or funding often create a variety of sampling techniques, not all of which have the same certainty with respect to the population they represent.

    Comment by Tim Beatty — 13 Sep 2013 @ 7:06 PM

  11. Tim Beatty,
    Random sampling errors are actually fairly well understood, even for “distribution-free” cases, and if you have enough data, this isn’t really a problem. Climate science is quite fortunate in this regard, as data are not scarce.

    The trick comes in interpreting the data, and that requires models. In general, the longer a result has stood, the more you are likely to be able to take it to the bank, precisely because more data will have accumulated and any errors in interpretation will likely have been found.

    The idea that a single observational disagreement will make the problem go away is sheer fantasy, and ignorant fantasy at that.

    Comment by Ray Ladbury — 13 Sep 2013 @ 7:56 PM

  12. I have been reading this blog for nearly ten years, and I find Gavin taking us on a trip through a summary of academic theories about gaps between prediction and observation very disappointing, partly because of the assumption that he is now an expert in those areas. When was the last time we had some straight shooting about the implied power of the predictions made by models, the championing of them here, and the advocacy power they have? Very disappointing to see the emphasis of conviction shift to paleo-climate alone when the tide turns.

    [Response: Given your extensive reading of the blog, you surely can’t be unaware that I have consistently stated that the best constraints on sensitivity come from the paleo record- and most importantly the last glacial period. Thus I’m a little puzzled as to what inconsistency you think you have detected. I am still a strong advocate for the usefulness of climate modelling, and models are consistent with inferences from paleo. – gavin]

    Comment by NickC — 13 Sep 2013 @ 8:46 PM

  13. Gavin, regarding your response in #1: “If you think that policies are being made based on exact numbers coming from a climate model, I’d have to ask for some evidence.”
    I would direct you to the IPCC AR4 Summary for Policymakers, in which the bold-faced paragraph headlining the ‘Projections of Future Changes in Climate’ section reads:

    For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.

    The table immediately below said paragraph predicts sea level rise with two significant digits under a variety of scenarios. You must be aware that this report has been cited numerous times by Nancy Pelosi, Harry Reid, Al Gore, and many lobbyist organizations pushing cap and trade legislation and many other EPA regulations. While I agree that any mismatch between models and observation “…in all respects is normal and expected.”, it is the very quantity in bold-face in the FAR summary that is both mismatched and driving these policy considerations.

    What I was hoping for in this post was some technical leads for the mismatch, specific to your bullet point #2, above. Is it the fundamental CO2 forcing prediction, based on effective radiation temperature to space? Or the indirect CO2 forcing predicted due to H2O increases at high altitudes that have not materialized? I am probably as aware as any reader here of modeling challenges in general, and can appreciate the work your groups have performed, but I can also appreciate the implications of the mismatch that prompted your post: there is fundamental uncertainty in the interaction of the complex mechanisms that drive climate change, including the human effect.

    [Response: The IPCC report is far more than a single line about the short term ensemble mean trend. Even the SPM is substantially more detailed, let alone the rest of the report. Your claim of a ‘fundamental’ mechanism that is at play here is simply wrong. As I outlined above, there are many reasons for mismatches, and the shorter the time period, the more reasons there are – forcings, initial conditions, internal variability are all likely playing a role as has been demonstrated in a number of recent papers. We don’t yet have a full synthesis (but people are working on it), but for you to automatically assume the answer says more about prior beliefs than it does about the evidence. – gavin]

    Comment by Alex — 13 Sep 2013 @ 8:59 PM

  14. Ray Ladbury:
    Maybe it’s a chicken-and-egg problem, but how do you test a basic assumption like gridding? I would think model disagreement with a sample could be attributed to a number of things unrelated to the model: 1) the natural event is extreme, so the actual population, including its sample, is outside model limits; 2) the sample is not an accurate representation of the population; or 3) the population is more complex or dynamic than the sampling methods. As an example (and I don’t have data, just a thought experiment), when we estimate average global temperature and we grid up the planet, how do we test that the grid size is appropriate to the sample? How do we assess whether the grid size required for an accurate population estimate is seasonally or geographically (or both) dependent? Or whether the grid is oversampled in certain areas? If the model is the only test, a mismatch could be revealing an extreme population, an extreme sample, or model error. How do we know whether a model could be exactly accurate but need 10×10 sq mi sample sets in the ENSO SST region and only 100×100 sq mi sample sets in Ukraine? Maybe it’s my ignorance of the available data, but I don’t know how to measure sensitivity to that kind of sampling error. The goal seems to be to get the model to agree with the sample, but how do we test the sample against the population, and how do we compare the variance in the population with the variance in the sample? 2012 is different from 2013. Both are represented as a population (nature) and as a sample of that population (our measurement of nature). Is one of the populations extreme? Is the sample correlated well enough with the population? Are sampling methods dependent on conditions? Do we treat sampling as part of the model or separately?

    Comment by Tim Beatty — 13 Sep 2013 @ 10:47 PM
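One way to probe Tim's grid-size question is a small numerical experiment: compute the same area-weighted global mean on a coarse grid and on a fine grid and see how much the answer moves. The sketch below is my own illustration with a synthetic temperature field; no real station data or climate package is involved:

```python
import numpy as np

def global_mean(field_fn, n_lat, n_lon):
    """Area-weighted mean of field_fn(lat, lon) on an n_lat x n_lon grid."""
    lat_edges = np.linspace(-90, 90, n_lat + 1)
    lat_mid = 0.5 * (lat_edges[:-1] + lat_edges[1:])
    lon_mid = np.linspace(0, 360, n_lon, endpoint=False)
    lats, lons = np.meshgrid(lat_mid, lon_mid, indexing="ij")
    weights = np.cos(np.deg2rad(lats))          # cell area ~ cos(latitude)
    return np.average(field_fn(lats, lons), weights=weights)

# Synthetic field: smooth pole-to-equator gradient plus small-scale "weather".
def field(lat, lon):
    return 30 * np.cos(np.deg2rad(lat)) - 15 + 2 * np.sin(np.deg2rad(8 * lon))

coarse = global_mean(field, 18, 36)     # roughly 10-degree cells
fine = global_mean(field, 180, 360)     # roughly 1-degree cells
print(abs(coarse - fine))               # small => mean insensitive to grid size here
```

For a real dataset the same comparison could be repeated by season and by region, which speaks to Tim's question about seasonal or geographic dependence: a mean that shifts substantially between resolutions would flag undersampling in that stratum.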

  15. Please, a glossary for the various abbreviations in your post.

    Thanks!

    [Response: Here. Let me know if there is anything that is not clear. – gavin]

    Comment by AIC — 14 Sep 2013 @ 1:10 AM

  16. Gavin, would it be safe to assume that you have enough confidence in paleo inferences that a mismatch of models to observation does not lessen your conviction that we remain on a worrying path? In other words, does it lessen your certainty, or does it just point to gaps in knowledge that will eventually still bear out the overall thrust that the anthropogenic contribution is worrying? I think this is pertinent to the discussion.

    [Response: Yes. I’m not quite sure what specific mismatch you are referring to (or just making a general point), but on the basic issue – should one be concerned about future anthropogenic climate change – I have not changed my opinion, mainly because that is not based on models at all. Models are there to help us quantify the changes and without them we would have much larger uncertainties. – gavin]

    Also, in #12 you say “and models are consistent with inferences from paleo”. Could you elaborate? At face value it seems incorrect; I must be missing something.

    [Response: Charney sensitivity from paleo is around 3ºC, models are in the same range (the latest GISS model for instance is around 2.5ºC). – gavin]

    Comment by NickC — 14 Sep 2013 @ 1:44 AM

  17. “PS. I have no idea what you mean by “high confidence” in “catastrophic anthropogenic global warming”. – gavin]”

    SecularAnimist says in #7 (13 Sep 2013 at 5:15 PM):

    “Alex wrote: ‘high degree of confidence regarding catastrophic anthropogenic global warming’

    A high degree of confidence is appropriate, given that catastrophic anthropogenic global warming is already occurring right before our eyes, all over the world. Deniers have to work very hard to ignore it.”

    ???? Can you clarify, Gavin, since you let this comment through moderation…

    [Response: I have high confidence that anthropogenic effects are dominating current climate change and will increasingly do so in the decades to come. The changes that we have seen so far are not catastrophic on a global scale, though future changes are going to be much larger and there is a very real risk of substantive damages. However, when people use the term ‘catastrophic anthropogenic global warming’ they are not referring to any real science but are attempting to paint anyone who talks about the science as an alarmist. AGW is real and growing, but whether it turns into a catastrophe is very much up to us. – gavin]

    Comment by sue — 14 Sep 2013 @ 2:36 AM

  18. Gavin, I’ll have to read up on Quine to grasp your deeper epistemological point here, but your examples seem far off.

    The faster-than-light neutrino experiment was an *error*, so of course it didn’t falsify special relativity. I’m sure Feynman didn’t claim that errors falsify theories, so there’s no contradiction of Feynman’s dictum here.

    [Response: You are missing the point entirely. Since there is never a perfect experiment or observation, error is always a possibility. And since you (correctly) note that erroneous claims do not falsify anything, there is always the possibility that an experimental result, however conflicting on its face, was actually in error. Therefore it is never as simple as Feynman’s dictum implies. – gavin]

    Saying that a map doesn’t capture the true landscape or a portrait the true self is very confusing as a lead-in to a discussion of climate models. Maps actually do capture what they are supposed to capture, quite accurately. I don’t know what you mean by a portrait, but it’s not going to be a good analogy for a science like climate science.

    [Response: Sometimes I like to use metaphors. Sue me. (But listen to the Neil Gaiman story first). – gavin]

    I think it would be bad, bad news for climate scientists to start talking this way, to start retreating behind vague figurative/artistic analogies to describe their ability to cohere with reality.

    Adopting an epistemology of lower standards, one where hypotheses or theories can’t be falsified, creates too much room for bias and motivated reasoning. Perhaps it wasn’t your intention to suggest a sloppy, low-standards epistemology. Climate science is viewed by outsiders — and described by Judith Curry — as a biased, groupthink-driven field. There is very little that scientists can learn from 20th century epistemologies — many of which would make science impossible. The last thing climate science needs right now is some wishy-washy epistemology.

    [Response: The last thing any science needs is false epistemologies that are just hoisted up the flagpole in order to ignore the balance of evidence. I illustrated my points with real cases where different resolutions to previous mismatches have been found; assuming that future mismatches will all be resolved in a single way is ahistorical and extremely unlikely. – gavin]

    (BTW, Alex said there’s “fundamental uncertainty…”, not that there’s a fundamental *mechanism*.)

    Comment by Joe — 14 Sep 2013 @ 4:50 AM

  19. (Fixed a few typos)
    Hi Gavin,
    Maybe the following is useful as a clean example. You wrote:
    “… we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong.”
    “actually doesn’t work except in the purest of circumstances (and I’m not even sure I can think of a clean example).”

    Here is a clean example:
    Distance traveled (m) = velocity (m/s) * time (s)
    The dimensions are traceable to international standards.
    I can measure distance, velocity and time in SI units.
    I can determine the standard uncertainty for all the measured variables from statistics.
    It is falsifiable – I can move a body at a certain velocity for a certain time and measure the traveled distance.
    If the traveled distance does not fit the calculated distance within the uncertainty computed using the international standard Guide to the Expression of Uncertainty in Measurement (GUM), the model might be wrong.

    [Response: This is simply a definition of velocity and so the statement is a tautology – it cannot be otherwise. Thus it doesn’t come into the discussion of testing theories. – gavin]

    The model is useful if I can make predictions which affect my choice of action.

    In my everyday life it is useful, even with uncertainties of 10–20% (caused by uncertainty in the predicted velocity), when estimating time of arrival.
    If the uncertainty became too high, it would not be useful.

    The model is analogous to:
    Increase in global average atmospheric temperature (K) = effect from CO2 (K/ppm CO2) * increase in CO2 level (ppm CO2)

    For the model to be useful it must be correct within some level of standard uncertainty for some averaging period, and repeatably so for many periods.

    [Response: You have a very impoverished view of what utility is. Is it useful to know that a medical treatment improves outcomes by 0 to 30% in different trials? The answer for the FDA is very different than for a patient or a researcher. – gavin]

    Comment by Dag Flolo — 14 Sep 2013 @ 6:00 AM
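Dag's distance example can be made concrete with the GUM's law of propagation of uncertainty: for d = v * t with independent inputs, u(d) = sqrt((t*u_v)^2 + (v*u_t)^2). A minimal sketch, with made-up numbers chosen only for illustration:

```python
import math

# Sketch of the GUM "law of propagation of uncertainty" for d = v * t,
# assuming independent inputs (all numbers are invented for illustration).
v, u_v = 25.0, 2.5      # velocity (m/s) and its standard uncertainty
t, u_t = 3600.0, 10.0   # time (s) and its standard uncertainty

d = v * t
# Sensitivity coefficients: dd/dv = t, dd/dt = v
u_d = math.sqrt((t * u_v) ** 2 + (v * u_t) ** 2)

print(d, u_d)           # predicted distance and its combined standard uncertainty
# A measured distance more than about 2*u_d away from d would cast doubt on
# the model, or on one of the input measurements.
```

Note how the comparatively large relative uncertainty in v dominates u_d here, which mirrors Dag's point that the 10–20% velocity uncertainty sets the usefulness of the arrival-time estimate.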

  20. Gavin:

    “Flaws in comparisons can be more conceptual as well – for instance comparing the ensemble mean of a set of model runs to the single realisation of the real world. Or comparing a single run with its own weather to a short term observation. These are not wrong so much as potentially misleading – since it is obvious why there is going to be a discrepancy, albeit one that doesn’t have much implications for our understanding.”

    This is almost worth a post in itself, as these fundamental misunderstandings are the basis for so many skeptical arguments (in particular combining the single realization of the real world with an ensemble mean).

    In fact, in this thread, alex and nickc are both arguing based to some extent on fundamentally misunderstanding this (along with an apparent belief in model/reality mismatches that don’t actually exist to the extent they believe) …

    Comment by dhogaza — 14 Sep 2013 @ 8:05 AM
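The ensemble-mean point dhogaza highlights can be illustrated with a toy Monte Carlo: many realisations share one forced trend, but each carries its own "weather". Everything below is synthetic, with numbers picked only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(15)                 # a short, 15-year comparison period
forced_trend = 0.02                   # deg C per year, the common forced signal

def member_trend(noise_sd=0.1):
    """Fitted linear trend of one realisation: forced signal plus noise."""
    series = forced_trend * years + rng.normal(0, noise_sd, years.size)
    return np.polyfit(years, series, 1)[0]

members = np.array([member_trend() for _ in range(200)])

print(members.mean())   # close to the forced trend: the noise averages out
print(members.std())    # individual members spread noticeably around it
# A single realisation (the real world) can easily sit in the tail of this
# spread without implying the forced trend is wrong.
```

The ensemble mean suppresses internal variability by construction, so comparing it to one noisy realisation over a short period guarantees a visible discrepancy even when the forced signal is exactly right.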

  21. So, if the model doesn’t work, there’s nothing to worry about?

    [Response: No. If there is a mismatch, there is maybe an interesting reason for it, and people should try and find out what it is. – gavin]

    Comment by Frank Davis — 14 Sep 2013 @ 9:14 AM

  22. Gavin,

    Since I’m spreading admiration around, I reckon you should get some, too: you had to know a discussion of this type would be lively!

    So let’s go to your responses to #18. Just because v = D/t is a definition doesn’t mean that all predictions based on Newton’s laws of motion are tautologies. I agree that some ‘simple’ theories are so obvious in retrospect that they seem so. For example, if I measure v using D/t as my rocket leaves the atmosphere, it is not a tautology to predict where it will be 5 or 6 years from now so that it can drop a probe on Jupiter.

    [Response: I didn’t say that Newton’s Law did not give testable predictions. Only that your specific example (v=D/t) was a tautology (and has nothing specific to do with Newton’s Law in any case). – gavin]

    Ah! But maybe it didn’t make it to Jupiter. Does this falsify the theory? No, I forgot to account for the orbital motion of Earth and Jupiter, so indeed this expensive miss does not invalidate Newton’s theory. If I understand you correctly this is what you are getting at.

    [Response: Yes – this is more to the point. ]

    Nevertheless, it is surely possible to take into account the appropriate factors, calling into use gravitational constants and what-not, and come up with a better prediction. If I include only Earth and Jupiter the result will still be off, but I reckon that if I start taking the Sun into account my prediction will start looking a lot better. Maybe it needs tweaking a bit due to the lunar fly-by before it starts looking really good.

    [Response: Yup. But how do you tell whether any remaining mismatch is due to a missing body or to the difference to general relativity? – gavin]

    Nevertheless, while each of the ‘pieces’ of this construction comprises equally simple things like D=vt and F=GMm/r^2, the final prediction is not a tautology.

    [Response: F=GMm/r^2 is a theory, D=vt is a definition. There is a difference. ]

    I would venture to say that this is how science is supposed to work, and it is perverse to insist that Feynman’s philosophy be judged on the basis of the first simplistic ‘experiment’. It is the job of a scientist to know which theory/hypothesis is being tested by a given experiment; this is at the root of what I would call a properly designed experiment. Maybe we can say that a good scientist is able to take the world of Quine and reduce it to the world of Popper, at least to an extent that is ‘good enough’.

    [Response: Agreed. ]

    Just to belabour the point; notice that my Jupiter prediction failed to take into account either the colour of the rocket’s paint or the newly discovered earthlike planet around Alpha Centauri (or wherever) because my scientific judgement tells me that while this leaves me open to the criticism that my model is incomplete, I have good reason to believe that these things don’t matter in the current context. Indeed, if pressed I can estimate the impact using the same D=vt and F= etc. and show this to be the case. Maybe I launch several probes that all arrive safely, and in my mind I elevate my model to a ‘theory of Earth-Jupiter space travel’.

    Now, suppose that next year I launch another probe, and this time it misses. Does that falsify Newton’s laws? Perhaps, but it’s more likely that my theory was incorrect. I check things out a bit and notice that this year Mars has moved close to the flight path, so previously I got the right answer with an incorrect theory that posited no influence from the planet Mars. Scientific honesty requires me to admit to another expensive error and revise my theory to include the new factor, and once again attempt the conversion of a Quine to a Popper situation.

    My, my, I have gone on and I should probably get to the point. There is philosophy and there is science. If scientists behaved like philosophers nobody would ever get anything done because they’d all be too worried about having missed some factor, and anyway what if I’m just imagining the space probe in the first place? In a scientific sense it MATTERS very much what measurements say, and I will say again that measurements are the only things that really do matter. It’s not sufficient to say that they might be wrong, or they might be measuring something different from what they seem to, and so therefore I might be right even though my theory doesn’t agree with them.
    The job of a scientist is to sort through the mess and develop a theory that can account for the measurements. Furthermore, for that theory to be useful it must be capable of producing verifiable predictions (e.g. the probe will get to Jupiter no matter what year I launch it). If the predictions don’t work out then the theory must be modified or abandoned.
    Anything less is not science.

    [Response: I am a scientist, not a philosopher, and anything I am talking about here comes directly from the practice of science, not theorising about it. However, as I’m sure the philosophers reading will be happy to know, there is some connection between what scientists actually do and how it is modelled by philosophers. It’s not a perfect model though (of course). – gavin]

    Comment by Watcher — 14 Sep 2013 @ 9:34 AM

  23. This is a personal perspective on the subject from a practicing statistician, and only a very amateur climate guy.

    In the case of Earth’s climate as a source of observations, there’s an additional difficulty. As Slava Kharin observed in slides for the Banff Summer School, 2008,
    “There is basically one observational record in climate research.” (See Slide 5, http://www.atmosp.physics.utoronto.ca/C-SPARC/ss08/lectures/Kharin-lecture1.pdf) And this is an issue. For there is enough variability in Earth’s climate that if the system were “initialized” again, say, 50 years back and somehow magically all the external inputs to the system kept exactly the same, the result would be a little different. There is a debate about how big this “internal variability” is (see Kumar, Chen, Hoerling, Eischeid, “Do Extreme Climate Events Require Extreme Forcings?”, http://dx.doi.org/10.1002/grl.50657), with climate amateur but statistician me coming down on the side of “not as much as you might think”. (My reasons are complicated, and I’ll write them up in an upcoming paper I’m putting on arXiv.org, that being a critical review of the statistics in the recent NATURE CLIMATE CHANGE paper by Fyfe, Gillett, and Zwiers, shared first with those authors. There are different flavors of variability beyond internal and external. See http://hypergeometric.wordpress.com/2013/08/28/overestimated-global-warming-over-the-past-20-years-fyfe-gillett-zwiers-2013/ for more.) But the point is, such variability makes modeling even harder, for not only are the general parameters of the physical system necessary to get right, but, if prediction is a goal, actually TRACKING the actual realization Earth is taking is part of the job. Slava Kharin argues, and I agree, that the one-observational-record reality means a Bayesian approach is the only sensible one. That’s not universally held in geophysical work, however.

    Nevertheless, it’s important, I think, to parse properly what this all means. The reason why we want models is to help understand what data means, and what physical effects are important, how much, and how. We, of course, also want to use them for policy predictions, but using these as predictive devices is a tricky business. Statistically speaking, NONE of that should be taken to mean the long term projections are off in expected values in any significant way. Forcings are forcings, and AB INITIO physics says that extra energy needs to go someplace and be dissipated throughout the (primarily) fluid systems of Earth somehow. The devil is in the latter details, as are the impacts. But they will occur, even if amounts and timings will be off, as they necessarily must be.

    So-called “two-sample comparisons” are tricky in complicated systems. Most direct techniques for doing so assume constant variation over large swaths of samples. That kind of approach tends to give large Regions of Probable Equivalence (ROPEs) which, of course, are less useful than otherwise. When this is done for predicting elections, say, something called “stratification” is used, where observations are qualified by (in this case) spatial extent, time of day, and other auxiliary variables, the response state of the atmosphere is considered as conditioned on these, and the model is evaluated comparably, where it can be. Alas, sometimes doing that leaves few observations or few model runs to compare. That’s okay if a Bayesian approach is used. Not so much otherwise.

    Gavin said all this, but I wanted to second his view, giving mine, as well as put a note about my ongoing hard look at Fyfe, Gillett, and Zwiers.

    Comment by Jan Galkowski — 14 Sep 2013 @ 9:41 AM
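A minimal sketch of the stratification idea Jan describes, using entirely synthetic data: compare model output and pseudo-observations within strata (latitude bands here) and pool the per-stratum differences with inverse-variance weights, rather than running one undifferentiated global comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical strata: (true mean temperature, within-stratum standard deviation).
strata = {"tropics": (25.0, 0.5), "midlat": (12.0, 1.0), "polar": (-5.0, 2.0)}

diffs, weights = [], []
for name, (true_mean, sd) in strata.items():
    obs = rng.normal(true_mean, sd, 30)          # pseudo-observations
    model = rng.normal(true_mean + 0.3, sd, 30)  # "model" with a 0.3-degree bias
    d = model.mean() - obs.mean()                # per-stratum discrepancy
    w = 1.0 / (model.var(ddof=1) / 30 + obs.var(ddof=1) / 30)  # inverse variance
    diffs.append(d)
    weights.append(w)

pooled = np.average(diffs, weights=weights)
print(pooled)   # pooled estimate of the model-minus-obs discrepancy
```

Because the noisy polar stratum is down-weighted, the pooled estimate recovers the common bias more sharply than a single unstratified difference would; this is the sense in which stratification shrinks the region of probable equivalence.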

  24. I have to echo some of the comments made above concerning the reliance on paleo studies.
    The notion that proxy ESTIMATES of temperature 1000 years ago, when there was no anthropogenic CO2, are a superior test of AGW theory to current temperature MEASUREMENTS in the presence of a significant anthropogenic CO2 component strikes me as absurd.

    [Response: Here’s a test: If you read something written by someone who basically knows what they are talking about and it seems absurd to you, ponder – at least for a second or two – that it might be your interpretation that is at fault rather than the statement. If you did (and perhaps followed the links), you would realise that my comment had nothing whatever to do with temperatures 1000 years ago. But nice try. – gavin]

    Comment by Watcher — 14 Sep 2013 @ 9:42 AM

    Models tap into physical-world processes but focus only on a given range of frequencies, so any interpretation or conclusion is prone to human error. Understanding future states better appears to require as much data as possible (which would also increase the error rate). It would help to identify the tipping-point systems of the spectrum better. The conclusiveness and reliability should increase with the data spectrum ratio. I would really like to read another post on CMIP5, combining modelling with all methane forcings.
    But even a small-data model (for instance the analogous-albedo “Daisyworld”) seems to be reliable in predicting trends.

    Also, I find this interesting:

    The treatment of signal and noise in constructing climate scenarios is of great importance in interpreting the results of impact assessments that make use of these scenarios. If climate scenarios contain an unspecified combination of signal plus noise, then it is important to recognise that the impact response to such scenarios will only partly be a response to anthropogenic climate change; an unspecified part of the impact response will be related to natural internal climate variability. However, if the objective is to specify the impacts of the anthropogenic climate signal alone, then there are two possible strategies for climate scenario construction:

    attempt to maximise the signal and minimise the noise;
    do not try to disentangle signal from noise, but supply impact assessments with climate scenarios containing both elements and also companion descriptions of future climate that contain only noise, thus allowing impact assessors to generate their own impact signal-to-noise ratios (Hulme et al., 1999a).

    Link
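    Hulme's second strategy can be sketched in a few lines. A toy illustration (the trend and variability numbers are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2100)
signal = 0.02 * (years - 2000)              # invented anthropogenic trend (K)
noise = rng.normal(0.0, 0.15, years.size)   # stand-in for internal variability

scenario = signal + noise   # scenario containing both elements
companion = noise           # companion description containing only noise

# The impact assessor forms their own signal-to-noise ratio
snr = signal[-1] / companion.std(ddof=1)
print(f"end-of-century signal-to-noise ratio ~ {snr:.1f}")
```

    The point of supplying the noise-only companion is that the assessor, not the scenario builder, decides how to disentangle the two.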

    When it comes to science messaging, I think it would help to point out more often general agreements/predictions and underestimations (and why).

    Comment by prokaryotes — 14 Sep 2013 @ 10:03 AM

  26. re: maps and models

    I think that James Gleick was spot on in his book Chaos:

    “Only the most naive scientist believes that the perfect model is the one that perfectly represents reality. Such a model would have the same drawbacks as a map as large and detailed as the city it represents, a map depicting every park, every street, every building, every tree, every pothole, every inhabitant, and every map. Were such a map possible, its specificity would defeat its purpose: to generalize and abstract. Mapmakers highlight such features as their clients choose. Whatever their purpose, maps and models must simplify as much as they mimic the world.” (Gleick p.278-279)

    Comment by csoeder — 14 Sep 2013 @ 10:05 AM

  27. Joe:
    “Saying that a map doesn’t capture the true landscape or a portrait the true self is very confusing as a lead-in to a discussion of climate models. Maps actually do capture what they are supposed to capture, quite accurately. I don’t know what you mean by a portrait, but it’s not going to be a good analogy for a science like climate science.”

    Huh?! Here Be Dragons…

    “The good cartographer is both a scientist and an artist. He must have a thorough knowledge of his subject and model, the Earth…. He must have the ability to generalize intelligently and to make a right selection of the features to show. These are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject – it requires artistic judgement.”
    – Erwin Josephus Raisz (1893 – 1968)
    —–

    “The foremost cartographers of the land have prepared this for you; it’s a map of the area that you’ll be traversing.”
       [Blackadder opens it up and sees it is blank]
    “They’ll be very grateful if you could just fill it in as you go along.”
      – Blackadder II, British Comedy set in Elizabethan times.
    —–

    “A map is the greatest of all epic poems. Its lines and colors show the realization of great dreams.”
      – Gilbert H. Grosvenor, Editor of National Geographic (1903- 1954)
    —–

    “When our maps do not fit the territory, when we act as if our inferences are factual knowledge, we prepare ourselves for a world that isn’t there. If this happens often enough, the inevitable result is frustration and an ever-increasing tendency to warp the territory to fit our maps. We see what we want to see, and the more we see it, the more likely we are to reinforce this distorted perception, in the familiar circular and spiral feedback pattern.”
      – Professor Harry L. Weinberg, 1959 in Levels of Knowing and Existence: Studies in General Semantics
    —–

    “There is no such thing as information overload, only bad design.”
      – Edward Tufte
    —–

    “If you want a database that has everything, you’ve got it. It’s out there. It’s called reality.”
      – Scott Morehouse, Director of Software Development, ESRI
    —–

    “our earth is a globe
    whose surface we probe
    no map can replace her
    but just try to trace her”
      – Steve Waterman, The World of Maps

    Comment by Radge Havers — 14 Sep 2013 @ 10:08 AM

    “data spectrum ratio”?

    Comment by Hank Roberts — 14 Sep 2013 @ 11:36 AM

  29. Gavin wrote (in reply to #17): “The changes that we have seen so far are not catastrophic on a global scale”

    To paraphrase Tip O’Neill, all catastrophe is local. And when “local” catastrophes are occurring everywhere at once, that’s “global”.

    The millions of people all over the world who have already experienced mass destruction of their homes, livelihoods, food supply and/or water supply as a result of AGW-driven climate change and extreme weather might not agree that the changes we have seen so far are “not catastrophic”.

    Which is, of course, why the primary “mission” of the deniers at this point is to deny any link between global warming and these ongoing and rapidly escalating effects — to argue, in essence, that yes, the world is warming; and yes, we are experiencing exactly the sort of effects that climate science has predicted for a generation would result from that warming; but no, those effects are not the result of the warming.

    So what is causing them? According to the deniers, nothing. They are just our imagination.

    Comment by SecularAnimist — 14 Sep 2013 @ 11:37 AM

  30. I wrote yesterday (#7): “A high degree of confidence is appropriate, given that catastrophic anthropogenic global warming is already occurring right before our eyes, all over the world. Deniers have to work very hard to ignore it.”

    And right on cue, for a perfect example, see the piece in today’s Washington Post by Bjorn Lomborg, perhaps the hardest working denier in show business.

    Sure, your decades of smoking cigarettes have given you lung cancer. And yes, you are coughing up blood. But you can’t attribute every bloody cough to the cancer. There are always bloody coughs every once in a while. It’s just natural variation, you see. And it doesn’t mean you are going to experience “globally catastrophic” effects from the cancer, like, you know, death.

    Comment by SecularAnimist — 14 Sep 2013 @ 11:45 AM

  31. Hank Roberts, Re “Spectrum Ratio” see also “Vautard, R., and M. Ghil (1989): “Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series”, Physica D, 35, 395–424.” Link or Singular spectrum analysis
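    For readers wondering what singular spectrum analysis does mechanically: it embeds the series in a lag window, decomposes the resulting trajectory matrix, and rebuilds the series from the leading components. A bare-bones sketch (the window length and component count are arbitrary choices for the example):

```python
import numpy as np

def ssa_reconstruct(x, window, k):
    """Reconstruct a series from its k leading SSA eigentriples."""
    n = len(x)
    cols = n - window + 1
    # Trajectory (Hankel) matrix of lagged copies of the series
    X = np.column_stack([x[i:i + window] for i in range(cols)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k approximation
    # Diagonal averaging maps the matrix back to a series
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(cols):
        recon[j:j + window] += Xk[:, j]
        counts[j:j + window] += 1
    return recon / counts

# A single oscillation occupies two eigentriples, so k=2 recovers a noisy sine
t = np.linspace(0, 8 * np.pi, 200)
noisy = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=200)
clean = ssa_reconstruct(noisy, window=40, k=2)
```

    Vautard and Ghil used far more careful versions of this machinery on paleoclimatic series; this is only the skeleton.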

    Comment by prokaryotes — 14 Sep 2013 @ 12:25 PM

  32. And Spectral signal-to-noise ratio

    Comment by prokaryotes — 14 Sep 2013 @ 12:32 PM

    “If you want a database that has everything, you’ve got it. It’s out there. It’s called reality.”
      – Scott Morehouse, Director of Software Development, ESRI

    Gads! As a daily user of ESRI software for more than 20 years, I shudder at the thought of their executives being taken as authorities on anything but sales.

    I suppose he thought he was being clever, but the notion of reality as a database is absurd. After decades of producing books along the lines of “Modeling Our World: The ESRI Way,” I guess they believe their own propaganda.

    Comment by Lichanos — 14 Sep 2013 @ 12:36 PM

    I wonder if there is a non-equilibrium, quasi-steady-state, non-reproducible thermodynamic system with a vast number of internal degrees of freedom (other than the terrestrial climate system) which is successfully described by a computational model. If its dimensions are small enough that it fits into the lab and can be studied there in controlled experimental runs, so that the model is verified properly, even better.

    - A system is reproducible if for any pair of macrostates (A;B) A either always evolves to B or never.

    [Response: Define “successfully”. – gavin]

    Comment by Berényi Péter — 14 Sep 2013 @ 1:30 PM

  35. A model doesn’t have to be perfect… just better than the competition. Like, you don’t have to out-run the lion, just the other guy…

    Comment by Martin Vermeer — 14 Sep 2013 @ 2:13 PM

  36. Gavin, you have hit upon one of my favorite topics. There was a related discussion here:

    Naomi Oreskes, Kristin Shrader-Frechette, Kenneth Belitz, “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” SCIENCE Vol. 263, No. 5147 (Feb. 4, 1994), pp. 641-646.

    Note that your argument applies analogously to natural language too.

    To help “climate change communication” in the public debate, I have been trying to create a natural-language-style flow-chart cartoon language that illustrates the lack of precise prediction in complex systems, for purposes of elementary pedagogy. This is not climatology; it is (non-mathematical) general systems, meant to picture different things in the same format.

    I am happy to report that it has had SUCCESS in counteracting denialist arguments in the comments section under the new Lomborg opinion column in the Washington Post. Here is the thing I did:

    http://www.youtube.com/watch?v=SIvcQTXdjTg&list=PLT-vY3f9uw3AcZVEOpeL89YNb9kYdhz3p

    And here is the complete list of the series:

    http://www.youtube.com/playlist?list=PLT-vY3f9uw3AcZVEOpeL89YNb9kYdhz3p

    They are all exactly one-minute long. The “food web” cartoon (#3) takes a similar approach.

    Comment by Lee A. Arnold — 14 Sep 2013 @ 2:52 PM

    With regard to comparisons (evaluation may be the proper word here) of models to satellites, it is worth pointing out that this is a big research field in its own right. There are various approaches for comparing the two, and each approach has its own advantages and limitations.

    For example, 1) one can do a “traditional” comparison whereby one compares means, standard deviations, etc. with satellite-based estimates. This will tell you if a model captures the overall range of values and spatial variability, but will not tell you anything about how well any particular process is simulated.

    2) Another way would be to carry out a process-oriented comparison, wherein one focuses on a set of processes or modes of natural variability (e.g. ENSO, the NAO or the Indian Ocean Dipole) and investigates how well a particular model reproduces the climatology of certain variables during those processes (in reference to a similar climatology from the satellites). But this approach will not have the advantages of the first one.

    3) One could also employ satellite simulators so as to avoid comparing apples to oranges. The simulators take model data for a certain geophysical variable and carefully simulate it the way a particular satellite sensor would have seen that variable. This ensures a fair comparison, and it takes care not only of mismatches and sampling issues between models and satellites, but also of the different sensitivities of different satellite sensors to geophysical variables.

    4) Eventually one could combine any or all of the approaches above, which I think would be the most stringent litmus test of the models.

    All of this, of course, only applies if you have satellite based data sets (which in most cases go back to 1979) for comparison.
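    A minimal sketch of approach (1), with synthetic fields standing in for model output and satellite retrievals (a proper comparison would also area-weight by latitude):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic 2-degree global fields: a "satellite" field and a model
# version of it with random error added
sat = 15.0 + 10.0 * rng.random((90, 180))
model = sat + rng.normal(0.0, 1.5, sat.shape)

# "Traditional" comparison: bias, RMS difference, pattern correlation
bias = model.mean() - sat.mean()
rmsd = np.sqrt(np.mean((model - sat) ** 2))
pattern_r = np.corrcoef(model.ravel(), sat.ravel())[0, 1]
print(f"bias={bias:+.2f}  rmsd={rmsd:.2f}  pattern r={pattern_r:.2f}")
```

    As Abhay says, matching these bulk statistics says nothing about whether any particular process is simulated well.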

    Comment by Abhay — 14 Sep 2013 @ 2:58 PM

  38. At this point it would not seem out of place to quote Richard Hamming:
    “The purpose of computing is insight, not numbers.”

    The same can be said of models. A model need not even be the best to accomplish this–Tamino’s 2-box model is a case in point, as its simplicity allows the important contributors to climate to be isolated and assessed.
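    For readers unfamiliar with the genre, a two-box model fits in a dozen lines. This is a generic sketch, not Tamino's fitted version; all parameter values here are illustrative, not estimated:

```python
import numpy as np

def two_box(forcing, dt=1.0, C1=8.0, C2=100.0, lam=1.2, gamma=0.7):
    """Mixed-layer (T1) and deep-ocean (T2) boxes, Euler-stepped in years.
    C in W*yr/m^2/K; lam (feedback) and gamma (exchange) in W/m^2/K."""
    T1 = T2 = 0.0
    T1_out = []
    for F in forcing:
        dT1 = (F - lam * T1 - gamma * (T1 - T2)) / C1
        dT2 = gamma * (T1 - T2) / C2
        T1 += dt * dT1
        T2 += dt * dT2
        T1_out.append(T1)
    return np.array(T1_out)

# Step forcing of 3.7 W/m^2 (roughly a CO2 doubling): the surface box
# warms quickly, then creeps toward equilibrium F/lam as the deep ocean warms
T = two_box(np.full(1000, 3.7))
```

    The simplicity is the point: with two heat capacities and two coupling parameters you can already separate fast and slow contributors to the response.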

    On the other hand, the denialist model… Oh, yeah. There is no denialist model. And you guys wonder why no one intelligent takes you seriously?

    Comment by Ray Ladbury — 14 Sep 2013 @ 3:04 PM

  39. [Response: Define “successfully”. – gavin]

    In case the system can be studied experimentally in a controlled lab environment, definition of “successfully” is straightforward. Both the experiment and model simulation can be run as many times as necessary with controlled parameters. As the system is supposed to be non-reproducible, only statistics of macroscopic state variables are comparable, of course, but with enough runs that can be made to converge to an arbitrary degree, provided the model is “correct”. If it is not, divergence is clearly visible, that is, the model is falsified.

    [Response: Interesting, but not relevant. This presupposes a perfectly known set of basic equations that we can test for convergence as scales get arbitrarily small. That isn’t the case for climate models – too many magnitudes of scale between cloud microphysics or under-ice salt fingering and grid box averages. – gavin]

    In case of modelling a single run of a unique physical instance, I have no idea what “successfully” means.

    [Response: Similar to your first point – coherent statistics over time periods, robust patterns of teleconnections, process by process similarities, coherent emergent properties, quantitative matches in response to large perturbations (volcanoes, orbital forcing, continental configurations etc.). – gavin]

    However, theories in physics are usually supposed to hold for a wide class of systems, some of which may be studied in the lab. In that case it is a must to do so, because it is the easiest way to verify a theory. This is what one would expect in this particular branch of nonequilibrium thermodynamics, but I must admit I am ignorant enough to be unaware of any such attempt.

    Can you give a pointer? Or explain why it is not done, should that be the case.

    [Response: Only specific processes can be examined in the lab. Radiative transfer, aerosol formation, some aspects of cloud microphysics, ocean diffusion etc. – but the real world has many good experiments that the numerical models can be evaluated against (some mentioned above). – gavin]

    BTW, for reproducible systems we know quite a lot. Unfortunately the terrestrial climate system does not belong to this class.

    Journal of Physics A: Mathematical and General Volume 36 Number 3
    Roderick Dewar 2003 J. Phys. A: Math. Gen. 36 631 doi:10.1088/0305-4470/36/3/303
    Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states

    Comment by Berényi Péter — 14 Sep 2013 @ 3:09 PM

  40. Re: Model error.

    There are model errors and there are model errors. However, we’re not talking about one or a few mismatches between model predictions and observational data. Instead we are talking about the wholesale failure of the models to “predict” global temperatures, despite the undeniable increase in CO2, upon which the IPCC bases its reports and forecasts.

    [Response: Not really.

    – gavin]

    All of the models ca 2007 that the IPCC used to forecast climate change predicted a steady increase in temperature (based, as they were, on the assumption that CO2 is the primary driver of temperature) and yet global temperatures have remained essentially flat since then.

    [Response: If you reason from false premises, you are very unlikely to conclude anything useful. Point 1. Models do not predict a ‘steady increase’ in temperature, they predict many ups and downs and single runs often show a decade with insignificant OLS trends. Point 2. Models are not built with the ‘assumption’ that CO2 is the primary driver of temperature change. The models are actually completely ecumenical about reasons for climate changes – they will change their climate as a function of volcanoes, the sun, aerosol pollution, deforestation, ozone depletion as well as the main greenhouse gases. – gavin]
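    Gavin's first point is easy to check with synthetic data: even with a steady forced trend, white-noise variability alone produces decades with flat or negative OLS trends. The numbers below are invented and, if anything, understate real-world decadal variability, which is autocorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1980, 2030)
forced = 0.018 * (years - years[0])       # invented trend: 0.18 K per decade
# Twenty synthetic "runs": same forced trend, independent interannual noise
runs = forced + rng.normal(0.0, 0.12, (20, years.size))

# OLS slope over every 10-year window of every synthetic run
flat = 0
total = 0
for run in runs:
    for i in range(years.size - 9):
        slope = np.polyfit(years[i:i + 10], run[i:i + 10], 1)[0]
        total += 1
        flat += slope <= 0
print(f"{flat} of {total} decade-long windows show a zero or negative trend")
```

    None of those flat windows falsifies the trend that was built into every run by construction.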

    In short, the models have been “falsified.” Unfortunately, science — Big Science — is not Popperian but Kuhnian, and all that matters is the defense of the prevailing Paradigm, and the data be damned.

    [Response: Oh dear. You didn’t read any of the top post did you? Please try again. – gavin]

    Comment by Hoya Skeptic — 14 Sep 2013 @ 4:14 PM

  41. #33 shuddering on a high horse:

    “I suppose he thought he was being clever, but the notion of reality as a database is absurd.”

    Literal minded much? He was being ironic. It’s basic cartography about selectivity, expressed in short form; a variant of a pretty standard line in geography. I’m surprised that you would miss the thrust of it after twenty years in the business…

    Comment by Radge Havers — 14 Sep 2013 @ 4:47 PM

  42. Thanks Gavin for the discussion.

    Your reference to the Paleo is understood, however as with models there must be some inherent uncertainty in the different methodologies (particularly the transient constraints as recent data should also be accounted for in them).

    [Response: Of course. – gavin]

    As you say in a previous post on sensitivity … “There are three main methodologies that have been used in the literature to constrain sensitivity: The first is to focus on a time in the past when the climate was different and in quasi-equilibrium, and ESTIMATE the relationship between the relevant forcings and temperature response (paleo constraints). The second is to find a metric in the present day climate that WE THINK Is coupled to the sensitivity and for which we have some empirical data (these could be called climatological constraints). Finally, there are constraints based on changes in forcing and response over the recent past (transient constraints). There have been new papers taking each of these approaches in recent months.” (My capitalisation)

    My point about all this really is: how are any of these actually tested beyond theory? I fully appreciate that the shortish time frame of the mismatch makes it a problem to assert anything is wrong yet, but could it do just that if it persistently continues? i.e. are the paleo inferences ever testable over decadal timeframes? Or any timeframes?

    [Response: Be clear here that ‘the theory of climate’ is encapsulated in the GCMs (as best it can be given current technology). There is substantial structural uncertainty about the details of what that implies. We look to the real world and the paleo record to constrain those aspects of the climate system that have non-negligible uncertainties (most often climate sensitivities in a general sense). However, we don’t calibrate the emergent properties of the GCMs to the emergent properties derived from observations – they stay (more or less) as evaluation targets. For optimum evaluation purposes you obviously want true out-of-sample evaluations – and many aspects of paleo provide that (since models are not tuned to ice age conditions e.g.), as do future projections (dependent on reasonable scenarios of relevant forcings). True predictions – for instance of the consequences Pinatubo prior to impacts happening, longer term trends (i.e. post 1980s) have all proven skillful. – gavin]

    Ray, who decides what the “insight” from model mismatch should be?

    Comment by NickC — 14 Sep 2013 @ 6:22 PM

  43. What evidence is there for the assumption that climate sensitivity based on paleo record is applicable to present day? I would expect it to vary quite significantly based on factors such as ice cover, ocean currents, biosphere etc.

    [Response: Good question. The answer is that the variation is apparently less than one might think – some discussion of this in the PALAEOSENS (2012) paper. – gavin]

    Comment by Astar — 14 Sep 2013 @ 7:13 PM

  44. I’m quite fond of Gaiman (who lives not far from me), and Quine and Feynman are worthy intellectuals to bring into the discussion. But you missed one bloke with a most fitting quote for your ruminations–The philosopher Alfred Korzybski who is generally credited with first stating, “The map is not the territory.”

    And of course there is Magritte’s “Ceci n’est pas une pipe.”

    But as to your conclusion, you state: “The sea ice loss rate seems to be very sensitive to model resolution and has improved in CMIP5 – implicating aspects of the model structure as the main source of the problem.”

    Any ideas what exact aspects of the “model structure” might have been “the main source of the problem.”

    [Response: I haven’t looked into it myself, and I’m not aware of any papers really going into the details (other than those that remark on the improvements). In our own modelling, we have improved the calculations to reduce the amount of numerical diffusion (which helped a lot), and increased resolution (which also helped), but changes to the ocean model also have a big impact, as do Arctic cloud processes and surface albedo parameterisations, so it gets complicated fast. – gavin]

    Comment by wili — 14 Sep 2013 @ 7:47 PM

  45. NickC,
    Who decides on the insight?

    The relevant community of experts, of course. Who is in a better position to appreciate the strengths and weaknesses of a model and where it is most likely to bear fruit if tweaked. And ultimately, if the mismatch is sufficiently severe, the same experts will develop a different model. That is how science works.

    Comment by Ray Ladbury — 14 Sep 2013 @ 8:13 PM

  46. @ #41 on irony:

    I’ve seen too many people taking the map for the terrain. Many of them armed with computer models, often produced with ESRI software.

    And while we’re on irony, I once had a client ask me why we couldn’t have a spatial database at a scale of 1:1. After all, it was a computer model… Borges would have laughed.

    Comment by Lichanos — 14 Sep 2013 @ 9:33 PM

  47. Relating to comment 20 and Gavin’s Point 1 in response to comment 40:

    There may be reason to strongly suspect that in any sufficiently complicated dynamical system model (such as climate) with stochastic parameters (e.g., exactly when and where a lightning strike starts a major wildfire or a major submarine earthquake perturbs ocean circulation in a region or a major volcanic eruption introduces stratospheric aerosols), it is almost certain that any given run of the model will have periods of significant deviation from the mean of multiple runs. In other words, we should expect the “real” climate to significantly differ from ensemble means.

    The paper V. I. Klyatskin, “Clustering of a positive random field as a law of Nature” Theoret. Math. Phys. 176(3):1252-1266 (Sep 2013) treats much simpler models, but it rigorously establishes the conditions under which such behavior occurs in the simpler models.

    Abstract: In parametrically excited stochastic dynamical systems, spatial structures can form with probability one (clustering) in almost every realization because of rare events occurring with a probability that tends to zero. Such problems occur in hydrodynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics.

    Keywords: intermittency, Lyapunov characteristic parameter, dynamical localization, statistical topography, clustering
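    Bill's point that a single realization should be expected to wander from the ensemble mean can be illustrated with AR(1) surrogates (all parameters here are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(11)
n_runs, n_years = 100, 50
phi, sigma = 0.6, 0.1                 # invented AR(1) persistence and noise
forced = 0.02 * np.arange(n_years)    # common forced trend (K)

# Each run = common forced trend + independent persistent internal variability
runs = np.zeros((n_runs, n_years))
for t in range(1, n_years):
    runs[:, t] = phi * runs[:, t - 1] + rng.normal(0.0, sigma, n_runs)
runs += forced

ens_mean = runs.mean(axis=0)
# Each run's largest excursion from the ensemble mean over the record
max_dev = np.abs(runs - ens_mean).max(axis=1)
print(f"typical single-run max deviation: {max_dev.mean():.2f} K")
```

    Every run here shares the identical forcing, yet each spends stretches well away from the ensemble mean, which is why "the real climate differs from the ensemble mean" is the expectation, not an anomaly.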

    Comment by Bill Everett — 15 Sep 2013 @ 1:36 AM

    Radge, I’m not sure about your examples. How do poetic and figurative quotes about cartography tell us something about climate science? I shudder to think that climate scientists think of themselves as artists or as having broad interpretive license, and I’m a social scientist. (I was talking about Rand McNally road atlases and the like. We’re not going to discover that we were wrong about the location of Orlando.)

    Gavin, is the issue of model mismatch related at all to confidence levels? I’m thinking of things like the last IPCC report saying that they were 90% confident that the warming of the 20th century was mostly caused by human activity. I assume a similar confidence level about future anthropogenic warming.

    Is there a post on how such confidence levels are calculated? I’m familiar with the statistical methods that social scientists use, like regression, ANOVA, MLM/HLM, SEM, and the PCA stuff that came up with Mann. We never generate confidence levels around a prediction that spans a large body of work, except maybe some Bayesian stuff.

    [Response: This particular attribution issue was discussed in depth in a post last year. That is somewhat separate to future projections though. – gavin]

    Comment by Joe — 15 Sep 2013 @ 3:17 AM

  49. We have many studies presenting the projections from GCMs under various forcing scenarios where unforced variability is simulated, and we have a few studies (not many I think) which have a model reproduce the *actual* forcings and unforced variability and see how well the output matches observations (a recent one by Yu Kosaka and Shang-Ping Xie being a case in point). I don’t know of any studies where the GCM runs are re-done with real-world forcings and unforced variability to pin down exactly where the original projections differed from reality. Presumably this is done by modellers to improve their models but are the conclusions published? They might say for example, “Ah yes, run number 12 in GCM model XYZ was a little too warm but that’s because real world forcings were a little lower than in the projections – the physics was correct, it was the scenario that wasn’t quite right”. In other words we want to know whether projections were off because the inputs haven’t matched reality or because the physics isn’t quite right. Hope that makes sense!

    [Response: In broad terms this is correct. We are currently exploring the impacts that updates in the forcings have on the CMIP5 model runs and exploring the range of uncertainty where we don’t have solid information. This takes time though. – gavin]

    Comment by Icarus62 — 15 Sep 2013 @ 3:52 AM

  50. A map as a model of physical space has a characteristic perhaps worth mentioning here.

    The human map-user, on getting themselves lost within the physical space, will consult the map and often conclude that they are not lost but that the map is deficient in some way and thus continue ahead oblivious to their actual location.
    This can become remarkably absurd before the logic of the situation becomes apparent. I hear sensible people say that they managed to walk miles in the wrong direction, reassured by minor features that they were on-route and ignoring the obvious discrepancies all around them. Indeed, I remember myself once deciding that I was on-route even though the stream I was following was flowing in the wrong direction!
    Could there be a lesson in this for climatological understanding? If so, does it apply to you, to “us”? Or does it apply to the other lot? I know which I’d put my money on.

    Comment by MARodger — 15 Sep 2013 @ 5:23 AM

  51. Under the heading of ‘Model Error’, shouldn’t you include as well the possibility that the model omits or misrepresents elements of the subject?

    [Response: I did. In the first line of the section. – gavin]

    Comment by Lichanos — 15 Sep 2013 @ 9:17 AM

  52. I’ve been (trying to) follow all this with baffled admiration, but can’t resist a brief lay interpolation:

    “If at first you don’t succeed, try, try again”

    Seems to me climate models have done a dam’ good job of continuously observing and upgrading and finding new ways to measure and approximate. Difficult, but how else do people suggest we try and get a grip? A whole lot of people are good at tearing down (an adolescent exercise), but if they have a contribution, how about buckling up and trying to help? It might feel like work, but work is good for you!

    Comment by Susan Anderson — 15 Sep 2013 @ 11:34 AM

  53. Climate consensus will probably come no time soon. After all, Australia, one of the more progressive countries on this earth (it’s been illegal NOT to vote there since 1925) has recently changed governments partly because their carbon tax was so unpopular. My simple regression-based statistical climate model predicts global carbon dioxide, surface temperature & sea level at yearly time steps. It is now calibrated on actual 1959-2012 data & its results are generally in the same ball park as the IPCC. When recalibrated on real 1959-2012 plus fake 2013-2027 data, assuming nearly flat surface temperatures for 2013-2027 (just like 1998-2012), the result is that the 21st century warming estimate goes from about 2.74 deg C (4.93 deg F) to 1.86 deg C (3.35 deg F). That’s still a significant empirical climate sensitivity to carbon dioxide. Results available on request: rgquayle@gmail.com
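    For readers curious what a regression model of the kind Rob describes looks like, here is a toy version on synthetic data (the CO2 range is Mauna-Loa-like but the temperatures are invented; the fit recovers whatever sensitivity was built into the fake data and says nothing about the real value):

```python
import numpy as np

rng = np.random.default_rng(5)
co2 = np.linspace(316.0, 394.0, 54)           # ppm, a 1959-2012-like range
x = np.log2(co2 / co2[0])                     # doublings of CO2 since start
temp = 2.0 * x + rng.normal(0.0, 0.1, 54)     # fake anomalies (K), built-in 2 K

# OLS slope = the empirical "warming per CO2 doubling" this model class yields
beta = np.polyfit(x, temp, 1)[0]
print(f"empirical sensitivity ~ {beta:.2f} K per doubling")
```

    Recalibrating on data with a flat recent stretch, as Rob describes, simply drags this single slope downward, which is the known weakness of purely empirical fits relative to physical models.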

    Comment by Rob Quayle — 15 Sep 2013 @ 1:32 PM

    How do you analyse the mismatch between model simulations of the Pliocene climatic optimum (constrained by proxy-derived sea surface temperatures) and the proxy records of temperature and precipitation for that period? PS: especially the mismatch in precipitation. One example: High resolution climate and vegetation simulations of the Late Pliocene, a model-data comparison over western Europe and the Mediterranean region, A. Jost et al., Clim. Past, 5, 585-606, 2009.

    [Response: Not yet clear. There is much that could be improved in model set-up for Pliocene climates – CO2 is only approximate, CH4 unknown, aerosols unknown, land surface types approximated etc. There are clear mismatches – particularly in the equator-to-pole temperature gradient which points to some kind of missing physics relevant to warm climates. But then the observed data are not perfect by any means and span a long time period (multiple orbital cycles) and so there may be some apples/oranges comparisons going on. For some more recent discussions try Lunt et al (2009) and Lunt et al (2012). – gavin]

    Comment by jdeuxf — 15 Sep 2013 @ 2:11 PM

  55. #52 Susan Anderson

    “I’ve been (trying to) follow all this with baffled admiration”.

    Stick with it Susan. All will come into clear focus before very long.

    Comment by simon abingdon — 15 Sep 2013 @ 2:49 PM

  56. Dear Gavin and all, I advise policy-makers within the GOP on environmental matters. It’s been a struggle, but I’d been making some headway in this area, thanks to excellent remarks by senior leaders, especially former Sec of State George Schultz.

    I cannot begin to tell you the damage that has occurred by the past exaggeration of climate predictions. I understand very well the issues of model accuracy & believe in “climate disruption” (the proper term), particularly ocean acidification.

    However, we are losing the policy argument due to past claims vs. present results. It is time to reboot the entire process. Reach out to skeptics and engage the public. Your intransigence is harming the entire planet. Thank you, Charles Stack, MPH

    [Response: My ‘intransigence’ is harming the planet? Not journalists lying, politicians denying, companies polluting, or the whole host of perverse incentives society has created that make it cheaper to do the wrong thing for the environment? None of those things matter compared to my blogging? Phew – I had no idea! Thanks. – gavin]

    Comment by Charles Stack, MPH — 15 Sep 2013 @ 3:17 PM

  57. Landmark Report of an Ad Hoc Study Group on Carbon Dioxide and Climate – Woods Hole, Massachusetts to the Climate Research Board, Assembly of Mathematical and Physical Sciences, National Research Council, NATIONAL ACADEMY OF SCIENCES | Washington, D.C. 1979

    We have examined the principal attempts to simulate the effects of increased atmospheric CO2 on climate. In doing so, we have limited our considerations to the direct climatic effects of steadily rising atmospheric concentrations of CO2 and have assumed a rate of CO2 increase that would lead to a doubling of airborne concentrations by some time in the first half of the twenty-first century. As indicated in Chapter 2 of this report, such a rate is consistent with observations of CO2 increases in the recent past and with projections of its future sources and sinks. However, we have not examined anew the many uncertainties in these projections, such as their implicit assumptions with regard to the workings of the world economy and the role of the biosphere in the carbon cycle. These impose an uncertainty beyond that arising from our necessarily imperfect knowledge of the manifold and complex climatic system of the earth.

    When it is assumed that the CO2 content of the atmosphere is doubled and statistical thermal equilibrium is achieved, the more realistic of the modeling efforts predict a global surface warming of between 2°C and 3.5°C, with greater increases at high latitudes. This range reflects both uncertainties in physical understanding and inaccuracies arising from the need to reduce the mathematical problem to one that can be handled by even the fastest available electronic computers. It is significant, however, that none of the model calculations predicts negligible warming.

    The primary effect of an increase of CO2 is to cause more absorption of thermal radiation from the earth’s surface and thus to increase the air temperature in the troposphere. A strong positive feedback mechanism is the accompanying increase of moisture, which is an even more powerful absorber of terrestrial radiation. We have examined with care all known negative feedback mechanisms, such as increase in low or middle cloud amount, and have concluded that the oversimplifications and inaccuracies in the models are not likely to have vitiated the principal conclusion that there will be appreciable warming. The known negative feedback mechanisms can reduce the warming, but they do not appear to be so strong as the positive moisture feedback. We estimate the most probable global warming for a doubling of CO2 to be near 3°C with a probable error of ±1.5°C. Our estimate is based primarily on our review of a series of calculations with three-dimensional models of the global atmospheric circulation, which is summarized in Chapter 4. We have also reviewed simpler models that appear to contain the main physical factors. These give qualitatively similar results.

    One of the major uncertainties has to do with the transfer of the increased heat into the oceans. It is well known that the oceans are a thermal regulator, warming the air in winter and cooling it in summer. The standard assumption has been that, while heat is transferred rapidly into a relatively thin, well-mixed surface layer of the ocean (averaging about 70 m in depth), the transfer into the deeper waters is so slow that the atmospheric temperature reaches effective equilibrium with the mixed layer in a decade or so. It seems to us quite possible that the capacity of the deeper oceans to absorb heat has been seriously underestimated, especially that of the intermediate waters of the subtropical gyres lying below the mixed layer and above the main thermocline. If this is so, warming will proceed at a slower rate until these intermediate waters are brought to a temperature at which they can no longer absorb heat.

    Link

    Comment by prokaryotes — 15 Sep 2013 @ 3:23 PM

  58. > Carbon Dioxide and Climate: A Scientific Assessment

    that’s the Charney report.
    Cited by 192 other papers (links in Scholar)

    Comment by Hank Roberts — 15 Sep 2013 @ 4:06 PM

  59. Charles,

    You wouldn’t suppose misrepresentation of past claims has anything to to with it? A deliberate disinformation campaign using methods honed and tested by the tobacco companies? I’d find your note more ‘credible’ if you gave example of a past claim made by say Gavin, and why it was wrong. As a man who claims to be advising ‘Republican’ leadership, surely you have enough command of the facts to substantiate your allegations.

    captcha coincidence: lityHpe formulate (which I read as Lity Hype)…let’s see if I got it right!

    Comment by Dave123 — 15 Sep 2013 @ 4:54 PM

  60. “[Response: Interesting, but not relevant. This presupposes a perfectly known set of basic equations that we can test for convergence as scales get arbitrarily small. That isn’t the case for climate models – too many magnitudes of scale between cloud microphysics or under-ice salt fingering and grid box averages. – gavin]”
    “[Response: Only specific processes can be examined in the lab. Radiative transfer, aerosol formation, some aspects of cloud microphysics, ocean diffusion etc. – but the real world has many good experiments that the numerical models can be evaluated against (some mentioned above). – gavin]”

    Well, I think I have still not managed to get my point across. It is not about climate science as such; it is about physics. If we were dealing with a reproducible system, the MEP principle would hold along with the fluctuation theorem (see Dewar 2003). Those would put strict constraints on any computational model; one could literally test model output against them.

    However, the climate system is clearly not reproducible; it is chaotic. Indeed, if it were reproducible, it would have to linger around a Maximum Entropy Production state. But it does not, for most of the entropy production on Earth happens when shortwave radiation is absorbed and thermalized. That is, by decreasing the (rather high) Bond albedo of Earth one could increase the rate of entropy production, which is inconsistent with a MEP state.

    Therefore the rules of the game should be different for some non-reproducible systems. Please note the shortwave albedo of Earth has large spatio-temporal variations, but its annual global average is restricted to a narrow range, even though it is determined not by simple material constraints but by an intricate interplay between many internal degrees of freedom. And the value it fluctuates around is very different from the one we would expect for a non-chaotic, non-equilibrium, quasi-steady-state thermodynamic system whose energy exchange with its environment is dominated by radiation. Mercury is black; Earth is not.

    Questions:
    1. Do you believe the MEP principle can’t be generalized to another, deeper extremum principle which would hold for the class of nonequilibrium thermodynamic systems terrestrial climate belongs to? If so, why?
    2. Are the multiscale properties of climate you have mentioned not connected to an SOC (self-organized criticality) state? In the vicinity of a critical state one would expect scale-invariant behavior in all state variables. Is it seen in climate?

    One could, of course, take a different track and delve deeper into Dewar 2003 to see where reproducibility comes into the picture and how far one can get without it.

    However, even in that case one would need actual experiments to verify theoretical expectations, that is, a model that would fit into the lab.

    If an extremum principle, valid for climate, could be found and verified experimentally, that would make testing computational climate models much easier.

    Consider the case of celestial mechanics. With a naive computational model, coding Newton’s laws in a straightforward manner, one gets into trouble soon. We do know both mechanical energy and angular momentum are conserved quantities in any setup (with no dissipative processes, of course). However, due to subtle computational errors which add up, the model lacks these properties, which means it should be rejected as a device to compute future states of the system. On the other hand, it also shows the way to improve the model, that is, to take care of conservation laws at each algorithmic step.

    Similarly, an underlying principle, if one exists, could take care of some multiscale phenomena in climate models, greatly reducing the need to guess at parameterization schemes while improving model quality.

    [Response: Nice thought. If such principles can be found, they might indeed be useful. However, I am not optimistic – the specifics of the small scale physics (aerosol indirect effects on clouds, sea ice formation, soil hydrology etc.) are so heterogeneous that I don’t see how you can do without calculating the details. The main conservation principles (of energy, mass, momentum etc) are already incorporated, but beyond that I am not aware of anything of this sort. – gavin]

    Comment by Berényi Péter — 15 Sep 2013 @ 5:09 PM
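The celestial-mechanics illustration in the comment above can be made concrete. Below is a minimal sketch (not from any climate model; the orbit, step size, and function names are all invented for illustration) comparing a naive forward-Euler integration of a Kepler orbit, which steadily accumulates energy error, with a symplectic leapfrog scheme whose update rule respects the conservation structure:

```python
import math

def accel(x, y):
    """Gravitational acceleration for GM = 1 (arbitrary units)."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    """Total mechanical energy: kinetic plus potential."""
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

def euler_step(x, y, vx, vy, dt):
    # Naive forward Euler: position and velocity both updated from the
    # same old state; energy errors accumulate step after step.
    ax, ay = accel(x, y)
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

def leapfrog_step(x, y, vx, vy, dt):
    # Symplectic kick-drift-kick: preserves the geometric structure of
    # the dynamics, so energy errors stay bounded instead of growing.
    ax, ay = accel(x, y)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
    x, y = x + dt * vx, y + dt * vy
    ax, ay = accel(x, y)
    return x, y, vx + 0.5 * dt * ax, vy + 0.5 * dt * ay

def drift(step, n=20000, dt=0.01):
    """Absolute energy error after n steps, starting from a circular orbit."""
    state = (1.0, 0.0, 0.0, 1.0)   # circular orbit, E = -0.5
    e0 = energy(*state)
    for _ in range(n):
        state = step(*state, dt)
    return abs(energy(*state) - e0)

print(drift(euler_step))     # grows steadily with n
print(drift(leapfrog_step))  # stays small and bounded
```

The fix is not higher precision but a scheme built to respect the conservation law, which is exactly the commenter’s point about wiring known constraints into the algorithm itself.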

  61. Joe,

    For the purposes of communicating basic ideas of modeling to a broad audience, maps would seem to offer a ready and intuitive lead-in. They are models, after all, that people use in everyday life. But maybe that goes out the window if your audience insists on taking maps (and portraits for that matter) for granted. I suppose in that case you might as well be talking about sofas and dust ruffles.

    Still, don’t social scientists create maps, and aren’t they capable of handling figures of speech? And even if all you can call to mind are road maps, what, you’ve never encountered examples that are illegible, contain misinformation, are out of date, etc.? You’ve never heard of people getting lost and driving into a ditch following their GPS? I certainly have. Being able to locate Orlando in Florida is a pretty low standard. By contrast, is not the London Underground map useful, elegant and wondrous?

    Nor is interpretation anything to sneer at. For instance, consider the process of geologic mapping where interpretation can be an ongoing part, from beginning to end, of making sense of apparently chaotic terrain. It’s a fair analogy to portraiture– if you’ve ever tried your hand at it and understand that analogies are by definition imperfect. So there is real beauty in some maps, as there is in some models, theories, solutions. Great!

    Unclench. ‘Mapping’ in a colloquial sense isn’t just about making maps, it’s about how humans make sense of the world. 

    And btw, what’s with everybody “shuddering” at this and that already? I admit I have a soft spot for stroppy nonsense, but I don’t get all the banal melodrama.

    Comment by Radge Havers — 15 Sep 2013 @ 6:23 PM

  62. @ 51 – Gavin’s response:

    I asked:
    -Under the heading of ‘Model Error’, shouldn’t you include as well the possibility that the model omits or misrepresents elements of the subject?

    and got this:
    ~[Response: I did. In the first line of the section. – gavin]

    This is very interesting, because he is referring, I believe, to this text of his in the post:

    ~”There are of course many model errors. These range from the inability to resolve sub-grid features of the topography, approximations made for computational efficiency, the necessarily incomplete physical scope of the models and inevitable coding bugs.”

    Now, grid resolution is a mechanical problem that can be improved with computing power, and has been, although I guess there is a limit, unless we go back to Borges & Morehouse and build a 1:1 scale model.

    Presumably approximations made for efficiency may drop away as computers get more powerful and programming tools get more sophisticated.

    Coding bugs? I see those as simple blunders, hard to catch sometimes, but with time…

    “The necessarily incomplete physical scope of the models,” is…what exactly? Elements of the total system that are left out because there are only 24 hours in a day? This is the only bit of GS’s text that deals, obliquely, with my question. What he is describing are the inevitable limitations on models, things we all accept in a GOOD model. But what if the model is simply incorrect? Wrong? Makes the wrong connections? Is wrong at a conceptual level about what are the forcings, and how they interact? With such a complex system, such an error is not hard to imagine, and GS implicitly accepts its possibility.

    The entire discussion of error in this post, however, is based on the assumption that the model is fundamentally correct, even though it is wrong to some extent, as it must be. What if that assumption is wrong? The fact that it performs well in hindcasting simply makes it plausible, not correct. I was simply raising this possibility, and GS seems to think it is out of the question.

    Comment by Lichanos — 15 Sep 2013 @ 6:33 PM

  63. Simon A, I am significantly less baffled than you would like to imply, unlike yourself. Humility is not a sin, but pretense is.

    I would maintain a dignified silence except I just picked up the terrific link in the addendum. I love maps! Of course they are limited, but they present such a nice example of useful metaphor and that is a gorgeous collection of well crafted wordsmithing:

    http://www.realclimate.org/index.php/archives/2013/09/on-mismatches-between-models-and-observations/comment-page-1/#comment-408422

    Comment by Susan Anderson — 15 Sep 2013 @ 7:52 PM

  64. However, we are losing the policy argument due to past claims vs. present results.

    This (if I understand your statement correctly) is one of the core problems with the interaction between climate modeling and public policy. Modeling-based claims (or predictions) about future climate often pertain to trends that can only be unequivocally observed on 10-year to 100-year timescales. (E.g. CO2 is expected to rise and temperature is expected to rise over the next decades.) However, the climate, and thus the observational record, is sensitive to many factors, some of which are difficult to predict with any certainty in the short-term, e.g. volcanoes (cooling by aerosols), land-cover (changes to cloud-cover from transpiration), solar intensity, and stochastic internal dynamics, like El Nino/La Nina. Indeed, these (short-term) factors may oppose long-term trends. (Take a look at the jagged terrain of the 20th century temperature record.) Herein lies the rub.

    Policy-makers, and the people they represent, experience the world in real time. Weather is extremely important, and immediate, to almost everyone in the world. Average climate trends are not. When asked “how’s the weather?”, who responds with the average rate of temperature change over the last 30 years?

    As trivial as this may seem to some scientists, I believe it is one of the central reasons that there is little government action on climate change (or on most long-term problems, for that matter).

    In short, we are ill-equipped to imagine geological (earth) time or to think on the scale of an entire planet. As long as this situation continues, I see little reason to expect a change in the political dialogue.

    Comment by missoula — 15 Sep 2013 @ 9:27 PM
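The timescale point in #64 is easy to demonstrate with a synthetic record. In the sketch below (a toy series; the 0.02 °C/yr trend and the two oscillatory “internal variability” terms are invented numbers, not observations), some 10-year windows trend flat or downward even though the century-scale trend is unambiguous:

```python
import math

def ols_slope(ys):
    """Ordinary least-squares slope of ys against 0, 1, 2, ... (per year)."""
    n = len(ys)
    tbar = (n - 1) / 2.0
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in enumerate(ys))
    den = sum((t - tbar) ** 2 for t in range(n))
    return num / den

WARMING = 0.02  # assumed underlying trend, deg C per year (invented)

def temperature(t):
    # Trend plus two oscillatory terms standing in for ENSO-like and
    # multidecadal internal variability (periods and amplitudes invented).
    return (WARMING * t
            + 0.15 * math.sin(2 * math.pi * t / 3.7)
            + 0.15 * math.sin(2 * math.pi * t / 21.0))

series = [temperature(t) for t in range(100)]

century_trend = ols_slope(series)
decadal_trends = [ols_slope(series[i:i + 10]) for i in range(90)]

print(round(century_trend, 4))        # close to the imposed 0.02 deg C/yr
print(round(min(decadal_trends), 4))  # some decades trend flat or negative
```

This is why a decade of flat temperatures in a noisy record says little by itself about the underlying forced trend.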

  65. Gavin,

    I’d be curious to hear more about the further complications you talk about here:

    I haven’t looked into it myself, and I’m not aware of any papers really going into the details (other than those that remark on the improvements). In our own modelling, we have improved the calculations to reduce the amount of numerical diffusion (which helped a lot), and increased resolution (which also helped), but changes to the ocean model also have a big impact, as do Arctic cloud processes and surface albedo parameterisations, so it gets complicated fast.

    It seems like this should be an active area of research, so it’s surprising that there are no papers.

    I would imagine that the understanding of “structural” model lacunas is itself evolving. For example, how many climate models include the bacterial dynamics associated with cloud formation in the ocean? (http://ucsdnews.ucsd.edu/pressreleases/biological_activity_alters_the_ability_of_particles_from_sea_spray_to_seed_clouds) This might be an extreme example, but it seems like there is a certain hubris in discounting the uncertainty associated with undiscovered climate forcings and dynamics, given that understanding in this area is clearly advancing.

    Comment by missoula — 15 Sep 2013 @ 9:48 PM

  66. @ Charles Stack

    in·tran·si·gent also in·tran·si·geant (ĭn-trăn′sə-jənt, -zə-)
    adj.
    Refusing to moderate a position, especially an extreme position; uncompromising.

    I don’t think that’s an appropriate contention here at all; scientific questions aren’t resolved by compromise or by moderating a position, they are resolved by evidence. While I would characterize myself as unconvinced with respect to the magnitude of the expected warming and the timeframe for realization of that warming, if I were advising the GOP I’d pull out the IPCC report for the science. IMO, the political questions for now should center on risk tolerance, determining what constitutes something being “worrying” to society as a whole, and the adaptability of both civilizations and ecosystems. These aren’t climate science questions.

    Also, RC has quelled many “extreme positions” such as the recent methane bomb scare stories.

    Comment by John West — 15 Sep 2013 @ 10:33 PM

  67. Folks, I’ve been working in climate science since the mid-1970s, focused upon engineering controls for agricultural and industrial methane. Projects have won awards from the British government, and Kyoto CDM projects have been completed in Asia and Latin America. Most recently, I’m certified in former VP Gore’s “Climate Reality Leadership Corps” training. Trust me, I know my stuff.

    There is an old story about “The Boy Who Cried Wolf,” you should all read it.

    You may not like them, but you MUST engage the entire voting public, including (*gasp!*) dreaded Republicans and others, to tackle this problem. Drop the term “denier,” it is insulting and stupid. Forget the blame; many of the largest polluters are taking some of the biggest technology risks to reduce emissions. Most of all, don’t be so damn strident with your models & predictions, you don’t know everything. Australia’s gutting of their carbon laws is bound to be followed by other nations. Recent events, back-pedaling and poor responses are no less than a disaster. BTW, acidification is a MUCH worse looming problem than temperature ever will be….if we shut down photosynthesis in the ocean’s euphotic zone, it’s game over. Have a good night.

    [Response: Perhaps you could point out where I claimed to know everything? Or where I advocated never talking to republicans or conservatives? Or where I cried wolf? You obviously have a beef with someone, but I suggest you track them down and take it up with them – rather than with me. – gavin]

    Comment by Charles Stack, MPH — 15 Sep 2013 @ 10:58 PM

  68. Gavin, you cite Charney in the context of having high confidence … From Charney …

    “However, we have not examined anew the many uncertainties in these projections, such as their implicit assumptions with regard to the workings of the world economy and the role of the biosphere in the carbon cycle. These impose an uncertainty beyond that arising from our necessarily imperfect knowledge of the manifold and complex climatic system of the earth.”

    If we assume the uncertainties of 1979 could be quantified as, say, 10, do you think we have since learned enough to reduce that imperfect knowledge to a meaningfully smaller value? Would it be correct to say that, considering the complexity, the majority of the original uncertainty still exists?

    You also appear to say, in the following response to an earlier post, that GCMs are evaluation targets:

    “However, we don’t calibrate the emergent properties of the GCMs to the emergent properties derived from observations – they stay (more or less) as evaluation targets.”

    I agree, they should evaluate something. I am not as sanguine as Ray, though, about how a mismatch ‘insight’ would be employed in the science while the belief remains that the underlying assumptions are robust enough, coupled with the scientist-as-advocate model you support. We have been brilliant at communicating and encouraging the downside risk without paying much attention to the uncertainty at all.

    Comment by Nickc — 15 Sep 2013 @ 11:02 PM

  69. Of course Feynman is not here to defend himself or interpret his words, but I think he would reject the use of the neutrino-faster-than-light experiment to demonstrate the limitations of his metaphysics as expounded by that quote. There is a simple reason one knows that Feynman couldn’t possibly have meant what this argument seems to attribute to him (namely, that as soon as a new experiment comes along, any model with which it disagrees must immediately be chucked out): Feynman lived to see loads and loads of wrong experiments. The problem with the cited experiment was that it was not reproducible (odd that this word hasn’t come up in the discussion in this context so far).

    [Response: I agree that Feynman likely didn’t take his dictum literally (though the spirit is right). There are many examples in his career where challenges to seemingly conclusive experimental data lead to theoretical breakthroughs (and subsequent experimental verification). But the dictum as written – and often quoted – is too simplistic to be useful as anything other than a reminder that nature is the final arbiter of our understanding. – gavin]

    The broader point is that there is a fundamental difference between wrong models and wrong experiments, that the discussion in this post blurs. In general, you don’t need *any* model to invalidate an experiment. The neutrino-faster-than-light experiment was found to be wrong without any reference to Special Relativity – it was actually a rather simple electronic metering issue.

    [Response: No-one would have done this experiment except for special relativity and no-one would have cared about the error without it appearing to contradict SR. Neither the experiment nor the cabling issue have any import except for that context. – gavin]

    On the other hand, if the experiment had been found to be free from errors, and someone else had established the same result with different apparatus, then Special Relativity would be toast and that is it.

    [Response: I very much doubt it. Given the amount of support SR has in observation and previous experiment, I would predict that an enormous amount of effort would have been devoted to finding the flaws in the concept or execution of this experiment and only after far more work had been done would people slowly come around to the idea. The fact is it is far more likely that an experiment was flawed than such a standard was wrong. Not impossible, just unlikely. – gavin]

    It is certainly true that the fact that it disagrees with something as well-established as Special Relativity made people look harder for issues in the experiment – but that doesn’t change Feynman’s point.

    Now there can be subtleties about whether it’s possible to define the concepts involved in a measurement without any underlying metaphysics. For this sort of thing you can read Kant I suppose. But while this might sometimes bear on discussions of quantum mechanics or cosmology, the concepts involved in climate science are straightforward enough that I think Feynman’s point stands as stated.

    [Response: Experiments and observations in climate science are far less controlled than the neutrino experiment, and yet you think that they somehow rise to the level of unchallengeable? And despite plentiful examples of where challenges were ultimately correct? How odd. – gavin]

    Comment by Mitch Golden — 15 Sep 2013 @ 11:05 PM

  70. Gavin:

    1. Would you agree that if we continue with BAU emissions, then there is “high confidence” of “catastrophic” AGW? If not, what climate science are you studying? I would love to be less alarmed than I am. So you might be able to add another source of error: scientists not wanting to alarm the public (Kevin Anderson cites this as a source of error in his “Going Beyond Dangerous” article).

    2. I want to add another reason for flawed communications of climate science by scientists. To publish a paper in a peer reviewed journal, a scientist must be quite sure of his or her claims… perhaps 90% sure. This is fine for studies of exoplanets or black holes, but not when civilization must take steps to protect itself. If the military worked the same way, we would lose every war we ever fought. I would like to know what sea level rise will be with a 50% probability, and even a 20, 10, 5, and 1% probability.

    Also, a scientist would much rather fail to predict something that comes true (a Type II error, a false negative) than predict something that does not come true (a Type I error, a false positive). They are both equally wrong, but in the Type II case there was no paper published!

    I think this helps explain part of the reason predictions of Arctic sea ice melt were so far off and why there was/is so much focus on 2~3 feet of SLR this century, when the actual numbers could be much larger (according to Jim Hansen and others). In fact, Jim Hansen wrote about this in his “Scientific Reticence” paper.

    Comment by Dan Miller — 16 Sep 2013 @ 12:05 AM

  71. “You may not like them, but you MUST engage the entire voting public, including (*gasp!*) dreaded Republicans and others, to tackle this problem.”

    Since Charles Stack advises Republicans perhaps he could be bothered to study the history of the GOP, specifically the part concerning Lincoln whose administration abolished slavery with <40% of the popular vote.

    For the big lie, use ALL CAPS.

    Comment by Anonymous Coward — 16 Sep 2013 @ 12:06 AM

  72. Interesting to see Gavin is actually allowing people who fundamentally disagree with him to be included in the discussion.

    [Response: Of course, because you bring so much substance to the discussion. – gavin]

    Of course, his dismissive and rather nasty personality shine through as usual.

    [Response: Your obligation to pay attention to anything I have to say is precisely zero. But you should be clear, it is not my personality that is dismissive, it is my attitude towards people who decide first and look at the science later. – gavin]

    I really think the tipping point has been reached now. Watch for more climate scientists to begin speaking out in a similar way that J. Curry has.

    [Response: We’ll see. – gavin]

    Comment by Dr. Punnett — 16 Sep 2013 @ 1:18 AM

  73. John West #66

    While I would characterize myself as unconvinced with respect to the magnitude of the expected warming and the timeframe for realization of that warming, if I were advising the GOP I’d pull out the IPCC report for the science.

    There is soon a new report out, but the last is based on conservative estimates from 2007. To be unconvinced by them is to doubt the scientific facts, the consensus on climate change. Politically, much more relevant and recognised are observational developments, particularly for businesses. Innovation is key to combating dangerous climate change, and we only have a short window of opportunity to act.
    We are underestimating climate change and underfunding innovation

    Comment by prokaryotes — 16 Sep 2013 @ 4:18 AM

  74. Nickc #68

    We have been brilliant at communicating and encouraging the downside risk without much attention to the uncertainty at all

    This is not a rhetoric game or a political show where one could win by seizing a majority. Today’s uncertainty comes from evaluating feedbacks and tipping points, such as how much longer the ocean will keep taking up heat and CO2 (OHC), how fast non-linear developments will occur, or for how long we can sustain civilisation under conservative scenario assessments such as RCP8.5.

    Example of uncertainty in today’s climate science

    A large, but transient expulsion of subcap methane is expected to increase surface carbon cycling with implications for greenhouse gas feedbacks to climate warming in a future scenario of widespread permafrost thaw and wastage of glaciers and ice sheets. The magnitude and timing of this emission scenario is unconstrained due to large uncertainties in estimating future rates of cryosphere degradation, hydrocarbon reservoir response, and potential methane oxidation. From the Supplementary Information of Walter Anthony et al (2012).

    Link

    NickC #68

    Would it be correct to say that the majority of the original uncertainty still exists considering the complexity?

    No. See, for example: Study of “True Global Warming Signal” Finds “Remarkably Steady” Rate of Manmade Warming Since 1979

    Charles Stack, MPH #67

    Most of all, don’t be so damn strident with your models & predictions, you don’t know everything.

    Yet, a new study concludes: Climate Scientists Erring on the Side of Least Drama

    Comment by prokaryotes — 16 Sep 2013 @ 5:28 AM

  75. This article contains so many errors and false comparisons the whole thing is just delusional. Whoever wrote this is not a scientist.

    [Response: And a good morning to you too. – gavin]

    Comment by John Benton — 16 Sep 2013 @ 6:26 AM

  76. Al Gore on climate communication.

    “Gore: Climate Dialogue ‘Not Won Yet, but Very Nearly’ — August 28, 2013

    “a shrinking number of denialists fly into a rage when it’s mentioned” … “the political climate is changing….The polling is going back up in favor of doing something on this issue. The ability of the raging deniers to stop progress is waning every single day.”

    “When that conversation is won,” he maintained, “you’ll see more measures at the local and state level and less resistance to what the EPA is doing. And slowly it will become popular to propose steps that go further, and politicians that take the bit in their teeth get rewarded.”

    Comment by Hank Roberts — 16 Sep 2013 @ 6:40 AM

  77. Well, of course there is a mismatch between the models’ outputs and the current temperature trend.

    That would be a sign of a GCM that was potentially accurate.

    The models MUST be either running hot or cold for periods of a decade or so if they are indeed anywhere near accurate.

    It is so obvious that I don’t know why it is not stated more often. (Actually, I do have a suspicion, but more of that later.)

    The reason the models cannot match the short- or medium-term global temperature record is that they are not measuring the same thing! Oh, the underlying signal is the same in both outputs, that is, the climate change signal. However, the temperature record has an added signal, either a cooling or a warming one, based on current ‘weather’ influenced by ENSO inter alia, and this additional ‘weather’ signal is only averaged in the models, if it is included at all.

    Now scientists have concluded, this century, that this natural variation ‘weather’ signal can be large enough to put a significant mismatch between model output and the current decadal temperature record. They had to, really, or the models would be kaput.

    It is obvious, really: an apple doesn’t equal an orange, no matter which way you cut it.

    No, it is not a problem that the models are running hot this century; it does not prove that they are inaccurate. It doesn’t prove that they are accurate either, but it is the behaviour an accurate model should be displaying.

    No, the problem for the models is their hindcast for the 20th century. The models track the temperature record really, really well during this period. The only trouble is, they shouldn’t, if they were anywhere near accurate!

    It is impossible for an accurate model to track the temperature record in this way in the short to medium term unless the ‘weather’ signal was neutral for nearly the whole time.

    Also, the forcing effect of the increasing CO2, taken over the whole 20th century, was on average weaker than this century, due to the ramping up of CO2 emissions during the latter half of that century. This should have made it easier and more obvious for the ‘weather’ signal to create a mismatch between the two outputs.

    So, it is not the current mismatch that is a problem for the models; it is the previous excellent correlation between the global temperature record and the models’ outputs that is the problem for the models.

    That behaviour is absolutely impossible for an accurate model.

    Alan

    [Response: Hmm, an interesting and testable argument. Well, let’s go to the tape:

    Umm… no obvious sign of some huge increase in fidelity prior to 2000. So that would be a “no” then. – gavin]

    Comment by Alan Millar — 16 Sep 2013 @ 7:41 AM

  78. [edit – no disrespect intended, but I’d prefer if comments focussed on substance, not pedigree]

    Comment by MARodger — 16 Sep 2013 @ 8:33 AM

  79. @ #69 – Gavin’s Response, again:

    [Response: I agree that Feynman likely didn’t take his dictum literally… But the dictum … is too simplistic to be useful as anything other than a reminder that nature is the final arbiter of our understanding. – gavin]

    I find this remark amazing coming from a scientist. I would think that this point, that nature is the final arbiter, is of monumental importance, and he trivializes it. The history of early modern science is of a tremendous effort to establish exactly this principle.

    For Gavin, it is simply a ‘reminder’ of something that is presumably obvious to all. But reading and watching the controversy, I’d say it’s a reminder that is not heard often enough.

    Comment by Lichanos — 16 Sep 2013 @ 8:51 AM

  80. Lichanos@79,
    Your amazement amazes me. Gavin is NOT saying that nature is not the ultimate arbiter. Of course it is. However, the question is how one responds to a discrepancy: does one check the measurement again? Does one modify the theory? Or does one scrap the theory? It is not a simple matter that if prediction diverges from observation then the theory must be wrong. The theory may be more right than wrong. I wonder why that is so difficult for you to get?

    Comment by Ray Ladbury — 16 Sep 2013 @ 10:30 AM

  81. > nature is the final arbiter

    Begin by forming identical Earths in a thousand identical Solar systems.

    Run each of those over time up to the present.

    Nature doesn’t give you one answer.
    Nature gives you a range of possible outcomes.

    Our species got smart enough to fiddle with nature during an unprecedented single opportunity — everything came together to make us possible here.

    What odds that we can improve on the outcome?
    What odds that we can make things worse?

    Nature says — do you feel lucky, punks? Do ya?
    And rattles the dice.

    Comment by Hank Roberts — 16 Sep 2013 @ 10:45 AM

  82. Given that Feynman was a pretty smart guy and a pretty experienced physicist, I think one has to be pretty careful in interpreting his words. I don’t think it’s likely he said something that is just trivially useless. I am sure Feynman would agree with us (as we’re in fact agreeing) that one uses models both to decide what experiments to do and to evaluate how quickly to trust them. (As in, we’d need a damned good, repeated experiment before we’d throw out Special Relativity.) But we can determine whether the experiment is *right* or not without any reference to Special Relativity, and that is what makes the experiment different from the model.

    [Response: Experiments and observations in climate science are far less controlled than the neutrino experiment, and yet you think that they somehow rise to the level of unchallengeable? And despite plentiful examples of where challenges were ultimately correct? How odd. – gavin]

    It’s “odd” because it’s not what I was saying. I am simply pointing out that it is possible to evaluate climate experiments and observations without getting into the sort of philosophical discussions one sometimes has to have when dealing with experiments in quantum mechanics or special relativity. “Temperature”, “radiation”, “water vapor” are all pretty well-defined concepts in this context.

    I agree that physics experiments are far better controlled than those of climate – which means that the former can generally be trusted much more quickly than the latter. But it’s still the case that you just don’t need to look at a climate model to evaluate the correctness of the experiment. For example, ultimately the 1990s UAH temperature data was found to be wrong because of the technical mistakes that were being made, not because it disagreed with models – though of course it did.

    [Response: I think you are missing the point I am making – it is the mismatches between experiment and theory that drive people to look harder for overlooked technical issues or interpretations. It is true that people often find bugs in code or miscalibrations in equipment on their own with no external prodding, but people are more strongly motivated to do so when there is mismatch of the sort we are discussing. Mismatches are clues that we should pay heed to. – gavin]

    Comment by Mitch Golden — 16 Sep 2013 @ 10:51 AM

  83. Charles Stack wrote: “I advise policy-makers within the GOP on environmental matters … I cannot begin to tell you the damage that has occurred by the past exaggeration of climate predictions.”

    I would respectfully suggest that “policy-makers within the GOP” have suffered much more “damage” from the millions of dollars in campaign contributions they receive from the fossil fuel corporations.

    Perhaps you would care to explain exactly how these alleged “past exaggerations of climate predictions” compelled numerous GOP elected officials to deliberately and repeatedly lie about climate science, while seeking to abuse their positions of authority to defund climate research and attack and destroy the careers of leading climate scientists.

    Really, if you want your portrayal of GOP politicians as the well-meaning, innocent victims of “exaggeration” by climate scientists to pass the laugh test, you’re going to have to work harder.

    Comment by SecularAnimist — 16 Sep 2013 @ 11:23 AM

    …if it disagrees with experiment it’s wrong *

    I am not sure whether it is fair to lay the blame and credit for that claim onto Popper, because it was probably a widely held simplification before he came on the scene. As evidence for this, there is no entry for Popper in the index to Duhem’s book, The Aim and Structure of Physical Theory, which Gavin may have quoted above, and which was written just before 1906.
    Chapter 6 has a whole series of case studies which demonstrate problems with the above simplification. The latter appears to have been quite fashionable at the time of writing. Because Duhem was a good theoretical physicist he was in a better position to choose realistic examples, drawn from science than some philosophers. He says:

    The only experimental check on a physical theory which is not illogical consists in comparing the entire system of the physical ** theory with the whole group of experimental laws, and in judging whether the latter is represented by the former in a satisfactory manner.

    Of course the discussion does impinge on the usefulness of Popper’s falsifiability model which came later.
    ———————-
    * Popper would probably have added

    ” .. if it agrees with experiment it might still be wrong”

    as well as his falsifiability criterion ***
    ** He did not intend to include biology in that remark.
    ***. This may have been introduced by one of Bertrand Russell’s graduate students but I have lost the reference.

    Comment by Geoff Wexler — 16 Sep 2013 @ 11:52 AM

  85. Charles Stack:

    Dear Gavin and all, I advise policy-makers within the GOP on environmental matters.

    I’m skeptical of your claim, Mr. Stack. Which GOP policy-makers, exactly? Have you been paid as a consultant with funds from the Republican party or individual officials? Can you link to any reports, position papers, or other documents you’ve authored that would give us any reason to take you seriously? The more verifiable details you can provide, the better. Thank you.

    Comment by Mal Adapted — 16 Sep 2013 @ 1:59 PM

  86. I really think the tipping point has been reached now. Watch for more climate scientists to begin speaking out in a similar way that J. Curry has.

    Something escaping from the diode bubble.

    Comment by Doug Bostrom — 16 Sep 2013 @ 3:17 PM

  87. Is it true that most (a great majority) of models run “too hot”, i.e. the discrepancy between model projections and measured temp, so far, is that projections are higher than measurements?

    The general explanation, above, about model-measurement mismatch ignores the specifics of this case.

    Can we infer anything from the characteristic of this particular mismatch?

    Comment by Jacob — 16 Sep 2013 @ 3:56 PM

  88. A new paper featuring models and observations and the like :)

    http://www.pnas.org/content/early/2013/09/10/1305332110.abstract

    Comment by SteveF — 16 Sep 2013 @ 4:15 PM

  89. About ‘Observation error”.
    Do you think that it is possible that there are systematic or considerable errors in the temperature data sets?
    If not, could we, in this case, eliminate this possibility from consideration?

    [Response: For the surface temperature data the picture is quite robust – using different methods, different subsets of input data, including corrections or not – so I doubt that there is much uncertainty there that has not already been explored. In some regions there is more uncertainty than others (the arctic, tropical pacific, Africa) but the global picture is clear and consistent with multiple independent sources of information. – gavin]

    Comment by Jacob — 16 Sep 2013 @ 4:44 PM

  90. > “I advise policy-makers within the GOP
    > on environmental matters …
    > the damage that has occurred by
    > the past exaggeration of climate predictions.”

    Some advisor surely has been feeding the GOP policymakers spin for a very long time. When did the GOP policymakers stop getting bad advice?

    I know your name’s not Surely. But your quote says
    > … I cannot tell you …
    So. Who can? Where were the GOP policymakers getting the exaggerations? Who was it they believed, and was that advisor doing the exaggerating, or merely echoing it uncritically?

    Figuring out how the GOP policy makers got such bad advice for so long is a worthwhile study. Maybe not here.

    Comment by Hank Roberts — 16 Sep 2013 @ 5:29 PM

  91. 81, Hank Roberts…Also expressed as “Mother Nature Bats Last.”

    And she always bats 1.000. Always.

    http://en.wikipedia.org/wiki/Robert_K._Watson

    Comment by Tokodave — 16 Sep 2013 @ 5:49 PM

  92. Far better commentary than Feynman’s misunderstood remarks on the role of theory and experiment is from Eugene Wigner:
    https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences
    The paper itself is the first reference in the Wikipedia article. I strongly recommend taking the time to read this short paper.

    Comment by David B. Benson — 16 Sep 2013 @ 8:30 PM

    Could you advance from the generic discussion, above, of model-reality mismatch, to a more focused discussion of this specific case: the mismatch between climate models and observed temperatures?
    You agree that observation error can be ruled out in our case.
    So, what would be, in your opinion, the most plausible area where you would look for an explanation of this particular mismatch?

    Comment by Jacob — 16 Sep 2013 @ 10:54 PM

  94. Jacob@93,
    Please refer to the chart in Gavin’s in-line response to #77.

    Given that chart, my question is “what mismatch?”

    Given the results of Foster and Rahmstorf 2011, my question is “what mismatch?”

    Comment by Ray Ladbury — 17 Sep 2013 @ 10:15 AM

    Jacob (and many, many others) seems to think that a model A which, when run from 1900 to the present, predicts the relatively flat global average surface temperature record of the past decade is a better match to reality than a model B which does not. It is a flawed comparison (quoting the end of Gavin’s point 3):

    “Flaws in comparisons can be more conceptual as well – for instance comparing the ensemble mean of a set of model runs to the single realisation of the real world. Or comparing a single run with its own weather to a short term observation. These are not wrong so much as potentially misleading – since it is obvious why there is going to be a discrepancy, albeit one that doesn’t have much implications for our understanding.”

    Obvious to people who understand weather and climate, that is. If it is not obvious to a person why a “mismatch” between a model and the temperature record is expected, this is a clue that their understanding is far below what it should be for a well read, science literate person who claims to be interested in this issue. More reading of high quality, educational sources is required.
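    The point about comparing an ensemble mean to the single realisation of the real world can be illustrated with a toy calculation (a sketch only, not a climate model; the trend, noise level, and run count are made-up illustrative values):

```python
# Toy illustration: why the mean of many model runs should NOT match a
# single realisation. Every "run" shares the same forced trend but has
# its own internal "weather" noise. All numbers are invented.
import random

random.seed(42)

YEARS = 30
TREND = 0.02          # hypothetical forced warming, degC/year
NOISE_SD = 0.15       # hypothetical internal "weather" variability, degC
N_RUNS = 100

def one_run():
    """One realisation: the forced trend plus independent yearly noise."""
    return [TREND * t + random.gauss(0.0, NOISE_SD) for t in range(YEARS)]

runs = [one_run() for _ in range(N_RUNS)]

# Ensemble mean: averaging cancels the uncorrelated "weather" in each run.
ensemble_mean = [sum(r[t] for r in runs) / N_RUNS for t in range(YEARS)]

# "Observations": a single realisation, with its own weather intact.
observations = one_run()

# A decade-scale mismatch between the two is therefore expected:
mismatch = sum(abs(o - e)
               for o, e in zip(observations[-10:], ensemble_mean[-10:])) / 10
print(f"mean |obs - ensemble mean| over the last decade: {mismatch:.3f} degC")
```

    With 100 runs the uncorrelated noise averages out of the ensemble mean, while the single “observed” realisation keeps its decadal wiggles, so a short-term mismatch appears even though every run has exactly the same underlying trend.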

    Comment by t_p_hamilton — 17 Sep 2013 @ 10:46 AM

  96. …the discrepancy of our results … highlights the wide divergence that now exists in recent values of G.

    DOI:10.1103/PhysRevLett.111.101102

    hat tip to: http://www.newscientist.com/article/dn24180-strength-of-gravity-shifts–and-this-time-its-serious.html

    Comment by Hank Roberts — 17 Sep 2013 @ 11:39 AM

  97. Continuing the map analogy:
    maps are not the terrain, they are a model, but they are a useful model. It is useful because the relation of map to terrain is well determined and known, and meticulously maintained. When there is a mismatch between map and terrain (e.g. a terrain feature omitted from the map) we don’t say: “Duh, maps are a model not the terrain, don’t expect a full match”. We identify it as a model error and rush to correct it (update the map).

    The trouble with climate models is – we don’t know, maybe even can’t know, if models really represent climate, and what the relation is between model and nature (climate), i.e. what is the extent of the match, or, in which area is the model more reliable and in which less.

    Comment by Jacob — 17 Sep 2013 @ 12:38 PM

  98. [Response: I think you are missing the point I am making – it is the mismatches between experiment and theory that drive people to look harder for overlooked technical issues or interpretations. It is true that people often find bugs in code or miscalibrations in equipment on their own with no external prodding, but people are more strongly motivated to do so when there is mismatch of the sort we are discussing. Mismatches are clues that we should pay heed to. – gavin]

    We are in complete agreement on this. My concern is that you seem to think that Feynman would have disagreed, or that his statement was somehow incomplete. It is only this I am taking issue with.

    Again, here is the full quote from Wikipedia:

    “In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.”

    The emphasis here is on which thing falsifies the other. He’s not going into how one decides whether an experiment is right. In fact, he doesn’t even mention the possibility of wrong experiments, but it would be foolish, I am sure you agree, to presume that he doesn’t know what a wrong experiment is.

    He says nothing that implies, as you seem to be stating, that hunches, mismatch with other experiments, mismatch with theory, and a host of other considerations won’t play a role in the decision of what to look at when trying do decide whether an experiment is right. So you are correct in this – but I don’t think you have any argument with Feynman.

    BTW, readers might be interested in the book “How Experiments End” by Peter Galison, who discusses exactly these issues.

    http://www.amazon.com/How-Experiments-End-Peter-Galison/dp/0226279154/ref=sr_1_1?ie=UTF8&qid=1379442213&sr=8-1&keywords=%22how+experiments+end%22

    Comment by Mitch — 17 Sep 2013 @ 1:41 PM

    I thought that GCMs were useful, but that the real story of climate is the one told in the paleoclimatic record. The issue of humankind’s climate changes vs. natural climate change is obvious, really.

    GHGs are rising faster than at any known time in the past (10 to 50x as fast). Humans emit a lot of warming and cooling agents, which create their own novel global climatic change, not all of it warming of course.

    Models tell us interesting things about possible future climate; in the main they are useful and interesting, but not gospel. One question is how climate models fare against the warming of the past 50 years. Useful, I would suggest, but guaranteed accuracy is surely not what they are run for.

    Comment by pete best — 17 Sep 2013 @ 1:43 PM

  100. > When there is a mismatch between map and terrain
    Always

    > We identify it as a model error
    Nope; models aren’t maps, they’re tools.
    Run the model to generate the “map” — which is probabilities, not certainties.

    Run out enough scenarios; sure, some will match some of what happened on this Earth when looked at in retrospect.

    The model tells you you’re in the same ballpark and how the game is played; but run the model multiple times and you get the outcomes — and hope in retrospect reality fell somewhere in among those scenarios.

    A map describes some few specific details and you can get ground truth by looking.

    Comment by Hank Roberts — 17 Sep 2013 @ 2:35 PM

  101. Jacob: “The trouble with climate models is – we don’t know, maybe even can’t know, if models really represent climate, and what the relation is between model and nature (climate), i.e. what is the extent of the match, or, in which area is the model more reliable and in which less.”

    That is horsecrap. Of course you cannot discern the goodness of the model by looking only at a single parameter, however important. What you need to do is look at the verified predictions made by climate models. There the accomplishments are rather more impressive.

    The models are extremely useful–that is why we use them.

    http://bartonpaullevenson.com/ModelsReliable.html

    Read and learn.

    Comment by Ray Ladbury — 17 Sep 2013 @ 3:44 PM

  102. Jacob wrote: “… we don’t know, maybe even can’t know, if models really represent climate, and what the relation is between model and nature (climate), i.e. what is the extent of the match, or, in which area is the model more reliable and in which less”

    None of that is true. With all due respect, you are simply projecting your own personal ignorance into a universal axiom.

    Comment by SecularAnimist — 17 Sep 2013 @ 4:39 PM

  103. .
    I work with instruments ,we “model a response then look at the error
    the error is the important thing ,
    it tell us how well we understand the process
    often it tell us where to change our parameters to get a better fit
    IE we had a gas mass flow error ,
    it was very clearly a temperature compensation problem
    the error curve was so clear it pointed out the solution

    My beef is not with the models being wrong , that’s to be expected
    the issue is with people trying to gloss over the discrepancy as a minor issue
    When David Hataway , leading light of solar physic got his prediction spectacularly wrong on Solar cycle 24 .
    he just wiped the eggs of his face , chucked his models out
    and went back to the drawing board ,
    that’s a scientist I can respect ,
    science is about good predictability , the facts are never wrong

    Comment by Jeannick — 17 Sep 2013 @ 5:33 PM

  104. Another way to say it: the model doesn’t produce a map. It produces many possible maps.

    Start with a few hundred identical Earths in so many hundred identical Solar systems, all starting at the same time.

    They wouldn’t wind up all the same after a few hundred million years.

    The models for climate don’t give us a map; the results are a varied range of more or less likely probabilities.

    Best we know so far, none of the likely paths for Earth go either totally icebound or totally Venus (the reality check is, we’re here). And the planets stay in their approximate relations too, Velikovsky notwithstanding.

    Models don’t say what climate -will- be, they say what likely outcomes are given what we know about the forces involved.

    Looking at model runs — a big brush fanning out from a common starting point — the real climate falls somewhere reasonable among those possibilities.

    Or not, of course.

    Then look at paleo work: how much of it tells us what was living then? Presumably we start off assuming what’s living on the planet in, say, previous interglacials (primarily in the oceans, most of the time) doesn’t make much difference. That works back to when plankton expanded from shallow coastal to deep-ocean habitats, taking the food chain with it. Before that, life was very different.

    Or not, of course.

    Comment by Hank Roberts — 17 Sep 2013 @ 6:47 PM

  105. Gavin @77

    “Hmm, an interesting and testable argument. Well, let’s go to the tape:

    Umm… no obvious sign of some huge increase in fidelity prior to 2000. So that would be a “no” then. – gavin]”

    Hmm… well, the graph you posted only dates from 1980, but clearly up to the year 2000 the hindcast is a much better match than the forecast for the 21st century, and pretty close to the temperature record.

    Of course, you should have posted the models’ hindcasts for the whole of the 20th century, and then you would see the excellent short- to medium-term correlation between model output and the global temperature record.

    [Response: Again, testable:

    ..and again, no. – gavin]

    However, I reiterate: this is impossible for an accurate model. The two outputs are not measuring the same thing, and an apple can’t equal an orange no matter how you cut it.

    Accurate models MUST mismatch the temperature record in the short to medium term unless the ‘weather’ signal remains neutral over the whole period.

    Alan

    Comment by Alan Millar — 17 Sep 2013 @ 7:08 PM

  106. And that is part of the crux of the climate debate.
    Many skeptics don’t understand that models are a representation of the laws of nature.
    Instead they think researchers create models by collecting historic climate data, picking a random curve that fits all the historic data points while assuming that CO2 drives global warming, and extrapolating that into the future. And bingo, that’s the model!
    Which makes statements like “climate models have been falsified” more understandable. Yes, if models were simply a curve-fitting exercise, then any difference between predicted and actual climate would falsify a model.

    Comment by Retrograde Orbit — 17 Sep 2013 @ 9:48 PM

  107. Why trust climate models? It’s a matter of simple science
    How climate scientists test, test again, and use their simulation tools.

    Comment by prokaryotes — 17 Sep 2013 @ 10:44 PM

  108. “The models for climate don’t give us a map; the results are a varied range of more or less likely probabilities.”

    OK, but if the range of results is too big, spread over all possible outcomes, then the models are quite useless. To be useful they need to narrow down the range of results (future temps).

    Putting it another way: it is impossible that all models (i.e. all results) are “correct”. Some of them are obviously wrong. How do we tell which?

    Comment by Jacob — 18 Sep 2013 @ 3:30 AM

  109. Continuing the map analogy:
    The models are not one map, but many maps (many models), all different.
    The question is: how do we know which of the maps best matches the terrain?
    We can assume that none of them matches perfectly, and even if there were a near-perfect match, we cannot identify now which one it is.
    Is that correct?

    Comment by Jacob — 18 Sep 2013 @ 3:37 AM

  110. Gavin has been arguing throughout this post that all models are incorrect yet all models are useful.
    That sounds like gibberish unless we understand that models are a representation of (our understanding of) the laws of nature.
    They are not designed to predict the future, they are designed to represent reality. And of course, as such, they are useful to predict the future, but they will never be 100% accurate doing so.
    But what’s the alternative? Close your eyes and wait for fate to strike you?

    Comment by Retrograde Orbit — 18 Sep 2013 @ 7:35 AM

  111. #108–” but if the range of results is too big, spread over all possible outcomes – then the models are quite useless…”

    Sure, but that’s a counterfactual. The range of the results is quite narrow enough to let us know that we’re acting neither safely nor sagely.

    We’ve seen about .8 C warming so far, and there is robust data showing weather responding to that small change in ways that are already quite expensive in blood and treasure. Specifically, there is good data on the increase in extreme precipitation–as seen very recently in Colorado, and also this summer in Alberta and in India–and drought–as seen last year over much of the US and Mexico. (Though, to be fair, there is some debate about best metrics in assessing this.)

    Extreme heat waves have also been a recurring feature–this summer East Asia got nailed:

    The big weather story for Asia (and perhaps the world) during the past month, was the unrelenting heat in Eastern China, Japan, and other surrounding areas. Japan saw its all-time national record beaten when the temperature peaked at 41.0°C (105.8°F) at Shimanto on August 12th (see this blog post for details not only about the Japanese record but for the Chinese records as well). The heat wave in Japan lasted until August 23rd during which every day at least one site in the country broke its temperature record. Taipei, Taiwan also set its all-time heat record with a reading of 39.3°C (102.7°F) on August 9th besting the former record of 38.8°C (101.8°F) set on August 9th, 2003.

    (h/t weatherunderground: http://www.wunderground.com/blog/weatherhistorian)

    My personal informal estimate is that extreme weather events over the last decade which at least are more probable under future regimes have cost in excess of 100,000 lives and $100 billion US. That’s an appalling cost–though small compared to global population and a decade’s worth of global economic output.

    Given that impacts don’t scale linearly–that’s true both because of the statistics of normal distributions, which imply that (damaging) extremes become much more frequent with small shifts in the mean, and because significant breakpoints such as melting points for sea ice, wet-bulb temperatures too high for human survival, and heat tolerance for the most significant human food crops are all ‘in play’–the model forecasts using reasonable emissions inputs ought to be more than enough for anyone using sensible risk analysis to know that we are making very bad choices right now.
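    The point about normal distributions can be put in numbers with a toy calculation (the unit variance and the 2-sigma “extreme” threshold are illustrative choices, not figures from the comment above):

```python
# Toy calculation: a small shift in the mean of a normal distribution
# multiplies the frequency with which a fixed extreme threshold is exceeded.
import math

def exceedance(threshold, mean=0.0, sd=1.0):
    """P(X > threshold) for a normal distribution, via the complementary
    error function."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2.0)))

threshold = 2.0                            # a "2-sigma" extreme in the old climate
p_old = exceedance(threshold, mean=0.0)    # frequency before the shift
p_new = exceedance(threshold, mean=0.5)    # mean shifts by half a sigma

print(f"old: {p_old:.4f}  new: {p_new:.4f}  ratio: {p_new / p_old:.1f}x")
# A half-sigma shift in the mean roughly triples the frequency of
# 2-sigma extremes (about 0.023 -> about 0.067).
```

    The tail probability is far more sensitive than the mean itself, which is why modest average warming shows up first as a disproportionate increase in extremes.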

    Comment by Kevin McKinney — 18 Sep 2013 @ 7:47 AM

  112. “models are a representation of the laws of nature.”

    Models are a hypothesis about how the physical processes interact in a chaotic system.
    Like each hypothesis, they need to be validated.

    Comment by Jacob — 18 Sep 2013 @ 8:49 AM

  113. Jacob appears to be confused about what climate models are and do. Climate models have no relation to maps. Climate model results are not a list/map of future weather conditions and temperatures at points in time. Climate models are tools to identify the important variables and their interaction in an extremely complex physical system.

    If a climate model was a map, when you input the “business as usual” continuous increase of CO2, human population, and environmental degradation, the output/result would be “Here Be Dragons”.

    Comment by flxible — 18 Sep 2013 @ 8:53 AM

  114. Jacob,

    OK maybe I’m just simple minded, but it seems to me that you can tie yourself in knots arguing from generalities. 

    Perhaps your first question should be, are the models useful? The evidence as spelled out in this thread (predictions, interesting questions raised) is yes. Plain and simple.

    Then maybe you might want to know how that happens. Is it a fluke? So you would have to look at the specific mechanics of the systems involved instead of thrashing around in uninformed epistemology. 

    Here’s an off the wall example (i.e., don’t make too much of it). What if I said that a map could be more useful for being spatially inaccurate? Even intentionally so? You would have to look a little deeper before deciding if those statements really made sense or not.

    Note the evolution of the Tube Map:
    http://theoinglis.tumblr.com/post/9009986470/the-evolution-of-the-london-underground-map

    Here it was recognized that loosened physical tolerances just happened to allow for clarity in helping people get where they need to go. 

    Comment by Radge Havers — 18 Sep 2013 @ 9:25 AM

  115. > if the range of results is too big, spread over
    > all possible outcomes – then the models are quite useless…”

    No, you still haven’t understood that a model generates a scenario when you run the model. Then you run the model again and generate another scenario. Do that a handful of times, or a dozen, or a hundred.

    What you get isn’t “all possible” outcomes — it’s a specific number, one scenario per run, one scenario from each time you run the model.

    Look back at an early article at RC:
    Is Climate Modelling Science?

    One of the most important features of complex systems is that most of their interesting behaviour is emergent. It’s often found that the large scale behaviour is not a priori predictable from the small scale interactions that make up the system. So it is with climate models. If a change is made to the cloud parameterisation, it is difficult to tell ahead of time what impact that will have on, for instance, the climate sensitivity. This is because the number of possible feedback pathways (both positive and negative) is literally uncountable. You just have to put it in, let the physics work itself out and see what the effect is.

    2005 was a long time ago. Don’t take that as the current best information. But it’s a good place to start to get an idea why a model is not a map, and what a model is and can do.

    Remember computers now are a wee mite faster than they were in 2005 and models rather larger.

    Comment by Hank Roberts — 18 Sep 2013 @ 10:26 AM

  116. Models reproduce the physical structure and chemical composition of the atmosphere, oceans, precipitation, ice, temperatures, winds, hurricanes, oscillatory events and atmospheric waves, etc. There are no skeptic models, for a good reason – it can’t be done, or else it would have been done long ago. For example, warming is much greater in the arctic than at the equator. Models explain why, as for “skeptics” – have they ever mentioned this inconvenient truth and their explanation for it? Jacob, have you ever heard of this issue? It is just one of many the models reproduce fairly well (not perfectly by any means).

    Comment by t_p_hamilton — 18 Sep 2013 @ 12:09 PM

  117. In models run with the GISS forcing data, the ‘natural+anthropogenic’ temperature evolution matches observations very well for a climate sensitivity of 0.75°C/W/m², which agrees with the value derived from palaeoclimate data. Take away the anthropogenic forcings and it still matches well up to about 1940 or 1950, suggesting that early 20th Century warming was almost entirely natural, and not significantly exacerbated by coal burning, deforestation etc.
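    As a back-of-envelope check on the sensitivity figure quoted above (the ~3.7 W/m² doubling forcing and the simplified F = 5.35 ln r expression are standard approximations, not values taken from this comment):

```python
# Back-of-envelope conversion: a sensitivity quoted per unit forcing,
# expressed as degrees per doubling of CO2. Assumes the commonly cited
# ~3.7 W/m^2 forcing for doubled CO2.
import math

SENSITIVITY = 0.75     # degC per W/m^2, as quoted in the comment above
F_2XCO2 = 3.7          # W/m^2, approximate forcing from doubling CO2

per_doubling = SENSITIVITY * F_2XCO2
print(f"{per_doubling:.2f} degC per CO2 doubling")   # about 2.8 degC

def co2_forcing(ratio):
    """Approximate CO2 radiative forcing in W/m^2 for a concentration
    `ratio` times a reference level (simplified expression F = 5.35 ln r)."""
    return 5.35 * math.log(ratio)
```

    Note that co2_forcing(2.0) returns about 3.7 W/m², consistent with the constant used above, so the 0.75°C/W/m² figure corresponds to a climate sensitivity of roughly 2.8°C per doubling.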

    [Response: ‘GISS forcing data’ is neither specific nor infallible. Depending on what you are looking at, it could have a bottom-up estimate of aerosol forcing or aerosol forcings from a residual calculation – neither of which really captures the range of uncertainty. If you want to estimate the impact of different forcings on past trends you should look at the single-forcing (‘historicalMisc’) simulations. What you find is that early-century trends had combinations from multiple factors, and given the uncertainties – particularly in the aerosol and solar terms – it’s hard to put a specific % on the natural/anthropogenic combination. The latter part of the century is easier because aerosols and solar are going the other way and we have additional datasets (such as OHC) to constrain mechanisms. – gavin]

    Comment by Icarus62 — 18 Sep 2013 @ 12:39 PM

  118. > Jacob says:
    > 18 Sep 2013 at 3:37 AM
    > Continuing the map analogy:
    > The models are not one map, but many maps
    > (many models), all different.

    No.
    Models are not maps.

    Think of the analogy this way:
    A sausage grinder isn’t a sausage.

    Comment by Hank Roberts — 18 Sep 2013 @ 2:10 PM

  119. #108 — “Putting it another way: it is impossible that all models (i.e. all results) are “correct”. Some of them are obviously wrong. How do we tell which ?”

    You start with checking the individual parts. Say that you’ve figured out that the thing you’re trying to study (like the climate) has several different components (like GHG, ice albedo, and aerosols).

    Get as much, and as good, experimental data as you can. Does the model accurately reproduce some basic phenomena that happen in the real world when you change the GHG, or the aerosols, solar radiation, or ice albedo? Yes? Good. (For instance, do the models predict cooling after big volcanic eruptions? Do they predict yearly seasons as the Earth’s tilt changes, or tropospheric warming but stratospheric cooling with increasing GHG?)

    Basically, what we’re trying to do is to break down the problem into separate pieces, check that we understand them correctly as distinct pieces, then put them back together. A model might be “okay” if it mistreats a minor piece of the puzzle, but if it gets a major piece wrong, our chances of drawing any solid conclusions are likely doomed.

    As we go on, as our model improves and we get more data, we often tweak existing pieces of the puzzle or add more. Eventually, the model is “good enough” to handle the original problem, and we move on to other (usually related) problems.
    For instance, maybe we have a good fix on the 30-year climate predictions, but now we want good 10- and 20-year predictions. Or maybe we want to get better regional climate predictions. Etc. These often require that we get the 30-year picture, the “big picture”, correct first, before we can move on to these ‘details’.
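    As a toy illustration of that volcanic-eruption check: even a zero-dimensional energy-balance model (nothing remotely like a GCM – every parameter below, the heat capacity, feedback strength and forcing shape, is an illustrative assumption rather than a value from any published model) shows the cooling-then-recovery response we look for in the real models.

```python
# A zero-dimensional energy-balance model: C * dT/dt = F(t) - lam * T.
# All parameter values are illustrative assumptions, not GCM output.
import math

def run_ebm(forcing, years=20.0, dt=1.0 / 12.0,
            heat_capacity=8.0,   # W·yr/m²/K, rough ocean mixed-layer value
            feedback=1.2):       # W/m²/K, assumed net climate feedback
    """Integrate the energy-balance model with forward Euler; return (times, temps)."""
    n = int(round(years / dt))
    t, temp = 0.0, 0.0
    times, temps = [], []
    for _ in range(n):
        temp += (forcing(t) - feedback * temp) / heat_capacity * dt
        t += dt
        times.append(t)
        temps.append(temp)
    return times, temps

# Pinatubo-like pulse: -3 W/m² of aerosol forcing decaying on a 1-year timescale.
volcano = lambda t: -3.0 * math.exp(-t)

times, temps = run_ebm(volcano)
coolest = min(temps)
print(f"peak cooling: {coolest:.2f} K")           # a few tenths of a degree
print(f"anomaly after 20 yr: {temps[-1]:.3f} K")  # nearly recovered
```

    The point is not the numbers but the qualitative behaviour: a short-lived negative forcing produces a dip of a few tenths of a degree followed by recovery, which is exactly the kind of component-level check described above.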

    Comment by Windchaser — 18 Sep 2013 @ 2:39 PM

  120. “Models reproduce the physical structure and chemical composition of the atmosphere, oceans, precipitation, ice, temperatures, winds, hurricanes, oscillatory events and atmospheric waves, etc. ”

    Well, that is surely their intention, but do they?
    There are too many variables in the real world, not all of them well known or measured, some even unknown.

    Gavin said, correctly, that models are not the reality, they are a simplification of some part of it. We can’t reproduce the atmosphere in models.

    The question of the relation of the model to reality (that which it tries to model) is an open question.

    We don’t know to what extent it does indeed achieve what it is intended – replicate the processes in a way that is somehow similar to what happens out there.

    Everyone repeats the dictum: all models are wrong but some are useful. Well, how can they be useful, if they (some of them at least) miss the mark, i.e. the relation to the real world?

    We must know that what the model shows is in some way related to the world out there, if not it cannot be useful. We cannot claim a-priori that all models are useful, just because we intend them to be so.

    It cannot be that all the models (for example, those shown in #77) are equally correct, or equally useful.

    Comment by Jacob — 18 Sep 2013 @ 5:49 PM

  121. #114:
    ” So it is with climate models. If a change is made to the cloud parameterisation, it is difficult to tell ahead of time what impact that will have on, for instance, the climate sensitivity. This is because the number of possible feedback pathways (both positive and negative) is literally uncountable. You just have to put it in, let the physics work itself out and see what the effect is.”

    What you see is the effect in the model.
    “let the physics work itself out” – the physics of the model.
    Doing models sure is fun, but what we want to know is what the effect would be in the real world.

    How do we know that the model correctly replicates the processes of the real world and produces the same (or similar) effects ?

    [Response: You test it in as controlled a situation as possible and compare to observations – that is what this whole thread is about. – gavin]

    Comment by Jacob — 18 Sep 2013 @ 6:05 PM

  122. Oh ferchrissake! Jacob, where are you getting this crap? It is clear from your posts that you do not understand climate, climate modeling or scientific modeling in general.

    I quote Hamming again: “The purpose of computing is insight, not numbers.”

    Read it again. When you understand that one quote, you will understand when a model is useful. Tamino’s 2-box model doesn’t have nearly the complexity and fidelity to physics found in a GCM. However, it is useful in that it provides insight into the interplay of the forcing mechanisms. Its simplicity makes it easy to see the interaction, despite its divergence from physical reality. Fitting a linear trend (a model) to data is useful even if the data do not trend linearly.

    For a scientist, the model we use depends on the insight we are trying to obtain. If we have enough experience with the system we are studying, we can extract the insight from the model results even as we comprehend the shortcomings of the model.

    For the purposes of climate prognostication, the best models we have are GCM. They say without exception that as we add more CO2 it is going to get warmer. They say that that warming will have some adverse consequences (drought, sea-level rise, increased impulsive precipitation, and so on). No one has come up with any convincing science that suggests otherwise–and certainly they have not come up with a model that comes close to reproducing anything resembling Earth’s climate that incorporates any such mechanism. The insight: it’s gonna warm and we’re in for a wild ride.

    Comment by Ray Ladbury — 19 Sep 2013 @ 9:58 AM

  123. #121
    Sure, you test it – i.e. compare its results to measured observations – as much as you can.

    Then you find mismatches. This is a natural process of refining and improving the models. They need to be always validated against some external data.

    The question is – what confidence can we have that the models, in the current state, at this point in the refining process, do represent reality and provide credible or good projections.
    And which of the many different projections of the different models, is more credible?

    Comment by Jacob — 19 Sep 2013 @ 10:13 AM

  124. > what confidence
    Considerable

    > which
    Depends

    http://physicstoday.org/journals/doc/PHTOAD-ft/vol_60/iss_1/72_1.shtml

    “Weather concerns an initial value problem: Given today’s situation, what will tomorrow bring? Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so. Climate is instead a boundary value problem—a statistical description of the mean state and variability of a system, not an individual path through phase space. Current climate models yield stable and nonchaotic climates, which implies that questions regarding the sensitivity of climate to, say, an increase in greenhouse gases are well posed and can be justifiably asked of the models. Conceivably, though, as more components—complicated biological systems and fully dynamic ice-sheets, for example—are incorporated, the range of possible feedbacks will increase, and chaotic climates might ensue….”
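    The initial-value/boundary-value distinction in that quote shows up in even the simplest chaotic system. Below is a sketch with the textbook Lorenz-63 equations (purely illustrative – this is not a climate model): two runs whose starting points differ by one part in a million diverge completely (the “weather” problem), while their long-run statistics nearly coincide (the “climate” problem).

```python
# Lorenz-63 with the standard textbook parameters (sigma=10, rho=28, beta=8/3).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz-63 equations."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    s = (x, y, z)
    k1 = f(s)
    k2 = f(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = f(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = f(tuple(s[i] + dt * k3[i] for i in range(3)))
    return tuple(s[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def run(x0, steps=50000, skip=5000):
    """Integrate, discard a spin-up transient, return (mean of z, trajectory)."""
    state, traj = x0, []
    for _ in range(steps):
        state = lorenz_step(*state)
        traj.append(state)
    zs = [s[2] for s in traj[skip:]]
    return sum(zs) / len(zs), traj

mean_a, traj_a = run((1.0, 1.0, 20.0))
mean_b, traj_b = run((1.0 + 1e-6, 1.0, 20.0))  # perturb by one part in a million

max_sep = max(abs(a[0] - b[0]) for a, b in zip(traj_a, traj_b))
print(f"max trajectory separation: {max_sep:.1f}")  # O(10): forecasts diverge
print(f"mean z: {mean_a:.2f} vs {mean_b:.2f}")      # statistics nearly agree
```

    The individual paths through phase space are unpredictable, yet questions about the mean state remain well posed – which is exactly the sense in which climate projection differs from weather forecasting.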

    Comment by Hank Roberts — 19 Sep 2013 @ 11:24 AM

  125. Jacob: “what confidence can we have that the models, in the current state, at this point in the refining process, do represent reality and provide credible or good projections.”

    Actually, Jacob, that is two very different questions. The answer to the first is that the models never “represent reality”. They are always a simplification, because reality is often too complicated to yield clear insight into what is going on. The answer to the second question–whether we can trust the projections–is more nuanced. If your question is whether adding CO2 will make things warmer than they would be otherwise–I’d say we have about 99% confidence on that. If the question is whether that warming will have adverse effects, I’d say we’ve got 95% confidence that it will. Ask which effects will bite us hardest and fastest and the crystal ball is becoming murky.

    Jacob, there are things we know. There are things we are pretty sure of. There are things we suspect strongly–and then there are “active areas of research”. What we know and what we are pretty sure of are sufficient to compel us to act if we are prudent.

    Comment by Ray Ladbury — 19 Sep 2013 @ 11:26 AM

  126. ” “The purpose of computing is insight, not numbers.”

    Insight into what ? Into the computer process ? I don’t think so.
    If there is no known relation between the model and the external data your insight is worthless.

    “For the purposes of climate prognostication, the best models we have are GCM.”
    Correct. That does not mean they are good enough or that their prognostication is correct. The lack of better models isn’t proof that your models are good enough. That needs validation.

    [Response: Why do you appear to think that this is at all at issue? Of course models need to build credibility – and they do that by having a track record of skillful predictions beyond what you would be able to do statistically or otherwise. No one is (or should be) claiming that models are useful just because they are complicated or because people have worked on them for a long time – it is the other way around. Models show skill but are not perfect, and that encourages people to work on them. It would be absurd otherwise. – gavin]

    Comment by Jacob — 19 Sep 2013 @ 11:51 AM

  127. Jacob,

    Link from ‘start here’ at top of page:

    Frequently Asked Question 8.1
    How Reliable Are the Models Used to Make Projections of Future Climate Change?
    https://www.ipcc.unibe.ch/publications/wg1-ar4/faq/wg1_faq-8.1.html

    Don’t be thrown by physicists’ fetish for quality control. They’ll always want better information no matter what. It’s a good thing — not like the unscrubbed dreck you’ll find out in the denial echo chamber.

    If models are like telescopes, then some are better than others, but working instruments can all spot the moon.

    Comment by Radge Havers — 19 Sep 2013 @ 11:52 AM

  128. “Jacob, there are things we know.”

    Do you know them because that is what the models show, or did you know them before you did any models?

    Comment by Jacob — 19 Sep 2013 @ 11:57 AM

  129. We are always as confident as we choose to be.
    For example, there is a busy street in front of my house. I am NOT confident that a car will hit me when I cross it and therefore I never look left or right crossing it.

    Comment by Retrograde Orbit — 19 Sep 2013 @ 9:26 PM

  130. This is the same thing as Evolution vs. Creation. No matter how much evidence we provide in support of evolution, the diehard opponents will disregard it and hold on to preconceived notions.
    And in both cases the proponents of the inconvenient truth are villainized. The proponents of evolution because they are destroying our faith in God, the proponents of global warming because they are subversives destroying our fat, happy society.

    Comment by Retrograde Orbit — 19 Sep 2013 @ 9:37 PM

  131. “…the proponents of global warming because they are subversives “destroying our fat, happy society.”

    So, “destroying our fat, happy society.” is the goal of the “proponents of global warming”?

    I thought global warming was about the physics of the atmosphere…

    Comment by Jacob — 20 Sep 2013 @ 3:38 AM

  132. Feynman’s name gets bandied about a lot, but his real comments were aimed at the “social sciences” which he somewhat despised as failing to come up with the type of “laws” he admired.

    I read an anecdote that Feynman and Gell-Mann published a new theory of beta decay, sat back and waited for the experimental results.

    The first results were from a well-known European expermental physicist: Negative.

    Gell-Mann asked Feynman: “What do we do now?”

    Feynman just shrugged. “We wait”, he said.

    A few months later, new results came in. The experimental physicist had ironed out some bugs in his apparatus, and the new results tended towards confirmation.

    Moral: Feynman was a complete pragmatist where his own work was concerned.

    Comment by Toby — 20 Sep 2013 @ 6:10 AM

  133. #129–“So, “destroying our fat, happy society.” is the goal of the “proponents of global warming”?”

    In the minds of considerable numbers of commenters on various blogsites and news sites with whom I’ve discussed related matters, yes. Those concerned about climate change are frequently described as ‘fanatical green ideologues out to create a new world order’–or else greedy scamsters out to extract money via evil carbon taxes and redistributive payments of all sorts. Quite often both at the same time, though that doesn’t seem highly congruent.

    “I thought global warming was about the physics of the atmosphere…”

    Me, too. But conceptually as well as physically, ‘what happens in the atmosphere doesn’t stay in the atmosphere.’ There’s always some sort of fallout, apparently…

    Comment by Kevin McKinney — 20 Sep 2013 @ 7:54 AM

  134. “So, “destroying our fat, happy society.” is the goal of the “proponents of global warming”?”

    Yes, in exactly the same way as the goal of evolutionary biologists is to destroy faith in God, Retrograde Orbit’s other example in the very same sentence.

    Do you realize that even if one model has all of the essential physics, very precise and accurate forcings, an exact or even close match of the actual climate trajectory (when El Ninos occur, when cool years occur in China, when droughts in the southwest USA occur, etc) is not going to happen? This is a characteristic of the physics involved, and mismatch in and of itself does not indicate a deficiency in the model. What you do test is the parts of the model, how they interact, and how they match in behavior with observations (of which we need more and better also). For example, research on understanding the physics and chemistry of clouds.

    Comment by t_p_hamilton — 20 Sep 2013 @ 8:14 AM

  135. Re- Comment by Jacob — 20 Sep 2013 @ 3:38 AM

    Jacob, you have misread Retrograde Orbit. He is saying that the “destroying our fat, happy society” claim is the game of the denialists.

    Steve

    Comment by Steve Fish — 20 Sep 2013 @ 9:33 AM

  136. > Jacob, you have misread Retrograde Orbit.
    “It’s not the trolling, it’s the biting.”

    You don’t have to reply to someone who’s clearly looking for an argument.

    Comment by Hank Roberts — 20 Sep 2013 @ 10:19 AM

  137. Jacob @ 126 said:

    “That needs validation.”

    Gavin responded well, but at this point I have a strong suspicion that you aren’t absorbing what people are saying to you. Go back and look at the validated predictions that Ray pointed you to above: http://bartonpaullevenson.com/ModelsReliable.html

    “If there is no known relation between the model and the external data your insight is worthless.”

    Again you’re not listening. There is a relation. It’s refining precisely that relation that drives the push for better models. Does this make sense to you?

    The models are built on real physics, real chemistry, real observation, and skill. ‘Skill’ is an important word in climate circles:

    What do you mean when you say a model has “skill”?

    ‘Skill’ is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skillful if it works better than just assuming that each day is the same as the last (‘persistence’). It should be noted that ‘persistence’ itself is much more skillful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn’t necessarily an analogue for ‘persistence’ in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to a ‘no change from the present’. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skillful. This can be quantified and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures – but only if natural and anthropogenic forcings are used – compared to an expectation of no change. Standard climate models don’t show skill at the interannual timescales which depend heavily on El Niños and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
    http://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/
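    To make the “better than a naive heuristic” idea concrete, here is the standard mean-squared-error skill score, S = 1 − MSE(forecast)/MSE(reference). The numbers below are made up purely to exercise the formula; real skill assessments use the datasets and metrics described in the FAQ above.

```python
def skill_score(forecast, reference, observed):
    """MSE-based skill: 1 means a perfect forecast, 0 means no better than
    the reference (e.g. persistence or climatology), negative means worse."""
    mse = lambda pred: sum((p - o) ** 2 for p, o in zip(pred, observed)) / len(observed)
    return 1.0 - mse(forecast) / mse(reference)

# Made-up annual anomalies, purely illustrative.
observed    = [0.10, 0.18, 0.12, 0.25, 0.30, 0.28]
persistence = [0.10] * 6  # naive reference: "no change from the first year"
model       = [0.08, 0.15, 0.17, 0.22, 0.27, 0.31]

print(round(skill_score(model, persistence, observed), 2))
```

    A perfect forecast scores exactly 1, the reference scores exactly 0, and anything in between quantifies “more information than the naive heuristic”.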

    If you aren’t happy with the answers you’re getting: Learn, Get Specific (stop arguing generalities skimmed off the top of your head), and most of all…
    Ask Better Questions

    More here:
    http://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/2/

    Comment by Radge Havers — 20 Sep 2013 @ 10:45 AM

  138. Hi all – I have a technical question on the CMIP5. I am looking through, for example, the documentation provided on http://cmip-pcmdi.llnl.gov/cmip5/index.html . The source of my question is to clarify the actual results of the CMIP5 exercise so far, and what might be planned. Forgive me, but I need it dumbed down to layman’s terms. I have seen things on blogs where people try to jam together (by visual estimation of published graphs) previous forecasts of global temperature against actuals (eg HADCRUT).

    Within the CMIP5 documentation, there are a bunch of different types of runs specified for various error and sensitivity checks. I see one of them tries to initialize the “world” for several months circa 1979 and then run forward 1979-2008.

    Along those lines, has the following been tried (again, forgive if I’m asking something with an obvious answer published somewhere):
    1) pick starting projection dates and subsequent run paths
    2) example for (1): start 1980, run forward 5 years; start 1982, run forward 5 years; start 1984 (run to 1989) etc etc
    3) at each start we proceed as with the 1979 directive; ie calibrate with several months of starting year data
    4) thus the latest such (example) run where we could compare against actual data would be an initialization in 2008 and run forward for 5 years to 2013
    5) the advantage of the above (and I recognize that there is a huge amount of work involved in crunching these simulations) is that we could see the starting temp and 5 year projections against the historical record for a number of overlapping segments.
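    For what it’s worth, the schedule of overlapping segments proposed above is easy to enumerate; whether any CMIP5 group ran exactly this design is a separate question (the decadal-prediction experiments are the closest match). A sketch:

```python
def hindcast_windows(first_start=1980, last_start=2008, step=2, length=5):
    """Enumerate (initialization year, end year) pairs for overlapping
    fixed-length hindcast segments, as in the proposal above."""
    return [(y, y + length) for y in range(first_start, last_start + 1, step)]

for start, end in hindcast_windows():
    print(f"initialize in {start}, run forward to {end}")
```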

    Comment by MFriesen — 20 Sep 2013 @ 11:30 AM

  139. This thread has been interesting, especially in the surge of postings from usual and unusual suspects, apparently gathering at RC to share in the demise and gaze at the corpse.
    They seem to presume that Gavin’s primary point in this thread is to rationalize why climate models are so wrong compared to reality, especially now they know for sure:

    1. there is no global warming (only the last 15 years count)
    2. it all happened before just like this without people
    3. arctic ice is in recovery and/or melted before, just like this
    4. ‘Cycles’ (said with deep authority and wave of wand)
    5. we didn’t do it
    6. “!Squirrel!”
    7. some combination
    Dr Punnet’s comment captures that presumption (#72, 16 Sep 2013 at 1:18 AM) when he imagines a tipping point, where climate scientists will feel free to follow J Curry’s lead in speaking out. The clear implication is that the scientists at RC lead a cabal that is not so honest, nor courageous, while the rest of us are too deluded or invested to comprehend that some new proof beyond denial is in.

    Trouble here includes the fact we are often speaking different languages, using the same words. “Certainty”, “uncertainty”, “proof”, “confident”, “wrong”, “robust”, etc carry very different meanings across camps. The “scientific method” is reduced to one of many religious options (lost among Scientology, Christian Science, various other Methodists). My choice of faith determines my beliefs, and reality will comply, if only my faith is strong enough.

    Comment by Phil Mattheis — 20 Sep 2013 @ 12:07 PM

  140. Re- Comment by Hank Roberts — 20 Sep 2013 @ 10:19 AM

    Hank, I think you are wrong on this. I have been working on honest very short responses (note mine to Jacob at two sentences, two lines) and you can see this technique in the Monty Python skit.

    Steve

    Comment by Steve Fish — 20 Sep 2013 @ 3:57 PM

  141. How about a stab at a ‘useful map’ of why there seems to be such disbelief at the response to the mismatch. It won’t be correct but hopefully useful.

    It can be teased out of Ray’s above comment:

    “For the purposes of climate prognostication, the best models we have are GCM. They say without exception that as we add more CO2 it is going to get warmer. They say that that warming will have some adverse consequences (drought, sea-level rise, increased impulsive precipitation, and so on). No one has come up with any convincing science that suggests otherwise–and certainly they have not come up with a model that comes close to reproducing anything resembling Earth’s climate that incorporates any such mechanism. The insight: it’s gonna warm and we’re in for a wild ride.”

    1. There is an implicit assumption that the models are very close to resembling Earth’s climate and just need tweaking, but no acknowledgement that the system is probably much more complex than we know or can model.

    2. If model makers happen to not audaciously think that they are close to modelling the SYSTEM, then why does Ray jump straight to his negative loaded insight without qualification?

    3. There is a distinct backing away from the way models have been used to generate fear and characterise the science as settled (catastrophic etc).

    4. The scientist as advocate model is on show in this thread (mismatch = increased uncertainty, no wait … uncertainty has not increased, everything is still certain … we use paleo anyway) and Gavin would claim his personal beliefs do not affect his science.

    So Ray, another stab at an insight, something is going on here, it may be a small tweak and a useful lesson for modellers OR it could point to a fundamental problem with our assumptions or ability to understand the VERY complex climate system. The insight scientists have will have implications for where research is directed etc. and it is important for it to be more balanced and not continually certain.

    [Response: The odd thing is that people seem to think if there is some huge unknown in climate that models don’t capture that this makes things better somehow. It really doesn’t – with just paleo-climate to guide us and no quantification possible of the implications of CO2 levels approaching (or exceeding) Pliocene levels (with ~20 meters of sea level rise), the uncertainties grow much larger, and uncertainty is not our friend. – gavin]

    Comment by NickC — 20 Sep 2013 @ 6:37 PM

  142. It goes beyond that. I feel like we reached a saturation point. Anybody who can accept the reality of AGW already has accepted it, and those who have not accepted by now never will.

    Jacob for example. He is not actually listening to anything. Because in his mind he knows there is no global warming. Therefore anybody who supports it is a liar. And he has to stand strong and reject those lies. This is not about “terminology” or explaining to skeptics what models are and why errors are inevitable. This is ideological.
    You know the highway administration only installs guard rails after somebody dies. I am afraid the same will happen with climate change.

    Comment by Retrograde Orbit — 20 Sep 2013 @ 7:36 PM

  143. #23 – Yes, I was interested in any comments on the Fyfe, Gillett, and Zwiers paper. I’m not sure whether their findings were because the CMIP5 models were significantly worse than CMIP3 at matching the observed global temperature over the last 20 years, or if you compared CMIP3 models in the same way they would perform similarly. My impression was the CMIP3 models on average were running warm compared to observations, but still well within the range of model runs?

    Comment by PeteB — 21 Sep 2013 @ 2:52 AM

  144. #141–

    “1. There is an implicit assumption that the models are very close to resembling Earth’s climate and just need tweaking, but no acknowledgement that the system is probably much more complex than we know or can model.”

    No, there is no such ‘assumption.’ Model studies involve a process called ‘validation’, the essence of which is testing how well the model(s) reproduce known climatic features. And everybody acknowledges that the system is indeed more complicated than we can make the models. (A map is simpler than the terrain it depicts, too–but you can’t put the landscape in your pocket, or quickly view it from different perspectives, or conveniently use it to calculate route distances, all of which the simpler map enables.)

    2. “…why does Ray jump straight to his negative loaded insight without qualification.”

    Perhaps because it’s clearly correct? See point 1–since the models do in fact reproduce Earth’s climate pretty well with the ‘ingredients’ now on offer, the well-known principle of Occam’s Razor says that we shouldn’t be needlessly bringing in additional factors. When looking for something and not finding it, there’s always the (theoretical) possibility of success, if only one looked a little harder or a little longer. But when there’s no particular reason to think the sought-for item exists at all, there’s a presumption that you are wasting your time that increases proportionately to the length of the search.

    3. Sez you. Evidence, please?

    4. A complete misreading of the post. Again–the point is to consider carefully why the mismatch occurs and what it means. That’s not a matter of ‘faith’, but of method.

    Comment by Kevin McKinney — 21 Sep 2013 @ 6:31 AM

  145. Yes Gavin, and there is another psychological twist that I find baffling:
    In the ears of skeptics “uncertain” always rings like “less”. It never occurs to them that it might actually mean “more”. How ideological bias can play tricks with otherwise rational human minds is fantastic.

    Comment by Retrograde Orbit — 21 Sep 2013 @ 7:22 AM

  146. > 135, 136
    Steve Fish, I agree and thanks. My 135 was meant as a reply to Jacob, to say if he was not looking for an argument, he needn’t bite when someone’s trolling tasty bait (and that if he is looking for argument: Monty Python).

    Your (Steve’s) two-line followups are good examples of how to turn a comment back into useful conversation. Appreciated.

    Comment by Hank Roberts — 21 Sep 2013 @ 11:18 AM

  147. I see NickC has trouble reading for content.

    1) I make no contention that the model is “very close to resembling Earth’s climate”. I simply say that there are some things we know.
    2) WTF – I don’t even know what a “negative-loaded insight” is.
    3) Citation fricking needed.
    4) Ditto.

    Nick, have you ever even known a fricking scientist? Have you ever even read a scientific paper? Or is your lack of understanding of what scientists and models really say also attributable to your inability to read for content?

    Comment by Ray Ladbury — 21 Sep 2013 @ 1:10 PM

  148. Gavin-

    Isn’t it fair to say that the discussion of what level of confidence we have in model predictions and what the models can and cannot do is central to making a case for action on climate change?

    [Response: No. The case for action comes from basic science and observations, models just attempt to quantify it more completely. – gavin]

    The relationship between per-capita energy use and standard of living is well documented. Any effort to restrict the use of fossil fuels will therefore have a quantifiable impact on the ability of the poorest people on the planet to lift themselves out of poverty. In order to make the case for action then, you must be able to quantify the negative impacts of climate change with a certain degree of accuracy. One must be able to make a convincing argument that the negative impacts of climate change far exceed the negative impacts of restricting fossil fuel use. In other words, it is not enough to simply say “we know that increasing CO2 levels will make the world warmer and that is a bad thing”, you need to be able to specify how much warmer and quantify the impacts with some level of precision.

    [Response: You are setting a special (and probably impossible) bar for this issue that is never set for any other decision. Are the consequences of current economic policies (which have a much bigger impact on the poor) being worked out at the ‘level of precision’ you are demanding? The answer is obviously not. Additionally, I am unclear as to where your confidence in exactly what impact climate policies will have on the poor comes from. It is easy to imagine policies that price carbon and use the funds to reduce regressive taxation that is a huge burden. Similarly, the poverty that you rightly condemn in Africa and other areas is far more associated with the very endemic institutional issues than it is with energy poverty. That is a symptom, not a (primary) cause. But even there, there are large parts of rural Africa where local solar (for instance) is already cheaper than extending the grid to supply fossil energy. The idea that climate must be balanced directly and exclusively against the global poor is a false dichotomy. And none of this has anything to do with models. – gavin]

    Given the limitations that you describe here, I cannot imagine that is possible at this point. Yes, no sensible person is now denying that temperatures are rising and humans are largely to blame. However, the difference in impacts between a 1.0C rise in global temperatures and a 5.0C rise is tremendous. When you consider that the regulations currently being proposed would have a trivial effect on climate (I’ve seen estimates as low as a 15% reduction in global temperatures), it seems that one would need a great deal more confidence in what the actual climate will look like before claiming those regulations are a good idea.

    [Response: There is of course a big difference between 1º and 5º C – the first we are already committed to, and the latter we should work to avoid. That is precisely the point. – gavin]

    Comment by Bradley McKinley — 22 Sep 2013 @ 1:50 AM

  149. Tamino has done a pair of posts on the work of Kosaka and Xie

    http://www.nature.com/nature/journal/vaop/ncurrent/full/nature12534.html

    http://tamino.wordpress.com/2013/09/02/el-nino-and-the-non-spherical-cow/

    …which seems to me to explain a heck of a lot of mismatch.

    http://tamino.files.wordpress.com/2013/09/pogah_cru.jpeg

    Have I understood correctly that the models do not predict the ENSO phenomenon but simply place it randomly, and that when it is placed correctly the model works THAT well?

    My other understanding is that ENSO is triggered by wind conditions that we don’t yet predict well enough.

    respectfully
    BJ

    Comment by bjchip — 22 Sep 2013 @ 8:29 AM

  150. Bradley McKinley,
    You pile false premises on top of false premises. First, as Gavin points out, the need for action is not predicated on the success or failure of GCMs. It is predicated on observations and established science–that CO2 is a greenhouse gas, that humans are increasing CO2 in the atmosphere and that doubling CO2 will raise global temperatures by about 3 degrees. This last fact is established beyond reasonable doubt by multiple lines of evidence independent of GCMs.

    Your second false premise is that prosperity is inherently dependent on fossil fuel consumption. Not only is this false, there is plenty of reason to believe that growth will be enhanced by developing a new, sustainable energy infrastructure.

    Your third false premise is that current measures are meant as a “solution” to the problem. They are not. They merely buy us time–and time is the commodity we need above all else, since we have squandered 30 years due to the unreasonable demands for certainty from the likes of you.

    Your most cynical false premise is that we have no choice but to destroy the world of our progeny to maintain prosperity in our own. I refuse to even contemplate such treachery against generations yet unborn.

    Comment by Ray Ladbury — 22 Sep 2013 @ 8:52 AM

  151. Gavin-

    You say:

    You are setting an special (and probably impossible) bar for this issue that is never set for any other decision. Are the consequences of current economic policies (which have a much bigger impact on the poor) being worked out at the ‘level of precision’ you are demanding? The answer is obviously not.

    I’m not sure I understand the basis for this claim. Cost benefit analysis is a standard requirement whenever a commitment of resources of this magnitude is contemplated. There is simply no other way for governments to determine how to best allocate the resources at their disposal. The fact that numerous peer reviewed studies have attempted to quantify impacts (e.g. Yohe et al. [2007]) demonstrates that even if you do not understand the need to conduct this type of analysis, others do.

    [Response: Ha. I’m actually well aware of the literature on cost-benefit studies and that most (even from Nordhaus) suggest that action is warranted. But the point I am making is somewhat different; namely, I do not think that cost-benefit analyses are anything but a rough guide to different pathways. Damage functions in these analyses are simplistic and mostly untested for the magnitudes of change projected, and their ability to balance economic costs and social costs involves ethical judgements that are both opaque and not universal. The other point is that government decisions with huge consequences are made all the time with no such modeling being done – was the TARP bail-out assessed for its impact on the African poor? Is the EU CAP? etc. So while I am all for policy-specific impacts modeling, I recognise its limitations. – gavin]

    Additionally, I am unclear as to where your confidence in exactly what impact climate policies will have on the poor comes from. It is easy to imagine policies that price carbon and use the funds to reduce regressive taxation that is a huge burden.

    The confidence comes from the fact that third world countries have consistently voiced their objections to certain mitigation efforts such as carbon taxes on grounds that they, not the industrial world, would bear the brunt of the burden.

    [Response: Where does this come from? Most actions being contemplated are related to western use of fossil fuels for western consumption of transport and electricity. Impacts of a carbon price in the US say, are not going to be predominantly felt in Somalia. – gavin]

    Similarly, the poverty that you rightly condemn in Africa and other areas is far more associated the very endemic institutional issues than it is with energy poverty. That is a symptom, not a (primary) cause. But even there, there are large parts of rural Africa where local solar (for instance) is already cheaper than extending the grid to supply fossil energy.

    I agree that energy poverty is a symptom of the institutional issues. However, you cannot deny that anything that makes energy more expensive will have severely negative impacts on the third world. If local solar is indeed cheaper than fossil fuels, there is no need for carbon taxes to make fossil fuels more expensive–market forces will ensure the switch all by themselves.

    [Response: Not true. The biggest handicap in rural poverty is lack of access to capital – even if new projects are enormously beneficial on even a standard cost/benefit analysis, they still do not get done. The idea that market forces automatically do the best thing in these cases is a fallacy. – gavin]

    The idea that climate must be balanced directly and exclusively against the global poor is a false dichotomy. And none of this has anything to do with models.

    I never claimed that climate must be balanced directly and exclusively against the global poor.

    [Response: Good. – gavin]

    Comment by Bradley McKinley — 22 Sep 2013 @ 11:15 AM

  152. Tamino’s on a roll. Referring to the 15-year span from 1992 – 2006 when the rate of surface warming was greater than the long-term trend,

    Does this mean that global warming is wrong? That the computer models are utter junk? That this whole climate science thing is just a hoax, a nefarious scheme to cheat us all out of tax dollars in order to support the lifestyle of gaudy luxury that we all know scientists wallow in? (Science: money for nothin’ and your chicks for free…)

    How did these evil denizens of global warming react? Did they use that result to push world government based on soshalism [re-spelled to get past the spam filter; he spells it correctly], so that they could destroy our economy by taxing the super-rich out of some of their hardly-earned riches? Did they run screaming through the streets yelling about how we’re all going to suffer spontaneous combustion by the year 2100?

    “Hardly-earned riches”. I am so stealing that.

    Comment by Mal Adapted — 22 Sep 2013 @ 12:05 PM

  153. Bradley McKinley #148

    …it is not enough to simply say “we know that increasing CO2 levels will make the world warmer and that is a bad thing”, you need to be able to specify how much warmer and quantify the impacts with some level of precision.

    We can quantify past impacts based on the sea level feedback (cryosphere response). This tells us that the current climate forcing has established at least a Pliocene climate response, triggered by an anthropogenically induced energy imbalance. And that is a very bad thing, because paleoclimate data tell us that the Pliocene regime 5 million years ago had sea levels about 15 meters higher. We further know from proxy records (corals) that the slow ice sheet feedback includes non-linear episodes, when ice sheets disintegrate and abruptly cause SLR.

    James Hansen explains Earth energy imbalance

    Earth’s energy balance
    In response to a positive radiative forcing F (see Appendix A), such as characterizes the present-day anthropogenic perturbation (Forster et al., 2007), the planet must increase its net energy loss to space in order to re-establish energy balance (with net energy loss being the difference between the outgoing long-wave (LW) radiation and net incoming shortwave (SW) radiation at the top-of-atmosphere (TOA)). Assuming that this increased energy loss is proportional to the surface temperature change T, we can write F = λT + Q (1), where λ is the climate feedback parameter.

    Complete restoration of the planetary energy balance (and thus full adjustment of the surface temperature) does not occur instantaneously due to the inherent inertia of the system, which lies mainly in the slow response times of the oceans and cryosphere. Therefore, prior to achieving a new equilibrium state, there will be an imbalance, Q, between radiative forcing and climate response. This imbalance represents the net heat flux into the system, with nearly all of this heat flux at present going into the ocean (Levitus et al., 2005).

    Link
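    The quoted energy-balance relation can be illustrated with a quick numerical sketch: rearranging F = λT + Q gives the present imbalance Q = F − λT, and the equilibrium warming once Q has decayed to zero is F/λ. The numbers below are purely illustrative assumptions (roughly in the range of published estimates), not values from the quoted source:

```python
# Earth's energy balance per the quoted relation: F = lambda*T + Q,
# so the current planetary imbalance is Q = F - lambda*T.
# Illustrative values (assumptions, not from the quoted paper):
F = 1.8    # net radiative forcing, W/m^2
lam = 1.25  # climate feedback parameter, W/m^2 per K
T = 0.8    # realized surface warming so far, K

Q = F - lam * T   # remaining imbalance, W/m^2 (heat flux, mostly into the ocean)
T_eq = F / lam    # equilibrium warming if the forcing were held fixed, K

print(f"Imbalance Q = {Q:.2f} W/m^2")        # -> 0.80
print(f"Equilibrium warming = {T_eq:.2f} K")  # -> 1.44
```

    The gap between T_eq and T is the "warming in the pipeline" due to ocean and cryosphere inertia that the quoted passage describes.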

    Comment by prokaryotes — 22 Sep 2013 @ 12:40 PM

  154. bjchip:

    Have I understood correctly that the models do not predict the ENSO phenomena but simply place it randomly, and that when it is placed correctly the model works THAT well?

    That’s what I’ve gathered from KX13 and discussions of it (especially John Nielsen-Gammon’s), and I’ve been using it against a couple of “15-year pause” parrots elsewhere. I’m eager to read RC’s post on it, even if I find out I’ve got it backwards 8^}.

    Comment by Mal Adapted — 22 Sep 2013 @ 1:21 PM

  155. re: 151 and Cost Benefit Analysis

    I remember the famous 2% Doctrine of the Bush era, i.e. if there were a 2% chance of WMDs, we had to intervene.

    And then there’s Luke 19:22.

    Comment by Jeffrey Davis — 24 Sep 2013 @ 10:05 AM

  156. Gavin wrote: “… there are large parts of rural Africa where local solar (for instance) is already cheaper than extending the grid to supply fossil energy.”

    Bradley McKinley: “If local solar is indeed cheaper than fossil fuels, there is no need for carbon taxes to make fossil fuels more expensive – market forces will ensure the switch all by themselves.”

    Carbon taxes have been proposed in developed countries as a mechanism to internalize the environmental and public health costs of burning fossil fuels, which are currently externalized and foisted off on the public. As such, carbon taxes ARE a “market force”. They simply “correct” the market by forcing it to reflect the true cost of fossil fuels. Failure to do that amounts to a massive public subsidy to fossil fuels, which is why they appear to be (but actually are not) “cheaper” than renewable energy.

    None of which has anything to do with poor people in rural Africa that Gavin wrote about — people who use little or no fossil fuels, and who would therefore not pay any carbon taxes, even if someone were to impose such taxes in rural Africa, which no one has suggested doing.

    The reality is that many of those people, and millions of others like them throughout the developing world who have NO access to electricity, will NEVER have access to fossil-fuel-fired electricity because no one is ever going to build the centralized power plants and the grid to deliver electricity to them. The ONLY way they will ever have electricity is with distributed renewable energy, principally photovoltaics — which are vastly less expensive than building a whole new grid-based electricity generation and distribution infrastructure.

    And in fact, although it doesn’t get a lot of media attention, there is an ongoing revolution in rural solar electrification in the developing world, where low-cost systems as simple as a few solar panels and batteries are providing power for electric lights, refrigeration, medical equipment, radios, cell phones, satellite TV and Internet access, and more, to entire villages.

    Comment by SecularAnimist — 24 Sep 2013 @ 11:03 AM

  157. The Truth About Global Warming – Science & Distortion – Stephen Schneider

    Comment by prokaryotes — 24 Sep 2013 @ 9:03 PM

    Arguments about impact on the poor due to CO2 mitigation ring hollow, as usual, because of the choices we make on a daily basis to blow off massive amounts of hard cash on the useless and the pointless.

    Shiny for us equals starvation for others but it’s a choice we make every single day. The expensive but useless layer of chrome applied to the plastic grille of an automobile is an opportunity cost. Can you eat hair gel? How many are fed with the $23 billion spent in the US per year on cosmetics? How does a 32″ television taste and what vitamins does it provide, compared to a 70″ set? What’s the nutritional value of $80/month for ESPN Sports Center?

    Imagining just a little bit of restraint in our self-indulgence makes the costs of dealing with CO2 mitigation appear rather smaller.

    Meanwhile, money doesn’t vanish; the person now operating the machinery to produce aerosol cheese could be employed doing something useful, such as installing solar energy collection apparatus.

    Strip away the moralizing about hurting the poor with CO2 mitigation and it’s true that the argument here is still actually about the money vector. The trouble is, the desired outcome is antithetical to what the moralizers are talking about.

    Comment by Doug Bostrom — 25 Sep 2013 @ 3:18 PM

  159. Re- Comment by Doug Bostrom — 25 Sep 2013 @ 3:18 PM

    What you say is so inconvenient.

    Steve

    Comment by Steve Fish — 25 Sep 2013 @ 9:01 PM

  160. Another mismatch with an interesting possible explanation:
    http://onlinelibrary.wiley.com/doi/10.1002/2013EO390007/abstract

    Research Spotlight
    Gravity waves could explain powerful thermospheric cooling
    Colin Schultz
    online: 24 SEP 2013
    DOI: 10.1002/2013EO390007
    Eos, Transactions American Geophysical Union
    Volume 94, Issue 39, page 348, 24 September 2013

    For the past few decades the upper reaches of Earth’s atmosphere have been cooling much faster than researchers anticipated. While the rising atmospheric concentration of carbon dioxide is heating the air near the ground, that same increase is expected to cool the thermosphere—the atmospheric band that stretches from around 80 kilometers altitude to the exosphere at 500 kilometers—by emitting heat into space. However, while carbon dioxide should theoretically cool the thermosphere by around 2 kelvins per decade, the observed cooling was around 10 times this rate. Building on recent theoretical and modeling work, Oliver et al. lay out a mechanism that could explain the observed cooling.

    (This is about the Thermosphere — very thin up there, not many molecules involved — so I doubt a geoengineering control knob would be possible here.)

    Comment by Hank Roberts — 26 Sep 2013 @ 4:21 AM

  161. Very hollow indeed. Especially because we probably won’t live to see any negative consequences of our current use of fossil fuels – only our children and grandchildren will.
    Having said that though, at least it’s rational. To say “let’s just continue to do what is best for us (the well off, not the poor …) and leave any negative consequences for our children and grandchildren to solve” is a perfectly rational point of view. Cold-hearted maybe, but rational.

    Comment by Retrograde Orbit — 26 Sep 2013 @ 8:45 PM

    The problem is not philosophical or even scientific; it’s political. The graphs from the models were extensively used to form government policy and were widely circulated as hard predictions.

    Politicians in Australia and Germany have to take into account the brickbats thrown at them; the above discussion is, for them, vacuous in the extreme.

    [Response: I strongly doubt that policy was made on the assumption that models are perfect (evidence?). Uncertainties have been discussed in respect to climate predictions since the beginning of the policy debate. – gavin]

    Comment by Jeannick — 1 Oct 2013 @ 4:20 AM

    In my comments I never talked about global warming and its effects. I talked only about the topic of this post, which is the mismatch between model results and observations, and its implications for model uncertainty (since the mismatch cannot be attributed to observation errors). I also stated that the wide spread of model results further increases uncertainty.

    Gavin implicitly agrees about model uncertainty and states that climate change is also proven by other lines of evidence – like paleoclimatology. Fine.

    Then Gavin says:
    “The odd thing is that people seem to think if there is some huge unknown in climate that models don’t capture that this makes things better somehow. ”

    Uncertainty is what it is. It’s not a matter of “better” or “worse”. (“Better” for what?)
    There is a high degree of uncertainty in model results. We should be able to agree (or disagree) on this, based on the model facts, without reference to our general opinion about global warming.

    Comment by Jacob — 2 Oct 2013 @ 5:02 AM

  164. Jacob says:”Uncertainty is what it is. It’s not a matter of “better” or “worse”. (“Better” for what?)”

    Uncertainty IS worse, because when you plan for the worst, and you think the worst is worse than it really is, you spend more effort than needed.

    Comment by t_p_hamilton — 2 Oct 2013 @ 7:35 AM

  165. Jacob,

    “There is a high degree of uncertainty…”

    So you assert. Yet other than being alarmed by the word ‘mismatch’, I don’t see you backing that up or putting it into some proportional context — which is why I, for one, am starting to doubt that you have any idea about what level of uncertainty best characterizes what is known and how it is taken into account in research and literature reviews.

    FUD: Fear, UNCERTAINTY, Doubt.
    A well known set of rhetorical tactics used by fake AGW skeptics to paralyze sensible discussions, mischaracterize scientists, and generally poison the political atmosphere.

    Perhaps if you asked some good, straightforward, specific questions and did a little less lecturing and maneuvering, the discussion might advance more smoothly.

    Comment by Radge Havers — 2 Oct 2013 @ 9:14 AM

  166. Retrograde Orbit wrote: “we probably won’t live to see any negative consequences of our current use of fossil fuels – only our children and grandchildren will”

    What in the world are you talking about?

    We are already seeing negative consequences of our use of fossil fuels, on a massively destructive and costly scale, all over the world.

    Which is not to say that “our children and grandchildren” will not experience far worse consequences.

    Comment by SecularAnimist — 2 Oct 2013 @ 10:44 AM

  167. #60 “[Response: Nice thought. If such principles can be found, they might indeed be useful. However, I am not optimistic – the specifics of the small scale physics (aerosol indirect effects on clouds, sea ice formation, soil hydrology etc.) are so heterogeneous that I don’t see how you can do without calculating the details. The main conservation principles (of energy, mass, momentum etc) are already incorporated, but beyond that I am not aware of anything of this sort. – gavin]”

    Well, something must be there, because the main conservation principles are unable to account for the observed extremely precise match between all-sky hemispheric albedos, while clear-sky albedos are vastly different. It must be an emergent phenomenon fuelled by processes on all scales. Whenever one runs into a symmetry of this kind, it suggests the possibility of a simplification in theory which has not yet been exploited.

    Journal of Climate, Volume 26, Issue 2 (January 2013)
    The Observed Hemispheric Symmetry in Reflected Shortwave Irradiance
    Aiko Voigt, Bjorn Stevens, Jürgen Bader and Thorsten Mauritsen

    “Climate models generally do not reproduce the observed hemispheric symmetry, which the authors interpret as further evidence that the symmetry is nontrivial.”

    Comment by Berényi Péter — 11 Oct 2013 @ 6:20 AM
