
On mismatches between models and observations

— gavin @ 13 September 2013

It is a truism that all models are wrong. Just as no map can capture the real landscape and no portrait the true self, numerical models by necessity have to contain approximations to the complexity of the real world and so can never be perfect replications of reality. Similarly, any specific observations are only partial reflections of what is actually happening and have multiple sources of error. It is therefore to be expected that there will be discrepancies between models and observations. However, why these arise and what one should conclude from them are interesting and more subtle than most people realise. Indeed, such discrepancies are the classic way we learn something new – and it often isn’t what people first thought of.

The first thing to note is that any climate model-observation mismatch can have multiple (non-exclusive) causes which (simply put) are:

  1. The observations are in error
  2. The models are in error
  3. The comparison is flawed

In climate science there have been multiple examples of each possibility and multiple ways in which each set of errors has arisen, and so we’ll take them in turn.

1. Observational Error

These errors can be straight-up mistakes in transcription, instrument failure, or data corruption etc., but these are generally easy to spot and so I won’t dwell on this class of error. More subtly, most of the “observations” that we compare climate models to are actually syntheses of large amounts of raw observations. These data products are not just a function of the raw observations, but also of the assumptions and the “model” (usually statistical) that go into building the synthesis. These assumptions can relate to space or time interpolation, corrections for non-climate related factors, or inversions of the raw data to get the relevant climate variable. Examples of these kinds of errors being responsible for a climate model/observation discrepancy range from the omission of orbital decay effects in producing the UAH MSU data sets to the problem of no modern analogs in the CLIMAP reconstruction of ice age ocean temperatures.

In other fields, these kinds of issues arise in unacknowledged laboratory effects or instrument calibration errors. Examples abound, most recently for instance, the supposed ‘observation’ of ‘faster-than-light’ neutrinos.

2. Model Error

There are of course many model errors. These range from the inability to resolve sub-grid features of the topography, through approximations made for computational efficiency and the necessarily incomplete physical scope of the models, to inevitable coding bugs. Sometimes model-observation discrepancies can be easily traced to such issues. However, more often, model output is a function of multiple aspects of a simulation, and so even if the model is undoubtedly biased (a good example is the persistent ‘double ITCZ’ bias in simulations of tropical rainfall) it can be hard to associate this with a specific conceptual or coding error. The most useful comparisons are then those that allow for the most direct assessment of the cause of any discrepancy. “Process-based” diagnostics – where comparisons are made for specific processes, rather than specific fields – are becoming very useful in this respect.

When a comparison is being made in a specific experiment though, there are a few additional considerations. Any particular simulation (and hence diagnostic from it) arises as a result from a collection of multiple assumptions – in the model physics itself, the forcings of the simulation (such as the history of aerosols in a 20th Century experiment), and the initial conditions used in the simulation. Each potential source of the mismatch needs to be independently examined.

3. Flawed Comparisons

Even with a near-perfect model and accurate observations, model-observation comparisons can show big discrepancies because the diagnostics being compared, while similar in both cases, actually end up being subtly (and perhaps importantly) biased. This can be as simple as assuming an estimate of the global mean surface temperature anomaly is truly global when it in fact has large gaps in regions that are behaving anomalously. This can be dealt with by masking the model fields prior to averaging, but it isn’t always done. Other examples have involved assuming the MSU-TMT record can be compared to temperatures at a specific height in the model, instead of using the full weighting profile. Yet another might be comparing satellite retrievals of low clouds with the model averages, but forgetting that satellites can’t see low clouds if they are hiding behind upper level ones. In paleo-climate, simple transfer functions of proxies like isotopes can often be complicated by other influences on the proxy (e.g. Werner et al, 2000). It is therefore incumbent on the modellers to try and produce diagnostics that are commensurate with what the observations actually represent.
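The masking step mentioned above is simple in principle; here is a toy sketch (invented numbers and grid, not any group’s actual processing) of how restricting a model field to observational coverage changes the “global” mean:

```python
import numpy as np

# Toy 4x8 (latitude x longitude) model temperature anomaly field, and an
# observational mask with no coverage in the polar row (invented numbers).
rng = np.random.default_rng(0)
model_field = rng.normal(0.5, 1.0, size=(4, 8))
obs_mask = np.ones((4, 8), dtype=bool)
obs_mask[0, :] = False                     # no observations at high latitude

# Area weights proportional to the cosine of the band-centre latitude.
lats = np.deg2rad([75.0, 45.0, 15.0, -15.0])
weights = np.cos(lats)[:, None] * np.ones((1, 8))

# Truly global mean vs. the mean restricted to observed grid boxes.
global_mean = np.average(model_field, weights=weights)
masked_mean = np.average(model_field[obs_mask], weights=weights[obs_mask])

# If the unobserved region behaves anomalously, the two numbers differ,
# so the model should be masked to the observational coverage first.
print(global_mean, masked_mean)
```

The same boolean-mask idea applies to real gridded products: apply the observational coverage pattern to the model output before computing any spatial average.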

Flaws in comparisons can be more conceptual as well – for instance comparing the ensemble mean of a set of model runs to the single realisation of the real world. Or comparing a single run with its own weather to a short term observation. These are not wrong so much as potentially misleading – since it is obvious why there is going to be a discrepancy, albeit one that doesn’t have many implications for our understanding.
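The ensemble-mean point can be seen in a toy example (all numbers invented): each “run” shares the same forced trend but carries its own “weather” noise, so the ensemble mean is necessarily much smoother than any single realisation – including the real world’s:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(30)
trend = 0.02 * years                       # common forced signal (degC)

# 20 "runs": identical forcing, independent internal variability.
runs = trend + rng.normal(0.0, 0.1, size=(20, years.size))
ensemble_mean = runs.mean(axis=0)

# One realisation (a stand-in for the real world) keeps its weather;
# averaging 20 runs shrinks the noise by roughly 1/sqrt(20).
single = runs[0]
print(np.std(single - trend), np.std(ensemble_mean - trend))
```

A single realisation wandering outside the smooth ensemble mean is therefore expected, not evidence of error by itself.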

Implications

The implications of any specific discrepancy therefore aren’t immediately obvious (for those who like their philosophy a little more academic, this is basically a rephrasing of the Quine/Duhem position on scientific underdetermination). Since any actual model prediction depends on a collection of hypotheses together, as do the ‘observation’ and the comparison, there are multiple chances for errors to creep in. It takes work to figure out where though.

The alternative ‘Popperian’ view – well encapsulated by Richard Feynman:

… we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong.

actually doesn’t work except in the purest of circumstances (and I’m not even sure I can think of a clean example). A recent obvious counter-example in physics was the fact that the ‘faster-than-light’ neutrino experiment has not falsified special relativity – despite Feynman’s dictum.

But does this exposition help in any current issues related to climate science? I think it does – mainly because it forces one to think about what the other ancillary hypotheses are. For three particular mismatches – sea ice loss rates being much too low in CMIP3, tropical MSU-TMT rising too fast in CMIP5, or the ensemble mean global mean temperatures diverging from HadCRUT4 – it is likely that there are multiple sources of these mismatches across all three categories described above. The sea ice loss rate seems to be very sensitive to model resolution and has improved in CMIP5 – implicating aspects of the model structure as the main source of the problem. MSU-TMT trends have a lot of structural uncertainty in the observations (note the differences in trends between the UAH and RSS products). And global mean temperature trends are quite sensitive to observational products, masking, forcings in the models, and initial condition sensitivity.

Working out what is responsible for what is, as they say, an “active research question”.

Update: From the comments:

“our earth is a globe
whose surface we probe
no map can replace her
but just try to trace her”
– Steve Waterman, The World of Maps


References

  1. M. Werner, U. Mikolajewicz, M. Heimann, and G. Hoffmann, "Borehole versus isotope temperatures on Greenland: Seasonality does matter", Geophysical Research Letters, vol. 27, pp. 723-726, 2000. http://dx.doi.org/10.1029/1999GL006075

167 Responses to “On mismatches between models and observations”

  1. 101
    Ray Ladbury says:

    Jacob: “The trouble with climate models is – we don’t know, maybe even can’t know, if models really represent climate, and what the relation is between model and nature (climate), i.e. what is the extent of the match, or, in which area is the model more reliable and in which less.”

    That is horsecrap. Of course you cannot discern the goodness of the model by looking only at a single parameter, however important. What you need to do is look at the verified predictions made by climate models. There the accomplishments are rather more impressive.

    The models are extremely useful–that is why we use them.

    http://bartonpaullevenson.com/ModelsReliable.html

    Read and learn.

  2. 102
    SecularAnimist says:

    Jacob wrote: “… we don’t know, maybe even can’t know, if models really represent climate, and what the relation is between model and nature (climate), i.e. what is the extent of the match, or, in which area is the model more reliable and in which less”

    None of that is true. With all due respect, you are simply projecting your own personal ignorance into a universal axiom.

  3. 103
    Jeannick says:

    I work with instruments ,we “model a response then look at the error
    the error is the important thing ,
    it tell us how well we understand the process
    often it tell us where to change our parameters to get a better fit
    IE we had a gas mass flow error ,
    it was very clearly a temperature compensation problem
    the error curve was so clear it pointed out the solution

    My beef is not with the models being wrong , that’s to be expected
    the issue is with people trying to gloss over the discrepancy as a minor issue
When David Hathaway, a leading light of solar physics, got his prediction spectacularly wrong on Solar cycle 24,
he just wiped the egg off his face, chucked his models out
    and went back to the drawing board ,
    that’s a scientist I can respect ,
    science is about good predictability , the facts are never wrong

  4. 104
    Hank Roberts says:

    Another way to say it: the model doesn’t produce a map. It produces many possible maps.

    Start with a few hundred identical Earths in so many hundred identical Solar systems, all starting at the same time.

    They wouldn’t wind up all the same after a few hundred million years.

    The models for climate don’t give us a map; the results are a varied range of more or less likely probabilities.

Best we know so far none of the likely paths for Earth go either totally icebound or totally Venus (the reality check is, we’re here). And the planets stay in their approximate relations too, Velikovsky notwithstanding.

    Models don’t say what climate -will- be, they say what likely outcomes are given what we know about the forces involved.

    Looking at model runs — a big brush fanning out from a common starting point — the real climate falls somewhere reasonable among those possibilities.

    Or not, of course.

Then look at paleo work — how much of it tells us what was living then? Presumably we start off assuming what’s living on the planet in say previous interglacials (primarily in the oceans, most of the time) doesn’t make much difference. That works back to when plankton expanded from shallow coastal to deep-ocean habitats, taking the food chain with it. Before that, life was very different.

    Or not, of course.

  5. 105
    Alan Millar says:

    Gavin @77

    “Hmm, an interesting and testable argument. Well, let’s go to the tape:

    Umm… no obvious sign of some huge increase in fidelity prior to 2000. So that would be a “no” then. – gavin]”

    Hmm… well the graph you posted only dates from 1980 but clearly up to the year 2000 the hindcast is a much better match than the forecast for the 21st century and pretty close to the temperature record.

Of course you should have posted the models’ hindcasts for the whole of the 20th century and then you would see the excellent short to medium term correlation between model output and the global temperature record.

    [Response: Again, testable:

    ..and again, no. - gavin]

    However, I again reiterate, this is impossible for an accurate model. The two outputs are not measuring the same thing and an apple can’t equal an orange no matter how you cut it.

    Accurate models MUST mismatch the temperature record in the short to medium term unless the ‘weather’ signal remains neutral over the whole period.

    Alan

  6. 106
    Retrograde Orbit says:

    And that is part of the crux of the climate debate.
    Many skeptics don’t understand that models are a representation of the laws of nature.
    Instead they think researchers create models by collecting historic climate data, then pick a random curve that fits all the historic data points while making the assumption that CO2 drives global warming, and extrapolate that into the future. And bingo, that’s the model!
Which makes statements like “climate models have been falsified” more understandable. Yes, if models were simply a curve fitting exercise, then any difference between predicted and actual climate would falsify a model.

  7. 107
    prokaryotes says:

    Why trust climate models? It’s a matter of simple science
    How climate scientists test, test again, and use their simulation tools.

  8. 108
    Jacob says:

    “The models for climate don’t give us a map; the results are a varied range of more or less likely probabilities.”

    Ok. but if the range of results is too big, spread over all possible outcomes – then the models are quite useless. To be useful they need to narrow down the range of results (future temps).

    Putting it another way: it is impossible that all models (i.e. all results) are “correct”. Some of them are obviously wrong. How do we tell which ?

  9. 109
    Jacob says:

    Continuing the map analogy:
    The models are not one map, but many maps (many models), all different.
    The question is: how do we know which of the maps matches best the terrain ?
    We can assume that none of them matches perfectly, and even if there was a near perfect match, we cannot identify now which one it is.
    Is that correct ?

  10. 110
    Retrograde Orbit says:

    Gavin has been arguing throughout this post that all models are incorrect yet all models are useful.
    That sounds like gibberish unless we understand that models are a representation of ( – our understanding of – ) the laws of nature.
    They are not designed to predict the future, they are designed to represent reality. And of course, as such, they are useful to predict the future, but they will never be 100% accurate doing so.
    But what’s the alternative? Close your eyes and wait for fate to strike you?

  11. 111

    #108–” but if the range of results is too big, spread over all possible outcomes – then the models are quite useless…”

    Sure, but that’s a counterfactual. The range of the results is quite narrow enough to let us know that we’re not acting safely nor sagely.

    We’ve seen about .8 C warming so far, and there is robust data showing weather responding to that small change in ways that are already quite expensive in blood and treasure. Specifically, there is good data on the increase in extreme precipitation–as seen very recently in Colorado, and also this summer in Alberta and in India–and drought–as seen last year over much of the US and Mexico. (Though, to be fair, there is some debate about best metrics in assessing this.)

    Extreme heat waves have also been a recurring feature–this summer East Asia got nailed:

    The big weather story for Asia (and perhaps the world) during the past month, was the unrelenting heat in Eastern China, Japan, and other surrounding areas. Japan saw its all-time national record beaten when the temperature peaked at 41.0°C (105.8°F) at Shimanto on August 12th (see this blog post for details not only about the Japanese record but for the Chinese records as well). The heat wave in Japan lasted until August 23rd during which every day at least one site in the country broke its temperature record. Taipei, Taiwan also set its all-time heat record with a reading of 39.3°C (102.7°F) on August 9th besting the former record of 38.8°C (101.8°F) set on August 9th, 2003.

    (h/t weatherunderground: http://www.wunderground.com/blog/weatherhistorian)

    My personal informal estimate is that extreme weather events over the last decade which at least are more probable under future regimes have cost in excess of 100,000 lives and $100 billion US. That’s an appalling cost–though small compared to global population and a decade’s worth of global economic output.

Given that impacts don’t scale linearly–that’s true both because of the statistics of normal distributions, which imply that (damaging) extremes become much more frequent with small shifts in the mean, and because significant breakpoints such as melting points for sea ice, wet-bulb temperatures too high for human survival, and heat tolerance for the most significant human food crops are all ‘in play’–the model forecasts using reasonable emissions inputs ought to be more than enough for anyone using sensible risk analysis to know that we are making very bad choices right now.

  12. 112
    Jacob says:

    “models are a representation of the laws of nature.”

Models are a hypothesis about how the physical processes interact in a chaotic system.
    Like each hypothesis, they need to be validated.

  13. 113
    flxible says:

    Jacob appears to be confused about what climate models are and do. Climate models have no relation to maps. Climate model results are not a list/map of future weather conditions and temperatures at points in time. Climate models are tools to identify the important variables and their interaction in an extremely complex physical system.

    If a climate model was a map, when you input the “business as usual” continuous increase of CO2, human population, and environmental degradation, the output/result would be “Here Be Dragons”.

  14. 114
    Radge Havers says:

    Jacob,

    OK maybe I’m just simple minded, but it seems to me that you can tie yourself in knots arguing from generalities. 

    Perhaps your first question should be, are the models useful? The evidence as spelled out in this thread (predictions, interesting questions raised) is yes. Plain and simple.

    Then maybe you might want to know how that happens. Is it a fluke? So you would have to look at the specific mechanics of the systems involved instead of thrashing around in uninformed epistemology. 

    Here’s an off the wall example (i.e., don’t make too much of it). What if I said that a map could be more useful for being spatially inaccurate? Even intentionally so? You would have to look a little deeper before deciding if those statements really made sense or not.

    Note the evolution of the Tube Map:
    http://theoinglis.tumblr.com/post/9009986470/the-evolution-of-the-london-underground-map

    Here it was recognized that loosened physical tolerances just happened to allow for clarity in helping people get where they need to go. 

  15. 115
    Hank Roberts says:

    > if the range of results is too big, spread over
    > all possible outcomes – then the models are quite useless…”

    No, you still haven’t understood that a model generates a scenario when you run the model. Then you run the model again and generate another scenario. Do that a handful of times, or a dozen, or a hundred.

    What you get isn’t “all possible” outcomes — it’s a specific number, one scenario per run, one scenario from each time you run the model.

    Look back at an early article at RC:
    Is Climate Modelling Science?

One of the most important features of complex systems is that most of their interesting behaviour is emergent. It’s often found that the large scale behaviour is not a priori predictable from the small scale interactions that make up the system. So it is with climate models. If a change is made to the cloud parameterisation, it is difficult to tell ahead of time what impact that will have on, for instance, the climate sensitivity. This is because the number of possible feedback pathways (both positive and negative) is literally uncountable. You just have to put it in, let the physics work itself out and see what the effect is.

    2005 was a long time ago. Don’t take that as the current best information. But it’s a good place to start to get an idea why a model is not a map, and what a model is and can do.

    Remember computers now are a wee mite faster than they were in 2005 and models rather larger.

  16. 116
    t_p_hamilton says:

    Models reproduce the physical structure and chemical composition of the atmosphere, oceans, precipitation, ice, temperatures, winds, hurricanes, oscillatory events and atmospheric waves, etc. There are no skeptic models, for a good reason – it can’t be done, or else it would have been done long ago. For example, warming is much greater in the arctic than at the equator. Models explain why, as for “skeptics” – have they ever mentioned this inconvenient truth and their explanation for it? Jacob, have you ever heard of this issue? It is just one of many the models reproduce fairly well (not perfectly by any means).

  17. 117
    Icarus62 says:

    In models run with the GISS forcing data, the ‘natural+anthropogenic’ temperature evolution matches observations very well for a climate sensitivity of 0.75°C/W/m², which agrees with the value derived from palaeoclimate data. Take away the anthropogenic forcings and it still matches well up to about 1940 or 1950, suggesting that early 20th Century warming was almost entirely natural, and not significantly exacerbated by coal burning, deforestation etc.

    [Response: 'GISS forcing data' is neither specific nor infallible. Depending on what you are looking at, it could have a bottom up estimate of aerosol forcing or aerosol forcings from a residual calculation - neither of which really have the range of uncertainty. If you want to estimate the impact of different forcings on past trends you should look at the single forcing ('historicalMisc') simulations. What you find is that early century trends had combinations from multiple factors, and given the uncertainties - particular in the aerosol and solar terms - it's hard to put specific % on the natural/anthropogenic combination. The latter part of the century is easier because aerosols and solar are going the other way and we have additional datasets (such as OHC) to constrain mechanisms. - gavin]

  18. 118
    Hank Roberts says:

    > Jacob says:
    > 18 Sep 2013 at 3:37 AM
    > Continuing the map analogy:
    > The models are not one map, but many maps
    > (many models), all different.

    No.
    Models are not maps.

    Think of the analogy this way:
    A sausage grinder isn’t a sausage.

  19. 119
    Windchaser says:

    #108 — “Putting it another way: it is impossible that all models (i.e. all results) are “correct”. Some of them are obviously wrong. How do we tell which ?”

    You start with checking the individual parts. Say that you’ve figured out that the thing you’re trying to study (like the climate) has several different components (like GHG, ice albedo, and aerosols).

Get as much, and as good, experimental data as you can. Does the model accurately reproduce some basic phenomena that happen in the real world when you change the GHG, or the aerosols, solar radiation, or ice albedo? Yes? Good. (For instance, do the models predict cooling after big volcanic eruptions? Do they predict yearly seasons as the Earth’s tilt changes, or tropospheric warming but stratospheric cooling with increasing GHG?)

Basically, what we’re trying to do is to break down the problem into separate pieces, check that we understand them correctly as distinct pieces, then put them back together. A model might be “okay” if it mistreats a minor piece of the puzzle, but if it gets a major piece wrong, our chances of drawing any solid conclusions are likely doomed.

    As we go on, as our model improves and we get more data, we often tweak existing pieces of the puzzle or add more. Eventually, the model is “good enough” to handle the original problem, and we move on to other (usually related) problems.
    For instance, maybe we have a good fix on the 30-year climate predictions, but now we want good 10- and 20-year predictions. Or maybe we want to get better regional climate predictions. Etc. These often require that we get the 30-year picture, the “big picture”, correct first, before we can move on to these ‘details’.
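A toy version of such a check, using a one-box energy-balance stand-in with invented parameter values (real tests compare full GCM output against observed post-eruption cooling):

```python
import numpy as np

def one_box(forcing, lam=1.2, c=8.0, dt=1.0):
    """One-box energy balance: c * dT/dt = F - lam * T (explicit Euler)."""
    t = 0.0
    out = np.empty(len(forcing))
    for i, f in enumerate(forcing):
        t += dt * (f - lam * t) / c
        out[i] = t
    return out

# A Pinatubo-like negative forcing pulse, two years long, in year 10.
forcing = np.zeros(30)
forcing[10:12] = -3.0                      # W/m^2, aerosol forcing peak
temp = one_box(forcing)

# The qualitative check: the model should cool after the eruption and
# then recover; a model that fails this has a major piece wrong.
print(temp[9], temp[11], temp[25])
```

Each such piece (volcanoes, seasons, GHG response) gets its own check before the pieces are trusted in combination.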

  20. 120
    Jacob says:

    “Models reproduce the physical structure and chemical composition of the atmosphere, oceans, precipitation, ice, temperatures, winds, hurricanes, oscillatory events and atmospheric waves, etc. ”

    Well, that is surely their intention, but do they?
    There are too many variables in the real world, not all of them well known or measured, some even unknown.

    Gavin said, correctly, that models are not the reality, they are a simplification of some part of it. We can’t reproduce the atmosphere in models.

    The question of the relation of the model to reality (that which it tries to model) is an open question.

    We don’t know to what extent it does indeed achieve what it is intended – replicate the processes in a way that is somehow similar to what happens out there.

Everyone repeats the dictum: all models are wrong but some are useful. Well, how can they be useful, if they (some of them at least) miss the mark, i.e. the relation to the real world?

    We must know that what the model shows is in some way related to the world out there, if not it cannot be useful. We cannot claim a-priori that all models are useful, just because we intend them to be so.

    It cannot be that all the models (for example, those shown in #77) are equally correct, or equally useful.

  21. 121
    Jacob says:

    #114:
“So it is with climate models. If a change is made to the cloud parameterisation, it is difficult to tell ahead of time what impact that will have on, for instance, the climate sensitivity. This is because the number of possible feedback pathways (both positive and negative) is literally uncountable. You just have to put it in, let the physics work itself out and see what the effect is.”

    What you see is the effect in the model.
“let the physics work itself out” – the physics of the model.
    Doing models sure is fun, but what we want to know is what the effect would be in the real world.

    How do we know that the model correctly replicates the processes of the real world and produces the same (or similar) effects ?

    [Response: You test it in as controlled a situation as possible and compare to observations - that is what this whole thread is about. - gavin]

  22. 122
    Ray Ladbury says:

    Oh ferchrissake! Jacob, where are you getting this crap. It is clear from your posts that you do not understand climate, climate modeling or scientific modeling in general.

    I quote Hamming again: “The purpose of computing is insight, not numbers.”

    Read it again. When you understand that one quote, you will understand when a model is useful. Tamino’s 2-box model doesn’t have nearly the complexity and fidelity to physics found in a GCM. However, it is useful in that it provides insight into the interplay of the forcing mechanisms. Its simplicity makes it easy to see the interaction, despite its divergence from physical reality. Fitting a linear trend (a model) to data is useful even if the data do not trend linearly.

    For a scientist, the model we use depends on the insight we are trying to obtain. If we have enough experience with the system we are studying, we can extract the insight from the model results even as we comprehend the shortcomings of the model.

    For the purposes of climate prognostication, the best models we have are GCM. They say without exception that as we add more CO2 it is going to get warmer. They say that that warming will have some adverse consequences (drought, sea-level rise, increased impulsive precipitation, and so on). No one has come up with any convincing science that suggests otherwise–and certainly they have not come up with a model that comes close to reproducing anything resembling Earth’s climate that incorporates any such mechanism. The insight: it’s gonna warm and we’re in for a wild ride.
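For readers unfamiliar with the genre, a generic two-box energy-balance model of the kind mentioned above can be sketched in a few lines (invented parameter values; this is an illustration, not Tamino’s actual code):

```python
import numpy as np

def two_box(forcing, lam=1.2, c_mix=8.0, c_deep=100.0, gamma=0.7, dt=1.0):
    """Mixed-layer + deep-ocean energy balance model (explicit Euler).

    lam: feedback (W/m^2/K); c_mix, c_deep: heat capacities (W yr/m^2/K);
    gamma: mixed-layer/deep-ocean exchange coefficient (W/m^2/K).
    """
    t_mix = t_deep = 0.0
    out = np.empty(len(forcing))
    for i, f in enumerate(forcing):
        flux = gamma * (t_mix - t_deep)    # heat uptake by the deep ocean
        t_mix += dt * (f - lam * t_mix - flux) / c_mix
        t_deep += dt * flux / c_deep
        out[i] = t_mix
    return out

# Step forcing of 3.7 W/m^2 (roughly 2xCO2): a fast mixed-layer response,
# then a slow approach to equilibrium at F/lam ~ 3.1 K as the deep warms.
response = two_box(np.full(500, 3.7))
print(response[10], response[499])
```

Despite its crudeness, a model like this makes the interplay of feedback strength and ocean heat uptake easy to see, which is exactly the kind of insight Hamming’s quote is about.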

  23. 123
    Jacob says:

    #121
Sure, you test it – i.e. compare its results to measured observations – as much as you can.

    Then you find mismatches. This is a natural process of refining and improving the models. They need to be always validated against some external data.

    The question is – what confidence can we have that the models, in the current state, at this point in the refining process, do represent reality and provide credible or good projections.
    And which of the many different projections of the different models, is more credible?

  24. 124
    Hank Roberts says:

    > what confidence
    Considerable

    > which
    Depends

    http://physicstoday.org/journals/doc/PHTOAD-ft/vol_60/iss_1/72_1.shtml

    “Weather concerns an initial value problem: Given today’s situation, what will tomorrow bring? Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so. Climate is instead a boundary value problem—a statistical description of the mean state and variability of a system, not an individual path through phase space. Current climate models yield stable and nonchaotic climates, which implies that questions regarding the sensitivity of climate to, say, an increase in greenhouse gases are well posed and can be justifiably asked of the models. Conceivably, though, as more components—complicated biological systems and fully dynamic ice-sheets, for example—are incorporated, the range of possible feedbacks will increase, and chaotic climates might ensue….”
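The initial-value/boundary-value distinction in that quote can be demonstrated with the classic Lorenz-63 system (a standard toy, not a climate model): trajectories from near-identical starting points diverge quickly, yet their long-run statistics stay close:

```python
import numpy as np

def lorenz_run(x0, n=40000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Integrate Lorenz-63 with explicit Euler steps; return the x series."""
    x, y, z = x0
    xs = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

a = lorenz_run((1.0, 1.0, 1.0))
b = lorenz_run((1.0, 1.0, 1.000001))       # imperceptibly perturbed start

# Initial value problem ("weather"): the two paths soon differ by the
# size of the attractor. Boundary value problem ("climate"): their
# time-mean statistics remain close.
print(np.abs(a - b).max(), a.mean(), b.mean())
```

Forecasting a particular path fails after a short time; statements about the statistics of the system remain well posed.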

  25. 125
    Ray Ladbury says:

    Jacob: “what confidence can we have that the models, in the current state, at this point in the refining process, do represent reality and provide credible or good projections.”

Actually, Jacob, that is two very different questions. The answer to the first is that the models never “represent reality”. They are always a simplification, because reality is often too complicated to yield clear insight into what is going on. The answer to the second question–whether we can trust the projections–is more nuanced. If your question is whether adding CO2 will make things warmer than they would be otherwise–I’d say we have about 99% confidence on that. If the question is whether that warming will have adverse effects, I’d say we’ve got 95% confidence that it will. Ask which effects will bite us hardest and fastest and the crystal ball is becoming murky.

    Jacob, there are things we know. There are things we are pretty sure of. There are things we suspect strongly–and then there are “active areas of research”. What we know and what we are pretty sure of are sufficient to compel us to act if we are prudent.

  26. 126
    Jacob says:

    ” “The purpose of computing is insight, not numbers.”

    Insight into what ? Into the computer process ? I don’t think so.
    If there is no known relation between the model and the external data your insight is worthless.

    “For the purposes of climate prognostication, the best models we have are GCM.”
    Correct. That does not mean they are good enough or that their prognostication is correct. The lack of better models isn’t proof that your models are good enough. That needs validation.

[Response: Why do you appear to think that this is at all at issue? Of course models need to build credibility - and they do that by having a track record of skillful predictions beyond what you would be able to do statistically or otherwise. No one is (or should be) claiming that models are useful just because they are complicated or because people have worked on them for a long time - it is the other way around. Models show skill but are not perfect and that encourages people to work on them. It would be absurd otherwise. - gavin]

  127.
    Radge Havers says:

    Jacob,

    Link from ‘start here’ at top of page:

    Frequently Asked Question 8.1
    How Reliable Are the Models Used to Make Projections of Future Climate Change?
    https://www.ipcc.unibe.ch/publications/wg1-ar4/faq/wg1_faq-8.1.html

    Don’t be thrown by physicists’ fetish for quality control. They’ll always want better information no matter what. It’s a good thing — not like the unscrubbed dreck you’ll find out in the denial echo chamber.

    If models are like telescopes, then some are better than others, but working instruments can all spot the moon.

  128.
    Jacob says:

    “Jacob, there are things we know.”

    Do you know them because that is what the models show, or did you know them before you did any models?

  129.
    Retrograde Orbit says:

    We are always as confident as we choose to be.
    For example, there is a busy street in front of my house. I am NOT confident that a car will hit me when I cross it and therefore I never look left or right crossing it.

  130.
    Retrograde Orbit says:

    This is the same thing as Evolution vs. Creation. No matter how much evidence we provide in support of evolution, the diehard opponents will disregard it and hold on to preconceived notions.
    And in both cases the proponents of the inconvenient truth are villainized. The proponents of evolution because they are destroying our faith in God, the proponents of global warming because they are subversives destroying our fat, happy society.

  131.
    Jacob says:

    “…the proponents of global warming because they are subversives “destroying our fat, happy society.”

    So, “destroying our fat, happy society.” is the goal of the “proponents of global warming”?

    I thought global warming was about the physics of the atmosphere…

  132.
    Toby says:

    Feynman’s name gets bandied about a lot, but his real comments were aimed at the “social sciences” which he somewhat despised as failing to come up with the type of “laws” he admired.

    I read an anecdote that Feynman and Gell-Mann published a new theory of beta decay, sat back and waited for the experimental results.

    The first results were from a well-known European experimental physicist: Negative.

    Gell-Mann asked Feynman: “What do we do now?”

    Feynman just shrugged. “We wait”, he said.

    A few months later, new results came in. The experimental physicist had ironed out some bugs in his apparatus, and the new results tended towards confirmation.

    Moral: Feynman was a complete pragmatist where his own work was concerned.

  133.

    #129–”So, “destroying our fat, happy society.” is the goal of the “proponents of global warming”?”

    In the minds of considerable numbers of commenters on various blogsites and news sites with whom I’ve discussed related matters, yes. Those concerned about climate change are frequently described as ‘fanatical green ideologues out to create a new world order’–or else greedy scamsters out to extract money via evil carbon taxes and redistributive payments of all sorts. Quite often both at the same time, though that doesn’t seem highly congruent.

    “I thought global warming was about the physics of the atmosphere…”

    Me, too. But conceptually as well as physically, ‘what happens in the atmosphere doesn’t stay in the atmosphere.’ There’s always some sort of fallout, apparently…

  134.
    t_p_hamilton says:

    “So, “destroying our fat, happy society.” is the goal of the “proponents of global warming”?”

    Yes, in exactly the same way as the goal of evolutionary biologists is to destroy faith in God, Retrograde Orbit’s other example in the very same sentence.

    Do you realize that even if a model has all of the essential physics and very precise and accurate forcings, an exact or even close match to the actual climate trajectory (when El Niños occur, when cool years occur in China, when droughts in the southwest USA occur, etc.) is not going to happen? This is a characteristic of the physics involved, and a mismatch in and of itself does not indicate a deficiency in the model. What you can test are the parts of the model, how they interact, and how well they match observations in behavior (of which we need more and better as well) – for example, research on understanding the physics and chemistry of clouds.

  135.
    Steve Fish says:

    Re- Comment by Jacob — 20 Sep 2013 @ 3:38 AM

    Jacob, you have misread Retrograde Orbit. He is saying that the “destroying our fat, happy society” claim is the game of the denialists.

    Steve

  136.
    Hank Roberts says:

    > Jacob, you have misread Retrograde Orbit.
    “It’s not the trolling, it’s the biting.”

    You don’t have to reply to someone who’s clearly looking for an argument.

  137.
    Radge Havers says:

    Jacob @ 126 said:

    “That needs validation.”

    Gavin responded well, but at this point I have a strong suspicion that you aren’t absorbing what people are saying to you. Go back and look at the validated predictions that Ray pointed you to above: http://bartonpaullevenson.com/ModelsReliable.html

    “If there is no known relation between the model and the external data your insight is worthless.”

    Again you’re not listening. There is a relation. It’s refining precisely that relation that drives the push for better models. Does this make sense to you?

    The models are built on real physics, real chemistry, real observation, and skill. ‘Skill’ is an important word in climate circles:

    What do you mean when you say a model has “skill”?

    ‘Skill’ is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skillful if it works better than just assuming that each day is the same as the last (‘persistence’). It should be noted that ‘persistence’ itself is much more skillful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn’t necessarily an analogue for ‘persistence’ in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to an assumption of ‘no change from the present’. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skillful. This can be quantified, and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures – but only if natural and anthropogenic forcings are used – compared to an expectation of no change. Standard climate models don’t show skill at the interannual timescales which depend heavily on El Niños and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
    http://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/
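
    The ‘skill’ idea can be made concrete with a toy calculation. Everything below is synthetic (an invented trend, invented noise, and an invented ‘model’ – not climate data); it just illustrates scoring a forecast against the ‘persistence’ baseline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a warming trend plus weather noise (illustrative only).
t = np.arange(100)
obs = 0.02 * t + rng.normal(0.0, 0.15, t.size)

# Two candidate forecasts of obs[1:]:
persistence = obs[:-1]  # naive baseline: tomorrow equals today
model = 0.02 * t[1:] + rng.normal(0.0, 0.05, t.size - 1)  # captures the trend

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Skill score relative to the persistence baseline:
#   1 = perfect, 0 = no better than persistence, < 0 = worse than persistence.
skill = 1.0 - mse(model, obs[1:]) / mse(persistence, obs[1:])
print(f"skill vs persistence: {skill:.2f}")
```

    A skill of 1 would be a perfect forecast; anything above 0 beats persistence, and a negative score means the naive baseline wins.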

    If you aren’t happy with the answers you’re getting: Learn, Get Specific (stop arguing generalities skimmed off the top of your head), and most of all…
    Ask Better Questions

    More here:
    http://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/2/

  138.
    MFriesen says:

    Hi all – I have a technical question on the CMIP5. I am looking through, for example, the documentation provided on http://cmip-pcmdi.llnl.gov/cmip5/index.html . The source of my question is to clarify the actual results of the CMIP5 exercise so far, and what might be planned. Forgive me, but I need it dumbed down to layman’s terms. I have seen things on blogs where people try to jam together (by visual estimation of published graphs) previous forecasts of global temperature against actuals (eg HADCRUT).

    Within the CMIP5 documentation, there are a bunch of different types of runs specified for various error and sensitivity checks. I see one of them tries to initialize the “world” for several months circa 1979 and then run it forward over 1979-2008.

    Along those lines, has the following been tried (again, forgive if I’m asking something with an obvious answer published somewhere):
    1) pick starting projection dates and subsequent run paths
    2) example for (1): start 1980, run forward 5 years; start 1982, run forward 5 years; start 1984 (run to 1989) etc etc
    3) at each start we proceed as with the 1979 directive; ie calibrate with several months of starting year data
    4) thus the latest such (example) run where we could compare against actual data would be an initialization in 2008 and run forward for 5 years to 2013
    5) the advantage of the above (and I recognize that there is a huge amount of work involved in crunching these simulations) is that we could see the starting temp and 5 year projections against the historical record for a number of overlapping segments.
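
    For what it’s worth, the rolling-start scheme in (1)-(5) is essentially what forecasters call rolling-origin (hindcast) evaluation, and it can be sketched in a few lines. Everything here is a toy stand-in (synthetic “observations” and a trivial trend-following “model”), not CMIP output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy annual "observations", 1980-2013: trend plus noise (invented numbers).
years = np.arange(1980, 2014)
obs = 0.017 * (years - 1980) + rng.normal(0.0, 0.1, years.size)

def toy_model(start_idx, horizon=5):
    # Stand-in "model": start from the observed state in the initialization
    # year, then follow a fixed 0.017/yr trend. A real CMIP run would be
    # initialized from ocean/atmosphere conditions and integrated forward.
    return obs[start_idx] + 0.017 * np.arange(1, horizon + 1)

# Rolling starts every 2 years; each run is scored against the next 5 years.
errors = []
for start in range(0, years.size - 5, 2):
    forecast = toy_model(start)
    errors.append(float(np.mean(np.abs(forecast - obs[start + 1:start + 6]))))

print(f"{len(errors)} overlapping 5-yr hindcasts, mean abs error {np.mean(errors):.3f}")
```

    The point of the overlapping segments is exactly as you say: many short verification windows instead of one long one.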

  139.
    Phil Mattheis says:

    This thread has been interesting, especially in the surge of postings from usual and unusual suspects, apparently gathering at RC to share in the demise and gaze at the corpse.
    They seem to presume that Gavin’s primary point in this thread is to rationalize why climate models are so wrong compared to reality, especially now they know for sure:

    1. there is no global warming (only the last 15 years count)
    2. it all happened before, just like this, without people
    3. arctic ice is in recovery and/or melted before, just like this
    4. ‘Cycles’ (said with deep authority and a wave of the wand)
    5. we didn’t do it
    6. “!Squirrel!”
    7. some combination
    Dr Punnet’s comment captures that presumption (#72, 16 Sep 2013 at 1:18 AM) when he imagines a tipping point where climate scientists will feel free to follow J Curry’s lead in speaking out. The clear implication is that the scientists at RC lead a cabal that is not so honest, nor courageous, while the rest of us are too deluded or invested to comprehend that some new proof beyond denial is in.

    Trouble here includes the fact we are often speaking different languages, using the same words. “Certainty”, “uncertainty”, “proof”, “confident”, “wrong”, “robust”, etc carry very different meanings across camps. The “scientific method” is reduced to one of many religious options (lost among Scientology, Christian Science, various other Methodists). My choice of faith determines my beliefs, and reality will comply, if only my faith is strong enough.

  140.
    Steve Fish says:

    Re- Comment by Hank Roberts — 20 Sep 2013 @ 10:19 AM

    Hank, I think you are wrong on this. I have been working on honest, very short responses (note mine to Jacob at two sentences, two lines) and you can see this technique in the Monty Python skit.

    Steve

  141.
    NickC says:

    How about a stab at a ‘useful map’ of why there seems to be such disbelief at the response to the mismatch. It won’t be correct but hopefully useful.

    It can be teased out of Ray’s above comment:

    “For the purposes of climate prognostication, the best models we have are GCM. They say without exception that as we add more CO2 it is going to get warmer. They say that that warming will have some adverse consequences (drought, sea-level rise, increased impulsive precipitation, and so on). No one has come up with any convincing science that suggests otherwise–and certainly they have not come up with a model that comes close to reproducing anything resembling Earth’s climate that incorporates any such mechanism. The insight: it’s gonna warm and we’re in for a wild ride.”

    1. There is an implicit assumption that the models are very close to resembling Earth’s climate and just need tweaking, but no acknowledgement that the system is probably much more complex than we know or can model.

    2. If model makers happen to not audaciously think that they are close to modelling the SYSTEM then why does Ray jump straight to his negative loaded insight without qualification.

    3. There is a distinct backing away from the way models have been used to generate fear and characterise the science as settled (catastrophic etc).

    4. The scientist as advocate model is on show in this thread (mismatch = increased uncertainty, no wait … uncertainty has not increased, everything is still certain … we use paleo anyway) and Gavin would claim his personal beliefs do not affect his science.

    So Ray, another stab at an insight, something is going on here, it may be a small tweak and a useful lesson for modellers OR it could point to a fundamental problem with our assumptions or ability to understand the VERY complex climate system. The insight scientists have will have implications for where research is directed etc. and it is important for it to be more balanced and not continually certain.

    [Response: The odd thing is that people seem to think that if there is some huge unknown in climate that models don't capture, this somehow makes things better. It really doesn't - with just paleo-climate to guide us and no quantification possible of the implications of CO2 levels approaching (or exceeding) Pliocene levels (with ~20 meters of sea level rise), the uncertainties grow much larger, and uncertainty is not our friend. - gavin]

  142.
    Retrograde Orbit says:

    It goes beyond that. I feel like we reached a saturation point. Anybody who can accept the reality of AGW already has accepted it, and those who have not accepted by now never will.

    Jacob for example. He is not actually listening to anything. Because in his mind he knows there is no global warming. Therefore anybody who supports it is a liar. And he has to stand strong and reject those lies. This is not about “terminology” or explaining to skeptics what models are and why errors are inevitable. This is ideological.
    You know the highway administration only installs guard rails after somebody dies. I am afraid the same will happen with climate change.

  143.
    PeteB says:

    #23 – Yes, I was interested in any comments on the Fyfe, Gillett, and Zwiers paper. I’m not sure whether their findings arose because the CMIP5 models are significantly worse than CMIP3 at matching observed global temperature over the last 20 years, or whether CMIP3 models, compared in the same way, would perform similarly. My impression was that the CMIP3 models on average were running warm compared to observations, but still well within the range of model runs?
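
    Incidentally, “well within the range of model runs” can be checked mechanically by asking what percentile the observed trend occupies within the ensemble of modelled trends. A toy sketch with invented numbers (not actual CMIP3/CMIP5 trends):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 20-year trends (deg/decade): an ensemble spread plus one
# "observed" trend on the low side. None of these are real CMIP numbers.
model_trends = rng.normal(0.21, 0.06, size=40)  # 40 hypothetical runs
obs_trend = 0.12

# Where does the observed trend sit within the ensemble?
pctile = 100.0 * float(np.mean(model_trends < obs_trend))
lo, hi = np.percentile(model_trends, [5, 95])
inside_90 = bool(lo < obs_trend < hi)
print(f"obs at ~{pctile:.0f}th percentile; inside the 90% ensemble range: {inside_90}")
```

    An observation near the edge of the spread can be consistent with the ensemble and still make the multi-model mean look “warm”.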

  144.

    #141–

    “1. There is an implicit assumption that the models are very close to resembling Earth’s climate and just need tweaking, but no acknowledgement that the system is probably much more complex than we know or can model.”

    No, there is no such ‘assumption.’ Model studies involve a process called ‘validation’, the essence of which is testing how well the model(s) reproduce known climatic features. And everybody acknowledges that the system is indeed more complicated than we can make the models. (A map is simpler than the terrain it depicts, too–but you can’t put the landscape in your pocket, or quickly view it from different perspectives, or conveniently use it to calculate route distances, all of which the simpler map enables.)

    2. “…why does Ray jump straight to his negative loaded insight without qualification.”

    Perhaps because it’s clearly correct? See point 1–since the models do in fact reproduce Earth’s climate pretty well with the ‘ingredients’ now on offer, the well-known principle of Occam’s Razor says that we shouldn’t be needlessly bringing in additional factors. When looking for something and not finding it, there’s always the (theoretical) possibility of success, if only one looked a little harder or a little longer. But when there’s no particular reason to think the sought-for item exists at all, there’s a presumption that you are wasting your time that increases proportionately to the length of the search.

    3. Sez you. Evidence, please?

    4. A complete misreading of the post. Again–the point is to consider carefully why the mismatch occurs and what it means. That’s not a matter of ‘faith’, but of method.

  145.
    Retrograde Orbit says:

    Yes Gavin, and there is another psychological twist that I find baffling:
    In the ears of skeptics “uncertain” always rings like “less”. It never occurs to them that it might actually mean “more”. How ideological bias can play tricks with otherwise rational human minds is fantastic.

  146.
    Hank Roberts says:

    > 135, 136
    Steve Fish, I agree and thanks. My 135 was meant as a reply to Jacob, to say if he was not looking for an argument, he needn’t bite when someone’s trolling tasty bait (and that if he is looking for argument: Monty Python).

    Your (Steve’s) two-line followups are good examples of how to turn a comment back into useful conversation. Appreciated.

  147.
    Ray Ladbury says:

    I see NickC has trouble reading for content.

    1) I make no contention that the model is “very close to resembling Earth’s climate”. I simply say that there are some things we know.
    2) WTF–I don’t even know what a “negative-loaded insight” is.
    3) Citation fricking needed.
    4) Ditto.

    Nick, have you ever even known a fricking scientist? Have you ever even read a scientific paper? Or is your lack of understanding of what scientists and models really say also attributable to your inability to read for content?

  148.
    Bradley McKinley says:

    Gavin-

    Isn’t it fair to say that the discussion of what level of confidence we have in model predictions and what the models can and cannot do is central to making a case for action on climate change?

    [Response: No. The case for action comes from basic science and observations, models just attempt to quantify it more completely. - gavin]

    The relationship between per-capita energy use and standard of living is well documented. Any effort to restrict the use of fossil fuels will therefore have a quantifiable impact on the ability of the poorest people on the planet to lift themselves out of poverty. In order to make the case for action then, you must be able to quantify the negative impacts of climate change with a certain degree of accuracy. One must be able to make a convincing argument that the negative impacts of climate change far exceed the negative impacts of restricting fossil fuel use. In other words, it is not enough to simply say “we know that increasing CO2 levels will make the world warmer and that is a bad thing”, you need to be able to specify how much warmer and quantify the impacts with some level of precision.

    [Response: You are setting a special (and probably impossible) bar for this issue that is never set for any other decision. Are the consequences of current economic policies (which have a much bigger impact on the poor) being worked out at the 'level of precision' you are demanding? The answer is obviously not. Additionally, I am unclear as to where your confidence in exactly what impact climate policies will have on the poor comes from. It is easy to imagine policies that price carbon and use the funds to reduce regressive taxation that is a huge burden. Similarly, the poverty that you rightly condemn in Africa and other areas is far more associated with the endemic institutional issues than it is with energy poverty. That is a symptom, not a (primary) cause. But even there, there are large parts of rural Africa where local solar (for instance) is already cheaper than extending the grid to supply fossil energy. The idea that climate must be balanced directly and exclusively against the global poor is a false dichotomy. And none of this has anything to do with models. - gavin]

    Given the limitations that you describe here, I cannot imagine that is possible at this point. Yes, no sensible person is now denying that temperatures are rising and humans are largely to blame. However, the difference in impacts between a 1.0C rise in global temperatures and a 5.0C rise is tremendous. When you consider that the regulations currently being proposed would have a trivial effect on climate (I’ve seen estimates as low as a 15% reduction in global temperatures), it seems that one would need a great deal more confidence in what the actual climate will look like before claiming those regulations are a good idea.

    [Response: There is of course a big difference between 1º and 5º C - the first we are already committed to, and the latter we should work to avoid. That is precisely the point. - gavin]

  149.
    bjchip says:

    Tamino has done a pair of posts on the work of Kosaka and Xie

    http://www.nature.com/nature/journal/vaop/ncurrent/full/nature12534.html

    http://tamino.wordpress.com/2013/09/02/el-nino-and-the-non-spherical-cow/

    …which seems to me to explain a heck of a lot of mismatch.

    http://tamino.files.wordpress.com/2013/09/pogah_cru.jpeg

    Have I understood correctly that the models do not predict the ENSO phenomenon but simply place it randomly, and that when it is placed correctly the model works THAT well?

    My other understanding is that ENSO is triggered by wind conditions that we don’t yet predict well enough.

    respectfully
    BJ

  150.
    Ray Ladbury says:

    Bradley McKinley,
    You pile false premises on top of false premises. First, as Gavin points out, the need for action is not predicated on the success or failure of GCMs. It is predicated on observations and established science–that CO2 is a greenhouse gas, that humans are increasing CO2 in the atmosphere and that doubling CO2 will raise global temperatures by about 3 degrees. This last fact is established beyond reasonable doubt by multiple lines of evidence independent of GCMs.
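
    As a back-of-envelope check on that “about 3 degrees” figure: the standard logarithmic approximation for CO2 forcing (Myhre et al. 1998) gives roughly 3.7 W/m² for a doubling, so a 3 K equilibrium response corresponds to a sensitivity of roughly 0.8 K per W/m²:

```python
import math

# Standard logarithmic approximation for CO2 radiative forcing
# (Myhre et al. 1998): dF = 5.35 * ln(C / C0)  [W/m^2]
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

dF_2x = co2_forcing(560.0)  # forcing from doubling pre-industrial CO2
lam = 3.0 / dF_2x           # sensitivity in K per (W/m^2), if 2xCO2 -> 3 K

print(f"2xCO2 forcing: {dF_2x:.2f} W/m^2")
print(f"implied sensitivity: {lam:.2f} K per (W/m^2)")
```

    Note this arithmetic needs no GCM at all – which is the point: the basic magnitude comes from radiative physics plus observationally constrained sensitivity.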

    Your second false premise is that prosperity is inherently dependent on fossil fuel consumption. Not only is this false, there is plenty of reason to believe that growth will be increased in developing a new, sustainable energy infrastructure.

    Your third false premise is that current measures are meant as a “solution” to the problem. They are not. They merely buy us time–and time is the commodity we need above all else, since we have squandered 30 years due to the unreasonable demands for certainty from the likes of you.

    Your most cynical false premise is that we have no choice but to destroy the world of our progeny to maintain prosperity in our own. I refuse to even contemplate such treachery against generations yet unborn.

