Why Global Climate Models Do Not Provide a Realistic Description of Local Climate

A Portuguese translation is available here.

125 comments on this post.
  1. Joe:

    Any comments on this?

    James C. McWilliams
    Irreducible imprecision in atmospheric and oceanic simulations
    PNAS | May 22, 2007 | vol. 104 | no. 21 | 8709-8713
    http://www.pnas.org/cgi/content/full/104/21/8709

    It emphasises that the problem is not just one of decreasing grid size.

    What about the specific proposition that:

    “No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.”

    Is this a problem of principle, like sensitive dependence on initial conditions, or something that enough years of hard work might crack?

  2. Bruce Tabor:

    Thanks Rasmus,

    I did not fully grasp the concept of skilful scale. Are you saying that GCMs do not reliably represent climate to a resolution of 1 grid point and that to achieve accurate representation you need to average over about 8 grid points? Hence the “skilful scale” is 8 grid points.

    [Response:This is basically the point, yes. But there has not been much discussion about what the skilful scale has been lately, so I'm not sure if it is still true. -rasmus]
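    As a toy illustration of the skilful-scale idea (the 8-grid-point figure is taken from the question above; the field and the smoothing method are invented purely for illustration), averaging a noisy model field over that scale suppresses grid-point-scale noise:

```python
import numpy as np

# Illustrative only: smooth a noisy model field to a hypothetical
# "skilful scale" of 8 grid points with a simple box average.
rng = np.random.default_rng(0)
field = rng.normal(15.0, 2.0, size=100)   # e.g. temperature along one grid row

skilful = 8                               # skilful scale in grid points
kernel = np.ones(skilful) / skilful
smoothed = np.convolve(field, kernel, mode="valid")

# The averaged field has far less grid-point-scale scatter than the raw one.
print(field.std(), smoothed.std())
```

    The point is only that single-grid-point values carry noise the model has no skill for, while multi-point averages are far more stable.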

    Do climate scientists expect any surprises as resolution increases? Were there “surprises” between the 1980s GISS models and the latest models? I note that in our region (Australia) your minimum scale map leaves out Bass Strait (between Australia and Tasmania) and Torres Strait (between Australia and PNG). These are significant water ways for local climate and ocean currents. They are also about 150 km wide – close to 200 km – so why would they be omitted?

    [Response:One Japanese model does have a very high spatial resolution, but I don't think there are any particular surprises. Perhaps an improved resolution may provide a better representation of the MJO and the monsoon system, as well as cyclones. The very high resolution model makes very realistic pictures of the cloud and storm systems, and the guys presenting the results are fond of showing animations which look very much like satellite pictures. Quite impressive. -rasmus]

    Do GCMs capture coarse topographic features, eg the Tibetan Plateau?

    [Response:apparently not well enough. -rasmus]

  3. mankoff:

    I agree that for quantitative studies GCMs cannot be used at a regional or single-gridpoint scale. However, the results can be considered valid at a qualitative level.

    [Response:I think this is true too, but downscaling should in general add value to the simulations. -rasmus]

    I have used a VERY coarse resolution GCM (8 degrees by 10 degrees) to help plan a vacation, and it worked quite well. The data provided by the GCM was better than any guide book or even the CIA World Factbook. In addition, the GCM provided more than just temp + precip, including other useful variables like cloud cover (tanning), soil moisture and ground wetness (camping) and wind speed and direction (windsurfing).

    See my EdGCM writeup on my vacation planning for more details: http://edgcm.columbia.edu/outreach/showcase/cambodia.html

  4. Charles Raguse:

    It would be helpful to provide a useful definition of what is meant by “local”.

  5. Charles Raguse:

    No, no, no. Why doesn’t the AUTHOR OF THE ARTICLE give a useful definition of LOCAL climate!

  6. Charles Raguse:

    What is meant by LOCAL climate?

    [Response:I'm getting the message, don't worry. To me, the local climate is the set of climatic characteristics which are directly relevant to my perceptions. This would normally be on a smaller scale than a grid-box for a GCM, smaller than the 'meso scales' referred to in meteorology (more like 'meso-gamma'), and smaller than the minimum scale of most RCMs (which typically have spatial scales of ~50km, although some go down to ~10km). I define regional scales as somewhat larger: that which characterises a larger region (e.g. at meso to synoptic scales). -rasmus]

  7. DocMartyn:

    How do you manage to convert heat, as in watts m-2, into temperature?
    What is the relationship between the steady state input of energy, in watt m2, and total global atmospheric volume, pressure and temperature?

    [Response:First law of thermodynamics, but this is done in the models. -rasmus]
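    As a sketch of the simplest possible case of turning a flux in W m-2 into a temperature (a zero-dimensional textbook radiative balance, not what a full GCM actually does), one can use the Stefan-Boltzmann law:

```python
# Zero-dimensional radiative balance: a textbook sketch of how an
# energy flux in W m^-2 maps onto a temperature. A GCM does far more
# than this; it is shown only to illustrate the first-law bookkeeping.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2
ALBEDO = 0.3          # planetary albedo

absorbed = (1 - ALBEDO) * S0 / 4        # flux absorbed per unit area
T_eff = (absorbed / SIGMA) ** 0.25      # effective emission temperature

print(round(absorbed, 1), round(T_eff, 1))   # ~238.2 W m^-2, ~254.6 K
```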

  8. bender:

    Thanks for this post.

    You say:
    “Most GCMs are able to provide a reasonable representation of regional climatic features such as ENSO, the NAO, the Hadley cell, the Trade winds and jets in the atmosphere. They also provide a realistic description of so-called teleconnection patterns, such as wave propagation in the atmosphere and the ocean.”

    I would like to know, given that these (and other) regional features have not been studied all that long, how stable they are, in the mathematical sense of the term.

    A second question is: what do you mean by “*reasonable* representation”? Any links to the primary literature on either question would be appreciated.

    Thanks again. I look forward to more on the topic of how GCMs are constructed and parameterised.

    P.S. There are a couple of typos in the article: aggrigated, parametereisation

    re #5: “local” is an intentionally flexible concept. Often sub-continental, sometimes smaller, somewhere between 100 and 1000 miles.

    [Response:The best link is probably to the IPCC AR4 chapter 8. -rasmus]

  9. neal rauhauser:

    There are distributed computing projects for SETI, protein folding, and crypto cracking … but none for global climate change? OK, I’ve Googled after writing that line and I find http://ClimatePrediction.Net … but I don’t have the skill to determine if what they’re doing is valid or not.

    Why doesn’t RealClimate have a distributed computing initiative? This place certainly has the pull to get people interested in providing computing power.

    And thusly (and quite tangentially) the model resolution could improve.

    [Response:Climateprediction.net is a SETI-inspired initiative where GCMs are run as screen-savers. These GCMs are coarser than the 'normal' GCMs run on super-computers, but the vast number of runs provides high 'statistical power' (a very large ensemble yields a large statistical sample). I don't know of any initiative where distributed computing has been used for one high-resolution GCM (i.e. splitting the world into manageable chunks of computation), and I think that would be very impractical as this requires a high rate of data exchange. -rasmus]

  11. Earl Killian:

    Don’t some of the GCMs (e.g. NCAR’s CCSM3) try to address coarse grids by having models for different terrain within the grids, and then weighting them according to their area for the grid cell? Does this not help things?

    You don’t say much about what “skilful scale” means. Is this the same as some other supercomputer techniques when the grid size is not constant but instead varies as needed?

    [Response:I don't think the literature is very clear on this (try googling 'skilful scale'; I only got 20 hits!), and I thought it would be interesting to bring it up in this forum. -rasmus]

  12. bender:

    Re #5: “local” is an intentionally flexible concept, something smaller in scale than “regional”. The author defines this term in the opening paragraph in human terms, as the spatial extent over which most of us live most of our lives. Seems to me he’s talking about something the size of a state or smaller. Doesn’t much matter, as his point is that GCMs don’t work well at those small scales. Hence the attempt to define the minimum scale over which the models are skillful. “Local” is therefore anything smaller than that.

  13. Guillaume Gay:

    Thank you for the great job on this site.
    I’ve got a question somewhat related to this article. You gave insights into the spatial resolution of the climate models. I am wondering what you can say about their temporal resolution. More precisely, climate models predict the evolution of the mean temperature. Is there any prediction of the evolution of the thermal amplitudes, and on what time scale?
    Thank you again.
    G. Gay

    [Response:One important consideration is the size of the model's time-step (order of minutes), and then the type of time-stepping (integration) scheme matters. But I'm not sure what the exact answer is to this (others?). -rasmus]

  14. James:

    Re #6: [There are distributed computing projects for SETI, protein folding, and crypto cracking...]

    This comes down to a fundamental difference in computational methods. The problems you mention can be broken down into many computationally independent pieces. Each of the pieces can be worked on independently, and the results collected and combined whenever they’re done.

    Climate models (and many other problems) work on a grid. At each timestep, values are computed within each cell. Those computations depend on the values of the previous timestep, and the values in adjacent cells. That means there’s a lot of data exchange going on. This might happen in memory on a single processor machine (very fast), or via a dedicated high-speed network on a parallel machine (e.g. IBM BlueGene). But if you tried to do this over the internet, the communication time would be very much larger than the time needed to do the computations themselves.
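    James's point can be sketched with a toy 1-D diffusion stencil (purely illustrative; no GCM uses anything this simple): every cell's update needs its neighbours' values from the previous timestep, which is exactly the data that distributed machines would have to exchange on every single step:

```python
import numpy as np

# Toy 1-D diffusion stencil: each cell's new value needs BOTH of its
# neighbours from the previous timestep, so splitting the grid across
# the internet would require boundary exchange at every step.
def step(u, alpha=0.25):
    new = u.copy()
    new[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return new

u = np.zeros(64)
u[32] = 1.0                  # initial heat spike in the middle
for _ in range(100):
    u = step(u)

# After 100 steps the spike has spread to many cells: information
# propagates across the grid one neighbour per timestep.
print(np.count_nonzero(u > 1e-6))
```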

  15. Joseph O'Sullivan:

    I have been interested in the regional and local climate changes partially because, as Rasmus writes, that is what affects people’s daily lives, but also because regional and local changes will need to be better predicted to determine the ecological changes caused by climate change.

    Are there any barriers that prevent better regional and local climate predictions? For example are there problems in regional vs global models like the difference between climate and weather prediction (ie weather is chaotic and therefore less predictable than climate) that make it impossible to make better local/regional predictions, or is it just a question of researching more and developing better models?

    [Response:Good question. I suppose in theory, one could always go down in scale, and when taking it to the extreme, to the scale of atoms (quantum physics). At some point, I expect the downscaling will become impractical, at least. -rasmus]

  16. Hank Roberts:

    Google: climate distributed computing
    First two Results of about 1,130,000 for climate distributed computing. (0.20 seconds)

    BBC – Science & Nature – Climate Change …experiment used a technique called distributed computing to utilise users’ spare computing power to predict future climate. http://www.bbc.co.uk/sn/climateexperiment/theexperiment/distributedcomputing.shtml

    Distributed computing tackles climate change. Posted by Stephen Shankland. Oxford University and the BBC have launched a partnership …. news.com.com/8301-10784_3-6041683-7.html

  17. John Mashey:

    re: #6
    #10 is right, but in addition:

    The problems for which @home distribution works have some other characteristics as well:

    EITHER:
    1) There are a large number of independent input cases, each of which can be analyzed separately, using a modest amount of input, and yielding a simple answer that is easy to verify:

    a) YES/NO (“hello Earth. Why aren’t you answering us?”)
    or INTERESTING/NOT INTERESTING, as particle physicists have long done, i.e., send an event to each free machine, have it crunch for a while, and say whether or not something interesting happened worth further analysis.

    b) A few numbers, as in crypto-cracking: “here are the prime factors”, which may take a long time to find, but are trivial to verify by multiplying them back together.

    c) A number, which is mainly of interest if it’s the best found so far, i.e., as in Monte Carlo approaches to protein folding or Traveling Salesman routing problems.

    OR
    2) One is doing a Monte Carlo simulation where there is no right answer, but one is interested in generating an ensemble of results, and analyzing the distribution, i.e., “Do you have enough money to retire?” A delightful short piece on such is Sam Savage’s “The Flaw of Averages”: http://www.stanford.edu/~savage/flaw/

    I think the ClimatePrediction.net effort is of this sort, and it may be useful, but it doesn’t help the topic discussed in the (nice) original posting.

    Note that going from a 100 km grid to a 10 km grid means 10×10 = 100× more elements (2-D), and if it were fully 3-D, that would be 1000×. In many disciplines, people using such methods have had to do non-uniform subgridding to improve results with a given level of computing. I.e., for some parts of a model, a coarse grid gives reasonable results, but for others, one needs a much finer grid.
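    The scaling in the last paragraph is easy to make concrete (the CFL-style timestep factor is an added assumption of mine, not in the comment):

```python
# Cost of refining a model grid from 100 km to 10 km spacing.
coarse_km, fine_km = 100, 10
refine = coarse_km // fine_km          # refinement factor: 10

cells_2d = refine ** 2                 # 100x more grid columns (2-D)
cells_3d = refine ** 3                 # 1000x if the vertical scales too

# A CFL-type stability limit (my added assumption) also forces roughly
# refine-times more timesteps, so the naive total cost is closer to
# 1000x (2-D) or 10000x (3-D).
print(cells_2d, cells_3d, cells_2d * refine, cells_3d * refine)
# prints: 100 1000 1000 10000
```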

  18. J Bloom:

    This is all very interesting but as far as I’m concerned fails to give an accurate enough picture of the processes at work behind the differences in predictions.

    What I would like to see is a much more detailed run-through of the differences in local predictions by models and concurrently, the unique ways in which they simulate natural processes.

    If 8 grid points represents a skillful representation and 1 does not, can you give a relevant example of how regional dynamics cancel each other out at that scale?

    [Response:I don't think it's necessarily a matter of 'cancelling out' but rather of giving a 'blurred' picture, or for instance a spurious geographical shift of a few grid points due to e.g. approximations of local grid-point-scale spatial gradients. -rasmus]

  19. John:

    I am running a distributed model for climateprediction.net. I would just as happily run one for realclimate. I am not a scientist and real climate has given me the insight and the links to further education which enable me to refute the flat earth rightwingers on the political board I frequent. They are no longer as dismissive and abusive as they were two years ago, when I began studying the information on your website. Thanks and if you need some of my cpu cycles, I will be glad to donate.

    [Response: Watch this space... - gavin]

  20. Ike Solem:

    I’m curious about this example of a regional-local prediction:

    Model Projections of an Imminent Transition to a More Arid Climate in Southwestern North America, Seager et al. Science 2007
    Abstract: “How anthropogenic climate change will impact hydroclimate in the arid regions of Southwestern North America has implications for the allocation of water resources and the course of regional development. Here we show that there is a broad consensus amongst climate models that this region will dry significantly in the 21st century and that the transition to a more arid climate should already be underway. If these models are correct, the levels of aridity of the recent multiyear drought, or the Dust Bowl and 1950s droughts, will, within the coming years to decades, become the new climatology of the American Southwest.”

    This is within the ‘skillful scale’. However, if one wants to know the effect on California’s Central Valley (an area of massive agricultural production) and on Sierra Nevada snowpack levels, it seems the models still lag behind the observations – but water districts should probably be focusing on long-term conservation strategies right now.

    This paper, based on observations, seems to provide support for Seager et al :Summertime Moisture Divergence over The Southwestern US and Northwestern Mexico, Anderson et al GRL 2001

    They note that moisture divergence over the American Southwest increased in 1994, in line with model predictions of persistent drought.

    Droughts are a common feature of the ‘unperturbed climate system’ but this change appears attributable to anthropogenic climate change. Seager et al report that the persistent drying is due to increased humidity, which is changing atmospheric circulation patterns and leading to a poleward expansion of the subtropical dry zones. Increases in atmospheric humidity are a long-standing prediction of the effects of increased anthropogenic greenhouse forcing.

  21. wayne davidson:

    Surely very high resolution experiments have been done on smaller time scales covering wide regions; wasn’t there any significant change in the results?

  22. Johnno:

    Local drying would be good to predict for urban planning; for example can Las Vegas USA or Adelaide Australia sustain their current populations? A year or two ago AGW was interpreted as ‘wetter everywhere’ but now that doesn’t seem to be the case.

  23. DocMartyn:

    How do you manage to convert heat, as in watts m-2, into temperature?
    What is the relationship between the steady state input of energy, in watt m2, and total global atmospheric volume, pressure and temperature?

    [Response:First law of thermodynamics, but this is done in the models. -rasmus]

    Is that sarcasm?

    How do you calculate the amount of heat in the atmosphere, in terms of volume, pressure and temperature ?

    [Response:You can use the ideal gas laws or the radiative balance models, depending on the situation. Besides, you may in some cases need to consider the latent heat associated with vapour concentrations and the condensation processes. But I'm not sure that I understand your point. -rasmus]

  24. Christopher Subich:

    Re: Climate versus weather

    One way of thinking about the results of a climate model is that “climate is what you expect, weather is what you get.”

    Consider a simple box-model for the Earth: heat goes in, heat goes out. There’s only one number for temperature, and that spans the “grid” of the entire Earth. Everything else, like the difference between Nunavut and Napa Valley, would be “weather” to the model.

    As computer models become better, we can see more detail, both in time and space. Parts of “classical” climate, like the ENSO, start showing up accurately — but this would still be “weather” to the whole-Earth-box.

    If we had a superfine model, that simulated at whatever timestep and resolution you wanted to name, then we’d be able to refine the definitions. Climate would be what happens over an ensemble of initial conditions, and weather would be what happens if you plug in conditions as-of-now. Until that point, we have to look at averages over both initial conditions and over insufficient resolution.
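    The ensemble-vs-single-run distinction can be illustrated with a toy chaotic system (the logistic map, a stand-in chosen here purely for illustration, not a climate model): individual trajectories from nearly identical initial conditions diverge ("weather"), while their long-run statistics barely change ("climate"):

```python
import numpy as np

# Toy illustration: the chaotic logistic map as a stand-in for weather.
def run(x0, n=1000, r=3.9):
    traj = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        traj[i] = x
    return traj

a = run(0.400000)
b = run(0.400001)                 # almost the same initial condition

# "Weather": the two trajectories decorrelate after a few dozen steps,
# so pointwise differences grow to order one despite the 1e-6 start.
print(np.max(np.abs(a - b)))

# "Climate": the long-run means of the two runs stay close.
print(abs(a.mean() - b.mean()))
```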

  25. Christopher Subich:

    Re: Distributed climate models

    Unfortunately, climate models are just plain big. The Earth is a big place by itself, and a research-quality climate model needs a relatively fine resolution over the surface.

    Even though they’re physically separated, different parts of the Earth also affect each other, climate-wise. In terms of weather, just consider the “butterfly effect.” It’s still present in average climate, albeit (generally) less pronounced.

    Combine both of these points, and you have a lot of data (for the globe) that depends on the entire lot of it at prior times. This sort of communication is what distributed systems like SETI@Home are really bad at. That’s why full-resolution climate models are most suited for a single supercomputer or a cluster of tightly-connected machines.

    That’s not to say that there’s nothing left for researchers without massive computer budgets. As the article hinted at, there’s a lot of work done in the “subgrid-scale” modelling. That encompasses most of the physics that actually happen, only on a physical scale that would be otherwise invisible to the global model. They’ll never be perfect (because they’re estimated, rather than directly simulated), but improving those models would go a long way to helping the accuracy of global climate models.

  26. Barton Paul Levenson:

    [[How do you manage to convert heat, as in watts m-2, into temperature?
    What is the relationship between the steady state input of energy, in watt m2, and total global atmospheric volume, pressure and temperature?
    [Response:First law of thermodynamics, but this is done in the models. -rasmus]
    ]]

    Don’t forget the ideal gas law.

  27. DocMartyn:

    Do you have any evidence that the Earth’s air pressure and volume have remained constant over the last 200 years?
    Do you have any evidence that the Earth’s atmosphere has had the same water content over the last 200 years?
    How do you calculate virtual heat along the atmosphere’s vertical axis, given the large changes in air temperature and pressure, when you convert from a non-ideal gas when applying the ideal gas law?

    [Response:You mean, do I have any evidence that the mass of Earth's atmosphere has changed over the last 200 years (the sea level pressure provides a measure of the air masses above, and I'm presuming you are not suggesting that Earth's gravity, i.e. Earth's mass, has changed appreciably)? What do you mean by 'the air volume'? The atmospheric extent? This is irrelevant if you are looking at the air property in a unit volume (m^3). I still don't get your point; how is this relevant to the GCMs' ability to describe small-scale climatic features? -rasmus]

  28. Nigel Williams:

    This scale problem is mirrored in many other disciplines. In transportation modelling, for example, we can run very general models with just a few well-understood parameters, and get results that are generally reasonable matches with observations. So many households, workers etc gives so many trips in a day within a region.

    As we move to increasingly disaggregated models (finer scale) we have to consider more detailed parameters (about which we have progressively less confidence as we strive for finer detail) and we produce increasingly more detailed outputs which look very plausible, but which have a progressively higher chance of giving the wrong answer about what is going on in the street outside your house at dinner time on Thursday.

    At the end of the model re-fining process we are trying to observe electrons, for which (it seems) we can either know one state or another, but not both at the same time.

    I’m sure the climatologists are aware of this. It’s easy to produce a fancy model that looks good, but as the level of detail increases the rubbish-in-rubbish-out factor increases significantly. It’s better to run with a robust broad-scale model, and apply common sense to the interpretation of sub-regional effects, than to run a rubbishy micro-model that cannot be shown to be exactly true anyplace at any given time.

    It’s a matter of confidence, and understanding. If you cannot mentally get your head around the input parameters with a high level of understanding and confidence, then it’s very hard to defend the results. Take it easy.

    Besides, we really do know all we need to know already, don’t we? Whether it’s going to rain at your place on 1 July or not makes no difference to the main conclusions. It’s going to get hotter, the ice is going to melt, the sea is going to rise – even if we can hit 2050 emission targets. The horse has bolted for the hills; all we can do is figure out how to run after it!

  29. Hank Roberts:

    >Earth’s air pressure
    Torricelli

    See “expansion tectonics” or “expanding Earth” for the argument that the Earth’s air pressure has changed — along with its mass, and diameter. It’s a religious alternative to geology.

  30. Timothy Chase:

    Nigel Williams (#28) wrote:

    This scale problem is mirrored in many other disciplines. In transportation modelling, for example, we can run very general models with just a few well-understood parameters, and get results that are generally reasonable matches with observations. So many households, workers etc gives so many trips in a day within a region.

    Oh, I am not sure you want to compare this to transportation modeling – or at least not highway performance modeling.

    In forecasting future traffic patterns with increased populations, traffic modellers fall back on a formula which works quite well except in periods of high congestion – exactly where you would want to have it work the best – assuming you are concerned with vehicle hours of delay or the ratio of freeflow speed versus projected actual speed. The problem is that they calculate speed as a function of volume (vehicles per hour passing a given point). In high congestion, there is a point you hit where the volume will remain constant but the speed drops by several factors for several hours. When congestion finally begins to fall, volume will remain constant with speed gradually rising until a point is hit at which traffic volume begins to fall and a one-to-one relationship between volume and speed is reestablished.

    Additionally, if anyone does the post-processing other than the modellers, you might want to be sure that they know how to properly calculate normalized performance measures. In my state, the fellow who implemented those calculations was under the impression that the averages were largely subjective – which didn’t help any. For example, he did a weighted averaging of speed by vehicle miles.

    In case some don’t see the problem with this, imagine that you are dealing with just a single car where there are two legs to a given trip. In the first leg, the car travels 50 mph for one hour, then in the second it travels 10 mph for 5 hrs. Using his weighted averaging, the car’s average speed is 30 mph. However, the car has travelled 100 miles in 6 hrs, giving a real average speed of 16.67 mph.
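    Timothy's example is the classic arithmetic-vs-harmonic averaging pitfall; the numbers in his paragraph can be reproduced directly (the variable names are mine):

```python
# Two legs of one trip: 50 mph for 1 hour, then 10 mph for 5 hours.
legs = [(50.0, 1.0), (10.0, 5.0)]        # (speed in mph, duration in hours)

miles = sum(v * t for v, t in legs)      # 50 + 50 = 100 miles
hours = sum(t for _, t in legs)          # 6 hours

# Wrong: weighting speed by vehicle-miles, as in the anecdote.
wrong = sum(v * (v * t) for v, t in legs) / miles

# Right: total vehicle-miles over total vehicle-hours.
right = miles / hours

print(wrong, round(right, 2))            # 30.0 and 16.67
```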

    Thanks to their unusual way of doing math, by the time you aggregated by both time and location, all but one out of over a hundred performance measures were wrong. The only thing they were calculating correctly was length of highway. Even the lane-mile calculations were off: they were calculating it as the maximum of the two directions.

    I came in as a temp programmer, identified the problem with their calculations and then reworked the functions. (The rule is to always calculate in aggregates at each level of aggregation, then at the very last step calculate the normalized measures as simple functions of the aggregates – generally through addition and division.) Incidently, I didn’t have any background in the area, either. But I noticed the inconsistencies – and then thought about it. It was clear, for example, that average speed had to be vehicle miles over vehicle hours, and after that the rest started to fall into place. The rework made quite a difference: all of the sudden they could see the congestion in the urban areas at the aggregate level – and the rural areas were looking far better.

  31. James:

    Re #27: [Do you have any evidence that the Earths air pressure and volume has remained constant over the last 200 years?]

    The first barometers were invented in the 1700s (or maybe even earlier?) Certainly by the mid-1800s they were extremely sensitive. Not long ago I was reading a book about the eruption of Krakatoa (the title of which I’ve forgotten, alas), which described how recording barometers around the world detected air pressure waves from the explosion for many days afterwards.

    Then there’s all sorts of indirect evidence, as for instance gas laws & conservation of mass: you seem to be suggesting that the actual amount of atmosphere might somehow have changed. If so, then where did it come from/go to?

  32. DocMartyn:

    O.K. I will make it very simple, adding heat to the atmosphere can change the temperature, the pressure and the volume.

    When you model an increase in the amount of heat in the atmosphere, what is the relationship between heat input and temperature, pressure and volume?
    Secondly, have you managed to observe changes in the volume or pressure of the atmosphere; do these changes match your models?

    Finally, how can you model using the ideal gas law, when the atmosphere is a mixture of gases, some of which do not follow the ideal gas law even when studied in isolation, much less in mixtures?

    [Response:The pressure is more or less given in this case by the mass of the atmosphere above the point of interest. However, if you say that the air expands so much (due to an increase in the temperature) that the gravitational forces are reduced for the top of the atmosphere, then possibly there may be an effect. But keep in mind that the atmosphere is in essence already a very thin shell of gas anyway. I guess this effect would be very minor, if at all noticeable. Exactly which gases do not follow the ideal gas law? -rasmus]

  33. JohnLopresti:

    I appreciate the link to Chapter 8, and likely could learn much at that website. Reading the author’s development here on RC, I thought some fractal math could provide a differential layer for drilling into the overarching model’s output, or, rather, could form the granular infrastructure from which base the model could grow, topologically placing the GCM layer as the outer, visible depiction for the fractally defined substrate. But all this probably is in the math literature for the climate-weather models in existence.

  35. Dick Veldkamp:

    #32 Atmosphere modelling

    Doc, like some others here I am puzzled as to what you’re trying to say.

    If all relevant physical (gas) laws are in the GCMs (which they no doubt are), I suppose that there must be some pressure / volume change in the atmosphere if you heat it (probably small; for example, 1 deg of heating would give a 0.3% volume change if pressure is constant). But this does not seem to be very important if we are interested in temperature (which can be verified directly against measurements).

    And it seems to me that at pressures of 1 bar max the ideal gas law (PV=RT) is perfectly adequate for the problem at hand.
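    Dick's 0.3% figure follows directly from the ideal gas law at constant pressure, where V is proportional to T in kelvin (the 288 K reference temperature is my assumption for a typical surface value):

```python
# Ideal gas at constant pressure: V is proportional to T (in kelvin),
# so 1 K of warming of air near 288 K expands it by about 0.35%.
T0 = 288.0            # assumed typical surface temperature, K
dT = 1.0              # warming, K

rel_volume_change = dT / T0
print(round(100 * rel_volume_change, 2))   # 0.35 (percent)
```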

  36. DocMartyn:

    “The pressure is more or less given in this case by the mass of the atmosphere above the point of interest. However, if you say that the air expands so much (due to an increase in the temperature) that the gravitational forces are reduced for the top of the atmosphere, then possibly there may be an effect.”

    The composition of gas in the vertical cross section of the atmosphere depends on the absolute T and the molecular weight of the gas. It should also be apparent that the atmosphere is “V” shaped, so any expansion of the atmosphere (in response to an increase in energy) could have a disproportionate effect on pressure. I have a little sketch calculation looking at what would happen in terms of volume, pressure and temperature of an idealized atmosphere, without water vapor, and found it was very hard to do. I ended up with a system that was rather like a balloon: you can add heat to it and it inflates as the pressure increases. So does the atmosphere act like a balloon? Does it expand and pressurize when you add heat to it in your models?
    Another point: if it does expand, its surface area should increase and so its radiation into space should also increase.

    “Exactly which gases do not follow the ideal gas law?”

    The only gases that follow the ideal gas laws are monatomic (He, Ne, etc.). Diatomic gases lose (at least) one degree of freedom along an axis, and triatomic gases like CO2 behave with a large degree of non-ideal character.
    The people who really know about this stuff are into air conditioning and refrigeration; the basics are here:

    http://www.zanderuk.com/tech_Centre.asp?chapter=1&section=2_Compression_3.htm&getIndex=false

    The quantum mechanics of simple triatomic gases, like CO2, has also been explored (but is way over my head) and is a little odd, to say the least.

    On a practical level, the ratio of the constant-volume and constant-pressure heat capacities, CV and CP, depends on the degrees of freedom and is different for mono-, di- and triatomic gases.

    http://www.whfreeman.com/college/pdfs/halpernpdfs/part04.pdf

    The real problem is that gamma can change at phase transitions; the real fly in the ointment must be modelling the CV/CP ratio at, or near, dew points.

    So just how do you manage to model it?

    [Response:We are not talking about air conditioners and compression in the case of the atmosphere, and most of the atmosphere consists of gases with diatomic molecules. CO2 is a trace gas with ~380 parts per million in volume (ppm), and doesn't play a big role in terms of volume and pressure (it's more important when it comes to radiative properties). So, as far as I know, the ideal gas laws still provide a good approximation to the atmosphere's behaviour. It seems to work for numerical weather models used in daily weather forecasts :-) -rasmus]

  37. Timothy Chase:

    JohnLopresti (#33) wrote:

    I appreciate the link to Chapter 8, and likely could learn much at that website. Reading the author’s development here on RC, I thought some fractal math could provide a differential layer for drilling into the overarching model’s output, or, rather, could form the granular infrastructure from which the model could grow, topologically placing the GCM layer as the outer, visible depiction of the fractally defined substrate. But all this is probably in the math literature for the climate-weather models in existence.

    I at least know they have been thinking something along those lines…

    Introduction

    The fractal nature of the rainfall processes is an accepted behaviour and numerous studies have been published during the last decades. It can be cited Lovejoy and Mandelbrot (1985), Rodriguez-Iturbe et al. (1989), Olsson et al. (1993), Hubert et al. (1993), Tessier et al. (1996), Harris et al. (1996), Veneziano et al. (1996), Svensson et al. (1996), Lima and Grasman (1999), Mazzarella (1999), Mazzarella and Tranfaglia (2000), Sivakumar (2001a, b), Sivakumar et al. (2001) and Salas et al. (2005), among many others. These references include a set of concepts (multifractality, chaotic behaviour, time persistence, predictability) applied to a variety of topics as rain intensity, annual amounts, precipitation caused by convective storms, characterisation and comparison of different climates and design and improvement of rain-gauge networks.

    Lacunarity, predictability and predictive instability of the daily pluviometric regime in the Iberian Peninsula
    M. D. Martinez, X. Lana, A. Burgueno, and C. Serra
    Nonlin. Processes Geophys., 14, 109–121, 2007

    … for a while…

    Fractal analysis of climatic data: Mean annual temperature records in Hungary
    L. Bodri
    Theoretical and Applied Climatology, Volume 49, Number 1 / March, 1994

    Temporal and spatial persistence in rainfall records from Northeast Brazil and Galicia (Spain)
    Miranda, et al.
    Theoretical and Applied Climatology, Volume 77, Numbers 1-2, March 2004, pp. 113-121(9)

    Likewise, when it comes to simulating the evolution of past climates, they will use slightly pink noise in the proxies, although it is much closer to white noise – about 15% red or less, if I remember correctly, where red noise would be the 1/f scale-free variety, similar to that found in music and falling rain. This was brought up in a guest commentary not too long ago:

    24 May 2006
    How Red are my Proxies?
    Guest commentary by David Ritson
    http://www.realclimate.org/index.php/archives/2006/05/how-red-are-my-proxies

    Pink noise realistically mimics the actual noise found in the record, I believe.

  38. Nigel Williams:

    Timothy, your “all but one of over one hundred performance measures were wrong” comment supports my point. You can make simple models work well and give reasonable answers within their scope, but as the number of parameters increases, the chances of the inputs, and hence the outputs, being wrong in detail increase. So we start out being able to speak quite confidently about one thing, and in the end we are able to say nothing certain at all about absolutely everything. Nuf said.

  39. Dave Blair:

    #32 and #35. If 0.3% is the expected change in atmospheric pressure for each degree change then it would not be minor and it should be measurable.

  40. Sean O:

    This is an excellent overview of the technology and its limitations. I will suggest that my readers on my site (http://www.globalwarming-factorfiction.com) jump over here to read this. I have been trying to explain the limitations of climate modeling piece by piece for some time and this article does a great job of explaining the entire spectrum.

    I agree with others who have commented that it would be great to have a SETI-type distributed process. I understand that the current implementations of this type of project minimize some of that advantage, but the SETI techniques are quite old and distributed computing algorithms have evolved quite a bit over the last 3-5 years. If NOAA (or another government agency) would put some effort into this area, I think we would be surprised. Perhaps a campaign to write our elected officials to fund that might make sense. I may do this on my site if I can figure out the details.

  41. Hank Roberts:

    Wait. Adding heat to air makes it heavier?
    What’s the source for “0.3% is the expected change in atmospheric pressure for each degree change”?

  42. graham dungworth:

    Re #27 & #31. These comments haven’t been adequately answered.
    The question concerns whether the mass of the atmosphere has been conserved over a 200-year timeframe. It obviously hasn’t over geological history, but that is not relevant here. How sensitive is temperature to pressure? I assume that at ca. 1 bar ± 0.1 bar the atmosphere behaves ideally, as noted (PV=nRT).
    The adiabatic lapse rates are considerable:
    http://commons.wikimedia.org/wiki/Image:795px-Emagram.gif Were the mass of the atmosphere 10% greater at sea level, i.e. 110 kPa, one would extrapolate mean temperatures ca. 8 Celsius higher than at present.
    A direct empirical observation would be the mean temperature regime along the Dead Sea, well below present sea level.
    As temperatures rise, the oceans and soils will degas according to Henry’s Law, and atmospheric pressure will rise not solely because of PVT but also because the mass of the atmosphere is no longer conserved.
    I presume that climate models assume only that the mass of the atmosphere is conserved over centuries, i.e. ca. 5.1×10^21 g, and that this mass doesn’t change upon degassing.

  43. Paul M:

    Would a butterfly flapping its wings make a model diverge to the point of being useless? That is, a model really can’t go down too small unless there is both computing power and data that would make the model somewhat accurate.

  44. Timothy Chase:

    Paul M. (#38) wrote:

    Would a butterfly flapping its wings make a model diverge to the point of being useless? That is, a model really can’t go down too small unless there is both computing power and data that would make the model somewhat accurate.

    Seek and ye shall find…

    Although ultimately chaos will kill a weather forecast, this does not necessarily prevent long-term prediction of the climate. By climate, we mean the statistics of weather, averaged over suitable time and perhaps space scales (more on this below). We cannot hope to accurately predict the temperature in Swindon at 9am on the 23rd July 2050, but we can be highly confident that the average temperature in the UK in that year will be substantially higher in July than in January. Of course, we don’t need a model to work that out – historical observations already give strong evidence for this prediction. But models based on physical principles also reproduce the response to seasonal and spatial changes in radiative forcing fairly well, which is one of the many lines of evidence that supports their use in their prediction of the response to anthropogenic forcing.

    4 Nov 2005
    Chaos and Climate
    By James Annan and William Connolley
    http://www.realclimate.org/index.php/archives/2005/11/chaos-and-climate

    I used the website’s search engine, but I had seen it before.

  45. ray ladbury:

    DocMartyn,
    A debating hint: condescension is usually much more effective when one has a clue what one is talking about. To a first approximation, most gases behave as ideal gases, so any effects would be 2nd order. And yes, I would expect that the atmosphere does indeed expand due to increasing temperature; you’d expect this just from the fact that the molecules have increased energy. However, the atmosphere does not heat equally, and I really don’t see what it has to do with the subject at hand, or much of anything else. As to temperature vs. radiation field, see
    http://en.wikipedia.org/wiki/Stefan-Boltzmann_Law
    and
    http://en.wikipedia.org/wiki/Albedo

  46. FurryCatHerder:

    Please, keep in mind that the earth’s atmosphere is NOT in a sealed container. In 1998 when solar heating dramatically warmed the upper atmosphere, the atmosphere expanded and the result was an increase in the drag on orbiting satellites. Solar flares and other events are known to cause changes in the upper atmosphere that affect the thickness of the atmosphere. This has been fairly well studied as satellites require more effort to maintain orbit when solar activity increases.

    My guess is that what should be looked at, rather than pressure, is density. I’m sure that PV = nRT hasn’t been repealed just yet …

  47. bender:

    Re #38 Higher model resolution and better data on initial conditions do not solve the problem of chaos (sensitivity to small changes in initial conditions) in weather models, if that’s what you’re asking. As far as the climate models go, chaos in the time domain is not the issue – not for something like a global mean, anyways. What is an issue is the related problem of structural instability, which is sensitivity of model output to small changes in model formulation. i.e. Switch one flavor of module for another, say, better one, and there is no guarantee the new ensemble will behave in a predictable manner, let alone better. Not only will the output change metrically, but it will also change topologically. That means that entirely new circulatory features (ENSO, PDO, NAO, etc.) could emerge from the model as a result of highly altered attractor geometry. Same butterfly, totally different impact, all because of a change in one module. If these changes are deemed unacceptable by the modeler then the model will presumably need to be recalibrated by tuning the free parameters available from the non-physics-based portion of the model.

    Of course this explanation comes from a non-expert, so I would pass it by someone more qualified before taking any of it on faith.

    If you are trying to assert that butterflies can’t alter the global circulation, then I would ask for some hard proof of that assertion. Those that argue that “weather is chaotic, climate is not” would probably agree with you, but then again I’m not sure they’ve really made their case. If I’m wrong I would be happy to review some material from the primary literature. After all, we haven’t been observing the global circulation all that long. How do we know empirically which circulatory features are stable and which are unstable?

  48. Dick Veldkamp:

    #41 Hank

    0.3% just follows from the gas law pV=RT. If T goes from 288 K (Earth’s mean surface temperature) to 289 K at constant p, then V must increase by 1/288 ≈ 0.3%.

    Of course it’s more complicated than that (for example, the atmosphere does not heat uniformly, and is colder the higher you go), but this was just to show that atmospheric expansion is probably not a big deal.
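    This back-of-envelope figure is easy to verify in a couple of lines (a sketch assuming the ideal gas law at constant pressure, with the temperatures used above):

    ```python
    # Fractional volume change of an ideal gas at constant pressure:
    # from pV = nRT, dV/V = dT/T when p is held fixed (Charles's law).
    T0 = 288.0  # reference surface temperature, K
    T1 = 289.0  # after 1 K of warming

    dV_over_V = (T1 - T0) / T0
    print(f"Relative volume change: {dV_over_V:.2%}")  # ~0.35%, i.e. roughly 0.3%
    ```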

  49. Toby Sherwin:

    Thanks for this explanation. The way I describe the problems of resolution to the lay person is to relate them to the result of a football match. If Celtic (top of the SPL this season) were to play East Stirling (bottom of the 3rd division), we could all predict the result with some confidence. That’s where I guess GCMs are right now. Some experts might want to hazard the more difficult task of predicting the score. That, I guess, is where decadal regional models would like to be. But if anyone said they could forecast not only the score, but also the scorer and the time of each goal with confidence, then we’d all say they were nuts.

  50. BR:

    From the text above: “We were hoping for important revelations and final proof that we have all been hornswoggled by the climate Illuminati”

    As a climate sceptic I’d like to comment that it is never “conspiracy” that we suspect to lie at the basis of the CO2 hype.

    It is a time spirit working here, one of rebellion against the industrial revolution, a romantic longing for a virgin world, that brings people to target our capitalist consumption society.

    A time spirit, not a conspiracy.

  51. Nigel Williams:

    As if the truth was not enough…! Gavin, I presume you will lead us through this latest seminal paper by J Hansen et al.

    Dangerous human-made interference with climate: a GISS modelE study:-

    http://www.atmos-chem-phys.net/7/2287/2007/acp-7-2287-2007.pdf

    Quote: ‘CO2 emissions are the critical issue, because a substantial fraction of these emissions remain in the atmosphere ‘forever’, for practical purposes (Fig. 9a). The principal implication is that avoidance of dangerous climate change requires the bulk of coal and unconventional fossil fuel resources to be exploited only under condition that CO2 emissions are captured and sequestered. A second inference is that remaining gas and oil resources must be husbanded, so that their role in critical functions such as mobile fuels can be stretched until acceptable alternatives are available, thus avoiding a need to squeeze such fuels from unconventional and environmentally damaging sources.’

    No problem.

  52. bender:

    Timothy Chase in #44 cites the essay by Annan & Connolley as an authority on the subject of chaotic climate. But that essay is curiously written, the first part very good, the latter part containing errors, and curiously chosen qualifiers… [cut]

    [Response:If you think there are errors, do feel free to mention them; I'm not aware of any. For a fuller view of my opinions on this, see http://mustelid.blogspot.com/2005/06/climate-is-stable-in-absence-of.html - William]

    I think that the jury is still out on this question. Correct me if I’m wrong. We haven’t been observing the climate system long enough at high enough resolution to be able to say with confidence whether abrupt regional shifts can occur unpredictably in response to the internal dynamics of global heat transfer. My hunch FWIW is they can, and I sense this was Pielke’s point in that thread… [cut]

    Maybe the experts here can answer me a related question. Are there potentially multiple global circulatory ‘modes’ (for lack of a better word), and if so, would all these modes be equal in their capacity to dissipate global heat to space? My hunch is that different modes are possible, and that there is no reason to expect them to be equal in dissipative efficiency. If so, then abrupt climate change (regional, if not global) can be expected through internal chaotic dynamics alone.

    [AGW alarmists, please note I am not denying anything about the 20th c. temperature trend. I am referring to past climate, something we are forced to view through a foggy lens that becomes increasingly foggy the further we look back.]

    To be concrete, consider the example of a coastal city whose climate is warmed by a warm ocean current. If the circulation were to shift, and that current now ran cold, that climate would shift abruptly colder and drier. First, I do not believe the occurrence of such shifts is predictable. Second, I believe they can arise dynamically, without the assistance of any forcing agent. Third, if such shifting were to take place at multiple locations at inter-continental scales, there’s no telling in advance how the global temperature might change. I invite you to overturn these belief statements.

    Apologies in advance for awkward use of climatological language.

  53. Barton Paul Levenson:

    Attention pressure-change folks — as long as the Earth’s volume is unconstrained, and it is, its surface pressure is not going to vary very much. Basically the surface pressure is going to be:

    P = (M / A) g

    where M is the mass of the atmosphere (say, in kg), A is the Earth’s surface area (square meters), and g the surface gravity (meters per second squared). The answer comes out in Pascals. Working it backwards from sea-level pressure you get a figure of 5.27 × 10^18 kg for the mass of the atmosphere. The actual figure is about 5.14 × 10^18 kg, because some of the volume that would be atmosphere is taken up by surface relief.

    Canonical values for those who want to play with the equation: reference atmospheric pressure is 101,325 Pascals, the Earth’s surface area is about 5.1007 × 10^14 m^2, and g averages 9.80665 m s^-2.
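    Inverting the equation with these canonical values reproduces the quoted mass (a sketch; the small excess over the actual ~5.14 × 10^18 kg is the surface-relief effect mentioned above):

    ```python
    # Invert P = (M / A) * g to get the implied atmospheric mass.
    P = 101325.0    # reference sea-level pressure, Pa
    A = 5.1007e14   # Earth's surface area, m^2
    g = 9.80665     # mean surface gravity, m s^-2

    M = P * A / g   # kg
    print(f"Implied atmospheric mass: {M:.3e} kg")  # ~5.270e18 kg
    ```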

    There are pressure variations due to local weather and such physical effects as Bernoulli’s law, which relates pressure and wind velocity. But it’s never very much.

    Over geological time, the Earth’s air pressure has probably varied significantly. A primordial hydrogen-helium atmosphere may have given way to a massive steam atmosphere due to the heat of accretion and outgassing, which in turn gave way to a carbon dioxide atmosphere, etc. But the Earth’s atmosphere has had a very consistent makeup for the last several million years at least.

  54. Barton Paul Levenson:

    Bender,

    Your whole argument is fundamentally flawed. You are ignoring the fact that the models have already made a number of predictions that have panned out. They predicted global warming, polar amplification, stratospheric cooling, and the magnitude of the cooling from the eruption of Mount Pinatubo, all of which have been confirmed empirically. When we can see that the models work, arguments that they can’t work are out of court from the beginning. As Heinlein put it, when you see a rainbow you don’t stop to argue the laws of optics. There it is, in the sky.

  55. copyworld:

    People just need to realise about this problem

  56. graham dungworth:

    Re #45: Ray misses the point; the question isn’t about intensive properties.
    There is a mean barometric pressure at my locale, averaged over many years; let’s say exactly 760 mm, or 1 bar. If I diligently collect daily data for a year and the average pressure for that year is 765 mm, one interpretation might be that there has been fairer weather there than usual for that particular year.

    Who measures the possible change in the absolute air pressure globally?

    The composition of the atmosphere in ppm by volume may remain constant, apart from the mean increase year on year of ca. 2 ppm by volume of CO2. Hence, if you refuse to discuss it, fair enough. I presume you assume the mass of the atmosphere is conserved. I would be surprised if it were conserved even over a period as short as a century. Were it increasing, although it wouldn’t figure in the radiation balance, it certainly would further enhance the GHG effect.
    [edited]

  57. Ray Ladbury:

    B.R. Says in #50:”It is a time spirit working here, one of rebellion against the industrial revolution, a romantic longing to a virgin world, that brings people to target our capitalist consumption society.”
    Sorry, this is absolute horse puckey. B.R., do you even know any scientists? I doubt most of these guys even own a copy of Walden Pond! And if they do, it’s probably on their iPod. There is no spirit of rebellion. This is not about ideology, but rather about evidence. The science is pretty much incontrovertible–we are changing the climate. What is less certain is what the effect of these changes will be. However, we will be better able to deal with those changes if we manage the rate at which they occur by limiting our greenhouse gas emissions. If you want to protect our “capitalist society”, you had better act now before draconian measures are needed.

  58. DocMartyn:

    “Attention pressure-change folks — as long as the Earth’s volume is unconstrained, and it is, its surface pressure is not going to vary very much.”

    If P = (M / A) g, will not V, T and P all vary during the day/night cycle?
    How do you model the changes in V, T and P of a water-rich gas when the day/night cycle involves water precipitation?
    This is an actual question, not point scoring. What happens to the gas law assumptions when you are dealing with the transition from a single- to a two-phase system, that is, water as a gas and then water in the form of droplets? Are the changes manifest in temperature, pressure or volume?
    Is a model of the Earth’s atmosphere as a balloon reasonable?

  59. Ray Ladbury:

    DocMartyn, in terms of global properties like whether the atmosphere obeys the ideal gas law, consider that ~80% of the atmosphere is N2, with most of the rest being O2, argon… Most of these molecules are rather inert and stable, so any deviations from ideality are small and mainly observable only at very high or very low pressures.
    The atmosphere behaves as a fluid held in place around Earth by the gravitational field. The geomagnetic field is what insulates it from the solar wind, so the outer layers are not ripped away. In terms of expansion, I would think that this would be dominated by the outer atmosphere, which accounts for most of the volume and little of the mass. The thing is, we know how gases behave as a function of temperature and pressure. There’s no new physics here.

  60. Timothy Chase:

    Nigel Williams (#38) wrote:

    Timothy, your “all but one of over one hundred performance measures were wrong” comment supports my point. You can make simple models work well and give reasonable answers within their scope, but as the number of parameters increases, the chances of the inputs, and hence the outputs, being wrong in detail increase. So we start out being able to speak quite confidently about one thing, and in the end we are able to say nothing certain at all about absolutely everything. Nuf said.

    Hardly.

    What it supports is the conclusion that you shouldn’t have a local yahoo and total nitwit in charge of the equations. They work very well when they are the correct ones, as the gravity model does. And people want to save time going from point A to point B; therefore the gravity model works quite well.

    However, in modeling traffic they run into a real problem, because they base their equations on one that calculates speed as a function of volume, which assumes a one-to-one relationship. That assumption breaks down in the context of high congestion, where a given volume (cars per hour passing a given point) corresponds to a continuum of speeds.

    And what can we conclude from that? Volume isn’t a very good variable for calculating speed in times of high congestion. Unfortunately, it is pretty much all they have. Density (vehicles per lane-mile) would work far better at all speeds. Unfortunately, it is far more difficult to measure density: you can’t just run a weight-sensitive rubber pipe across a road to measure that.

    However, in climate modeling, which is admittedly far more complicated, you have many more variables to work with, and a great deal of climate modeling is based on principles of physics (quite possibly all of it). Of course, one might ask whether there might be some unknown, or perhaps even unknowable, force that will suddenly cause our models to break down: no matter how well they might work, reality takes a left when we were expecting a right, even though our equations were quite accurate up to that point, accurately describing a whole host of phenomena in a large variety of contexts up until that magical moment.

    But one could say the same thing about physics.

    There are problems with the climate models we currently have. We know that their estimates are conservative. We haven’t taken into account all of the positive feedbacks relating to ice or the carbon cycle. But virtually all of the feedbacks that we know we are missing are positive, and as such, we are able to say that the models are conservative. Given the positive feedbacks, it is quite likely that projections are rosier than what we will actually face, and therefore the urgency with which we should act is greater than what they currently suggest. However, the carbon cycle is an area of active research, as is the cryosphere, so it is quite likely that models will be even more accurate in the future.

    This rises above politics, gentlemen. I am uncomfortable with environmentalism, and I often think it is taken too far, but this is science. Given what is at stake, we must learn to work together despite our differences.

  61. Dan Pawlak:

    When you say “skillful scale” do you just mean effective resolution?

    Think of a sine wave. Regardless of the grid size (or equivalent grid size for a spectral model), you need two grid lengths (or three grid points) to minimally resolve half of the wave. You need four grid lengths (five grid points) to minimally resolve a full wave pattern. In numerical weather prediction, the effective resolution is typically taken to be about ten grid lengths. Phenomena smaller than that are not well-resolved at grid scale, so they should be parameterized.

    So, a regional numerical weather prediction model with grid resolution of 10 km can adequately resolve meteorological phenomena of length scale ~100 km.

    GCMs these days may have resolutions of around 1 deg, or 111 km, so their effective resolution would be more than 1000 km. That’s the length scale of features that they can adequately resolve.
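    The rule of thumb above can be written down explicitly (a sketch; `effective_resolution` and the factor of ten are simply the convention described in this comment, not a standard library function):

    ```python
    def effective_resolution(grid_length_km, factor=10.0):
        """Smallest length scale (km) considered well resolved,
        taken as ~10 grid lengths per the NWP rule of thumb."""
        return factor * grid_length_km

    # Regional NWP model with 10 km grid spacing:
    print(effective_resolution(10))    # resolves ~100 km features
    # ~1-degree GCM (about 111 km grid spacing):
    print(effective_resolution(111))   # resolves ~1110 km features
    ```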

  62. Hank Roberts:

    Bzzt! The link behind the name in #55 is another “search engine optimizer” and “global warming awareness” competition counter.

  63. graham dungworth:

    Re #53, Barton’s quote: “But the Earth’s atmosphere has had a very consistent makeup for the last several million years at least.”
    By “makeup” I presume you mean, firstly, composition, e.g. O2 at 20.95% by volume or Ar at 0.93% by volume.
    The O2 and N2 contents are biologically controlled, way above thermodynamic values. The Ar content isn’t at the known cosmic abundance, which is pertinent to 36Ar: 36Ar and the other noble gases, Ne up to Xe, are millionfold depleted on Earth, lost during planetary accretion, whereas water (H2O) was presumably retained as hydrates. Atmospheric 40Ar has been built up slowly by radioactive decay of 40K. This is the one atmospheric gas of which one could say, secondly, that “its mass balance has been effectively constant over millions of years”.

    Where’s the evidence that M has remained constant for millions of years, let alone hundreds? Plenty of scientists make this assertion, but where is the empirical evidence to support it? It’s one thing to cite CO2 concentrations from tens of thousands of years ago in ppm by volume, but in doing so you make the hidden assumption that mass is conserved.
    Geochemical abundances of elements in the Earth’s crust are notoriously inaccurate in several instances. For instance, the water budget for oceans and lakes etc. neglects the more than 10% of the mass that has been subducted at plate boundaries. Years ago Walker upset the carbon budget with a wild, unsubstantiated claim that the mantle contained at least sevenfold the mass of carbon estimated for the crust, yet granites and basalts are vastly depleted in volatiles.
    Does anyone know of anyone who calculates the year-on-year mean sea-level atmospheric pressure for the globe in order to check its constancy? Since 40Ar is essentially constant in mass, it would be nice to see the other, ephemeral gases related to it before paying sole attention to intensive properties. If the mass balance changes, due to N2 and O2 variations, the GHG concentrations in ppm by volume certainly will also vary, even though the diatomic gases don’t contribute to GW.

  64. Ray Ladbury:

    Re #63. Graham, sorry, but if there was a point in there I missed it. I rather doubt that the biosphere impacts the ~80% nitrogen content of the atmosphere. I don’t even know what you mean when you say the geochemical abundances of elements in the Earth’s crust are inaccurate; with respect to what? They can certainly be measured to arbitrary accuracy, don’t you agree? Or do you mean that they vary from place to place? As to H2O, it is important in mantle chemistry, but volcanoes also give off a lot of water vapor; it’s probably balanced. I can see why O2 might change with respect to very large changes in biomass, but not nitrogen. And as far as conservation of mass goes, can we at least stipulate the laws of physics?

  65. Hank Roberts:

    There’s one for the standards committee.
    “…. a standard atmosphere at sea level is approximately equal to 760 millimeters of mercury, 29.92 inches of mercury, 1.013 bars, 1013 millibars, 14.70 pounds force per square inch, 2116 pounds force per square foot, or 101.325 kilopascals.”
    (Quote from a page with one of the better rants I’ve seen about measurement units:
    http://ourworld.compuserve.com/homepages/Gene_Nygaard/hectopas.htm )

    So, it makes sense to define atmospheric pressure at sea level, because, well, that’s the bottom of the atmosphere, ignoring places like Death Valley that are basins below sea level. What difference will it make when sea level rises? This is where the contemporary rate of change — so much faster than anything anticipated when the standards were defined — can be a puzzle.

    Now raising your barometer ten meters is certainly going to show a difference, if it’s a good tool. My old Thommen altimeter certainly detects that change.
    But if you then also raise sea level by ten meters to catch up with the barometer — moving the whole atmosphere up the same distance as the barometer — how much difference do you get? The atmosphere’s a thin spherical shell and you’ve increased the inside radius of the atmosphere ten meters.

  66. Ray Ladbury:

    Actually, Hank, it’ll get more complicated than that. We’re changing solid H2O to liquid, so it will flow toward the equator to balance centripetal acceleration. Earth may become more oblate and sea level may rise more in the tropics than at the poles; the rate of rotation might change (albeit very slightly). I’ll leave that to my buddies with the atomic clocks and platinum-palladium blocks to figure out.

  67. Alastair McDonald:

    Re #65 Hank,

    For an increase in altitude of 300 feet the temperature decreases by 1 deg F. So raising sea level by 10 m (approx. 30 feet) will raise the temperature everywhere that land still exists by 0.1 deg F.

    However, barometric pressure depends on the weight of the column of air above that point, and since there is no change to the total amount of air covering the Earth when sea level rises, the barometer readings at any fixed point at or above the new sea level will increase by the weight of a 10 meter column of air.
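
    The size of that effect can be sketched with round assumed values (near-surface air density and g are my numbers, not Alastair's):

```python
# Back-of-envelope: extra pressure from a 10 m column of near-surface air.
# Assumed round values: air density ~1.225 kg/m^3, g = 9.81 m/s^2.
RHO = 1.225      # kg/m^3, near-surface air density
G = 9.81         # m/s^2
H = 10.0         # m, the sea-level rise in question

dP = RHO * G * H                 # hydrostatic pressure of the extra column, Pa
dP_mmHg = dP / 133.322           # 1 mmHg = 133.322 Pa
print(f"extra pressure: {dP:.0f} Pa ~ {dP_mmHg:.2f} mmHg")
```

    So the shift is roughly a millimetre of mercury, small but well within what a good barometer can resolve.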

    Cheers, Alastair.

  68. bender:

    Re #54 How about Arctic warming in the 1930s-40s?

    [Response:See the Delworth and Knutson (2000) article in Science. They use simulations of a coupled model to show that this could easily have arisen from the intrinsic natural variability of the climate at multidecadal timescales. -mike]

  69. bender:

    Re: reply in #68
    And so the exact same thing – an unusually large realization of internal multidecadal variability of the coupled ocean-atmosphere system – can not possibly be occurring today?

    [Response: No. Precisely the same thing could of course be happening today. However, such internal variability (both in this model, and all other current generation coupled climate models) is unable to generate a century-long trend in global mean temperature anywhere close to that observed for the past century. Indeed, it is precisely this issue which is addressed in a rigorous, quantitative manner by model-based detection and attribution studies. See our previous review of the topic. -mike]

    Please note: the conclusion here would depend on one’s uncertainty surrounding the interactions among the model’s estimated parameters. (I don’t take these estimates as error-free.) So … what’s the chance that the CO2 sensitivity forcing coefficient is off by 10%? 20%? 50%?

    I’ve read AR4. Please don’t assume I haven’t.

    [Response: I wouldn't assume you not to have read the report. However, based on your comments above I might call into question your comprehension of its content. -mike]

  70. DocMartyn:

    “Alastair

    For an increase in altitude of 300 feet the temperature decreases by 1 deg F. So raising sea level by 10 m (approx. 33 feet) will raise the temperature everywhere that land still exists by about 0.1 deg F.”

    I was under the impression that ice was less dense than liquid water, so melting ice into liquid water will reduce the total volume of surface water and ice on the Earth.

    Moving the atmosphere’s 5.14 x 10^18 kg a distance of 10 meters requires a potential energy of about 5×10^20 J.
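
    That figure is just m·g·h, which a one-liner can check:

```python
# Potential energy of lifting the whole atmosphere by 10 m: PE = m * g * h.
M_ATM = 5.14e18   # kg, mass of the atmosphere (value from the comment)
G = 9.81          # m/s^2
H = 10.0          # m

pe = M_ATM * G * H
print(f"potential energy: {pe:.2e} J")
```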

  71. graham dungworth:

    Re #64 Ray, we agree where O2 in the atmosphere comes from. It comes partly from the tiny fraction of reduced carbon in biomass that is buried and becomes fossil carbon, i.e. coal and kerogen (from which come hydrocarbon gases and oil). The ratio of reduced carbon to carbonate carbon in the crust is thought to be ca. 1 part in 4 or 5. Free O2 also comes from the incorporation of sulphide (and free sulphur from bacterial reduction) as pyrite, also coeval with kerogen formation. At the same time sulphate (as anhydrite or gypsum) is buried, but not in similar environments. The ratio of sulphate to reduced sulphur is thought to be ca. 1 to 1. What limits O2 increase is oxidation. Ferrous iron is dominant in the crust, 8.6% by mass; it rusts through to ferric hydroxide hydrate, Fe(OH)3·5H2O.
    In terms of abundances in the crust, oxygen, like Si and Al, is huge. For every 1 atom of carbon in the crust there are ca. 30 each of O, Si and Al. For every 1 atom of C there is thought to be ca. 0.5 atoms of S in the crust, but the sulphur abundance is imprecisely known (ca. 20% error!). For every carbon atom there are ca. 4 atoms of iron (largely ferrous), and likewise for calcium and magnesium.
    Distribution of elements is not even throughout sedimentary basins, unlike the basaltic crust of the young ocean floors. Thus 10% of the salt (NaCl) content of the oceans is buried beneath the Mediterranean; this figure probably includes associated anhydrites (sulphates), known as sabkhas in the present-day Arabian/Persian Gulf. The limited areas of sedimentation are controlled by biomass.
    Nitrogen is very low in igneous and basic rocks; it tends to be recycled in kerogens and coals. Whereas atmospheric abundances of O and C are tiny in comparison with the crust, nitrogen accumulates in the atmosphere. It is highly unstable there, as every storm produces nitrate, a limiting nutrient in the oceans and soils that is rapidly incorporated into biomass as DNA and proteins. Nitrogen is under as much control by biomass as is oxygen! So for 1 atom of C in the crust, how much nitrogen is there in the crust plus atmosphere? No one knows; the error is likely to be several-fold. Although not a noble gas, it is depleted relative to cosmic abundance. On a reduced accreting Earth, N (as ammonia?) was lost.
    Our Earth, re climate change, is often presented as a giant acid/base system (are the oceans becoming more acidic?). For millions of years the silicate and borate buffers have helped maintain pH at ca. 8.2, even with the introduction of volcanic acid volatiles, largely SO2 cf. CO2. The oceans never become too saline, due to evaporite basin formation.
    Urey portrayed the acid plus base = salt plus water reaction as the reaction of calcium carbonate (CaCO3) plus silica (SiO2) to give wollastonite (CaSiO3, igneous at depth) plus CO2, and its reverse.
    The Earth’s crust is also a giant oxidation/reduction system:
    C + 2Fe2O3 reacts to give 4FeO plus CO2. Those ferric atoms chelate 10 molecules of water; multiply that up for the 2.2×10^22 kg crust and you get 1.46×10^21 kg of water, very close to the known mass of water on Earth.
    Up to 36% of the mass of the Earth is free iron (Fe). Homogenise everything and we would return to a highly reduced planet. Brian Mason in the 60s calculated that over the last 570 million years (the Phanerozoic) biomass has recycled a total mass of elements equivalent to more than 50-fold the mass of the Earth! The mass and composition of the atmosphere is under total control by the biosphere.
    Re mass conservation: geochemists use an identical atmospheric mass, namely 5.12×10^18 kg of dry air, as the rest of you, but we don’t know what it was 100 years ago, let alone thousands or millions.
    Cosmophysicists, cosmochemists and physicists express solar and cosmic abundances as atoms per 1000 atoms of Si; climate scientists probably don’t bother. Let’s call them silicon chauvinists. Carbon (biomass) chauvinists are the ones interested in climate change. Were one to approach the atmosphere, crust and oceans from the biomass point of view, it would look, on an atom-per-atom basis, as I introduced earlier, like:
    C(1), S(0.5), Ca(4), Fe(4), Mg(4), H2O(10) …… Si(30), O(30), Al(30)
    or
    S(1), C(2), Ca(8), Fe(8), etc., if you are a sulphur chauvinist.
    When I read that the CO2 content of the atmosphere, say in the Younger Dryas at ca. 10,000 yr, was 270 ppm by volume, I automatically think: but what was its total mass in the atmosphere? Is the atmospheric mass conserved over that time period? I don’t know; I just hope someone would start to measure it with the motivation that Keeling had, and not continually assert it is conserved without measuring it.

  72. bender:

    Re: reply in #69
    “models … unable to generate a century-long trend in global mean temperature anywhere close to that observed for the past century”

    I understand the reasoning, but why “century-long”?

    You have the 1930s-40s warmth – an a posteriori “fit” as an anomaly, the 1960s-70s aerosol cooling (another a posteriori “fit”, uncertainty about which is well-known), and the recent 1998-2007 leveling-off of global mean temperature. That leaves only *two* decades, not ten, that need explaining: the 1980s-90s. So why not *probable*, why only “possible” that this was an anomalous warming pulse similar to that of the 1930s-40s? 20 years’ worth of anomaly is not as unlikely as 100 years’ worth. Especially relevant considering the PDO went inexplicably positive in 1976. That’s presumably a good chunk of the puzzle?

    Anomaly after anomaly after anomaly. That’s chaos, no?

    Thanks for the replies. Much appreciated. We need to get this right and there’s not much time.

  73. FurryCatHerder:

    Re #71:

    Re- mass conservation. Geochemists use an identical atmospheric mass, namely 5.12*10^18kg, dry air, as the rest of you, but we don’t know what it was 100years ago let alone thousands or millions.

    Atmospheric pressure times surface area is, by definition, the weight of the atmosphere; divide by g and you have its mass. There are no magical forces holding it up, and the only force pulling it down is gravity, which exerts a force proportional to the mass. We know that barometric pressure has been reasonably constant (on average, excluding storms, etc.) since the invention of the barometer by Torricelli nearly 400 years ago. Given the climate swings in that length of time, it’s safe to assume that the mass of the atmosphere is reasonably constant over century-long periods and over periods of cooling as well as warming.
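
    The pressure-to-mass conversion can be sketched with round assumed numbers (standard sea-level pressure and mean Earth radius; the answer lands a few percent above the usual 5.15×10^18 kg partly because real land surfaces sit above sea level):

```python
import math

# Mass of the atmosphere from mean surface pressure: M = P * A / g.
P = 101325.0          # Pa, standard sea-level pressure
R = 6.371e6           # m, mean Earth radius
G = 9.81              # m/s^2

area = 4.0 * math.pi * R**2          # Earth's surface area, m^2
mass = P * area / G                  # pressure = weight/area  ->  M = P*A/g
print(f"atmospheric mass ~ {mass:.2e} kg")
```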

  74. Hank Roberts:

    > 1998-2007
    How much do you believe a nine-year trend is more reliable than a five-year trend?
    Do you believe it’s any different when the first year of the nine years picked is an El Nino year?
    http://scienceblogs.com/stoat/upload/2007/05/5-year-trends.png

  75. ray ladbury:

    “Anomaly after anomaly after anomaly. That’s chaos, no?”

    No, anomaly after anomaly after anomaly is called giving up. Persisting in investigating the anomalies until you understand their cause–that’s called science.
    Actually there were many atmospheric scientists in the 50s-70s that attributed the lack of warming to aerosols from combustion of fossil fuels–the models now say that was a highly credible hypothesis. The anomalous warming in the late 30s and early 40s is still considered anomalous. However, I do not consider this a satisfactory “resolution”. It may have been a more or less local effect, but even that is not known at present.
    Physicists like me get nervous when people attribute warming to “natural variability”. The energy has to come from somewhere–particularly as much energy as we are seeing dumped into the climate system in the current warming epoch. Again, climate scientists have a self-consistent and highly plausible picture. It cannot be the sun, as solar output has not increased enough. It cannot be water vapor–too variable. Next in line is CO2. If you dismiss that, well, you got a source with a few yottajoules (always wanted to use that prefix!) sitting around?

  76. Timothy Chase:

    ray ladbury (#75) wrote:

    Actually there were many atmospheric scientists in the 50s-70s that attributed the lack of warming to aerosols from combustion of fossil fuels–the models now say that was a highly credible hypothesis.

    Pinatubo basically cinched it for a great many scientists that aerosols could produce global dimming and mask the effects of global warming. Likewise, we know that there was a drop in temperature as the result of flights being grounded after 9/11 – from what I understand. The effects of aerosols are measurable and regular.

    After the fact? Nope. Ad hoc? Certainly not.

    Then of course, if one takes any one decade of the twentieth century in isolation, few are statistically significant. Any one year by itself is even less “significant.” But the overall trend is quite significant – and in line with the positive feedback relationship we have seen between CO2 and temperature over, what is it now?, one million years? A heckofalot to explain away as coincidence – and that is basically what one is doing when one attempts to explain fairly well-defined patterns simply by reference to “chaos.”

  77. DocMartyn:

    FurryCatHerder
    “We know that barometric pressure has been reasonably constant (on average, excluding storms, etc.) since the invention of the barometer by Torricelli nearly 400 years ago.”

    Why should pressure remain constant when temperature changes?
    What is the basis for this statement?

  78. ray ladbury:

    DocMartyn, think about the physics. What is the cause of the pressure? The column of air above the point on the ground. Even if the air expands, you will have the same weight of air above that point, so the pressure (the weight of the air column divided by area) will remain the same.
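
    A numeric sketch of this point (my own toy setup, not from the thread): hold the mass of an air column fixed, let it thermally expand to a larger scale height, and integrate the hydrostatic weight. The surface pressure comes out as m_col · g either way.

```python
import numpy as np

# For a fixed column mass of air per unit area, surface pressure is m_col * g
# no matter how vertically expanded the column is. Two assumed isothermal
# scale heights stand in for a cold and a warm atmosphere.
G = 9.81                 # m/s^2
M_COL = 1.033e4          # kg/m^2, roughly the real mass of an air column

pressures = []
for H in (7.4e3, 8.8e3):                     # scale height, m (cold vs warm)
    z = np.linspace(0.0, 2.0e5, 400_000)     # integrate well past the top
    dz = z[1] - z[0]
    rho = (M_COL / H) * np.exp(-z / H)       # density, normalized to M_COL
    p_surface = np.sum(rho * G) * dz         # weight of the column per m^2
    pressures.append(p_surface)
    print(f"scale height {H/1e3:.1f} km -> surface pressure {p_surface:.0f} Pa")
```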

  79. bender:

    #74, #75, #76: You guys are parsing the text (#72) and poking holes without attacking the argument – and in doing so you’re, to some extent, making my argument for me. Shall I explain, or are your minds made up already?

  80. Ike Solem:

    RE#72, bender, As far as the PDO going ‘inexplicably positive’ in 1976, do you want to provide a reference to that?

    This argument, that all climate variability is due to various ‘natural cycles’, all of which just happen to be in a positive phase (the AMO, the PDO, etc.), isn’t supported. Every time that there is some unusual warming trend, the skeptics immediately claim it was due to some cycle – warm winter in the US? It must be El Nino, no matter how weak. Record warmth in Russia? It must be the AMO, or the NAO. Warmer sea surface temperatures in the Atlantic? It must be that the AMO is in a positive phase. Or it’s the natural sunspot cycle.

    So, how do you tell if you are looking at a trend, a cyclic phenomenon, or a cyclic phenomenon superimposed on a trend? You can throw your hands up in the air and claim it’s all chaos – or you can take another look at the link posted in the response to your post: Attribution of 20th Century climate change to CO2.

    You run the model without the anthropogenic greenhouse forcing and see what the internal variability is. Then you add in the anthropogenic forcing – then you go and compare the model runs to observations.

    From Delworth and Knutson (you edited their results – the part you left out is in bold): “…the warming of the early 20th century could have resulted from a combination of human-induced radiative forcing and an unusually large realization of internal multidecadal variability of the coupled ocean-atmosphere system.” Bit deceptive, isn’t that?

    We now assess whether internal variability alone can account for the observed early 20th century warming, or if the radiative forcing from increasing concentrations of GHGs is also necessary. Over the period 1910-1944 (which encompasses the warming of the 1920s and 1930s), there is a linear trend of 0.53 K per 35 years in observed global mean temperature. If internal variability alone can explain this warming, comparable trends should exist in the control run. Linear trends were computed over all possible 35-year periods, using the last 900 years of the control run (i.e., years 101-135, 102-136, …, 966-1000). For each 35-year segment, the time-varying distribution of observed data over the period 1910-1944 was used to select the model locations for calculating the global mean. The maximum trend in any 35-year period of the control run is 0.50 K per 35 years. This suggests that internal model variability alone is unable to explain the observed early 20th century warming.
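
    The screening procedure in the quote can be sketched as follows. The “control run” here is synthetic red noise standing in for a model’s internal variability, and all numbers except the 35-year window and 900-year length are illustrative only:

```python
import numpy as np

# Compute the linear trend over every possible 35-year window of a long
# control run and find the maximum, as described in Delworth and Knutson.
rng = np.random.default_rng(0)
n_years = 900
x = np.zeros(n_years)
for t in range(1, n_years):
    x[t] = 0.8 * x[t - 1] + rng.normal(0.0, 0.1)   # persistent internal noise

window = 35
years = np.arange(window)
trends = []
for start in range(n_years - window + 1):          # windows 1-35, 2-36, ...
    slope = np.polyfit(years, x[start:start + window], 1)[0]   # K per year
    trends.append(slope * window)                  # express as K per 35 years

print(f"max 35-year trend in control run: {max(trends):.2f} K/35yr")
```

    If the observed 0.53 K/35 yr exceeds every such window in the control run (whose maximum was 0.50 K/35 yr), internal variability alone is an unlikely explanation.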

    If internal variability can’t explain the weaker trend earlier in the century, why would you expect it to now, when the trend is much stronger? Keep in mind also that no amount of internal variation can increase the global temperature – that’s just conservation of energy.

    [Response: Your last statement is not quite correct. Internal variations that lead to either a net change in albedo or to greenhouse effects can lead to global temperature changes. For instance, El Nino events have a global warming impact, and modelled changes in ocean circulation (such as in the N. Atl.) - even though they mostly redistribute heat - can affect sea ice extent and clouds to produce a net cooling. So, while these changes are small compared to the forced changes, they are not zero. - gavin]

  81. Hank Roberts:

    The basis for the statement? Depends. What are you holding constant, what are you varying, what are you measuring?
    http://www.chemistry.ohio-state.edu/betha/nealGasLaw/frb4.3.html

    Torricelli’s instrument measures the weight of the atmosphere. http://www.imss.firenze.it/vuoto/index.html

    Temperature (and latitude) correction tables for using a mercury barometer anywhere in the world (with interpolation) here:
    Handbook of Chemistry and Physics: http://216.149.237.254/ei-chemnet/
    Temperature Correction for Barometer Readings 83 Ed., p. 15-23

  82. Timothy Chase:

    Ike Solem (#80) wrote:

    From Delworth and Knutson (you edited their results – the part you left out is in bold): “…the warming of the early 20th century could have resulted from a combination of human-induced radiative forcing and an unusually large realization of internal multidecadal variability of the coupled ocean-atmosphere system.” Bit deceptive, isn’t that?

    Ya gotta remember – parsing an argument into points so that you can respond to them – that is what is really unfair. Parsing quotes into pieces that are useful despite what an author actually wrote – that is useful. And we shouldn’t give anyone problems if they find that they have to parse the evidence – so that they never have to acknowledge its full weight. It just isn’t about the evidence. It’s about winning – and reality would be an unfair advantage – so it can’t be about that, either.

    Come on and give the guy a chance…

  83. Timothy Chase:

    A personal view – for what it is worth…

    The denialists aren’t out to destroy the world.

    What motivates them is something much more mundane: defending their turf or their tribe against their enemies – in the context of an “us vs. them” view of the world. At root, this seems to be a form of primitive tribalism. But what they seem to have forgotten is that they are part of a larger tribe. To be fully human requires you to recognize the humanity in others – even when they don’t seem to recognize it in themselves. If they are truly honorable warriors, if they honor reality, truth and their own humanity, they will come to the defense of their greater tribe – when it needs them most.

  84. bender:

    Layer upon layer upon layer of unsupported presumptions, it’s hard to know where to begin.

    -It’s not MY model that can’t explain the 1930s-40s Arctic warmth; it’s Hansen’s. Don’t ask me to explain what they can’t.
    -mike in reply to #69 is arguing that the anomalous trend is “century long”; but the only trendy part that’s inexplicable is the 1980s-90s portion. You are parsing #72, taking exception to the wording, but not addressing the argument there.
    -Ike thinks one can estimate the system’s internal variability by removing the forcings from the model. That’s the hope. But that presumes the models are structurally appropriate. THAT is what I’m disputing. That’s the argument.
    -Hank thinks I’m cherry-picking 1998 to exaggerate my point about recent temperatures plateauing; I’m not. They have flattened by any measure, and whether that’s a cyclic deviation or not Hank hasn’t explained why these deviations from a smooth trend happen. They’re not MY bumps; they’re Hank’s. I’m not claiming they’re “natural variability”; Hank is. Natural variability is not something I like to dismiss. It’s something one is sometimes forced to set aside.
    -Ike accuses me of selectively quoting for the purposes of deception – which is not an honorable thing to accuse someone of. I’m trying to keep my arguments relatively clean to keep them brief. The extra bit doesn’t refute my argument at all; it just helps put it in context. I’m ok with that.
    -Timothy Chase claims reality is on his side, giving him an unfair advantage. Well, Timothy, if the science is SO settled where does mike’s “could, of course” come from in #69? You do not have a monopoly on the truth. mike sees a small crack there. You do not?
    -Many here show an unflappable faith in the models, which suggests to me you have no experience in modeling complex dynamic stochastic systems.
    -If the weather system is chaotic, then the climate system is chaotic too. I know you don’t agree with this, but I think you may be wrong. Read the full thread by Annan & Connolley and pay close attention to the comments by Pierrehumbert and Held. (That material, though good, is two years old, so also check the more recent literature.)
    -If the climate system is (even “sometimes”) chaotic (yes, I know: you dispute even this), then how do you correctly parameterize the numerically stable models such as to mimic the unstable climate? This is a serious problem; it is no joke. My cartoon name is a joke; these points I raise are not. Get past the labels.
    -”anomaly after anomaly after anomaly” is me “giving up”? My friend, ray ladbury, do you know what irreducible uncertainty is? It is the futility of what you are suggesting: never give up. When you hit the wall of chaos, you had better know it, and you had better be ready to give up to save your sanity. Hansen et al 2007 suggest exactly this: giving up on the Arctic warming of the 1930s-40s. Why don’t you criticize them?
    -PDO dynamics: google PDO will get you started. Hare coined the term. Start there.

    Gentlemen, please examine your assumptions. I’m not trying to win a debate or convince anyone of anything. I just think you are not being sufficiently self-critical when it comes to these models, how they are parameterized and what they do and don’t allow you to infer. Remember that attribution is fundamentally a modeling exercise. This argument that weather is chaotic and climate is not – think about it. Chaos is not just inexplicable ups and downs. Only in temporal models does it take that simplistic form. Chaos in spatiotemporal models is marked by bizarre spatial structures (circulatory pathways) that persist for a while, but fade as inexplicably as they emerged. If your models don’t produce that kind of behavior, are they trustworthy? Open question.

    That is enough for now. I apologize for the rambling. I think you are trying hard, but maybe have lost your objectivity. There’s lots to pick at in this comment, I know. Please don’t pick. Please hit the argument square on. Think “home run”.

    [Response: Your big question concerns the potential structural instability of models, and by extension the structural instability of the real climate. That's fine, but none of the variations in climate over the last century fall outside of the envelope of forced+internal variation of the structurally stable system, and so provide no evidence for a deeper instability. Similarly, over the Holocene, with its large precessional trend, there is no such evidence. We do see evidence for threshold behaviour - the drying of the Sahara around 5500 BP that was likely caused by vegetation/climate interactions, but this is still a forced response to the insolation change. Only during the glacial periods (with the Dansgaard-Oeschger cycles and Heinrich events) do you have evidence for spontaneous and large changes in climate - and even then, this was centered on the North Atlantic and almost certainly involved ice sheet dynamical instabilities.

    Thus for the system we have encapsulated in GCMs today (which don't generally contain either dynamic vegetation or ice sheet dynamics), there is no strong evidence that this system is chaotic anywhere near the part of phase space where we happen to lie. What about evidence from the models themselves? In my experience, I have never seen a GCM demonstrate significant structural instability for any kind of physically valid tweak (coding errors are another story of course). The closest you get is something like THC hysteresis seen in some of Stefan's work, but again, there is no evidence that we are near those transitions today. However, probably the best argument for structural stability is simply that where the forcings are the same, the response of the system is very similar. Orbital cycles etc. over the last million years, while they have caused enormous climate change, keep producing pretty much the same climate change.

    There is of course no possible 'proof' that the climate is not near some structural instability, but there is no need for this hypothesis to explain most of what is seen. However, I would hardly take comfort in that thought, and if anything, it might cause one to be more concerned about our future trajectory. - gavin]

  85. Alexi Tekhasski:

    Ike Solem stated: “You run the model without the anthropogenic greenhouse forcing and see what the internal variability is. Then you add in the anthropogenic forcing – then you go and compare the model runs to observations.”

    If the climate model is pre-determined to be a fixed-point equilibrium in the absence of external perturbations (as it was frequently stated or implied by their authors), no internal variability could be possibly observed or exist.

    [Response: The GCM equilibrium is in a statistical sense, as you well know, and there is plenty of internal variability. - gavin]

    “Keep in mind also that no amount of internal variation can increase the global temperature – that’s just conservation of energy.”

    I usually keep in mind that if the system is open (like Earth), energy is not conserved.

    Cheers.

  86. Mark:

    Hi, sorry to violate posting guideline “3) Only comments that are germane to the post will be approved…..”

    But something has been bugging me all day about the ~800 year temp/C02 lag in ice cores. Hoping someone at RealClimate can come through or point me in the right direction!

    Realclimate talks about the positive feedback of CO2 released from the ocean under solar forcing in the article “The lag between temperature and CO2. (Gore’s got it right.)”. But it raises the question: what is the _mechanism_ that stops this positive feedback loop once it’s started? And if the feedback effect tapers off over time, would you not expect the CO2 and temperature to peak around the same time? Is this explained in more detail somewhere?

    Discussion is closed on that article, but hoping for an answer….

    Thanks.

    [Response: In the climate context feedbacks don't lead to unconstrained effects because of the eventual dominance of the long wave cooling (i.e. the fact that IR goes like T^4). The post http://www.realclimate.org/index.php/archives/2006/07/runaway-tipping-points-of-no-return/ tries to explain that. - gavin]
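
    The T^4 damping gavin describes can be shown with a toy zero-dimensional energy balance (all numbers here are illustrative, not from any real model): a feedback adds heat in proportion to the warming, but emission grows as sigma·T^4, so the iteration settles instead of running away.

```python
# Toy energy balance: positive feedback vs sigma*T^4 longwave cooling.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S = 240.0              # absorbed shortwave, W m^-2 (illustrative global mean)
F = 4.0                # an imposed forcing kick, W m^-2
K = 1.0                # assumed feedback: extra W m^-2 per K of warming

T0 = (S / SIGMA) ** 0.25            # unperturbed equilibrium (~255 K)
T = T0
for _ in range(200):
    absorbed = S + F + K * (T - T0)     # feedback adds heat as T rises...
    T = (absorbed / SIGMA) ** 0.25      # ...but emission rises as T^4
print(f"settles near {T:.1f} K instead of running away")
```

    Because the emission term steepens as 4·sigma·T^3 (roughly 3.8 W m^-2 per K near 255 K) while this feedback adds only 1 W m^-2 per K, the fixed point is stable; the feedback amplifies the warming without unbounded growth.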

  87. graham dungworth:

    Many thanks re Torricelli’s experiment, a fascinating read after babelfishing. I did the very experiment in high school physics back in the early 60s; it comes out at about 29.5″ Hg in average weather no doubt, the error is in the 3rd place.
    I still have the 75th anniversary edition of the Handbook of Chemistry and Physics, 1988-1989 (69th edition), when everyone was still using 1952 data, done in Arizona for the dry air presumably.

    Quote-”FurryCatHerder
    “We know that barometric pressure has been reasonably constant (on average, excluding storms, etc.) since the invention of the barometer by Torricelli nearly 400 years ago.”end quote.lol

    We also know that the volume composition of CO2 in the atmosphere is 330ppm from the same edition, the error is in the third digit.lol

    It appears to me that the physicists are a bit too theoretical. Why don’t you actually go out or stay in (there’s no difference) and do the experiment, and come up with a more modern and more accurate value, instead of perpetually citing 760 mm Hg as a standard atmosphere? You blithely state and assume that there’s no change in the mass of the atmosphere over hundreds or even thousands of years.
    Yes, we all know. The pressure at my locale got up to 788 mm for a whole week(!) last autumn and down to as low as 738 mm recently. Of course, if I did the measurements over a long period, corrected for altitude, temperature and humidity, and everyone else did so, then we would get a weighted mean that could be compared with the last reported value established in 1952!

  88. Urs Neu:

    Bender

    Do you think that a chaotic system cannot be forced from outside? Think of a pot of boiling water. The bubbles in the pot behave totally chaotically. You can never tell where the next bubble will come up. Complete chaos. However, if you turn down the heating of the hot plate to somewhere just below 100 degrees, there will still be bubbles, still chaotic, but you will see that on average they are smaller and there are fewer of them. You still can’t predict the behaviour of the bubbles, but you can predict a trend in their number and extent, just because there is an external forcing of the system and you know the physics. If you increase the heat, you can predict again that the bubbles will get bigger and more numerous.

    Conclusion: If you are not able to predict the behaviour of the chaotic system, it nevertheless might be possible to predict mean effects (or a change of statistical quantities of the system) of external forcings, using the knowledge of physical processes.

    The climate system is not too far away from that example, although much more complex. There are some sort of bubbles (like warm and cold air masses, eddies on the small scale, tropical storms on a larger scale) which are coming up and vanishing more or less randomly. There are some stores (like ocean or terrain), there are feedbacks, etc. Quite a complex system with plenty of chaos. However, if you turn up the hot plate, i.e. if you impose an external forcing, it might nevertheless be possible to predict some effect on the mean state of the system. Just because we know the physics.

    Do you think there is no effect on climate if solar radiation changes? Do you really think there is no effect of aerosols in the air (be it from a volcano or from combustion)? Do you think the physical measurements and knowledge about radiative effects of gases or particles in the air is just rubbish?
    If not, it is very weird to assume that changes in the external forcings have no effect on the climate system. If you turn up the hot plate, and that is what we are doing, there will be a predictable effect. Of course, we cannot predict exactly what will happen even in the mean state, because the system is very complex and there are a lot of feedbacks (e.g. we do not know the thickness of the pot and its composition exactly), but we can still predict that there will be heating, and to some extent the upper and lower boundaries of the heating.

    We know that we turn up the hot plate, we know the physics, and we see the effects that we expect from our knowledge. Of course you can come and say, oh well, look at this chaos in the pot, there are small and large bubbles, you can’t know anything about it. You cannot predict the bubbles with your model, so just forget about predicting, sometimes we see more bubbles, sometimes a little bit less, it is just chaos.

    Do you really want to tell us, that we have turned up the hot plate, we see much more and bigger bubbles and we should think that this has nothing to do with the hotter plate, that it is just chaos? That we do not know anything because we do not know everything?

  89. Alastair McDonald:

    Re #86 The continental ice sheets lie in the northern hemisphere, and as they retreat the forests spread north. That, together with the formation of peat bogs, locks up much of the carbon. Meanwhile, the Milankovitch cycle moves on and a now-cooling Pacific Ocean starts absorbing more CO2, which reinforces the cooling.

    But it is not ONLY CO2 which is driving the glacial interglacial cycles. Methane, water vapour, ice albedo, and the solar effects from the Milankovitch cycles all play a part. But where the land is ice covered, water vapour has little influence, so it is carbon dioxide with the ice albedo effect that plays the dominant role.

    You might conclude from that that it is only the high latitudes that will suffer from an increase in CO2, but if the polar regions warm then the equatorial regions will have nowhere to dump their excess heat, and they too will warm.

    It is much too complicated for Al Gore to be able to explain it fully to the man in the street, but he is essentially correct.

  90. Barton Paul Levenson:

    [[Where's the evidence that the M has remained constant for millions of years let alone hundreds? Plenty of scientists make this assertion but where is the empirical evidence to support it? ]]

    Where is the empirical evidence that it was different? No change is the null hypothesis. If you want to prove it was different, you’ll have to come up with empirical evidence yourself. You can’t say, “it might have been different, therefore it probably was.” The burden of proof is on the affirmative; support your contention that M changed.

  91. Barton Paul Levenson:

    [[Re- mass conservation. Geochemists use an identical atmospheric mass, namely 5.12*10^18kg, dry air, as the rest of you, but we don't know what it was 100years ago let alone thousands or millions.]]

    We’ve had barometers since the 1600s, Graham.

  92. Barton Paul Levenson:

    [[Why should pressure remain constant when temperature changes?
    What is the basis for this statement?
    ]]

    The fact that there’s no lid on the atmosphere.

  93. Barton Paul Levenson:

    [[Gentlemen, please examine your assumptions. I'm not trying to win a debate or convince anyone of anything. I just think you are not being sufficiently self-critical when it comes to these models, how they are parameterized and what they do and don't allow you to infer. ]]

    And again, I point out that the models have successfully made at least four major predictions, all of which have been empirically confirmed, so insisting that they’re unreliable is just stupid.

    “Heavier than air flight is impossible.”

    “Look, there’s an airplane!”

    “The pressure effects simply aren’t enough to lift the wing.”

    “It’s in the air! Look! There’s nothing under it!”

    “And redirection of the air flow simply can’t generate enough force to continuously lift the airframe.”

    “It’s flying!”

    Etc.

  94. Barton Paul Levenson:

    [[It appears to me that the physicists are a bit too theoretical. Why don't you actually go out or stay in(there's no difference)and do the experiment and come up with a more modern and more accurate value instead of perpetually citing 760mm Hg as a standard atmosphere? You blithely state and assume that there's no change in the mass of the atmosphere over hundreds or even thousands of years.]]

    [edited] What part of “we’ve been measuring it for 400 years” do you not understand? We don’t “assume” the mass of the atmosphere has been constant, we bleeding measure it! BTW, scientists DO regularly do the calibration experiments for standard measurements. Google NIST and CODATA.

  95. Ray Ladbury:

    Bender, the argument you are making is fundamentally anti-scientific. First, there is no evidence climate is chaotic. Certainly there seem to be epochs in the history of the planet where the climate was more or less predictable, but on the whole you can say that climate certainly is predictable in a piecewise fashion. However, even if climate were chaotic, that would not make it a scientific no-man’s land. Energy is still conserved, and since the system’s energy is increasing, by yottajoules, there must be a source of that energy. If the source is not increased greenhouse activity from anthropogenic CO2, what is it? You seem to want to say it is a coupling between atmosphere and ocean. I think that if you look at the likelihood of enough coupling to bring about GLOBAL effects of this magnitude, you’ll find it is vanishingly small. We could compute the likelihood ratio of anthropogenic causation vs. atmosphere-ocean coupling as a test, but I’m pretty sure you’d come out on the losing end of that comparison.
    So, even if the climate were chaotic, the only possible cause for the added energy we are seeing is still anthropogenic CO2. We can still say some very general things about the dynamics of a chaotic climate. First, by adding energy to the system, we make more phase space available to it, making it LESS predictable. Second, the past 10,000 years have been a period of exceptional climatic stability, so if the climate is chaotic, we must be near some sort of strange attractor, some quasi-stable equilibrium. By adding energy, we perturb the system away from that attractor, making it more likely to become MUCH LESS PREDICTABLE. Given that all of human civilization has developed during this period of climatic stability, I would see that as a pretty severe threat. So in some ways, the argument that we must do something about climate change is strengthened, not weakened, if climate is chaotic.
    You seem to see chaos as some sort of fetish. It is not. Chaotic systems still obey laws and in some ways the very fact that you can’t model them deterministically makes them easier to deal with. You just have different expectations.
    You also seem to feel that the conclusion that humans are changing climate rests in some way on the validity of the models. It does not. There simply isn’t another source of energy large enough to explain the increased energy we are seeing in the system. All the climate models do is tell us what some results of those changes may be. Without the climate models, we are flying blind, and I would think that would argue for greater caution rather than less.
    On the other hand, as I said, there’s no real evidence that climate is chaotic. The models have done an excellent job of getting the trends–and even many of the details–right. They’ve done much better than would be expected by chance. So it would appear that we have two different approaches. Your anti-scientific approach says climate cannot be modeled, whereas the climate scientists are modeling it.

  96. bender:

    Re #88
    “Do you think that a chaotic system can not be forced from outside?”
    No. In fact, I know that it can.

  97. FurryCatHerder:

    In Re #87:

    It appears to me that the physicists are a bit too theoretical. Why don’t you actually go out or stay in(there’s no difference)and do the experiment and come up with a more modern and more accurate value instead of perpetually citing 760mm Hg as a standard atmosphere? You blithely state and assume that there’s no change in the mass of the atmosphere over hundreds or even thousands of years.

    The weather guys do it every day. The pressure rises and falls with the passage of various storm systems and other such things. And yet, it keeps coming back to the same range of values for “high” and “low”. 26.5″ of mercury is still a pretty nasty hurricane, and 30″ of mercury is still a nice, clear high pressure system parked in your neighborhood. It’s been that way my entire life, and I’ve been 29 for quite a few years now ;)
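For readers more used to hectopascals, the inches-of-mercury figures above convert directly. A quick sketch (the conversion factor is the standard one; the middle value is the standard atmosphere, added for reference):

```python
INHG_TO_HPA = 33.8639  # hectopascals per inch of mercury (standard conversion)

# hurricane low, standard atmosphere, fair-weather high
for inches in (26.5, 29.92, 30.0):
    print(f"{inches:5.2f} inHg = {inches * INHG_TO_HPA:6.1f} hPa")
```

So the "nasty hurricane" at 26.5 inHg is roughly 897 hPa, and the parked high at 30 inHg is roughly 1016 hPa.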

  98. graham dungworth:

    87 contd. Just returned from the allotment. I’m an old retired scientist now but I keep experimenting.
    It was about two generations ago that Preston Cloud Jnr (he kept the title into his old age, and I’d claim he is the father of paleoclimatology) reviewed paleo-atmosphere data. I remember he used the size of Carboniferous spiders (mass goes as r^3, surface area goes as r^2; bugs breathe through pores in their exoskeleton) to infer an atmospheric oxygen content of 30% by volume. He thought this was an upper limit for oxygen, based on the fact that wet vegetation would spontaneously combust. Before Cloud, I think Berkner and Marshall related all atmospheric limits to PAL (Present Atmospheric Level) at 1 bar. Cloud perpetuated this as a Carboniferous maximum for O2 of 30% PAL.
    Berkner and Marshall, as a known consequence of the photodissociation of water in the atmosphere, set the Precambrian maximum for oxygen at 0.1% PAL. By the mid-sixties it was noticed that connective-tissue proteins, which we know as collagen, and their calcified representatives in bone and shell, contain an amino acid (one of the twenty common ones) called hydroxyproline, which is synthesised from proline by a universal enzyme (prolyl hydroxylase). This enzyme relies on the presence of free oxygen, and if cultures are exposed to O2 levels below 1-2% PAL at 1 bar they cannot synthesise skeletal tissue. Cloud used this information, extrapolating back to the Cambrian faunal explosion (i.e. hard parts that were more easily fossilised), to predict that by end-Cambrian times the oxygen content of the atmosphere was at least 1% PAL.
    It wasn’t until the discovery of the 200 bar Cytherean atmosphere of Venus that scientists began to wonder how atmospheric pressure had varied on Earth. Were one to oxidise all the carbon on Earth and place it as CO2 in our atmosphere, then, based upon known geochemical abundances, the answer comes out at 90-100 bar. Why should Venus, so alike as a sister planet, contain twice as much carbon as Earth? As I mentioned above, based upon the scarce abundance of diamonds, more common at 200 km in the upper mantle, a well-known geochemist stymied such calculations by implying that the mantle contains at least seven times more carbon than the crust! Geochemical abundances of elements are highly contentious.
    Could those bugs have survived at much lower O2 levels under 2 bar total pressure, or whatever pressure? There’s only one gas that fills the bill for the Phanerozoic, the last 570 million years of history, and that is nitrogen. When we report paleo CO2 contents in ppm by volume, they are always related back to a 1 bar total pressure. Waters in the Jurassic seas of the UK, when the palaeogeography placed the lower half of the UK in the subtropics, were very warm, upper 20s; we know this from oxygen isotope fractionation under conditions relevant to PAL at 1 bar, and the temperatures do appear reasonable for the subtropics.
    I’m an old guy now, retired to a field where there’s no competitive edge other than the voracious appetite of bugs. “There’s no reason to believe there’s been any change in the mass of the atmosphere”, “I fail to see the relevance”, etc. True, I’m old, but not too physically inconvenienced; I stay fit, and my mental processes don’t appear to be affected at all, apart from my forgetfulness regarding household objects. Time passes by too quickly now, hopefully it’s psychological time, so much so that I don’t remember how I found the time to put in 12-hour days at work. Reading some of the comments pertinent to this rather peripheral topic, I get the feeling that many, a generation younger than myself, whose axes are still bright, talk as if they were an even older generation of Greek scientists.
    Leave the arrogance and wisdom to the elderly.

    Could anyone with a spare GCM and DC force total pressure to 1.1 × PAL (1 bar), just for the fun of it?

  99. Hank Roberts:

    >74, etc.: “cherry picking” or somebody’s “bumps” is rhetoric. Get the numbers and do the math yourself, or see it done, but understand it:
    http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#more
    (black lines data, thicker black same but smoothed, thin straight lines non-sig trends; thick straight blue lines sig trends)

  100. FurryCatHerder:

    So … I’ve been playing with EdGCM overnight, and it’s interesting, but entirely too coarse for my tastes.

    Does anyone know of a free (as in freedom of the press, not free beer) GCM with a finer resolution that could produce reasonable model results in a month or so? Also, if the models will run on SMP Windows boxen that would be great — I have access to large amounts of idle time on some 8-way 3GHz Xeon machines.

  101. Timothy Chase:

    bender (#84) wrote:

    Layer upon layer upon layer of unsupported presumptions, it’s hard to know where to begin.

    -It’s not MY model that can’t explain the 1930s-40s Arctic warmth; it’s Hansen’s. Don’t ask me to explain what they can’t.

    Even if there are some things which fall outside the domain of what we can currently explain, that doesn’t mean that we can’t ever explain them. You understand as much: the argument that “since we don’t know everything, we can’t claim to know anything” is an argument from solipsistic skepticism.

    mike in reply to #69 is arguing that the anomalous trend is “century long”; but the only trendy part that’s inexplicable is the 1980s-90s portion. You are parsing #72, taking exception to the wording, but not addressing the argument there.

    But this is explainable in terms of physics. It is called the greenhouse effect: the absorption of infrared by carbon dioxide and water vapor, in accordance with their absorption lines. And there is a trend throughout the entire twentieth century. There are plenty of decades which by themselves do not show statistically significant trends, and the same applies to an even greater degree to individual years, but the overall trend is pretty obvious, and this is exactly what we would expect as a matter of physics.

    Pointing this out isn’t parsing; it is looking at the wider context. There is a principle in the philosophy of science and statistics which you should recognize even if you were unfamiliar with either of those topics: when a conclusion is justified by multiple, independent lines of investigation, the justification it receives is far greater than what it would receive from any one line of investigation considered in isolation. But in wanting to consider only isolated pieces, you were parsing the evidence. It is a technique I have seen quite often, among creationists.

    -Ike thinks one can estimate the system’s internal variability by removing the forcings from the model. That’s the hope. But that presumes the models are structurally appropriate. THAT is what I’m disputing. That’s the argument.

    It is the methodology of science. Just because we might make mistakes doesn’t mean that we are mistaken. What you are arguing is bad philosophy. Science makes mistakes, the possibility of making mistakes is an unavoidable consequence of the attempt to understand the world. But if one practices the appropriate methodology, one will eventually uncover the areas where one is mistaken and be able to correct them. I find it surprising that I should have to explain as much to you.

    Hank thinks I’m cherry-picking 1998 to exaggerate my point about recent temperatures plateauing; I’m not. They have flattened by any measure, and whether that’s a cyclic deviation or not Hank hasn’t explained why these deviations from a smooth trend happen. They’re not MY bumps; they’re Hank’s. I’m not claiming they’re “natural variability”; Hank is. Natural variability is not something I like to dismiss. It’s something one is sometimes forced to set aside.

    OK. There will be “the highest years,” and when those occur, typically the next year is lower than the previous one; otherwise each year would be higher than the one before. But the fact that 1998 was a highest year is explicable in terms of the particularly strong El Nino we had that year. After falling back, the trend towards higher temperatures continued right up to 2005, when 2005 beat out 1998 as the warmest year from 1890 to 2005. And while 1998 had the benefit of an unusually strong El Nino, 2005 did not. Moreover, 2005 had the coolest solar year since the mid-1980s. 2005 was pretty unusual compared with the twentieth century, but part of a trend over the past five, ten, fifteen and twenty years.

    Ike accuses me of selectively quoting for the purposes of deception, which is not an honorable thing to accuse someone of. I’m trying to keep my arguments relatively clean to keep them brief. The extra bit doesn’t refute my argument at all; it just helps put it in context. I’m ok with that.

    You don’t do that, particularly in internet debates; moreover, you most especially don’t do it with those parts of the same passage which tend to undercut the point you are trying to make. Creationists do this so often that proponents of evolution gave it a name: they call it “quote-mining.” It’s dishonest. And if you thought it was OK at the time, I hope that you will avoid it in the future, whatever the topic.

    Timothy Chase claims reality is on his side, giving him an unfair advantage.

    When I see dishonesty, especially quote-mining, I point it out and I stress it. The individual who uses it should be shamed. Moreover, dishonesty is proof that the individual who is being dishonest realizes that reality isn’t on their side.

    Well, Timothy, if the science is SO settled where does mike’s “could, of course” come from in #69? You do not have a monopoly on the truth. mike sees a small crack there. You do not?

    Arguing from logical possibility of an alternative to the conclusion that the best explanation explains nothing is an approach which belongs to radical skepticism. And thinking of this as “an opening” belongs to debate in which one is no longer concerned with the truth but winning the argument.

    Many here show an unflappable faith in the models, which suggests to me you have no experience in modeling complex dynamic stochastic systems.

    Very few people are as familiar with stochastic systems as climatologists – they virtually invented the study of such systems.

    If the weather system is chaotic, then the climate system is chaotic too.

    When someone concludes that what is true of the parts of a whole must be true of the whole, without adequate justification for the claim, they are committing the fallacy of composition. I believe this accurately describes what you have just done.

    I know you don’t agree with this, but I think you may be wrong. Read the full thread by Annan & Connolley and pay close attention to the comments by Pierrehumbert and Held. (That material, though good, is two years old, so also check the more recent literature.)

    Provide a link or give the exact source – or else summarize what points you are trying to use in your argument.

    If the climate system is (even “sometimes”) chaotic (yes, I know: you dispute even this), then how do you correctly parameterize the numerically stable models such as to mimic the unstable climate? This is a serious problem; it is no joke. My cartoon name is a joke; these points I raise are not. Get past the labels.

    Even probabilistic behavior is mathematical behavior; otherwise it would not be susceptible to mathematical description.

    “anomaly after anomaly after anomaly” is me “giving up”?

    As I have pointed out above, when you throw up your arms like this, you are using arguments belonging to solipsistic skepticism.

    My friend, ray ladbury, do you know what irreducible uncertainty is? It is the futility of what you are suggesting: never give up.

    If it is susceptible to mathematical description, then it is not irreducible; it’s causal.

    When you hit the wall of chaos, you had better know it, and you had better be ready to give up to save your sanity.

    One could make the same claim with respect to trying to understand anything which is not already understood, or anything where the degree of justification does not approach Cartesian certainty.

    Hansen et al 2007 suggest exactly this: giving up on the Arctic warming of the 1930s-40s. Why don’t you criticize them?

    It should be particularly easy then for you to provide the quote – if it supports your argument. But don’t quote-mine.

    PDO dynamics: google PDO will get you started. Hare coined the term. Start there.

    Are you suggesting that wave-like behavior is inexplicable, or are you suggesting that wave-like behavior is inexplicable if it is not strictly regular?

    The rest of your argument is of the same pattern, and I need to start getting ready for work.

  102. Marcus:

    Bender, you claim that “They [temperatures] have flattened by any measure”. Why don’t you go to the GISS website (http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt), plot temperatures from 1900 to 2006, then apply a 5-year (or 3-year, or 7-year, or whatever) moving average and plot them again? I don’t see any such so-called flattening…
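The smoothing Marcus describes is a one-liner. Here is a sketch; the anomaly values below are made up for illustration, not real GISS numbers:

```python
import numpy as np

def moving_average(x, window=5):
    """Centered moving average; 'valid' mode drops the unsmoothed edges."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Hypothetical annual temperature anomalies (deg C), NOT real GISS data
anoms = np.array([0.10, 0.30, 0.25, 0.40, 0.55, 0.50, 0.65, 0.60, 0.75, 0.80])
smoothed = moving_average(anoms, window=5)
print(smoothed)  # 6 smoothed values for the 10 input years
```

Plotting `smoothed` against the center years of each window is the exercise Marcus proposes; a rising series will still rise after smoothing.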

    Also, it is fairly clear that even if the climate system is chaotic, historical evidence suggests that it is not chaotic in such a way as to make it impossible to determine likely responses to a large induced forcing. Not all chaotic systems have the same properties!

  103. Rod B:

    Ray (77), the prefix I like, first heard from a CERN grunt a few years back, is “lotta-” as in lottajoules, followed by “wholelotta-” as in wholelottajoules.

  104. DocMartyn:

    graham dungworth
    Do we have any numbers for the levels of DMSO and MDS in the ice record? Both the totals of organic sulphur and the redox potential would be nice.
    With regard to lightning and N2, you are right about the sink rate. Again, do we have a good measure of nitrate/nitrite in any of the ice core data?

  105. Hank Roberts:

    Here’s one repository search you can click for the latter question. There’s more out there.
    http://adsabs.harvard.edu/cgi-bin/nph-abs_connect?return_req=no_params&db_key=PHY&text=Not%20Available+&title=Nitrate%20plus%20nitrite%20concentrations%20in%20a%20Himalayan%20ice%20core

  106. Walt Bennett:

    I am cross-posting this post from the Hansen topic, since this is (a) a fresher topic with more traffic and (b) it is about models.

    In a discussion of Hansen’s latest paper over at CS, Willis E. made the following comment regarding the veracity of current climate models:

    “Then we should perform Validation and Verification (V&V) and Software Quality Assurance (SQA) on the models. This has not been done. As a part of this, we should do error propagation analysis, which has not been done. Each model should provide a complete list of all of the parameters used in the model. This has not been done. We should make sure that the approximations converge, and if there are non-physical changes to make them converge (such as the incorrectly high viscosity of the air in current models), the effect of the non-physical changes should be thoroughly investigated and spelled out. This also has not been done.”

    I asked him for references showing that the above have not been done. His reply was that he has no source, but that he can find no reference to any of the above having been done.

    Does anybody know how true the above is? If it is in any part not true, are there any supporting references?

  107. ghost of z2a:

    Gavin,
    Thx for addressing bender in a head-on fashion. I won’t comment on the tactics others used (funny, most drawn from the skeptics’ handbag of obfuscation). I will just say, you showed some class and kept it civil.

  108. SomeBeans:

    #100 FurryCatHerder
    It appears you can download the GISS ModelE (used for AR4) from here:
    http://www.giss.nasa.gov/tools/modelE/

    Can’t comment on its user-friendliness or otherwise, but I see Gavin Schmidt’s name at the bottom of the page…

  109. Dan Hughes:

    re: #85

    The climate system is an open system for which there is not equilibrium between the components or within individual components. Under these conditions, change is the only constant.

  110. bender:

    Re #107 Yes, thanks to gavin and to others too.

  111. Alexi Tekhasski:

    In #85, Gavin responded: “The GCM equilibrium is in a statistical sense, as you well know, and there is plenty of internal variability. – gavin”

    I am not sure what you mean. Do you mean that the 30-year averages of the pseudo-weather’s quasi-oscillations still have substantial variability, or that the ocean model and the coupling between ice caps and something else have intrinsic variability of their own?

  112. Alexi Tekhasski:

    In #95, Ray wrote: “First, there is no evidence climate is chaotic.”

    There is plenty of evidence. For example, Prof. C. Wunsch has examined almost every known proxy and performed time series analysis of them, e.g. http://ocean.mit.edu/~cwunsch/papersonline/milankovitchqsr2004.pdf
    The typical conclusion is that
    “record of myriadic climate variability in deep-sea and ice cores is dominated by processes indistinguishable from stochastic, apart from a very small amount (less than 20% and sometimes less than 1%) of the variance attributable to insolation forcing. Climate variability in this range of periods is difficult to distinguish from a form of random walk”

    The spectral power density of climate variations appears to be a continuous power-law function of frequency, which is a good indication of chaotic behavior.
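The power-law point can be illustrated with a synthetic random walk (an editor's sketch, not from Wunsch's paper): its periodogram falls off roughly as f^-2, which appears as a straight line on a log-log plot.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096
walk = np.cumsum(rng.standard_normal(n))   # random walk: expected S(f) ~ f^-2

# Periodogram via the real FFT, after removing the mean
power = np.abs(np.fft.rfft(walk - walk.mean())) ** 2
freqs = np.fft.rfftfreq(n)

# Least-squares slope in log-log space, skipping the zero frequency
slope, intercept = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print(f"log-log spectral slope: {slope:.2f}")   # roughly -2 for a random walk
```

A continuous spectrum with this kind of power-law fall-off is what Wunsch's analysis finds hard to distinguish from the proxy records, which is Alexi's point.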

    “Certainly there seem to be epochs in the history of the planet where the climate was more or less predictable, but on the whole you can say that climate certainly is predictable in a piecewise fashion.”

    Everything is predictable on a piecewise basis; the question is only whether you can predict which direction the piece is heading after a turn. Even a sine function can look flat (“stable”?) twice in each cycle.

    “If the source is not increased greenhouse activity from anthropogenic CO2, what is it?”
    How about solar wind activity, which changes the formation of clouds and acts as a strong shutter on solar radiation?

  113. Tim McDermott:

    re 106, V&V and SQA of the models:

    First, saying “Then we should…” immediately raises the question of who the “we” are. V&V is seldom used, even in military applications, except for highly mission-critical or human-safety systems, because V&V can come close to doubling the cost of a project. This is evident from the sort of questions in the quote: “…error propagation analysis…,” “…complete list…,” “make sure…converge…,” “thoroughly investigated.” These questions are asked in a “prove it to me” tone of voice. The modelers would not only have to build models that match observed phenomena, they would have to provide evidence (that would not otherwise have been developed) to satisfy the V&Vers. You are going to need as many V&Vers as you have developers (they are going to verify all of the analytical work), and the V&Vers themselves need to be qualified climate scientists to understand what they are looking at.

    So who pays?

    Is any of this of any benefit? No. There are better, cheaper ways to assess the quality of software. In this particular case, the fact that the couple dozen climate models in the world all pretty much agree shows that they are free of disqualifying defects (assuming that they were designed and coded independently).
    By way of reference, in the 7 years I spent at NASA Goddard, I never heard of a project, even for flight software, that used real IV&V (Independent V&V). They called using a separate test team IV&V, and calculated that it added 5% to 15% to the cost of a project. Note that an independent test team can’t answer any of the questions in the original post. This was the shop that built the flight control software for nearly all of NASA’s earth satellites for decades. They understood software quality.

    The second thing a call for V&V does is misunderstand scientific, and perhaps all, modeling. V&V’s goal is to answer two questions: was the system built correctly, and does the system as built meet the requirements? But notice that the goal of any model building is to better understand the thing being modeled. So where are the requirements specified? In Nature? How can we trust the V&V team to understand the subject better than the modelers do?

    I reluctantly conclude that the call for V&V is either deeply ignorant of real-world software development, or is just another piece of FUD trying to impeach climate models.

  114. Tim McDermott:

    In the process of composing my previous comment, I downloaded modelE from the GISS site and took a quick look at the code. From a completely non-rigorous assessment of two modules, it looks like it was written by fully competent programmers. The code is modular. I didn’t see any overly long subroutines. There are plenty of comments that look like they would mean something to somebody who understands the domain. I didn’t notice any routines with excessive cyclomatic complexity.

    It’s been 18 years since I worked with FORTRAN, so I’m not competent to thoroughly evaluate the code, but it certainly passes my quick look.

    I wish the Java I’m working on now was as well written.

  115. Ike Solem:

    Alexi, you say, “I usually keep in mind that if the system is open (like Earth), energy is not conserved.”

    That is an excellent point, and I stand corrected. It’s easy to fall into the trap of thinking of the climate in simple thermodynamic terms, as in ‘system and surroundings’ – and ‘internal oscillations of the climate system’ – as if the Earth’s climate behaved like a piston, or like a mass on a spring. Of course, energy is always conserved, even in open systems – it just can leave the system, is all.

    However, the basic rules of heat transfer still apply to the Earth, and we can note that a body can only lose heat via conduction, convection and radiation. We also know that the oceans warm up very slowly due to the high heat capacity of water, and that the ground has a low heat capacity. So, how can we get at a net energy balance for the Earth?

    See, for example, Gavin’s earlier post, Planetary Energy Imbalance? The issue is succinctly summed up in the statement in that post “Since the heat capacity of the land surface is so small compared to the ocean, any significant imbalance in the planetary radiation budget (the solar in minus the longwave out) must end up increasing the heat content in the ocean.”
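A back-of-envelope version of that statement, with the editor's own round numbers (all assumed for illustration; ~0.85 W/m² is the mid-2000s imbalance estimate discussed in that post):

```python
imbalance = 0.85            # W/m^2, assumed global-mean radiative imbalance
earth_area = 5.1e14         # m^2, surface area of the Earth (round number)
secs_per_year = 3.15e7

joules_per_year = imbalance * earth_area * secs_per_year
print(f"energy gained: {joules_per_year:.1e} J/yr")

# If nearly all of it goes into a ~100 m ocean mixed layer:
ocean_area = 3.6e14                      # m^2, rough ocean surface area
layer_mass = ocean_area * 100 * 1025     # depth (m) * density (kg/m^3)
c_water = 4186                           # J/(kg K), specific heat of seawater (approx.)
dT = joules_per_year / (layer_mass * c_water)
print(f"mixed-layer warming: ~{dT:.2f} K per year")
```

The small per-year temperature number is the point: the ocean's enormous heat capacity soaks up the imbalance, which is why ocean heat content is the natural place to look for it.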

    Still, this is not a direct measurement of the radiative imbalance (ocean heat content, that is), and thus includes some assumptions. How do we get a better measure? Well, one could look at the Earth from a good vantage point (space) and measure the solar energy balance directly, as in the plan for NASA’s Deep Space Climate Observatory. This project would cost $100 million, and the instrument has already been built, but it has been mothballed by NASA (click on the link). For some reason, NASA can’t come up with the money… but they can come up with $5.6 billion for Hewlett-Packard… certainly a fine company, but what’s up?

    Perhaps this quote by the NASA chief on NPR explains this: “I guess I would ask which human beings – where and when – are to be accorded the privilege of deciding that this particular climate that we have right here today, right now is the best climate for all other human beings. I think that’s a rather arrogant position for people to take.”

    Yes, attempting to address global warming is ‘arrogant’, according to the head of NASA, who can’t seem to come up with funds for climate satellites any more.

    P.S. “First, there is natural variability that is tied to external forcings, such as variations in the Sun, volcanoes, and the orbital variations of Earth around the Sun. The latter is the driving force for the major ice ages and interglacial periods. Second, there is natural variability that is internal to the climate system, arising, for instance, from interactions between the atmosphere and ocean, such as El Niño. This internal variability occurs even in an unchanging climate.” (From Trenberth 2001 Sci)

    Would one classify alterations of carbon cycle dynamics as ‘internal variables’? For example: the saturation of the oceans with respect to CO2, resulting in less uptake; the transition from forest to grassland, resulting in less biosphere carbon storage in the expanding dry subtropical zones; and the melting of northern permafrost, resulting in the release of frozen carbon as CH4 or CO2. Tamino had an interesting post on this issue (A CO2 surge?).

    Basically, climate scientists seem to understand the physics of the unforced internal variability well enough to rule it out as the cause of the observed temperature trend, and they understand the physics of the forcings well enough to attribute the observed temperature increase to anthropogenic changes in greenhouse gases. What still seems uncertain to us humble observers is the response time of the system, but it sure seems to be tending toward the rapid end of the possibilities… which is not good news.

  116. Barton Paul Levenson:

    “PAL” means “Present Atmospheric Level.” When a paleoclimatologist says an old Earth atmosphere was 30% oxygen, that’s not “30% PAL.” That’s about 150% PAL. And the surface pressure of Venus is 92 bars, not 200.
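    The conversion Barton describes is easy to check. A quick sketch of the arithmetic – the 20.9% present-day O2 volume fraction is a standard figure, not something given in the comment:

    ```python
    # Back-of-envelope check of the PAL (Present Atmospheric Level) correction.
    # Assumption: present-day O2 is ~20.9% of the atmosphere by volume.
    present_o2 = 0.209   # present O2 volume fraction
    ancient_o2 = 0.30    # the "30% oxygen" ancient atmosphere from the comment

    pal = ancient_o2 / present_o2
    print(f"30% O2 corresponds to about {pal * 100:.0f}% PAL")
    ```

    So a 30%-oxygen atmosphere comes out near 145% PAL – roughly Barton’s “about 150% PAL”, and nowhere near “30% PAL”.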

  117. Alastair McDonald:

    Re #106

    Walt, There was a large study performed trying to reconcile the measurements made using radiosondes with the models for the lower atmosphere of the tropics (local climate?) “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences.” Thomas R. Karl, Susan J. Hassol, Christopher D. Miller, and William L. Murray, editors, 2006. A Report by the Climate Change Science Program and the Subcommittee on Global Change Research, Washington, DC. http://www.climatescience.gov/Library/sap/sap1-1/finalreport/

    In Chapter 5 [Santer et al. 2006] two key points are made:

    • For longer-timescale temperature changes over 1979 to 1999, only one of four observed upper-air data sets has larger tropical warming aloft than in the surface records. All model runs with surface warming over this period show amplified warming aloft.

    • These results could arise due to errors common to all models; to significant non-climatic influences remaining within some or all of the observational data sets, leading to biased long-term trend estimates; or [to] a combination of these factors. The new evidence in this Report (model-to-model consistency of amplification results, the large uncertainties in observed tropospheric temperature trends, and independent physical evidence supporting substantial tropospheric warming) favors the second explanation.

    IMHO, the final sentence seems to imply that since the models all agree, there cannot be “errors common to all models.” Walt, I leave it to you to decide whether that is a fair reading, and if so, whether the argument itself holds up.

  118. Ray Ladbury:

    Alexi, First, energy is conserved whether the system is open or closed–the energy has to come from somewhere. Where? Your reference to solar winds is incomplete, but I can only assume you are talking about the modulation of galactic cosmic rays by solar winds, since solar particles are largely below the geomagnetic cutoff. The main problem here is that the GCR fluxes are not changing–per either the neutron-flux data or the satellite data. The GCR mechanism is not persistent, and for it to change the climate, it must be changing itself. It is not. And even if it were a credible mechanism, there would then be the question of why a greenhouse gas would magically stop functioning as a greenhouse gas above ~320 ppm.
    It surprises me that you would take comfort in the idea that the climate is chaotic. Certainly hundreds of millions of farmers depend on climate being somewhat predictable, and during the era in which they have been farming, it largely has been. The past 10,000 years have been a period of exceptional stability. If climate is indeed chaotic, that stability can only be viewed as a quasi-stable strange attractor, and by perturbing the climate away from that attractor we risk inducing much more chaotic behavior in which agriculture would be largely impossible. I would contend that this makes it all the more important to limit CO2 emissions. It makes for some fascinating speculation: did civilization begin when the climate stabilized to the point where agriculture became more profitable than a hunter-gatherer lifestyle? Possibly, but the more relevant point is that the planet will not support a population of 9 billion hunter-gatherers, and I would consider that a significant issue if we want human civilization to continue.
    Ike: Regarding Mike Griffin’s comments, I have to say I was appalled. My God, why does he have world-class climate scientists on staff if he is not going to listen to them?
    Re: the new focus of NASA, you might appreciate the essay by Tom Bodett

    http://www.tombodett.com/storyarchive/homeplanet.htm

    Warning, don’t drink coffee while reading this as you could scald your nasal passages.

  119. Hank Roberts:

    “Nowhere in NASA’s authorization … is there anything at all telling us that we should take actions to affect climate change …. we study …. I’m proud of that ….” — Mike Griffin on NPR

    Well, not since the Administration took that OUT.

    Do we assume he’s also going to cancel the watch for asteroids on Earth-crossing orbits?

    The rate of change from an asteroid impact is closer to the rate of change from fossil fuel impacts than either is to any other climate change event we know about, I think. Doesn’t the man understand the rate of change of the rate of change? Calculus??

  120. Dan Hughes:

    re: #106, 113, 114

    I concur that documentation showing that software V&V and SQA procedures have been applied to NWP and AOLGCM software has proven very difficult to locate.

    Additionally, I suggest you get an update on the status of software V&V and SQA within NASA, and see this report for detailed discussions of code-to-code comparisons as a procedure for V&V and SQA, in which you’ll find:

    The Seven Deadly Sins of Verification
    (1) Assume the code is correct.
    (2) Qualitative comparison.
    (3) Use of problem-specific settings.
    (4) Code-to-code comparisons only.
    (5) Computing on one mesh only.
    (6) Show only results that make the code “look good.”
    (7) Don’t differentiate between accuracy and robustness.

    A good starting point might be the information in the literature references listed here.

    Finally, check out some of the additional information about ModelE under code counts.

  121. FurryCatHerder:

    Re #118 by Ray –

    Alexi, First, energy is conserved whether the system is open or closed–the energy has to come from somewhere. Where? Your reference to solar winds is incomplete, but I can only assume you are talking about the modulation of galactic cosmic rays by solar winds, since solar particles are largely below the geomagnetic cutoff. The main problem here is that the GCR fluxes are not changing–per either the neutron-flux data or the satellite data. The GCR mechanism is not persistent, and for it to change the climate, it must be changing itself. It is not. And even if it were a credible mechanism, there would then be the question of why a greenhouse gas would magically stop functioning as a greenhouse gas above ~320 ppm.

    If you look at the aa index (verbosity here), as well as the effect of cosmic radiation on cloud formation, I’m sure you’d find more than enough evidence that GCR is far from constant. Whether or not it’s a major factor is open to debate, but a graph of aa over the past 150 years looks a lot like the temperature graph. Tim and I had a discussion about this sort of thing a few weeks back, so perhaps he can repost the article he found.

  122. ray ladbury:

    FurryCatHerder, First, you have to consider the 11-year solar modulation, which does indeed affect GCR flux considerably, just as it affects the geomagnetic field. That would not explain the warming effect–you’d have to be seeing decreasing solar wind to let in more GCR particles. The solar particles won’t do it by themselves, as they are mostly below the geomagnetic cutoff even during magnetic storms (exceptions would be really big flares like the Carrington Event in 1859, the ’56 event, the ’72 event and maybe the ’89 event). If you compare fluxes at the same stage of the solar cycle, they’re consistent within expected Poisson errors.
    Lots of things vary with the solar cycle. Also, look at the graph–it ends in 1998, and 1998, one of the hottest years ever, is low on the aa scale. And unlike CO2, there’s no persistence to the effect–if the flux isn’t decreased, then cloud formation doesn’t decrease and you don’t get increased solar irradiance. The GCR mechanism really isn’t credible–the flux is only 5 per cm^2 per second, so it’s hard to see how that could be significant anyway.
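    To put Ray’s “hard to see how that could be significant” remark in perspective, here is a rough energy-budget sketch. The 5 particles/cm^2/sec figure is from the comment; the ~1 GeV mean particle energy and the solar-constant value are my own illustrative assumptions:

    ```python
    # Rough comparison of the GCR energy flux to the solar constant.
    # Assumptions (NOT from the comment): mean GCR particle energy ~1 GeV.
    GEV_TO_J = 1.602e-10          # 1 GeV in joules

    gcr_flux = 5.0                # particles per cm^2 per second (from the comment)
    gcr_energy_flux = gcr_flux * GEV_TO_J * 1e4   # W/m^2 (1 m^2 = 1e4 cm^2)

    solar_constant = 1361.0       # W/m^2 at top of atmosphere (assumed value)

    ratio = gcr_energy_flux / solar_constant
    print(f"GCR energy flux ~{gcr_energy_flux:.1e} W/m^2, "
          f"about {ratio:.0e} of the solar constant")
    ```

    The direct energy input works out to a few microwatts per square meter, roughly a billionth of the solar constant – which is why any GCR-climate link has to invoke an indirect amplification mechanism (cloud seeding), not the particles’ own energy.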

  123. FurryCatHerder:

    Ray @#122:

    Ray,

    I don’t think the chart ending in 1998 is relevant — the aa index and sunspot count are related (chart), and SC23 was only slightly less robust than SC22.

    This is the comment I take issue with –

    “And unlike CO2, there’s no persistence to the effect–if the flux isn’t decreased, then cloud formation doesn’t decrease and you don’t get increased solar irradiance.”

    If you look at the aa index, you see that the rise of the minimum above prior decades’ maximum values makes the change in the aa index “persistent”. This comment is from the paper I referenced with “verbosity here” above –

    “Although not documented here, it is interesting to note that the overall level of magnetic disturbance from year to year has increased substantially from a low around 1900. Also, the level of mean yearly aa is now much higher so that a year of minimum magnetic disturbances now is typically more disturbed than years at maximum disturbance levels before 1900.”

    That would provide the persistent trend towards decreasing cloud formation and increasing irradiance.

  124. Blair Dowden:

    The May 25 edition of Science has a report on regional climate projections for the American Southwest. Figure 1 (available here) shows the median results of 19 AR4 model projections for precipitation and evaporation for this region. The most obvious feature, other than a drying trend, is a 10-year precipitation cycle (the blue line). According to the paper, this is driven by the El Niño / La Niña cycle.

    I would expect that as model projections go farther into the future, model results would increasingly diverge, and the average (or median) might show a trend, but would not show decadal variation in that trend. Yet looking at the graph, I see a very clear and intense La Niña cycle starting in 2080. Considering that at present we cannot predict that cycle even a year in advance (an unexpected El Niño is why last year’s hurricane forecast was so wrong), I would like to know how 19 climate models could all predict such a result.
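    Blair’s puzzle is a fair one: cycles with independent phases across models should largely cancel in a multi-model mean or median. A toy illustration of that expectation – 19 synthetic runs with a 10-year cycle and random phases, all numbers hypothetical:

    ```python
    import numpy as np

    # Toy illustration: decadal cycles with independent random phases should
    # largely cancel in a multi-model average, unlike a shared forced trend.
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 100.0, 0.1)        # years

    n_models = 19
    runs = [np.sin(2 * np.pi * t / 10 + rng.uniform(0, 2 * np.pi))
            for _ in range(n_models)]      # 10-year cycle, random phase per model
    ensemble_mean = np.mean(runs, axis=0)

    # Each run has amplitude 1; the incoherent average is much smaller.
    print(f"single-run std ~{np.std(runs[0]):.2f}, "
          f"ensemble-mean std ~{ensemble_mean.std():.2f}")
    ```

    An incoherent cycle shrinks roughly as 1/sqrt(19) in the mean, so a strong decadal signal surviving in the multi-model median would indeed suggest the models’ ENSO-like variability is somehow phase-locked – which is exactly the point of the question.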

  125. ray ladbury:

    FurryCatHerder, First, yes, there is a correlation between sunspot number and geomagnetic disturbance. This is because sunspot number correlates roughly with solar activity, which correlates with the solar wind density. The solar wind is a plasma, and so the moving charged particles generate magnetic disturbances in the geomagnetic field. Many other things can also change the geomagnetic field–including the geodynamo in the core. Since we’re heading for a field flip in another few thousand years, there’s considerably more variability now than at any time in the instrument record.
    However, solar wind particles themselves do not penetrate the geomagnetic field (except near the poles, as the aurora), except during extreme solar particle events. The argument is that the solar wind modulates the flux of galactic cosmic rays, which create showers of charged particles in the atmosphere and seed cloud formation. There are several things wrong with this argument. First, there is no evidence that the atmosphere lacks nucleation centers for cloud formation in the first place. Second, GCR fluxes are small–5 particles per cm^2 per second in free space on average–and less by the time they penetrate the geomagnetic field. Third and most damning–GCR flux is not changing: not since ~1975 if you look at the satellite record, and not since the 1950s based on neutron fluxes measured at ground level (the neutrons are produced by the GCR as end products of the showers). You can find all sorts of patterns if you look at solar variability. However, you have to ask yourself whether that variability is sufficiently large to explain the observed effects, and if it’s not, what amplification mechanism you propose. So far no one has anything that comes close to a coherent theory. Anthropogenic CO2, on the other hand, is known physics and sufficiently large to explain the effects we are seeing. If you contend that Svensmark’s ideas are credible, then you have to explain why CO2 is a less effective greenhouse gas than we thought–as well as develop a coherent theory for amplifying the GCR effect.