RealClimate

Comments


  1. Note the effect of the Butterfly Ballot and a few hanging chad.

    Comment by Richard Pauli — 23 Apr 2008 @ 9:55 AM

  2. Please see http://climatesci.org/2008/04/23/comment-on-real-climates-post-on-the-relevance-of-the-sensitivity-of-initial-conditions-in-the-ipcc-models/

    [Response: In the linked piece, you very clearly state that you do not believe that the real world is sensitive to initial condition variations like butterflies. That is all we are discussing here. If you now think that it is, feel free to expound on your viewpoint. We were just trying to make sure that a diversity of points was presented. – gavin]

    Comment by Roger A. Pielke Sr. — 23 Apr 2008 @ 10:15 AM

  3. Ed Lorenz was one of the great figures of 20th century science, and he was an accomplished writer as well. His book, “The Essence of Chaos” is still the best popular science book on the subject. It is loaded with interesting commentary as well:

    To the often-heard question, “Why can’t we make better weather forecasts?” I am often tempted to reply, “Well why should we be able to make forecasts at all?” . . .

    I must admit that my first encounter with the question of tidal prediction left me with an uneasy feeling. Apparently I had always looked at announcements of coming high and low tides as statements of fact, as firmly established as the times of yesterday’s tides, and not to be doubted. Yet it is apparent that they are predictions, and so, for that matter, are such “facts” as the times of sunrise and sunset that appear in almanacs. That the latter times are simply predictions with a high probability of being exactly right, rather than facts, becomes evident when we realize that an unforeseen cosmic catastrophe – perhaps a collision with an asteroid – could render them completely wrong. Even without a catastrophe they can be slightly in error. . .

    Ed Lorenz then goes on to provide the clearest and most concise description of the ocean-atmosphere system I’ve ever heard:

    Both the atmosphere and the ocean are large fluid masses, and each envelops all or most of the earth. They obey rather similar sets of physical laws. They both possess fields of motion that tend to be damped or attenuated by internal processes, and both fields of motion are driven, at least indirectly, by periodically varying external influences. In short, each is a very complicated forced dissipative dynamical system.

    Perhaps it would be more appropriate to call them two components of a larger dynamical system, since each exerts a considerable influence on the other at the surface where they come into contact. The winds, which vary with the state of the system, produce most of the ocean’s waves, and help to drive the great currents like the Gulf Stream. Evaporation from the ocean, which also varies with the state of the system, supplies the atmosphere with most of the moisture that subsequently condenses and still later falls as rain or snow.

    Why, with so many similarities, should we have had so much more success in tidal than in weather prediction? Are oceanographers more capable than meteorologists? As a meteorologist whose close friends include a number of oceanographers, I would dispute any such hypothesis.

    He then goes into a really great description of the predictable and the irregular (after a brief discussion of how strong winds can produce anomalously high tides):

    It appears, then, that in attempting to forecast the tides we are for the most part trying to predict the highly predictable regular response. We may also wish to predict the smaller irregular response, but even when we fail to do so we have usually made a fairly good forecast. In forecasting the weather, or, for definiteness, the temperature, we usually take the attitude that the regular response is already known – we know in advance that summer will be warmer than winter – and we regard our problem as that of predicting those things that we do not already know simply by virtue of knowing the climate.

    In short, when we compare tidal forecasting and weather forecasting, we are comparing prediction of predictable regularities and some lesser irregularities with prediction of irregularities alone. If oceanographers are smarter than meteorologists, it is in knowing enough to pick a readily solvable problem. I should hasten to add that most oceanographers are not tidal forecasters anyway, nor, for that matter, are most meteorologists weather forecasters. In most respects the oceans present just as many challenges as the atmosphere.

    Thus, we can think of efforts at climate prediction as being focused on the regular components of the land-ocean-atmosphere system. As a simple example, think of dropping a china cup on a tile floor from a two meter height. One can quickly predict the velocity of the cup at impact as well as the time the cup will hit – but try predicting the number of pieces the cup shatters into, and where those pieces all end up. (The notions that Lorenz came up with had such far-reaching effects in mathematics that people like Ian Stewart claimed him as one of their own, as well).

    Unfortunately, a number of widely-quoted commentators on global warming seem to have little understanding of this entire issue. For an example of a heavily distorting “science article”, see the LA times:

    http://www.latimes.com/news/science/environment/la-sci-adapt26mar26,1,3720995,full.story
    “Global warming: Just deal with it, some scientists say: The ‘non-skeptic heretic club’ says it would be easier and cheaper to adapt than fight climate change. Critics say the flaw in the theory is that the effects will be unpredictable.” (March 26, 2008)

    No – the flaw in the theory is that the climate effects ARE predictable – that’s the regular component! There is high uncertainty about the upper bound of responses, is all. What is remarkable is that the press is giving so much coverage to such a notion – well, maybe not, after what we’ve seen so far.

    For another example of this fundamental lack of understanding, see:
    http://sciencepolicy.colorado.edu/admin/publication_files/resource-2593-2008.08.pdf

    Unpredictable future: The IPCC scenarios include a wide range of possibilities for the future evolution of energy and carbon intensities. Many of the scenarios are arguably unrealistic and some are likely to be unachievable. . .

    . . .Enormous advances in energy technology will be needed to stabilize atmospheric carbon dioxide concentrations at acceptable levels. If much of these advances occur spontaneously, as suggested by the scenarios used by the IPCC, then the challenge of stabilization might be less complicated and costly. However, if most decarbonization does not occur automatically, then the challenge to stabilization could in fact be much larger than presented by the IPCC.

    This is the latest last-ditch effort to prevent governments from taking action on global warming. Essentially, there is a small group that is trying to convince the world that doing nothing about global warming is the right thing to do.

    However, those authors – Pielke, Green and Wigley – have no idea of what they are talking about. None of them has any background in energy technology whatsoever, yet they feel they have some authority – and apparently Nature agreed. None has ever published any papers on actual energy technology – solar PV, for example – and yet their paper – an attempt at predicting future technology scenarios – was accepted and published by Nature.

    The fact of the matter is that existing energy technology is sufficient to replace all fossil fuels – it is just a manufacturing and infrastructure issue. Thus, Pielke et al.’s analysis is completely bogus. I can’t imagine why Nature would allow people with zero research experience in solar, wind, biogas, other biofuels, or nuclear to write such an article. What specific “enormous advances in technology” are needed? I doubt that the authors could even write down a theoretical description of a PV cell without a lot of help.

    A far more interesting question, though, is what the regular and irregular components of attempts to forecast energy development are. We know that enough sunlight falls on the Earth in one hour to power all of human society for one year – so we just have to capture and convert a small fraction of that in order to meet our energy needs. That is the regular component.

    Comment by Ike Solem — 23 Apr 2008 @ 11:15 AM

  4. Thank you for this obit. Lorenz does indeed deserve to be praised more than buried. Your comments about chaos theory are particularly germane given the common misconceptions about chaos in climate and climate models. Lorenz’s story illustrates how scientists can become intrigued by a problem and can’t put it down until they not only understand it, but can explain it clearly to their peers.
    WRT how climate can be deterministic even as weather is chaotic, it helps me to think in terms of conserved quantities and the regions they constrain a system to in phase space. A system may have many states it can occupy within that constrained space, but the long-term averages over those states will not vary much even given small perturbations.
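
    This point can be made concrete with the Lorenz-63 system itself. The sketch below is purely illustrative (the integrator, step size, run length and size of the perturbation are arbitrary choices): two runs that differ by a tiny nudge end up in completely different instantaneous states, yet their long-term averages over the attractor are nearly the same.

    ```python
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Right-hand side of the Lorenz-63 equations."""
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def run(s, dt=0.01, n=200_000):
        """Integrate with a classical 4th-order Runge-Kutta scheme; return the trajectory."""
        out = np.empty((n, 3))
        for i in range(n):
            k1 = lorenz(s)
            k2 = lorenz(s + 0.5 * dt * k1)
            k3 = lorenz(s + 0.5 * dt * k2)
            k4 = lorenz(s + dt * k3)
            s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
            out[i] = s
        return out

    a = run(np.array([1.0, 1.0, 1.0]))
    b = run(np.array([1.0, 1.0, 1.0 + 1e-10]))  # a "butterfly"-sized perturbation

    print("final-state separation:", np.linalg.norm(a[-1] - b[-1]))  # order of the attractor size
    print("long-term mean of z, run a:", a[50_000:, 2].mean())       # spin-up discarded
    print("long-term mean of z, run b:", b[50_000:, 2].mean())       # nearly identical to run a
    ```

    The instantaneous states (the “weather”) decorrelate completely, while the statistics over the attractor (the “climate”) are essentially insensitive to the perturbation.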

    Comment by Ray Ladbury — 23 Apr 2008 @ 11:30 AM

  5. It is a shame that we have lost someone who was so adept at communicating complex ideas with such lucidity, brevity and precision at the same time. I mean no offense to the authors of this site when I say that I feel that Lorenz was unmatched in this regard. His skill with words is something that I — and many others — aspire to.

    [Response: Believe me, no offense could possibly be taken. It’s an honor to be even put in the same sentence as an expositor such as Ed. Most of us would be happy to achieve half the clarity that he did in his writing. –raypierre]

    Comment by Bruno — 23 Apr 2008 @ 12:27 PM

  6. I was looking through his publication list, and he published 4 papers in 2006! Remarkable.

    Comment by SteveF — 23 Apr 2008 @ 5:27 PM

    For those readers who are not climate scientists or mathematicians, I can recommend James Gleick’s 1987 book Chaos: Making a New Science, which does a very nice job of telling the story of how Lorenz made his serendipitous discovery and explaining the basic ideas and implications of chaos theory for climatology and other fields. A good read.

    Comment by Sue — 23 Apr 2008 @ 5:30 PM

  8. Re: #3,

    We know that enough sunlight falls on the Earth in one hour to power all of human society for one year – so we just have to capture and convert a small fraction of that in order to meet our energy needs.

    Another, at least to me, interesting question is just what exactly are our so-called needs? As an example of what I mean, let me postulate a possibly radical paradigm shift and alternative way of life. Do we really need what we think we do? Could we imagine a much more personal, self-contained, but still comfortable and fulfilling existence? Those of us living the so-called first-world style of Western civilization live in relatively large houses with lots of power-consuming lights and gadgets. Do we really need to light an entire room to full daylight standards at night all the time? Could we get by with low-intensity LED strips, and maybe wear fashionable self-contained headgear with personal high-powered lighting driven by human-powered kinetic generators, saving the full daylight-at-night brilliance for very special occasions? I can think of hundreds more such examples of energy efficiency that are available right now with off-the-shelf technology, if we look at how and what we do from radically different perspectives.

    I’m not even going to bother pointing out the stupidity of sitting in a typical traffic-jam commute to your average downtown office, cocooned in a couple of thousand pounds of fossil-fuel-powered, carbon-dioxide-spewing automobile. As a more concrete example, one of my colleagues from work takes three to four days of very stressful travel by plane, rental car, hotel stay, etc. to visit and implement my company’s software on site at a typical customer. In that same time I can log on to computers in half a dozen countries around the world and do the same implementation and training for dozens of customers without ever leaving my home. Maybe I can invite those customers to meet me on some nice sunny beach on our two-month-long sabbaticals and make it a point for all of us to get there by sailboat instead of by plane. Let me invite all of you to start imagining this kind of world.

    BTW, I just came back from doing kayak support for a relay swim team (http://www.distancematters.com) held in Tampa Bay, Florida, in commemoration of Earth Day; I spent over 13 non-stop hours paddling in this 24-mile race. I met some people who do things like swim the English Channel, run up on the beach, turn around, and then swim back again. It was an immense honor and a pleasure to be associated with such people.

    Happy Earth day to all!

    Comment by Fernando Magyar — 23 Apr 2008 @ 8:15 PM

  9. Thanks for a very nice scientific obituary. In addition to his writing, Lorenz’s oral presentations were also marvels of astonishing clarity. I always was left at the end of his talks with the distinct impression that his brain neurons were deterministic while mine were chaotic.

    [Response: no distinction then? – gavin]

    Comment by John Nielsen-Gammon — 23 Apr 2008 @ 8:57 PM

  10. A fine scientist, a fine man, and a fine human being. I will mourn him.

    Comment by Costanza — 23 Apr 2008 @ 11:34 PM

  11. Re #3, on energy security and replacement of fossil fuels. I am unsure as to the notion of the technology being available presently to replace fossil fuel enough to mitigate climate change. The big issue over here in the UK at the present time is Energy Poverty. Fossil fuels are cheap, renewables are more expensive and hence that will drive fuel bills up. If we produce 20% of our energy from sustainable/renewable sources then energy is presently going to cost more putting millions of homes into energy poverty which is defined as 10% of income being spent on energy expenditure.

    In the UK petrol (gasoline) is on the rise, £1.10 a liter and diesel £1.20 (that equates to almost $7 a US gallon), and gas and electricity prices have risen significantly in the past two years. Although this makes renewables more affordable as fossil fuels become more expensive, more R&D is required before the price difference is closed. Our Government, or anyone else for that matter, has not mentioned this to the people. I wonder why.

    Edward Lorenz – RIP. I knew of him through my layman’s studies of fractals, chaos and nonlinear dynamics. James Gleick’s book Chaos mentioned him a lot. He seemed a very capable and, judging by this article, brilliant scientist.

    Comment by pete best — 24 Apr 2008 @ 4:36 AM

    I read the little companion piece by Richard Eykholt, and I think a point is being missed. It is not that the flap of the butterfly wings in Tokyo is “causing” the tornado, in so far as creating the kinetic and thermal energy differentials that result in a tornado. Rather the flap of the butterfly wings can set up an escalating sequence of variations in the temporal and spatial pattern of energy flows (as compared with the non-butterfly case), that ultimately could result in the tornado happening in June in Dallas, rather than in May in Oklahoma City. It is somewhat analogous to a pinball machine. A very, very minor change in the initial velocity vector of the ball results after a period of time in a completely different, even unrecognizable, path for the ball. A brief nudge on the table is not construed to set the ball in motion, nor is it construed to create the energy exchange between the ball and each bumper it contacts. Rather, a very minor initial change to the ball’s path results after time in a totally different pattern for the ball’s motion. Threshold phenomena, just like barely missing a specific bumper, become significant markers in the pattern change.
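
    The pinball picture can be made quantitative with a small numerical experiment on the Lorenz-63 system (used here only as an illustrative stand-in; the perturbation size and solver tolerances are arbitrary assumptions). The separation between two trajectories started a hair apart grows roughly exponentially on average, and then levels off once it reaches the overall size of the attractor, which is the saturation described in the Eykholt quote further down this thread.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0.0, 40.0, 4001)
    opts = dict(t_eval=t_eval, rtol=1e-10, atol=1e-12)
    a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], **opts)
    b = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0 + 1e-9], **opts)  # a tiny "nudge"

    sep = np.linalg.norm(a.y - b.y, axis=0)  # separation between the two runs vs. time
    for t in (0, 5, 10, 15, 20, 25, 30, 40):
        print(f"t = {t:4.1f}   separation = {sep[int(t * 100)]:.3e}")
    # On average the separation grows exponentially (the slope gives the leading
    # Lyapunov exponent, roughly 0.9 per time unit for these parameters) until it
    # saturates at the diameter of the attractor, a few tens of units.
    ```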

    Comment by Daniel Nall — 24 Apr 2008 @ 7:49 AM

    One thing I’ve wondered about. Ed’s writings were always so clear that they came across as just about the only way to express a scientific idea in writing. They expressed the idea effortlessly. In my experience, such effortless communication is anything but effortless. For those who worked with him, could you describe his approach to problem solving and subsequent communication of the results? Was he intuitive–like Landau for example–or did he need concrete examples he could study and draw insight from? And did he write his papers in a single draft or did they undergo many revisions? Thanks.

    Comment by Ray Ladbury — 24 Apr 2008 @ 8:06 AM

    Gavin – Thank you for posting my Climate Science link. In terms of actual butterflies, the issue is clearly explained by an expert in the physics and mathematics of nonlinear dynamics and chaos in geophysical flows, Professor Richard Eykholt (see http://climatesci.org/2005/10/12/more-on-the-butterfly-effect/), who writes:

    “Roger:

    I think that you captured the key features and misconceptions pretty well. The butterfly effect refers to the exponential growth of any small perturbation. However, this exponential growth continues only so long as the disturbance remains very small compared to the size of the attractor. It then folds back onto the attractor. Unfortunately, most people miss this latter part and think that the small perturbation continues to grow until it is huge and has some large effect. The point of the effect is that it prevents us from making very detailed predictions at very small scales, but it does not have a significant effect at larger scales.

    Richard Eykholt”

    [Response: You misinterpreted this back on the original thread and you are misinterpreting it here again. However, just repeating the same argument is pointless. Since I agree with Dr. Eykholt’s statement, and so do you, let’s just leave it at that. (if other readers are interested in what this is about, please go to the original thread. The clue is that ‘larger scales’ in the Eykholt quote means the attractor itself (i.e. climate), while RP thinks he means the large scale flow (i.e. the specific position on the attractor)). – gavin]

    Comment by Roger A. Pielke Sr. — 24 Apr 2008 @ 11:08 AM

  15. pete best wrote: “Fossil fuels are cheap, renewables are more expensive and hence that will drive fuel bills up.”

    The cost of fossil fuels is going up rapidly and will continue to do so, and in the not too distant future fossil fuels will be in increasingly short supply at any price due to depletion of supplies, and declining extraction rates. So fuel bills are going to be driven up in any case. Meanwhile the cost of clean renewable energy from wind and solar is dropping, and as both industries are growing rapidly worldwide, economies of scale combined with new technologies are set to reduce prices even more, perhaps dramatically so. There is every economic reason in the world to accelerate the phaseout of fossil fuels and the expansion of clean renewable energy sources (and of course for maximum efficiency in the usage of both).

    [Response: Please, this is not at all an appropriate place to be discussing fossil fuel economics. –raypierre]

    Comment by SecularAnimist — 24 Apr 2008 @ 12:31 PM

    There are some pretty libelous things said in this Easterbrook interview by that opponent of mine. It doesn’t take long to find them either. Downright shameful. Somebody should hand them their hat.

    http://icecap.us/images/uploads/DonEasterbrookInterviewTranscript.pdf

    [Response: Yes it’s pretty awful, but it doesn’t have anything to do with Lorenz. Can we please try to show a little respect for the topic? –raypierre]

    Comment by Mark A. York — 24 Apr 2008 @ 3:13 PM

    Gavin- I agree readers can go through the thread to see the discussion. However, you are misrepresenting my views. Rich Eykholt and I are in 100% agreement on this subject. The question that was being discussed is whether an atmospheric perturbation as small as a real world butterfly could actually affect large scale weather features thousands of kilometers away. The answer, as given by Professor Eykholt, is NO under any circumstance. The perturbation has to be much larger (Isaac Held, as I recall, said meters in his NPR interview; I suspect it is a few kilometers or more) for a perturbation to affect an atmospheric feature thousands of kilometers away.

    This issue, based on our disagreement, would benefit from further quantitative evaluation with both analytic and numerical models. We do have papers on the use of analytic models to examine chaos and nonlinear dynamics which document that we are quite familiar with the subject of sensitivity of the climate system to initial conditions; e.g. see

    Pielke, R.A. and X. Zeng, 1994: Long-term variability of climate. J. Atmos. Sci., 51, 155-159.
    http://climatesci.colorado.edu/publications/pdf/R-120.pdf

    [Response: As we said above, this is what you believe. Why you accused us of misrepresenting you is a mystery. However, your claim about Eykholt’s belief is contradicted by his quote above. He states very specifically that exponential growth saturates at the time the perturbation reaches the size of the attractor. That, for the atmosphere, is very large indeed and is certainly large scale enough to encompass storms thousands of miles away. Isaac can certainly speak for himself, but as far as I know there is no demonstration that there is a minimum scale below which perturbations do not grow. Such a thing may exist, but your certainty on the matter seems a little overconfident. Perhaps you’d care to point out a reference on the subject? – gavin]

    Comment by Roger A. Pielke Sr. — 24 Apr 2008 @ 3:28 PM

  18. RE #11 & 15, and if they stop all the tax-breaks and subsidies to oil that would jack up the price of oil (but save us some money come April 15th).

    However, no butterfly flap will ever get those Hummer guys to switch to a Prius. You need a mammoth elephant stomping on and squashing all the Hummers to accomplish that, and even then it seems doubtful.

    Our Univ is trying to get more into supercomputers — the hard sciences and computer sciences already are into them to some extent, but we may be getting access to an even more powerful supercomputer.

    I’m just wondering what the social sciences can do with it. I’m not aware of any modelling formula that includes the true forcing of this bout of global warming — people and the psychological, social, and cultural dimensions of our human condition. These are not only highly complex, but in a reflexive feedback with “reality” as perceived through people’s (changing) sociocultural-psychological lenses.

    I realize the various climate scenarios (for different levels of our human emissions) are meant to capture this behavioral science dimension. But it would be great if we could narrow down those scenarios.

    And then there are relatively abrupt changes or somewhat discontinuous functions that are possible, for instance, revitalization movements (social movements in sociology), in which people change really fast. I’m just hoping for that — change in the right direction of reducing GHG emissions, due to a new vision for a better society and world.

    Comment by Lynn Vincentnathan — 24 Apr 2008 @ 3:42 PM

  19. I am sorry, but everything in the citation of Prof. Eykholt seems either inaccurate or backwards.

    First, not every small perturbation on a strange attractor grows. Chaotic attractors usually have so-called hyperbolic local structure of the phase flow at almost any point, meaning that the vicinity of every trajectory has two manifolds, one expanding (led by Lyapunov exponent), and the other CONTRACTING, to maintain phase volume or have it shrinking for dissipative systems. Therefore, if the flap energy projects entirely on the contracting side of phase flow, this perturbation would decay, exponentially.

    Second, if the perturbation did grow up to the attractor boundary and folds back, it is exactly where unpredictability occurs; it is unpredictable where the system state would land on the next turn. The butterfly “effect” is not that it would trigger growing contiguous squalls of tornados in Texas, but it would affect the time or place _when/where_ a tornado would strike. Tornados in Texas will occur no matter what, but their time and place could be traced back to a butterfly flap in Brazil, hypothetically speaking.

    Third, it is exactly opposite of what was said about detailed predictability on small scales: locally, the phase flow on attractors is pretty simple almost everywhere, and therefore is easily predictable. Anyone can predict their weather along a trajectory of 30 seconds long.
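
    For what it is worth, the exponential growth rate mentioned in the first point above is easy to estimate numerically with the standard two-trajectory renormalization recipe. The sketch below uses the classical Lorenz-63 parameters and arbitrary step sizes (none of this comes from the comment itself); the accepted value of the leading exponent for these parameters is about 0.9.

    ```python
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(s, dt):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    dt, d0 = 0.01, 1e-8
    a = np.array([1.0, 1.0, 1.0])
    for _ in range(10_000):           # spin up onto the attractor first
        a = rk4_step(a, dt)
    b = a + np.array([d0, 0.0, 0.0])  # reference trajectory plus a tiny offset

    log_growth, n = 0.0, 100_000
    for _ in range(n):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = np.linalg.norm(b - a)
        log_growth += np.log(d / d0)
        b = a + (b - a) * (d0 / d)    # rescale the offset so it stays infinitesimal
    print("estimated leading Lyapunov exponent:", log_growth / (n * dt))  # ~0.9
    ```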

    About “effect at large scales”, it looks like the original question was misunderstood. About the actual relationship between a butterfly flap and large scale effects, see my explanation here:
    http://climatesci.org/2007/05/18/wg1-ipcc-chapter-1-more-scientifically-erroneous-statements/#comment-179850

    Cheers to global climatology,
    – Alexi Tekhasski

    Comment by Al Tekhasski — 24 Apr 2008 @ 10:10 PM

    Re #17: In the interview with NPR, I was caught off guard when asked if a butterfly was big enough — I was thinking that the scale of the perturbation has to be larger than what is often referred to as the Kolmogorov microscale, the scale below which the flow is effectively laminar, to avoid being damped out immediately. This scale is typically a few millimeters in the atmosphere — so it’s an academic point even if true, which, in retrospect, is not self-evident. Not being able to articulate any of this during the interview, I just blurted out “a meter” as the scale required, while trying to visualize if the flap of a butterfly’s wings was turbulent. Afterwards I regretted questioning Ed’s choice of perturbation. I don’t think there is any question that once one perturbs turbulent scales the perturbation will grow to affect the entire atmosphere.
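
    For readers who have not met it, the Kolmogorov microscale is eta = (nu^3/epsilon)^(1/4), where nu is the kinematic viscosity of air and epsilon is the rate at which turbulent kinetic energy is dissipated. A back-of-envelope check with representative (assumed) values reproduces the “few millimeters” figure mentioned above.

    ```python
    # Back-of-envelope Kolmogorov microscale, eta = (nu**3 / epsilon) ** 0.25.
    # The dissipation rates below are assumed, order-of-magnitude boundary-layer values.
    nu = 1.5e-5  # kinematic viscosity of air near the surface, m^2/s

    for eps in (1e-4, 1e-3, 1e-2):  # turbulent dissipation rate, W/kg (= m^2/s^3)
        eta = (nu**3 / eps) ** 0.25
        print(f"epsilon = {eps:.0e} W/kg  ->  eta = {eta * 1000:.1f} mm")
    # Gives roughly 0.8 to 2.4 mm, i.e. on the order of a few millimeters.
    ```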

    Comment by isaac held — 24 Apr 2008 @ 10:55 PM

    The unpredictability of weather doesn’t matter to climate predictions, but the proper modeling of weather does. The tornado in June in Dallas might have the same effect on climate as the tornado in May in Oklahoma City, but how about 100 tornadoes versus zero? It’s easy to think that weather averages out, hence the example of one tornado versus a different one. But weather does not average out; weather is climate. The diurnal cumulus of the past few days here in No. VA, due to the recent rains, are affecting the climate and are not averaged out by the lack of clouds somewhere else.

    More to the point, the effect of some butterfly somewhere on this weekend’s upper low and Monday’s rains will have its corresponding effect on the rest of next week’s climate in our area. It matters in the short and medium range whether the upper low moves out to sea or travels down to North Carolina like last Sunday’s upper low. Having these lows two weeks in a row will have even more effect on our future weather and thus climate. Weather forecasters don’t use expressions like “when in drought, leave it out” for continuity only. It often does not rain in a drought because of the local lack of moisture, not the broader weather pattern. Today’s butterfly and this coming Monday’s upper low may indeed prevent some drought next month. Our weather can affect our climate.

    There are workarounds for correcting weather in climate models, such as doing detailed weather modeling for a small representative area using parameters from the climate model, then applying the resultant parameters from the weather model back into the climate model. Or using real world weather measurements for the parameters in the climate model. But the first workaround doesn’t fully address the idiosyncrasies of terrain, land and ocean that affect the weather (such as the Appalachian mountains here). The second workaround doesn’t fully address the effect of climate changes on real world weather (i.e. current weather measurements may not apply in future climates). Still, those workarounds are better than nothing and getting better all the time. And weather modeling in climate models, particularly the spatial and temporal resolution, will ultimately make my whole point moot (within 20 years at most).

    Comment by Eric (skeptic) — 25 Apr 2008 @ 5:59 AM

    I think he was a leader in scientific thought in so many ways, not the least of which was his understanding that everything is related and interrelated to everything else in the universe and nothing is superfluous. Hence the more scientists know, the more (the smart ones, that is) they realise the enormity of what they don’t know. For every item of knowledge they understand, there is a squaring of the items they do not understand…yet.
    Just seen that CO2 production was at a record high last year and that atmospheric CO2 concentration is now up to 385 ppm, 0.6% up from 2006. Methane was up as well for the first time in ten years; too early to tell if that was just a glitch or readjustment, or whether the melting of the tundra may have contributed.

    Comment by Lawrence Coleman — 25 Apr 2008 @ 8:13 AM

  23. Eric,
    Realclimate has addressed the issues of regional climate several times before. Indeed, this is an area where more work is needed, especially if you want to assess/bound the costs of climate change. Modeling global climate is more forgiving, and it can also give you important information that helps you in making regional assessments. An open system (e.g. regional climate) will always exhibit more seemingly chaotic behavior than a closed one (global climate).

    Comment by Ray Ladbury — 25 Apr 2008 @ 9:13 AM

  24. Say more about Lorenz, please. This thread is about him and his work.

    Comment by Hank Roberts — 25 Apr 2008 @ 10:44 AM

  25. This is an excerpt from Appendix 1, Essence of Chaos, Ed Lorenz:

    Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set off a Tornado in Texas?

    Lest I appear frivolous in even posing the title question, let alone suggesting that it might have an affirmative answer, let me try to place it in proper perspective by offering two propositions:

    1. If a single flap of a butterfly’s wings can be instrumental in generating a tornado, so also can all the previous and subsequent flaps of its wings, as can the flaps of the wings of millions of other butterflies, not to mention the activities of innumerable more powerful creatures, including our own species.

    2. If the flap of a butterfly’s wings can be instrumental in generating a tornado, it can equally well be instrumental in preventing a tornado.

    Supercooled water, when tapped slightly, gives a vivid example of one kind of possible effect. If the water is cooled slowly below the freezing point and left undisturbed, nothing happens. If it is jiggled, or if even a single ice crystal is added, however, it freezes almost at once: http://www.youtube.com/watch?v=DpiUZI_3o8s

    So it is not just the perturbation, it is the state of the system at the time of perturbation. Jiggling water at 50C will not cause it to flash into steam, for example.

    A more relevant weather example is that of a “conditionally unstable parcel of air” – as explained well at wiki:

    If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL( lifting condensation level ) or CCL (convective condensation level), and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).

    In this case, the amount of water vapor in the air will determine the sensitivity of the “air parcel” to perturbations – and “sensitive dependence on initial conditions implies chaos” – but chaos does not imply a total lack of predictivity.
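
    The stability bookkeeping in the quoted passage reduces to comparing the environmental lapse rate with the dry and moist adiabatic rates. A rough sketch using the usual textbook numbers (the moist rate in particular is an assumed representative value; it really ranges from roughly 4 to 9 K/km depending on temperature and pressure):

    ```python
    def parcel_stability(env_lapse, dry_lapse=9.8, moist_lapse=6.0):
        """Crude static-stability classification from lapse rates given in K per km."""
        if env_lapse > dry_lapse:
            return "absolutely unstable"
        if env_lapse > moist_lapse:
            return "conditionally unstable (unstable only if the parcel saturates)"
        return "absolutely stable"

    for gamma in (10.5, 7.5, 4.0):
        print(f"environmental lapse rate {gamma:4.1f} K/km -> {parcel_stability(gamma)}")
    ```

    In the conditionally unstable case it is the moisture content, and hence whether a small lift saturates the parcel, that decides whether a perturbation is damped or amplified, which is exactly the sensitivity being described here.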

    So, what sets the boundary conditions for weather systems? Can we look at climate prediction as an attempt to forecast those boundary conditions? One main boundary condition is sea surface temperatures – leading to this 1998 paper:

    Predictability in the Midst of Chaos: A Scientific Basis for Climate Forecasting
    J. Shukla, Science 23 Oct 1998

    “. . .One of the simplest and perhaps the most elegant definition of a chaotic system was provided by Lorenz (2) as “one that is sensitively dependent on interior changes in initial conditions.” The Earth’s atmosphere is an example of a chaotic system; irreducible errors in and incompleteness of observations of the initial conditions as well as the imperfection of the models prohibit accurate forecasts of the day-to-day sequence of weather beyond a few days. However, it is shown here that aspects of the tropical atmosphere do not conform to the above definition of chaos. The tropical flow patterns and rainfall, especially over the open ocean, are so strongly determined by the underlying sea-surface temperature (SST) that they show little sensitivity to changes in the initial conditions of the atmosphere.”

    http://ptolemy.gmu.edu/~beall/data/chaos_papers/shukla-1998-science-v282.pdf

    The paper has a discussion of limitations to predicting the frequency and intensity of El Nino events, but the point is that the ocean plays a major role in setting boundary conditions for weather systems. This is not some new realization – it’s been known for quite a while, despite efforts by some to present this as their new, sole discovery:

    http://www.jsg.utexas.edu/news/feats/2007/pielke.html

    “He said a more reliable metric for gauging climate change would be ocean heat content.

    “The ocean is a big heat capacitor,” he said. “It’s like heating up a pot of water—turn the burner off, it stops warming. There’s no ‘unrealized heat.’”

    He cited a study published last September by NASA and NOAA researchers which found that the near surface waters of Earth’s oceans cooled slightly from 2003 to 2005.

    “None of the models have picked up on the fact that there has been global cooling in the oceans,” said Pielke. “Neither has the media.”

    Usually, when people make such large mistakes they print retractions.

    Comment by Ike Solem — 25 Apr 2008 @ 10:59 AM

  26. Ray, I agree with your last two sentences, except that my impression is that global climate is the average of regional climates. Although that smooths out the chaos of the regional climates, it doesn’t avoid their complexity. The fact that the global climate is a closed system doesn’t really help since it can’t just be simply modeled with energy equations. The complexity of the regional climates must be modeled to determine the effects of clouds in particular and all forms of water in general. Those can then be averaged or used as parameters in a global climate model.

    Comment by Eric (skeptic) — 25 Apr 2008 @ 1:52 PM

  27. Eric, I’m afraid I agree with Hank–we should be celebrating Lorenz here. Just want to say that you don’t have to get things 100% right on a regional scale to have a good enough global climate model. The more closed the system, the closer it comes to having its state space determined by conserved quantities. Earth comes pretty darned close.

    [Response: I certainly don’t think Ed would have had any objection to lively, even contentious, discussion of the nature of chaos, or its implications. We don’t have to stick to encomium, but I was making a plea to try to at least keep the discussion from just becoming a dumping ground for disconnected random thoughts of the day. –raypierre]

    Comment by Ray Ladbury — 25 Apr 2008 @ 2:10 PM

  28. In the spirit of this memorial to Lorenz and the seminal discovery of “deterministic non-periodic flow”, it is of interest to recall a little more of Barry Saltzman’s role, as I heard him tell it around 1990. Barry is now deceased.

    With regard to:

    “Lorenz found this (chaotic) phenomenon by accident …and

    … it was only with the onset of electronic computers, as used by Lorenz, that this became more widely recognized.”

    Barry himself did the first numerical integration of the 3 non-linear equations and so was the first person to witness/discover the chaotic behavior. He then showed the results to Lorenz, showing that the solution was “non-repeating” — contrary to their expectations at the time for relatively simple, albeit non-linear, coupled ordinary differential equations.

    Barry, always the gentleman, nevertheless was completely reverential to Lorenz about the discovery saying “… he (Barry) could never have written… “ the analysis presented in the original 1963 paper.

    [Response: Indeed, this is a very fair description of the sequence of events, both with regard to Saltzman’s important role in the discovery and Lorenz’s seminal contribution to the analysis. By the way, what got Lorenz interested in this sort of equation originally was a desire to demonstrate the deficiencies of statistical methods of weather forecasting, as opposed to dynamically based forecasts. –raypierre]

    Comment by Stuart — 25 Apr 2008 @ 2:22 PM

    Gavin – I am glad this discussion is continuing. I will be having more to say on this next week in a weblog on Climate Science. However, you are failing to distinguish between an open and a closed system, and between the real world and models.

    With nonlinear atmospheric models such as analyzed by Professor Lorenz, the results for large scale features are sensitive to the initial conditions regardless of how small they are. This is because the system is closed.

    The real world climate system, however, is not closed, such that energy (i.e. in the form of heat) can leak out of the system. In the case of such a small perturbation as the flap of a butterfly wing, the kinetic energy of the small amount of turbulent air that it generates will quickly dissipate into heat, once the flapping stops. Radiative loss of this heat to space will prevent the flapping from having any effect at large distances.

    This is one of the reasons that you are mistaken in stating that “there is no demonstration that there is a minimum scale below which perturbations do not grow.” If a perturbation in the system (i.e. the atmosphere) dissipates into heat, it can be lost to the system before affecting atmospheric features at large distances. I will have more on this topic on my weblog next week, and will post a comment on Real Climate when it appears.

    [Response: Have a look at Isaac’s remark above. I think what you probably have in mind is the possibility that if a perturbation is at a scale where you have primarily downscale energy cascade to the dissipation range, it might never project on the large scale quantities whose behavior determines large scale predictability loss. Given the nature of turbulence, it is hard to absolutely exclude this possibility a priori, but for this to happen, there would have to be ZERO leakage to large scales. Not just small but ZERO. That is exceedingly unlikely, and would be contrary to most of what is known about turbulent cascades. As a practical matter, I do agree that if the initial perturbation is at sufficiently small scales, the projection on large scales would be small enough that it could take an exceedingly long time before it affected the evolution of the large scales. –raypierre]

    Comment by Roger A. Pielke Sr. — 25 Apr 2008 @ 2:31 PM

  30. Re: 29

    As far as I remember, Lorenz considered an open system in his original paper – there was a heat source.

    I don’t think it matters whether the system is open or closed. What is more important, I believe, is the multiplicity of characteristic spatial scales. In the Lorenz system there are only two: the global scale and the dissipative scale. This is why the butterfly affects the solution everywhere. In the real atmosphere there are multiple spatial scales, including local convective and synoptic scales. The butterfly perturbation could be dissipated within the containing spatial scale, so that larger scales will never feel the effects of the butterfly wing flap.

    Comment by Sashka — 25 Apr 2008 @ 3:59 PM

  31. I’m going to botch this story but I’ll tell it anyway. As many of you know Steve Smale was sort of the grandfather of modern dynamical systems theory. He created the “horseshoe map” which abstracts the necessary mathematical ingredients for a strange attractor down to squishing, stretching, folding the square.

    Jim Yorke told me this next bit, which is sort of amusing. Yorke, a brilliant dynamicist in his own right, coined the word “chaos” for chaotic dynamics in his paper “Period 3 Implies Chaos”. It was about 10 years ago so I’m probably mucking it up a bit. Anyway, at some point Yorke xeroxed his copy of Lorenz’s famous paper on chaos, “Deterministic Nonperiodic Flow”, which had his name on the front, and mailed it to Smale. This led to terms around Berkeley such as “Yorke’s discovery of Lorenz”.

    RayPierre nailed this whole silly argument in his response to #29.

    Comment by John E. Pearson — 25 Apr 2008 @ 4:31 PM

  32. I do wish that Roger Pielke Sr. would present some concrete examples of his claim that sensitive dependence on initial conditions depends on whether the system is open or closed.

    Take the example of a conditionally unstable parcel of air that has just reached the saturation point with respect to water vapor – a flap of a butterfly’s wings then could be the minor perturbation that sets off turbulent mixing. The timing of events is then really what is subject to the butterfly effect – not the final thermodynamics.

    Take another example – models of hurricanes. See http://scitation.aip.org/journals/doc/PHTOAD-ft/vol_59/iss_8/74_1.shtml for Kerry Emanuel’s article, Tempest in A Greenhouse, in Physics Today:

    In the part of the tropics where the sea surface is warm enough and the projection of Earth’s angular velocity vector onto the local vertical axis is large enough, random small-scale convective currents sometimes organize into rotating vortices known as tropical cyclones. In computer models of the tropical atmosphere, such organization can happen spontaneously, but usually only if a combination of ocean temperature and rotation is somewhat higher than those observed in nature. In subcritical conditions, some trigger is necessary to initiate the vortices, and in the terrestrial atmosphere tropical cyclones only develop from preexisting disturbances of independent origin. In mathematical parlance, tropical cyclones may be said to result from a subcritical bifurcation of the radiative–convective equilibrium state.

    Again, here we have a system poised at just the moment that some atmospheric perturbation – maybe a butterfly flapping its wings, or someone waving hello – initiates a sequence of events that can end in a category 5 hurricane – but only if a number of conditions (one being adequate SSTs) are met. The timing of events is chaotic, and the system is open – not closed.

    To quote Lorenz again:

    More generally, I am proposing that over the years minuscule disturbances neither increase nor decrease the frequency of occurrence of various weather events such as tornados; the most that they may do is to modify the sequence in which these events occur. The question which really interests us is whether they can do even this. . . in more technical terms, is the atmosphere unstable with respect to perturbations of small amplitude?

    Thus, chaos isn’t the ultimate cause of the hurricane, just the proximate cause. I think Roger Pielke Sr. also is misunderstanding the statement, “there is no demonstration that there is a minimum scale below which perturbations do not grow.” Take for example the source of the eastern African waves that “seed” hurricane systems: http://www.aoml.noaa.gov/hrd/tcfaq/A4.html

    It was Burpee (1972) who documented that the waves were being generated by an instability of the African easterly jet. (This instability – known as baroclinic-barotropic instability – is where the value of the potential vorticity begins to decrease toward the north.) The jet arises as a result of the reversed lower-tropospheric temperature gradient over western and central North Africa due to extremely warm temperatures over the Saharan Desert in contrast with substantially cooler temperatures along the Gulf of Guinea coast.

    There is no connection between the number of African easterly waves and the number of hurricanes, but they still set the timing of hurricane systems – even though only some fraction develop into hurricanes. Can the flapping of a butterfly’s wings affect the timing of the formation of an easterly African wave? If so, then the Pielke argument is somewhat mistaken.

    If we rephrase it as this: “there is no demonstration that there is a minimum scale below which perturbations do not cancel each other out”, we can ask “can the flapping of one butterfly’s wings cancel out the effect of the flapping of another butterfly’s wings?” – and so on.

    Let’s also look at the oceans – what about the timing of El Ninos? Are there perturbations in the net state of the tropical Pacific Ocean-Atmosphere that can lead to the formation of an El Nino when the system is poised to produce an El Nino? El Nino forecasts are successful on the scale of months, not days or weeks – so there the ocean is playing the leading role.

    How about much longer timescales? On such timescales, a major volcanic eruption is just a small perturbation. Can it have a large effect? Imagine the world at the beginning of the Last Glacial Maximum, but without full glaciation yet. Could a large volcanic eruption like Pinatubo then cool the climate enough to really lock in the ice age? Probably. Such a volcanic eruption would not have the same long-term effect if it happened 8000 years ago.

    The thing to remember seems to be that all chaotic systems have different degrees of predictability, which always depends on the details of the system in question and the timescale of interest, and which also evolve with the system (i.e. with changing boundary conditions). One can easily imagine situations in which the weather is even less predictable than it is now – on a waterless world, for example, with no stabilizing ocean surface.

    Comment by Ike Solem — 25 Apr 2008 @ 7:29 PM

    Ray – Thank you for getting involved in this discussion. The question of the leakage time scale is, of course, needed in order to determine when the exceedingly long time scale becomes infinite (in terms of where the heat goes). If we both agree that ALL of the turbulence quickly dissipates into heat when the flapping stops, then what is your estimate of the residence time of this heat within the atmosphere before it is lost to space?

    Also, as another thought example, if a butterfly flaps its wings inside a room with the doors shut, would you still maintain that this has an influence on atmospheric circulation at large distances? All of the heat generated would be absorbed by the walls of the room, and subsequent heat conduction is, of course, laminar. An analogous behavior will occur in a very stable boundary layer (and in any region of the atmosphere for such small perturbations), and if we can agree on this “exception” then we have made progress in understanding this issue. My point here is that if there is a part of the process which results in complete loss of the turbulent flow, then it is not communicated over large distances.

    Isaac Held’s answer also actually contains part of the answer on this issue. If the turbulence dissipates into heat, as illustrated in the above example, then its further behavior can be described by non-turbulent behavior. As he explained, he “was thinking that the scale of the perturbation has to be larger than what is often referred to as the Kolmogorov microscale, the scale below which the flow is effectively laminar, to avoid being damped out immediately. This scale is typically a few millimeters in the atmosphere”. This is what occurs with the flapping of the wings of a butterfly; all of its energy dissipates into heat, and the spatial structure of this heated air is less than a few mm. To disprove this total transfer downscale, one would have to show that a coherent turbulent structure remains and becomes progressively larger in scale and/or is monitored propagating away from the location of the flapping wings as a coherent disturbance of the air flow; in both cases, while still retaining the conservation of total energy. Since the total energy of the flaps of the butterfly’s wings must be accounted for (as kinetic energy in the turbulence, or heat), what is your estimate of the magnitude of this energy that reaches thousands of kilometers away, as well as the path this energy would take to get there?

    [Response: Regarding the butterfly in the room — even in a jar in the room — sure I think it’s likely that it would ultimately affect the large scale weather. Look at it this way: Temperature has a dynamic influence through buoyancy. The heat dissipated by the butterfly might warm the room by a few tens of microkelvins, say. That increased temperature will change the heat flow between the house and the environment, which will ultimately change the temperature of some parcel of air by a few nanokelvins. Then before you know it, some parcel of air the size of the state of Illinois has a temperature different by maybe a few picokelvins. I guarantee that if you take a GCM and change the temperature of the air over Illinois by a few picokelvins (given sufficient arithmetic precision) that that will lead to divergence of the large scale forecast given infinite time. I have seen no indication either in dynamical systems theorems or in numerical experiment to suggest that anything else would be the case. –raypierre]
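
    The “divide by Cp” arithmetic in the response above can be written out explicitly. Every number below is a loose, assumed order of magnitude (the heat dissipated by the butterfly especially); the point is only that the resulting temperature change, however small, is not zero.

    ```python
    # Dilution arithmetic for the butterfly's heat: Delta_T = Q / (m * cp).
    # All values are assumed order-of-magnitude numbers, not measurements.
    cp_air = 1004.0   # specific heat of air at constant pressure, J/(kg K)
    rho_air = 1.2     # near-surface air density, kg/m^3
    q = 1.0           # heat dissipated by the butterfly over some time, J (assumed)

    m_room = rho_air * 50.0              # a ~50 m^3 room of air
    print("room-scale warming:", q / (m_room * cp_air), "K")    # ~2e-5 K, i.e. tens of microkelvins

    m_state = rho_air * 1.5e11 * 1000.0  # an Illinois-sized area, ~1 km deep (assumed)
    print("state-scale warming:", q / (m_state * cp_air), "K")  # ~5e-18 K: vanishingly small, but not zero
    ```

    However tiny that last number is, it is still a change in a dynamical variable at a resolved scale, which is all that sensitive dependence on initial conditions requires.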

    Comment by Roger A. Pielke Sr. — 26 Apr 2008 @ 8:58 AM

    I agree that if the atmosphere behaves as a Lorenzian system then Ray’s comment above is right, and I can’t think of a reason why not. If so, however, the number of possible future states of the atmosphere must be very large, just considering the numbers of butterflies and other atmospheric perturbators of various sizes, locations and times.

    neutrino

    Comment by neutrino — 26 Apr 2008 @ 2:17 PM

  35. Ray,

    Doesn’t the “butterfly effect” imply a very large number of possible future atmospheric states, given the very large number of atmospheric perturbators?

    neutrino

    [Response: It’s better to think of it as uncertainty in knowledge of the future, rather than trying to decide what is meant precisely by a phrase like “possible future atmospheric states.” At any given time, you are lacking knowledge of what the various perturbators are going to do, and also lacking knowledge of the precise initial condition. This leads to a bundle of different trajectories the atmospheric state can evolve through, which will come to occupy a certain volume in phase space. Now, where climate vs weather comes in is that, if you are increasing CO2 at the same time, while those uncertain futures are splattered all across phase space, the splatter at year 2100 will nonetheless be confined to a region of phase space where, say, the global mean temperature is between 3.5C and 4C warmer than the present. In thinking about uncertainty about possible future states, you also need to distinguish between uncertainty due to “weather noise” of that sort, uncertainty due to possible different representations of physical processes (what will clouds really do? How fast will Greenland melt? Will land ecosystems start becoming a net carbon source?), and finally uncertainty in the future rate of emissions of CO2 by human industrial activity. –raypierre]
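
    The “bundle of trajectories” picture in the response above can be sketched numerically, once again with the Lorenz-63 system, here with its rho parameter slowly ramped up as a crude stand-in for an external forcing (every choice below is an illustrative assumption). Individual members disagree wildly about the instantaneous state, but the region of phase space the bundle occupies shifts in a predictable way.

    ```python
    import numpy as np

    def lorenz(S, rho, sigma=10.0, beta=8.0 / 3.0):
        """Lorenz-63 right-hand side for an ensemble S with shape (n_members, 3)."""
        x, y, z = S[:, 0], S[:, 1], S[:, 2]
        return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=1)

    def rk4_step(S, rho, dt=0.01):
        k1 = lorenz(S, rho)
        k2 = lorenz(S + 0.5 * dt * k1, rho)
        k3 = lorenz(S + 0.5 * dt * k2, rho)
        k4 = lorenz(S + dt * k3, rho)
        return S + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    rng = np.random.default_rng(0)
    ensemble = np.array([1.0, 1.0, 25.0]) + 1e-6 * rng.standard_normal((20, 3))

    n_steps = 60_000
    for i in range(n_steps):
        rho = 28.0 + 20.0 * i / n_steps  # slowly increasing "forcing"
        ensemble = rk4_step(ensemble, rho)
        if i and i % 10_000 == 0:
            z = ensemble[:, 2]
            print(f"step {i:6d}  rho = {rho:5.1f}  "
                  f"spread of z = {z.std():5.2f}  ensemble-mean z = {z.mean():6.2f}")
    # The member-to-member spread stays large (the "weather" is unpredictable),
    # while the ensemble mean of z drifts steadily upward, because the attractor
    # itself shifts as rho increases: a toy analogue of a forced climate trend.
    ```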

    Comment by neutrino — 26 Apr 2008 @ 3:00 PM

  36. Ray- We certainly disagree with respect to the butterfly in the room in a jar. :-). Other readers of Real Climate (and Climate Science) can make up their own minds on this.

    You are, however, taking the concept of chaos too narrowly and are focusing on idealizations (simple illustrative models and GCMs) of how the real atmosphere (and climate system) works. You are ignoring the consequences of the dissipation of kinetic energy into heat within an open system. The “picokelvins” of heat, even if they could cause such a temperature perturbation over the state of Illinois (which they would not), would be lost to space long before an “infinite” time were reached.

    [Response: Roger, I can’t make sense of what you’re trying to say here. For those picokelvins of temperature to be lost to space, first they have to appear in the atmosphere as an increase of temperature, right? So there you have your change of one digit in the initial conditions, just like in Lorenz’s example. And your statement is just flatly inconsistent with thermodynamics. The butterfly dissipates heat locally, and that heat will be gradually diluted over a larger and larger area. So just divide by Cp and there’s your answer. Do you think there’s some way to magically teleport the heat away, leaving the fluid to heal back to exactly the same condition it would have had without the flap? That’s really a stretch. Your remarks about simple models and GCM’s don’t make much sense to me either. The GCM doesn’t resolve butterfly-scale motions, but once you have influenced a dynamic variable (e.g. temperature) at a resolved scale, any number of actual twin experiments in GCM’s confirm the divergence. If you are claiming there’s some fundamental difference between sensitive dependence to large scale changes in a GCM and sensitive dependence in the atmosphere, I’d like to see some evidence to back up that claim. The success of GCM’s in short term weather forecasting would be pretty much impossible to reconcile with such a claim. –raypierre]

    Comment by Roger A. Pielke Sr. — 26 Apr 2008 @ 3:51 PM

  37. Roger Pielke Sr. says:

    “The question of the leakage time scale is, of course, needed in order to determine when the exceedingly long time scale becomes infinite (in terms of where the heat goes). If we both agree that ALL of the turbulence quickly dissipates into heat when the flapping stops, then what is your estimate of the residence time of this heat within the atmosphere before it is lost to space?”

    This is missing the point entirely. Again, let’s look at a specific example: the generation of Easterly African Waves.

    R.W. Burpee (1972) The Origin and Structure of Easterly Waves in the Lower Troposphere of North Africa, Journal of the Atmospheric Sciences

    Wind statistics at stations flanking the mountains in Ethiopia indicate that airflow over these mountains is not the cause of the easterly waves. This study shows that the African waves are directly related to the mid-tropospheric easterly jet that is found within the baroclinic zone to the south of the Sahara. During the same season that the waves are observed, the gradient of the monthly mean potential vorticity vanishes along the isentropic surfaces. Charney and Stern have shown that this is a necessary condition for instability of the jet provided that the amplitude of the waves is negligible at the ground. Results show that the horizontal and vertical shear of the mean zonal wind are acting as nearly equal sources of energy for the perturbations. The role of convection in the origin of these waves has not yet been determined.

    The issue of convection is still actively studied, with interesting results:

    Mekonnen et al. (2006), Analysis of Convection and Its Association with African Easterly Waves, Journal of Climate

    This study suggests that weak AEW activity in the east is consistent with initial wave development there and indicates that convection triggered on the western side of the mountains over central and eastern Africa, near Darfur (western Sudan) and Ethiopia, has a role in initiating AEWs westward. The subsequent development and growth of AEWs in West Africa is associated with stronger coherence with convection there.

    So, what initiates convection? An unstable atmospheric profile leads to convection, but what controls the timing? Small and unpredictable perturbations. While most perturbations will have no effect, some will set off processes for which a thermodynamic potential has already built up. That is not really about the propagation of energy through the atmosphere. Consider a large avalanche triggered by a small pebble – or by a slight increase in temperature. The initial energy is irrelevant relative to the final effect. It is the timing that matters here in terms of predictability at large scales.

    Getting back to our concrete example, a train of easterly waves forms over Africa each year with a periodic component of 3-5 days. Butterfly-triggered convection plays a role in the timing of the irregular component. (Type convection + chaos into any science publication search engine and you’ll get thousands of hits.)

    Each member of this train of generated easterly waves may or may not amplify into anything, depending on the net state of the Atlantic Warm Pool, such as sea surface temps and depth of the warm surface layer, and the atmospheric wind shear above it. Each easterly wave represents, after Kerry Emanuel: “A subcritical bifurcation of the radiative-convective equilibrium state.” Any butterfly energy involved in initiating the wave dissipated long ago.

    The perturbation doesn’t have to amplify into a hurricane. There are many kinetic pathways that will resolve the thermodynamic potential that backs up a major hurricane without actually forming a hurricane. In other words, only a small fraction of the net energy released via the evaporation/condensation water cycle over the Atlantic organizes into hurricanes. Isaac Held probably knows the fraction.

    However, warm the atmosphere, warm the oceans, and you increase sea surface temps as well as the moisture content of the atmosphere, thereby forcing two of the thermodynamic parameters behind hurricane formation. There is more latent energy in the system, in other words, unless some other parameters act to counteract that (external wind shear, say). Will the fraction of energy that organizes itself into hurricanes remain constant? If so, you would expect the frequency and/or intensity of hurricanes to increase, regardless of their specific and irregular timing.

    Well, the oceans and sea surface temps have definitely warmed up. After Rayner et al. (2006):

    “Changes in large-scale SST through time are assessed by two methods. . . The linear warming between 1850 and 2004 was 0.52° ± 0.19°C (95% confidence interval) for the globe, 0.59° ± 0.20°C for the Northern Hemisphere, and 0.46° ± 0.29°C for the Southern Hemisphere.”

    Accuweather and a few others are still trying to claim, with no backing evidence, that this is due to a “warm water cycle” in the Atlantic – and are being quoted on that by Reuters and other wire services.

    The point here is that broad statements about the climate being chaotic or not are always misleading. Every real physical system has regular and irregular components; analyzing that allows one to get a handle on the limits of predictability for the system over different timescales.

    Comment by Ike Solem — 26 Apr 2008 @ 5:48 PM

  38. > very large number of possible future atmospheric states

    But not going off into infinity in all directions, gathering around
    http://en.wikipedia.org/wiki/Image:Lorenz_attractor_yb.svg
    as mentioned in the original post

    Comment by Hank Roberts — 26 Apr 2008 @ 9:11 PM

  39. Excuse me for my ignorance, but is the following anywhere near target?

    Butterfly flap can

    1. have little or no effect beyond the perturbation of the surrounding air
    2. have considerable effect if that flap contributes to and is essential for a cascading effect – in which case other conditions are just as essential. If any one of the necessary conditions were missing, then the large scale event might not occur. How we might “rank” conditions as to importance and effect might be instructive.

    In either case, to ascribe to the flap the primary creation of an event larger than itself is certainly a mistake. Note I said, “primary creation.”

    [Response: None of this is right. The influence of the butterfly on the surrounding air propagates to larger scales and eventually affects the entire atmosphere. The thought experiment is starting with two identical atmospheres, which differ only by the flap of the butterfly wing. The claim is that even the large scale winds (e.g. position of a tornado or hurricane, or even onset of an El Nino) will diverge between the two realizations, given sufficient time. Regarding point (2), the general understanding of chaos is that you don’t need special conditions in the atmosphere for a “cascade” to occur leading to sensitive dependence on initial conditions. The sensitivity is generic to any sufficiently complicated nonlinear system, of which the atmosphere is almost certainly an example (though not yet provably so, in the rigorous mathematical sense). In fact Ruelle showed that (in a strict sense) “almost every” system with 5 or more degrees of freedom has a strange attractor in its dynamics. Thus, you should turn (2) around: the situation is that VERY SPECIAL conditions would have to occur to avoid a cascade leading to sensitive dependence. Now mind you, for any given specified nonlinear system, it has proved difficult or impossible for mathematicians to rigorously prove the existence of a strange attractor. This has been done for several special classes of systems, but partial differential equations pose a severe challenge. Last time I checked, it wasn’t even rigorously proved that the Lorenz/Saltzman equations had a strange attractor, though from the numerical behavior the truth of the assertion is not seriously in doubt. We are all physicists here, and need to do things even in areas where the mathematical challenge of rigorous proof has not yet been satisfied. No surprise there. Airplanes are designed on the basis of Navier Stokes all the time, even though it hasn’t yet been proved that solutions exist and are unique. –raypierre]
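
    As a concrete sketch of the twin-atmosphere divergence described above, one can integrate two copies of the Lorenz (1963) system whose initial states differ by one part in ten billion and watch the separation grow. The parameter values, step size and initial state below are standard textbook choices, assumed purely for illustration; this is a toy system, not a GCM.

        import numpy as np

        def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        def rk4_step(state, dt):
            # classical fourth-order Runge-Kutta step
            k1 = lorenz_rhs(state)
            k2 = lorenz_rhs(state + 0.5 * dt * k1)
            k3 = lorenz_rhs(state + 0.5 * dt * k2)
            k4 = lorenz_rhs(state + dt * k3)
            return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        dt, nsteps = 0.01, 4000
        a = np.array([1.0, 1.0, 20.0])        # "control" atmosphere
        b = a + np.array([1e-10, 0.0, 0.0])   # same atmosphere plus a tiny "flap"

        for n in range(nsteps):
            a, b = rk4_step(a, dt), rk4_step(b, dt)
            if n % 500 == 0:
                print(f"t = {n * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")

    The separation grows roughly exponentially until it saturates at the size of the attractor, which is the divergence of realizations the response refers to.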

    Comment by Stormy — 26 Apr 2008 @ 10:06 PM

    More than just the occasional butterfly, one has to look at drifting snow or sand streams, in the hundreds per kilometer, all undulating differently, stretching for kilometers, just to realize that it’s a pipe dream to come up with a model capable of replicating each wind stream correctly for each future moment. The argument about heat is a better one. I think we can measure the temperature of the entire atmosphere; at least I do that myself at 2 locations. It is a climate metric, far easier to measure, trend and predict, yet I see very little of it out there. Must thank Lorenz nevertheless for showing our limitations. Now can we manage to do what we can?

    Comment by Wayne Davidson — 26 Apr 2008 @ 10:09 PM

  41. I really liked Ed Lorenz and have tremendous respect for him. He was a friend of my advisor and came around a few times when I was a grad student.

    But one point; probably futile, but before there was the “Butterfly Effect” it was called the “Seagull Effect”, though it did not become as popular. My advisor explained that the origin was an address that G. D. Robinson gave, I think some sort of presidential address to a 1965 WMO meeting, if memory serves, about the limits of weather prediction. Robinson referred to the (then well-known) transfer of information from small length scales to large length scales and the overturning time of the atmosphere (about three days), asking whether in order to predict weather for more than a week we would need to observe every flap of a seagull’s wing on the other side of the globe.

    Is this the same as the “butterfly effect”? I think so, because at the time it was well understood that on short length scales atmospheric turbulence is like 3-D homogeneous turbulence, so it’s really clear that neither the seagull nor the butterfly is going to be monitored for the five day forecast. So the small scale mixing gets rid of the effect of any single small event.

    What then was the preoccupation with this question in 1965? The real issue was not that a single event on a small scale could result in hurricanes on the other side of the world. The real issue was that there was coupling from those length scales to larger length scales, which is in some sense the gist of the school of thought of Kolmogorov/Obukhov, Batchelor, Kraichnan, and Leith (etc). In some sense it’s the Fourier coefficient of the flow field at the high wavenumbers which matters, so it is a sort of collective organization of information at that spatial scale that matters. And in some sense, what you do about that is figure out how to solve the problem only on the large spatial scales without your solution implicitly violating what has to occur at the spatial scales which you wish were beneath your notice. You do not have to have either the butterfly or seagull traffic reports on your desk. And this was to some extent Robinson’s point. They knew then they didn’t have to birdwatch to forecast.

    [Response: The history linked in the original post gives the history of the metamorphosis of the seagull (1963) to the butterfly (1972). – gavin]

    Comment by Andrew — 26 Apr 2008 @ 10:42 PM

  42. John Nielsen-Gammon: “Lorenz’s oral presentations were also marvels of astonishing clarity.”

    Yes they were. You just had to sit close and listen hard. People like to talk about Lorenz as a guy who had this great phenomenon drop in his lap. He was lethally intelligent, and didn’t miss a trick. He had many important ideas and contributions, and he usually knew exactly what he was doing.

    Comment by Andrew — 26 Apr 2008 @ 10:50 PM

  43. Raypierre’s response to #33 suggests an interesting solution to the problem of global warming.

    We just need to find the butterfly in the jar in that room and kill it.

    [Response: I think you’d find that no amount of butterfly-killing (not that I’d advocate that anyway) would offset the effects of a change in the global radiation budget. There are some things you just can’t control by controlling the noise. Now, if you were talking about weather control, you’d be in a different ball game. There is a considerable literature on control of chaotic systems, exploiting the fact that there is a window of time where you can actually use the sensitive dependence on initial conditions to control the future state with little expenditure of energy. This was worked out explicitly for the chaotic pendulum; a few years ago, as a joke, I suggested to the clever folks at the Beckman Institute at U. of Illinois (Urbana-Champaign) that you could perhaps control the Pacific storm track the same way. In fact, they took me quite seriously, and I wound up putting together a talk on the subject, though the idea never got picked up and explored in a rigorous way. The question is: how much could you control storms in the Pacific storm track through a controlled variation of the surface energy budget by, say, a tenth of a W/m**2 applied over 10,000 km**2 near the origin of the storm track? It’s quite routine these days to do similar things with control of turbulence on the laboratory scale. –raypierre]
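
    The control idea is easy to demonstrate in a toy setting. Here is a bare-bones sketch using the logistic map as a stand-in for a chaotic system; the map, the nudge size and the target value are invented for illustration and are not taken from the pendulum or storm-track work mentioned above. Because of sensitive dependence, a nudge of at most one part in ten thousand can be chosen so that the state lands essentially wherever we like twenty steps later.

        import numpy as np

        r, x0, target, nsteps = 4.0, 0.3, 0.8, 20

        # brute-force search over tiny nudges in [-1e-4, +1e-4]
        nudges = np.linspace(-1e-4, 1e-4, 2_000_001)
        x = x0 + nudges
        for _ in range(nsteps):
            x = r * x * (1.0 - x)

        best = np.argmin(np.abs(x - target))
        print(f"chosen nudge         : {nudges[best]:+.3e}")
        print(f"state after {nsteps} steps : {x[best]:.6f}   (target was {target})")
        print(f"state with no nudge  : {x[len(x) // 2]:.6f}")   # the middle entry is nudge = 0

    The brute-force search stands in for the more refined targeting methods in the control-of-chaos literature; the point is only that the control input is tiny compared with the change it produces.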

    Comment by Jim — 27 Apr 2008 @ 12:09 AM

  44. Re #35, Ray says “what will clouds really do?”. That’s where I am wondering what the models really do. The clouds are resulting from the weather, currently mainly from the negative NAO here in Virginia. There are low and middle clouds which are probably climate neutral but also some blow-off high clouds from yesterday’s thunderstorms and the jet stream (generally climate cooling). Like I said above, these climate effects are not “averaged out” or offset by climate effects elsewhere. However, I can envision that the NAO can be modeled based on other statistics in the climate models (mainly SST). That should give an accurate depiction despite its highly nonlinear and chaotic origins.

    But there are other smaller scale effects on clouds that require similar parameterization to be accurately modeled. These are things like warm weather convection and cyclogenesis at various temperatures. The climate effects of the resultant clouds are concentrated convection (cooling), warming from low clouds, cooling from high clouds, and others (see http://www.ccsm.ucar.edu/publications/PhD%20and%20Masters%20Theses.htm). My question is: can the chaotic effects be parameterized or modeled directly, particularly at the mesoscale?

    Comment by Eric (skeptic) — 27 Apr 2008 @ 7:16 AM

  45. I only knew of Lorenz’s work at second hand, but he was clearly a scientist of great significance, and from what’s been said here, a fine human being. On both counts, a sad loss.

    I think I remember reading in New Scientist a few years ago that the rate at which weather models diverged from the real weather was not in fact exponential, suggesting that flaws in the models, rather than sensitivity to initial conditions, were at that time the main cause of inaccuracy. This was, if I recall rightly, presented as grounds for optimism about future progress in weather forecasting. Anyone else remember this?

    [Response: I didn’t see the New Scientist article, but Lorenz himself started off an inquiry of this sort in the 1980’s during a sabbatical at the European Center for Medium Range Weather Forecasting. By a clever study of forecast performance as a function of successive time lags, it was concluded that, at the time, forecast error 5 days out was still dominated by systematic error — model errors leading to climate drift — rather than the intrinsic predictability decay. That was good news, because it meant that the medium range predictability limit had not yet been reached, and science could still improve medium range forecasts. I haven’t followed this area very closely in the subsequent years, so I don’t know where the matter stands at the moment. Perhaps one of our readers can provide an update. The problem of climate drift was one that bedeviled ocean-atmosphere models for a long time, but the better ones seem to have gotten it under control, at least on the time scale of a few centuries. –raypierre]
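
    The distinction between error growth from imperfect initial conditions and error growth from model deficiencies can be illustrated with a toy calculation. This is only a sketch using Lorenz-63 as a stand-in; the perturbation size and the deliberately biased parameter are invented for the example and have nothing to do with the actual ECMWF diagnostic.

        import numpy as np

        def rhs(s, rho):
            x, y, z = s
            return np.array([10.0 * (y - x), x * (rho - z) - y, x * y - (8.0 / 3.0) * z])

        def rk4(s, dt, rho):
            k1 = rhs(s, rho); k2 = rhs(s + 0.5 * dt * k1, rho)
            k3 = rhs(s + 0.5 * dt * k2, rho); k4 = rhs(s + dt * k3, rho)
            return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        dt, nsteps = 0.01, 1500
        truth = np.array([1.0, 2.0, 20.0])
        fc_ic = truth + np.array([1e-6, 0.0, 0.0])   # tiny initial-condition error, perfect model
        fc_model = truth.copy()                      # perfect initial condition, biased model

        for n in range(nsteps):
            truth = rk4(truth, dt, 28.0)
            fc_ic = rk4(fc_ic, dt, 28.0)
            fc_model = rk4(fc_model, dt, 28.5)       # the "wrong" rho plays the role of model error
            if n % 250 == 0:
                print(f"t={n * dt:5.1f}  ic-error={np.linalg.norm(truth - fc_ic):9.3e}"
                      f"  model-error={np.linalg.norm(truth - fc_model):9.3e}")

    In the toy case the initial-condition error starts tiny and grows exponentially, while the biased-model run drifts away from the truth even though it started perfectly; separating the two contributions statistically from real forecast data is the harder problem the lag study addressed.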

    Comment by Nick Gotts — 27 Apr 2008 @ 8:16 AM

  46. Having read raypierre’s last reply that even an educated climate science layman can appreciate, I have to wonder if Roger A. Pielke Sr. has crossed a line here. I can fully appreciate the contributions of Edward Lorenz in the field of the numerical analysis of dynamic systems, and his discovery of the vividly named and startlingly non-intuitive “butterfly effect” is an inspiration to anyone interested in mathematics and the sciences. But during the course of Pielke’s arguments on scale of phenomena and their effects on the atmosphere, I begin to wonder if we should call for the expertise of a lepidopterist to inform us of how many butterflies may fit on the head of a bowling pin.

    Unfortunately having to step back from the realm of science into that of the political, it would seem that Roger A. Pielke Sr. has selected this line of debate precisely because Lorenz’s discovery is counter-intuitive and therefore its refutation is appealing to the pseudo-common sense of the global warming deniers. Despite the credibility granted to Pielke by the editors of Nature, his belief that heat at the surface of the Earth may be radiated away without any expanding effect on the surrounding atmosphere makes this reader wonder if he is any longer worthy of the space afforded to him in his tedious arguments.

    My apologies for having to bring up this point in a blog dedicated as tribute to the life of Edward Lorenz. Of course the ultimate decision rests with the owners of RealClimate, and in answering Pielke’s latest salvos they have once again provided an excellent service to their readers.

    [Response: I wouldn’t want to guess at Roger Sr’s motives, but I’d avoid explaining by conspiracy what could more easily be explained by confusion and bad judgment. Also, I’ll emphasize that with regard to chaos and Lorenz, Roger and I probably agree on more than we disagree on. I do think it is fair to keep in mind the scientifically indefensible tenor of some of Roger’s arguments here when judging how much credence to give the stuff Roger publishes on his blog — for example the recent guest post by Spencer, about which I’ll be commenting in the next few weeks. By the way, regarding your comment on Nature, are you perhaps confusing Roger Sr with Roger Jr? Or which article did you have in mind? –raypierre]

    Comment by Guy Schivone — 27 Apr 2008 @ 10:01 AM

  47. Re: Reponse to my 45: “I didn’t see the New Scientist article, but Lorenz himself started off an inquiry of this sort in the 1980’s during a sabbatical at the European Center for Medium Range Weather Forecasting. – raypierre

    I’m probably remembering a New Scientist report of that inquiry. You know you’re well into middle age when two decades seems like “a few years”!

    Comment by Nick Gotts — 27 Apr 2008 @ 11:59 AM

    You are correct in that you and I probably agree on most issues in chaos and nonlinear dynamics. All NWP and climate models show the sensitivity of large scale circulation features to initial conditions when perturbations are inserted in their initial state or in their parameterizations (these are all much larger effects than the energy that a butterfly places in the system). We also agree that the added heat from a butterfly’s flapping wings results in a slightly different system than if this flapping did not occur. However, the issue is whether the heat (the “information”) from this effect can translate (teleconnect) to larger scales so as to result in alterations in large scale features.

    Even Isaac Held seemed to indicate that there is a lower limit to when this upscale effect can occur (i.e. this ability disappears when the flow becomes laminar); he said in this thread

    “the scale of the perturbation has to be larger than what is often referred to as the Kolmogorov microscale, the scale below which the flow is effectively laminar, to avoid being damped out immediately. This scale is typically a few millimeters in the atmosphere….”

    I agree with this, but maintain that the smallest turbulent scales also are damped out due to the physics of non-motion transfers (i.e. radiative transfers) of energy. I have been in communication with Professor Eykholt on this question, and he and I agree that you are misinterpreting the butterfly effect for very small scale perturbations. We will be preparing a paper on this to demonstrate that there is a lower limit to which the “butterfly effect” applies.

    On a separate note, I see commenters on this thread are somehow skewing this discussion to be on climate change. It is not. This issue of the scale at which the “butterfly effect” occurs is a pure discussion of the science such as we all used to have as graduate students and need more of!

    Also, you questioned why Roy Spencer posted a guest weblog. The answer is that he has introduced a novel and important new perspective into how variations in atmospheric/ocean circulations can result in alterations in the global average radiative balance. Disagreements with his results and conclusions should be about his science. I invite others (including any interested Real Climate climate scientist) to post unedited guest weblogs on Climate Science.

    Comment by Roger A. Pielke Sr. — 27 Apr 2008 @ 5:16 PM

  49. Ray – In searching for what Professor Lorenz has said on this issue, please see “Chaos Avant-Garde: Memories of the Early Days of Chaos Theory”

    http://books.google.com/books?hl=en&lr=&id=0E667XpBq1UC&oi=fnd&pg=PA91&dq=butterfly+effect&ots=VUaNFd67Be&sig=gQh5rcXesTIokMPZeT1qYmHh36g#PPA94,M1

    In this essay he writes,

    “Returning now to the question as originally posed, we notice some additional points not yet considered. First of all, the influence of a single butterfly is not only a fine detail – it is confined to a small volume. Some of the numerical methods which seem to be well adapted for examining the intensification of errors are not suitable for studying the dispersion of errors from restricted to unrestricted regions. One hypothesis, unconfirmed, is that the influence of a butterfly’s wings will spread in turbulent air, but not in calm air.”

    This certainly would rule out the butterfly in the jar! More importantly, he recognized that there remain questions about the “butterfly effect”, one of which is when small perturbations result in altering larger scale atmospheric flow, and when they do not.

    [Response: I’ve said all I’m going to say about this issue. I think the implications of conservation of energy, and of the effect of the heating of surrounding air by dissipation of mechanical energy, are pretty clear. You are putting your own construction on Lorenz’s words. –raypierre]

    Comment by Roger A. Pielke Sr. — 27 Apr 2008 @ 7:31 PM

  50. My apologies for skipping over many comments for lack of time – I may come back later to finish, but I wanted to jump in on a few points.

    Re 19 – that’s a very interesting point. It occurs to me that it may be a matter of probability – a random perturbation would have almost no chance of having no component in the expanding dimension. Then I can imagine a class of perturbations that initially shrink (if dominated by the contracting component) before growing again (as the contracting component becomes smaller than the other component). Presumably this can be generalized to strange attractors in phase spaces of n dimensions.

    Re 48 – it occurs to me that this may be a matter of probability – if the heat generated by a perturbation’s thermal damping is contained by just one molecule, and can be emitted by a single photon (before any molecular collisions), and that single photon manages to escape to space, then some of the perturbation has been ‘lost’ (but what if it reflects off the moon and comes back, or knocks into a dust grain – after some time, the dust grain is in a different position, perhaps it causes some other dust grain to fall back to earth and a butterfly notices it… granted, I had to go outside the system, there) but there is still the recoil of the photon-emitting particle. Anyway, the molecule may be oriented differently… And the effects on the gravitational field, and the electromagnetic field of the molecule … and then, what if a person sees the butterfly (okay, I’ve gone out of the system again – but on that note, presumably the butterfly has been affected by its own flap, so the effects of its future flaps have been altered). It seems to me there is always some lingering microscopic alteration that has some potential to grow at a later time. Quantum uncertainty may make the effects of the smallest perturbations somewhat superfluous, I think.

    Could a climate phase space be constructed with modes of internal variability being used as dimensions? I’d like to see that.

    I enjoy thinking about attractors in phase space. It occurs to me that for n degrees of freedom (all the properties of every particle), one might be able to compress the description of the state of a system down to m degrees of freedom (the conditions of air parcels of meter size) by settling for an approximation – in which case, the trajectories become somewhat probabilistic. One might map the phase space into a different phase space where features of the attractor are used as dimensions. Climate change could involve both translating the attractor and distorting it, so I wonder if, for example, not just the variability along a mode of internal variability, but the mode’s shape, may change… Of course this is common sense now, but it’s fun to think about in that way. One helpful point is that at sufficiently small scale, all trajectories are locally nearly parallel. I like to use the distinction between weather and climate (sensitivity to initial conditions vs. determined by boundary conditions) to apply the terms in a generalized way – one might think of the weather and climate of mantle convection, of ecosystems, of evolution, of economies, of galaxies – climate might even apply just in space, where the climate would be a generalized texture… (On a related note, I’ve wondered how it would look if one modelled the atmosphere as an ecosystem, with different populations of ‘organisms’, some of which may be endosymbionts of others… And they can vary on different scales – climate for a short-lived bacterium (a snowstorm, individual weather events are snowflakes) may be weather for a human; etc.) In that sense, conditions might be approximated as boundary conditions in the short term (sea surface temperatures, for weather) but in the longer term must be included as variables in a larger (in terms of dimensions, anyway) system. Speaking of which, if the limit of predictability of the atmosphere is 2 weeks, I’m curious what it is for the ocean (realizing that may depend on the particular aspect or part), river meanderings, mantle convection, the outer core (and thus, magnetic field)?

    So thanks to Ed Lorenz for writing about chaotic behavior! PS I thought the butterfly came from the shape of the attractor.

    I’d love to hear more about “resonant triad instability”.

    Comment by Patrick 027 — 27 Apr 2008 @ 9:12 PM

  51. Re 19 again – it occurs to me that, in general, along portions of a strange attractor, one could have regions where there is local contraction in all dimensions, or local expansion in all dimensions – so long as it is in a limited portion of phase space. Interestingly, this would have to include not just the direction but the speed – one may discuss the velocity field in phase space. I suppose one could map the one phase space into another phase space where the divergence of the velocity field is everywhere zero, so then you’d have your complementary contracting and expanding components relative to the attractor itself.

    Comment by Patrick 027 — 27 Apr 2008 @ 9:27 PM

    Re 50,51 – here’s another interesting visualization – when a system (such as the atmosphere-ocean, or atmosphere-ocean-biosphere, or atmosphere-ocean-biosphere-crust, or…) has variation on different timescales, one could imagine a strange attractor that is slowly being deformed and/or moved about (this could include time components – speeding up, slowing down, each in different parts, etc.). The shape (and timing, or including timing) of the attractor could be represented as a subset of the coordinates in a different phase space; on a longer timescale, the attractor’s shape (and/including timing) would trace out another attractor.

    Comment by Patrick 027 — 27 Apr 2008 @ 9:50 PM

  53. Roger Pielke Sr. is mistaken, but it will take a little while to explain why:

    First, here is a good discussion of the mathematics of the basic Lorenz attractor and also of the related phenomenon of Taylor-Couette flow – a long studied example of the onset of turbulence.

    http://hmf.enseeiht.fr/travaux/CD9899/travaux/optmfn/hi/99pa/hydinst/report03.htm

    There is a nice user-operated online java program that shows the time evolution of those same Lorenz equations. http://www.geom.uiuc.edu/java/Lorenz/
    Try to predict the number of consecutive cycles on either “wing”.
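
    The same exercise can be done numerically. In the rough sketch below (arbitrary initial state, step size and run length, with the usual textbook parameters), each local maximum of z corresponds to one loop, and the sign of x at that moment says which “wing” the loop belongs to, so the run lengths printed at the end are the consecutive cycle counts the applet asks you to predict.

        import numpy as np
        from itertools import groupby

        def rhs(s):
            x, y, z = s
            return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])

        def rk4(s, dt=0.01):
            k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
            k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
            return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        s = np.array([1.0, 1.0, 20.0])
        traj = []
        for _ in range(20000):
            s = rk4(s)
            traj.append(s)
        traj = np.array(traj)

        x, z = traj[:, 0], traj[:, 2]
        peaks = (z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])    # local maxima of z, one per loop
        wings = np.sign(x[1:-1][peaks])                   # which wing each loop belongs to
        runs = [len(list(g)) for _, g in groupby(wings)]  # consecutive loops per wing
        print("consecutive loops on one wing before switching:", runs[1:21])

    The run lengths come out irregular even though the equations are fully deterministic, which is the point of the exercise.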

    Lorenz was the first to provide a quantitative description of what had been until then a purely qualitative argument – and this work absolutely relied on using a computer to carry out the large numbers of calculations. David Ruelle says this in his short book, Chance and Chaos (1991)

    “The Lorenz time evolution is not a realistic description of atmospheric convection, but its study nevertheless gave a very strong argument in favor of unpredictability of the motions of the atmosphere. . .Poincare had made precisely the same remark much earlier (Lorenz was not aware of this). But the approach of Lorenz has the great virtue of being specific, and extendable to realistic studies of the motion of the atmosphere.”

    Ruelle also says this about attractors and hydrodynamic turbulence:

    “What is an attractor? It is the set on which the point P, representing the system of interest, is moving at large times (i.e., after so-called transients have died out). . . . Dissipation is the reason why transients die out. Dissipation is the reason why, in the infinite-dimensional space representing the system, only a small set (the attractor) is really interesting.”

    One can see this in action by playing with the Lorenz java applet linked above – no matter where a trajectory begins, it is drawn onto the attractor. Each point on the attractor represents the entire system in different states – a big job for a little point! (climate models now have some 10 million dynamic variables, I believe)

    The notion of dissipation is important here. In the supercooled water example, the heat of ice formation is immediately dissipated in the sub-freezing water, allowing for rapid ice growth – that’s just a dramatic example of a carefully prepared dissipative system. More generally, a dissipative system is a thermodynamically open system, which operates far from thermodynamic equilibrium, while exchanging energy and matter with its surroundings. That’s what Lorenz was working with.

    Now, Roger Pielke Sr. makes at least two incorrect claims about this:

    1) “With nonlinear atmospheric models such as analyzed by Professor Lorenz, the results for large scale features are sensitive to the initial conditions regardless of how small they are. This is because the system is closed. The real world climate system, however, is not closed, such that energy (i.e. in the form of heat) can leak out of the system. . .”

    AND:

    2) “. . .the smallest turbulent scales also are damped out due to the physics of non-motion transfers (i.e. radiative transfers) of energy. We will be preparing a paper on this to demonstrate that there is lower limit to which the “butterfly effect” applies.”

    The first claim is just plain wrong – see
    http://www.atmos.umd.edu/~ekalnay/nwp%20book/nwpchapter7/Ch7_2.html

    Lorenz (1963) introduced a 3-variable model that is a prototype of the main characteristics of chaos theory. These equations were derived as a simplification of Saltzman (1962) nonperiodic model for convection. Like Lorenz (1962) original 12-variable model, the 3-variable model is a dissipative system, in contrast to Hamiltonian systems in which some property of the flow such as total energy is conserved.

    As for the second – they are going to revisit David Ruelle 1979, “Microscopic fluctuations and turbulence”? Here is Ruelle on this (Chance & Chaos pg. 182)

    “We can estimate the time it takes for thermal fluctuations in a turbulent fluid to be amplified by sensitive dependence on initial condition to a macroscopic scale (say 1cm). The calculation uses Kolmogorov’s theory of turbulence. . . From microscopic fluctuations to macroscopic changes in turbulence takes about a minute . . Going from small scales to large scales of turbulence takes a time proportional to the turnover time of the largest eddies considered. . .We estimate that it takes a day to reach the scale of kilometers. We now pass to the level of the circulation of the atmosphere over the entire planet, where the time it takes to amplify a small change to a globally different situation is estimated to be 1 or 2 weeks by meteorologists.”

    Compare that to say, http://www.renewamerica.us/columns/hutchison/070711
    – a curious article on the uselessness of computers for modeling climate that has a certain similarity to Roger Pielke Sr.’s and Roy Spencer’s themes.

    In the world of physics, a microscopic fluctuation is something like the collision between two molecules (very sub-millimeter scale). Any radiative transfer of energy will not necessarily damp out a perturbation on those scales. A water molecule that absorbs or emits an infrared photon will vibrate differently; that will slightly change the result of the next collision it has, and so on – creating enough of a perturbation that two near-identical states will generate wildly different behavior at some time T in the future.

    In the practical case of a hurricane, the energy budget depends on the thermal difference between the sea surface and the top of the troposphere. No butterfly flaps will have much effect on that. What they will affect is the timing of easterly waves. The damping or amplification of convection by a few hours could affect the timing of an easterly wave by a day; a day’s difference over the Atlantic could make all the difference for favorable hurricane conditions (no wind shear, say).

    Finally, let’s return to Lorenz (Essence of Chaos) himself to clear up a few points about the dissipative (open!) system he was working with – and let’s also note that his discovery was not an accident, but rather the result of remarkable tenacity:

    ”. . .My guess, though, is that such a switch would not have appealed to me; the mere knowledge that simple systems with nonperiodic solutions did exist might have given me additional encouragement to continue my own search, and in any case I still had my eye on the possible side benefits. These, I felt, demanded that I work with a dissipative system. As it was, I kept trying new combinations of constants, and finally encountered the long-sought nonperiodic behavior after making the external heating vary with longitude as well as latitude. This is of course what happens in the real atmosphere, which, instead of receiving most of its heat directly from the sun, gets it from the underlying oceans and continents after they have been heated by the sun. . .

    In due time I convinced myself that the amplification of small differences was the cause of the lack of periodicity. . .”

    For a real example of modern work on estimating the predictability of chaotic systems, from the European Centre for Medium-Range Weather Forecasts (1999), see:
    http://www.ecmwf.com/newsevents/training/lecture_notes/pdf_files/PREDICT/Uncert.pdf

    Chaotic dynamics implies not only that a forecast is sensitive to initial error, but also that the rate of growth of initial error is itself a function of the initial state. . .In this paper, we consider two types of prediction. Following Lorenz (1975), we refer to initial value problems as ‘predictions of the first kind’. By contrast, forecasts which are not dependent on initial conditions, for example predicting changes in the statistics of climate as a result of some prescribed imposed perturbation, would constitute a ‘prediction of the second kind’.

    A weather forecast is clearly a prediction of the first kind; so is a forecast of El Nino, referred to as a climate prediction of the first kind. By contrast, estimating the effects on climate of a prescribed volcanic emission, prescribed variations in Earth’s orbit (thought to cause ice ages) or prescribed anthropogenic changes in atmospheric composition, would constitute a climate prediction of the second kind.

    Here is another final (promise, that’s all on this) Edward N. Lorenz quote to sleep on:

    “Perhaps the principal lesson is that we still have much to learn about what can happen in chaotic dynamical systems with many interconnected parts.”

    Comment by Ike Solem — 28 Apr 2008 @ 3:54 AM

  54. RE: #29 Hello Roger!

    The earth is a closed thermodynamic system: it gains and loses essentially no mass with its surroundings, i.e., outer space, but gains and loses large amounts of energy in such a way that the average surface temperature of the system is about 288 K (Mass loss is confined to hydrogen, helium, space probes and their expelled fuel exhaust while mass gain is from meteors, space snowballs and particles from the solar wind and deep space).

    Since the earth is a sphere comprised of three phases of heterogeneous and homogeneous matter of enormous mass whose spatial and temporal composition and properties are never constant, it is not possible that the flap of a single small butterfly wing will have any discernible effect on the system whatsoever. Even the effects on system earth of the very violent wing flappings of the large butterfly Mt. Pinitubo dissipated after a short period of time.

    Comment by Harold Pierce Jr — 28 Apr 2008 @ 4:38 AM

  55. Harold Pierce Jr writes:

    The earth is a closed thermodynamic system: it gains and loses essentially no mass with its surroundings, i.e., outer space, but gains and loses large amounts of energy in such a way that the average surface temperature of the system is about 288 K (Mass loss is confined to hydrogen, helium, space probes and their expelled fuel exhaust while mass gain is from meteors, space snowballs and particles from the solar wind and deep space).

    The Earth is NOT a closed thermodynamic system, since closure of a thermodynamic system involves energy as well as matter.

    Since the earth is a sphere comprised of three phases of heterogeneous and homogeneous matter of enormous mass whose spatial and temporal composition and properties are never constant, it is not possible that the flap of a single small butterfly wing will have any discernible effect on the system whatsoever.

    Non sequitur. The conclusion does not follow from the premises.

    Even the effects on system earth of the very violent wing flappings of the large butterfly Mt. Pinitubo dissipated after a short period of time.

    “Pinatubo.” And the fact that it dissipated doesn’t mean it was never there.

    Comment by Barton Paul Levenson — 28 Apr 2008 @ 7:19 AM

    Surely the flapping of a butterfly’s wings would be dampened down by the other millions of small perturbations taking place at the same time. In a real world scenario there would be literally millions of small uncertain perturbations at any one time. How could anyone assign probability to a single one of these events?

    [Response: As a practical matter, you couldn’t, though the mathematics still says the effect of all these perturbations is there. For that matter, the uncertainties that matter most in weather prediction are the uncertainties in observation of the initial state of the atmosphere, not the actual butterfly-scale forcing uncertainties, which almost certainly contribute less to error growth over time scales of ten days or so. The butterfly statement is just a vivid way of expressing the notion of sensitive dependence on initial conditions. That said, there is a lot of interesting mathematics and physics in the question of the nature of the ensemble of trajectories you get when you represent all those unknown perturbations by stochastic noise. How sensitive are the results to the details of the noise? How much is the long term mean state affected by the presence of noise? What is the probability distribution of the ensemble of trajectories? All these are very deep and difficult questions, but very interesting ones. Good results are available for fairly simple systems like iterated nonlinear 1D maps, but things rapidly get intractable when you go further up the hierarchy of models. –raypierre]
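
    A very stripped-down version of those questions can be posed for the simplest chaotic map. In the sketch below the map, the parameter value and the noise amplitude are arbitrary illustrative choices: an ensemble starts from identical initial conditions, each member feels its own small stochastic noise, and one can compare the ensemble spread and the long-term mean with and without the noise.

        import numpy as np

        rng = np.random.default_rng(0)
        r, nsteps, nens = 3.9, 5000, 200

        def run(noise_std):
            x = np.full(nens, 0.3)                # identical initial conditions for every member
            history = np.empty((nsteps, nens))
            for n in range(nsteps):
                x = r * x * (1.0 - x)             # chaotic logistic map
                if noise_std > 0:
                    x = np.clip(x + rng.normal(0.0, noise_std, nens), 0.0, 1.0)
                history[n] = x
            return history

        clean = run(0.0)
        noisy = run(1e-3)
        print("long-term mean, no noise  :", clean[1000:].mean())
        print("long-term mean, with noise:", noisy[1000:].mean())
        print("ensemble spread at step 50, noisy case:", noisy[50].std())

    In this toy case the long-term mean barely moves while the ensemble spread saturates within a few dozen steps; how general that behavior is, across systems and noise models, is the kind of question the response calls deep and difficult.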

    Comment by pete best — 28 Apr 2008 @ 8:35 AM

  57. Re: Ike Solem #53,

    The differences between predictions of the first and second kind might be more theoretical than practical, as stated in Palmer’s uncertainty memorandum which you cited:

    “In practice, as discussed below, it is not easy to separate the predictability problem into a component associated with initial error and a component associated with model error.”

    The paper acknowledges “our limited knowledge of the current state of the climate system (especially the deep ocean), and the limited natural predictability that may in fact exist on decadal to centennial timescales”

    Uncertainties introduced by parameterizations and other model formulations might be assessed explicitly by stochastic methods: “A principal uncertainty in climate change prediction lies in the uncertainties in model formulation, e.g as discussed in Section 4. As discussed in Section 4, such uncertainty may necessarily imply some stochastic representation of the influence of sub-grid scale processes within the model formulation.”

    But they are not addressed that way currently by the IPCC. “At present, such stochastic representations have not been used in the context of climate simulation. Rather, the representation of model uncertainty in ensemble integrations has been based on the multi-model ensemble.” This was circa 1999, but I saw no evidence that this was different in the FAR.

    But there is the problem that “the pdf spanned by the use of the different sets of models is ad hoc, and may not cover all of the uncertainties in model formulation. Moreover, in practice the models are not truly independent of one another; there is a tendency (possibly for pragmatic rather than scientific reasons) for groups to adopt similar sets of parametrisations.”

    Given the number of degrees of freedom, perhaps it is not surprising that there are some documented correlated errors in the AR4 model. Unfortunately, the paper didn’t treat what the threshold of significance of these errors must be before the meta-ensemble can no longer be said to address the pdf of uncertainty in model formulations. Furthermore, the consideration of validation of the model or ensemble skill was very event oriented. Even if good skill is validated for the prediction of certain events, there still remains the question of whether that would imply that the models are also skillful in non-events such as attribution.

    Let me say in advance that any similarity in my comments to those available at:

    http://www.renewamerica.us/columns/hutchison/070711

    are a mere coincidence, or if particularly strong, must be due to a convergence of some sort, since I have never read the site.

    Comment by Martin Lewitt — 28 Apr 2008 @ 10:53 AM

  58. Pete, As Raypierre said, the butterfly reference is just a poetic way of saying “a small perturbation”. Sometimes the metaphor in poetry will communicate to the laymen a concept that would otherwise take several pages of math–which the layman wouldn’t read anyway. Scientists tend to look at controlled systems–partial derivatives, admonitions of “ceteris paribus” (meaning other things being equal), etc. We know that ceteris is rarely paribus, but the controlled experiment is much easier to interpret. One can then look at 2 simultaneous perturbations, etc. as needed to understand the system. Fortunately, physical systems are rarely so complicated that multiple simultaneous perturbations are essential to understanding. This does occur in ecology, though.

    Comment by Ray Ladbury — 28 Apr 2008 @ 11:38 AM

  59. Maybe the butterfly effect is simply an artifact of the capacity to run sets of parameters through a function and observe the difference. Nobody but a tort lawyer would attempt to do the same in “real life”. The events of “real life” seem over-determined.

    Comment by Jeffrey Davis — 28 Apr 2008 @ 12:51 PM

  60. There is a religious difference between physicists and chemists on what a closed system is. Physicists describe a closed system as one that exchanges nada with the surroundings. Chemists differentiate between closed systems that can exchange energy but not mass, and isolated ones that can exchange neither with the surroundings.

    Before discussing the matter, one does well to inquire what the faith of your partner is.

    Comment by Eli Rabett — 28 Apr 2008 @ 12:52 PM

    The butterfly could be the catalyst. The straw that broke the camel’s back (so to speak). But this is not like the cigarette that started the forest fire. It is more like emergence, which would explain how it could be a contributing factor.

    However, it would be easier to explain how evolution produced the butterfly’s holometabolism than it would be to explain how a single butterfly’s wings create a tornado half way around the world. But it is certainly a great question and contribution by Lorenz.

    Comment by Dave Blair — 28 Apr 2008 @ 2:09 PM

  62. Some confusion here revolves around the issue of chaos in mathematical models vs. chaos in nature. This is also a topic that Lorenz discussed, starting with an overview of the mathematical model:

    Lest a system of 5,000,000 simultaneous equations in as many variables appear extravagant, let us note that, with a horizontal grid of less than 50,000 points, each point must account for more than 10,000 square kilometers. Such an area is large enough to hide a thunderstorm in its interior.

    One could also construct a detailed model of a tornado – and the grid scale of the model would miss microscale perturbations that will influence the future course of the system. For example, the strongest tornadoes are generated by rotating supercell thunderstorms. This is another example of sensitive dependence on initial conditions, because not all supercells lead to tornadoes (only about 20%), and not all tornadoes become real monsters.

    As for the parameterizations of sub-grid scale processes (like thunderstorms), I don’t believe that there are any reported realistic parameterizations that don’t result in sensitive dependence.

    The key point here is that chaos is a mathematical phenomenon – but down the centuries, we’ve found that nature follows mathematical rules – so is the chaos in models also found in nature? That’s the hard-to-analyze question (see raypierre’s response to #56).

    Once again, Lorenz clarifies the situation:

    What about chaos? Almost all global models, aside from the very earliest, have been used for predictability experiments, in which two or more solutions originating from slightly different initial states have been examined for the presence of sensitive dependence. . .

    Almost without exception, the models have indicated that small initial differences will amplify until they are no longer small. There is even good quantitative agreement as to the rate of amplification. Unless we wish to maintain that the state-of-the-art model at the European Centre, and competitive models at the National Meteorological Center in Washington, do not really behave like the atmosphere, in spite of the rather good forecasts that they produce at short range, we are more or less forced to conclude that the atmosphere itself is chaotic.

    One other thing – “dissipative” has a technical meaning here that doesn’t match the normal use of the term. For example, a category 5 hurricane is a very active dissipative system that is dissipating like crazy – that doesn’t mean “weakening and falling apart”, as the normal usage, but rather that the system of interest is open and is far from thermodynamic equilibrium, as opposed to a “Hamiltonian” system. Thus, the issue here is Hamiltonian vs. general dynamical systems. In Hamiltonian systems, forces are not dependent on velocity (unlike say, friction). In this view, all life forms including ourselves are non-Hamiltonian dissipative systems – it’s not a moral judgement on character. :)

    Fluid systems are never Hamiltonian, because the theorem is that “If the system is Hamiltonian (no friction) then the fluid is incompressible”. Lorenz explains it like this:

    Hamiltonian systems may be chaotic; note that the qualitative reasoning indicating that a pinball machine should behave chaotically does not invoke any dissipation. Chaotic systems therefore do not always possess strange attractors, although most of the generally encountered compact dissipative chaotic systems do have them. Despite the absence of these intriguing features, many scientists have chosen Hamiltonian systems as the ones that they prefer to study.

    A very interesting question now is to what degree the ocean circulation is chaotic – what is the predictability scale for the ocean circulation? In other words, what kind of reliable long-term ocean weather forecasts can we make?

    Comment by Ike Solem — 28 Apr 2008 @ 2:50 PM

    I have a question for the experts here. I think it’s somewhat related to the thread topic because it has to do with modeling dynamic systems. A colleague at my institution, a physicist, claims that climate science is junk science because climatologists are not experts in nonlinear dynamical systems analysis. He claims that because they don’t publish in the top peer-reviewed journals of nonlinear dynamics, such as Physica D, Physical Review E, International Journal of Bifurcation and Chaos, or Chaos, that their models are meaningless, and it is impossible to separate the human contribution to climate from natural variations in climate. I’m a physical chemist, not an expert on nonlinear dynamic systems, but I know enough to realize that modeling such systems is very difficult. What is the best way to respond to my colleague? Are there any good resources I could read on this? Thanks in advance.

    [Response: You left out Phys. Rev. Letters. Actually, many of us (myself included) have published in some or all of those journals, when the subject material is appropriate. I think I have at least one paper in each, except for Physica D, and I know Stefan has a couple of things in PRL. That’s beside the point, though. Following the tradition of Ed Lorenz, climate scientists pay close attention to the dynamical systems literature, and make use of it in their own work where appropriate. You can find lots of dynamical systems stuff cited in work in Journal of Climate, Journal of Atmospheric Sciences, and elsewhere. When the main applications are to atmospheres/oceans we tend to publish in our own journals, since that’s the audience. We only turn to the basic physics journals when a result has broader interest to the community of physicists. Most importantly, there are relatively few results in dynamical systems theory that in and of themselves help you distinguish natural variabilty from trends due to a changing control parameter (CO2, say). That requires an understanding of the physics of the particular system you are working on, not of generic properties of dynamical systems. Given that issue, I’m not sure exactly what readings to point you to, though if all you’re looking for is a selection of climate science papers that show an awareness of and contribution to the dynamical systems literature, I’m sure I or the RC readership could help out with that. –raypierre]

    Comment by Mark — 28 Apr 2008 @ 3:44 PM

    Climate Science has posted a weblog on the topic of the “Butterfly Effect”.

    Comment by Roger A. Pielke Sr. — 28 Apr 2008 @ 6:19 PM

  65. Re 21 – I think some of what you are describing is the butterfly effect. Besides that, you mentioned the persistence of droughts.

    (An aside, it occurs to me that one might expect droughts to tend to propagate downwind? (as gauged by the boundary layer where the soil moisture would have the most direct impact on humidity) as well as to some extent downstream, though not as fast as floods since water flows faster when there is more of it, generally speaking – in the ecosystem concept of weather/climate, is a drought migrating or continually giving birth to daughter droughts that overlap the area … the two descriptions would be mathematically equivalent and useful for different purposes, I expect, … And the persistence of drought might be a case of a ‘compositional genome’, which may also have been involved in the origin of life – PS I think a simple population model x(n+1) = k*x(n)*(1 - x(n)) also exhibits chaotic behavior for some values of k; as k is increased, the single point attractor bifurcates several times and then goes ‘haywire’; interestingly the chaos is interrupted by brief intervals where there are 3 or some other number of point attractors that each bifurcate again… although if k and x(0) are not within certain ranges, the solution goes to +/- infinity… As a sample of ‘population climatology’, I was thinking one might define the set of all extratropical storms which inhabited similar habitats from genesis to lysis, with similar life histories, along the same storm track, for some season or month of the year, during periods defined by NAO, ENSO, etc. indices … for some period of n years… and then consider the collective metabolism, etc. of this population – PV transport, APE conversion, latent heat intake and precipitation, etc… and whether these are balanced by other populations of weather/climate phenomena or not – ecological succession would be triggered by imbalances, then – because there may have been many such periods during the n years, the ecological succession may have recurred many times, etc…)
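
    The period-doubling cascade mentioned in the aside above is easy to reproduce numerically. The quick sketch below, for x(n+1) = k*x(n)*(1 - x(n)) with an arbitrary starting value, transient length and tolerance, simply counts how many distinct long-term values the map settles onto at a few illustrative k values.

        import numpy as np

        def attractor_size(k, n_transient=2000, n_sample=200, tol=1e-4):
            x = 0.4
            for _ in range(n_transient):          # discard the transient
                x = k * x * (1.0 - x)
            samples = []
            for _ in range(n_sample):
                x = k * x * (1.0 - x)
                samples.append(x)
            samples = np.sort(samples)
            # values separated by more than tol count as distinct
            return 1 + int(np.sum(np.diff(samples) > tol))

        for k in [2.8, 3.2, 3.5, 3.56, 3.7, 3.83, 3.9]:
            print(f"k = {k:4.2f}   distinct long-term values ~ {attractor_size(k)}")

    The counts step through 1, 2, 4, 8 as the attractor bifurcates, jump to something large in the chaotic ranges, and drop back to 3 in the periodic window near k = 3.83, which are the brief interruptions of the chaos described in the aside.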

    The drought persistence would be from the temporary memory in the soil moisture (or effects on vegetation). But if a drought can be triggered by the butterfly effect, then the potential for chaotic drought initiation is itself an aspect of the longer term climate (if a drought of some particular magnitude only happens once, then maybe the climate system only momentarily crossed a threshold, or crossed two successive thresholds – or maybe it was just one of those rare things when some of the butterflies randomly lined up). Such droughts would be aspects of internal variability, the longer term weather of an even longer term climate, and the soil moisture’s role would be similar to an SST anomaly’s role in other low-frequency variability.

    Re 53 – those transients die out – I expect the butterfly effect exists because the chances are essentially zero of picking a perturbation that dies out to exactly where the system would have been on the attractor, as opposed to somewhere else?

    [Response: The last comment is very well put. To rephrase it in more mathematical language, a random perturbation has zero probability of projecting ONLY on contracting dimensions. –raypierre]
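
    A two-dimensional linear toy map (entirely illustrative) makes the rephrased statement concrete: with one expanding and one contracting direction, a randomly drawn perturbation has, with probability one, some component along the expanding direction, and that component eventually dominates.

        import numpy as np

        rng = np.random.default_rng(1)
        M = np.array([[2.0, 0.0],
                      [0.0, 0.5]])               # expands x, contracts y

        for trial in range(5):
            p = rng.normal(size=2)
            p /= np.linalg.norm(p)               # random unit perturbation
            q = p.copy()
            for _ in range(30):
                q = M @ q                        # apply the map repeatedly
            print(f"initial direction {np.round(p, 3)},  norm after 30 steps = {np.linalg.norm(q):.3e}")

        # Only a perturbation lying exactly on the y-axis (a set of measure zero)
        # would keep shrinking; every random draw above grows by roughly 2**30.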

    Comment by Patrick 027 — 28 Apr 2008 @ 8:00 PM

  66. RE: #36

    Since there is no uniform spatial and temporal distribution of CO2 and of all other gases including water vapor in the atmosphere, as expressed in absolute concentration units such as moles/cu meter, how is this taken into account in the various computer modeling experiments?

    I saw some recent satellite images of the distribution of CO2 in the mid-troposphere and its distribution is not uniform, especially over the continents.

    [Response: The variations in CO2 are of interest to carbon cycle people, and are therefore tracked in models that are aimed at understanding sources and sinks. However, the variations are small enough that they are not radiatively significant, so in most climate models CO2 is treated as a well-mixed gas for the purposes of radiative transfer. As you note, more variable greenhouse gases, such as water vapor, are treated as inhomogeneous in the radiative calculation. Some models have begun doing this for methane as well. There would be no particular difficulty in doing the same for CO2 if anybody wanted to but it’s really a very small effect, radiatively. By the way this is way off topic, which is probably why your earlier comments were deleted. I’d suggest this is not the right thread for further discussion of your query, but I hope my information helps. –raypierre]

    Comment by Harold Pierce Jr — 29 Apr 2008 @ 4:31 AM

    Re: response to #56; Therefore, Raypierre, you are surely using statistical averages, or many averages spread across the system in question at certain resolutions, to model the initial conditions, say on a 10×10 or 100×100 mile grid?

    I have read that this is why, with millions more spent on attempting to improve the long range climate temperature (sensitivity) projections, your error bars are roughly the same as they were in the first IPCC report. The uncertainty in black carbon, aerosols, clouds etc. makes for a possible range of temperatures that has not lent itself to being much better refined.

    Are these uncertainties the reason why we have a range of projected temperature rise?

    [Response: Not really. I was mainly talking about the accuracy of medium range forecasts in my remark. You are talking about the long term response of climate to change of a control parameter (CO2, mainly), and you should be talking about forecast uncertainty rather than “error bars”. The forecast uncertainty represents the likely range of future climates given what we know today, and that’s the range policy planners have to live with and deal with. The range hasn’t narrowed because as our understanding of climate has improved we’ve discovered new phenomena at about the same rate we settle older issues. In addition, most of the uncertainty (for a given emission scenario) is still due to difficulties in nailing down cloud feedbacks. Although the forecast range of warming hasn’t narrowed much, we have a much greater understanding of the nature of the uncertainty. Clouds are just a hard problem, and I don’t think anybody has much expectation that the uncertainty range in that area is going to go down much in the next decade or two, though we may well get better at using paleoclimate data to get an idea of what kinds of cloud feedbacks are most likely to be correct. –raypierre]

    Comment by pete best — 29 Apr 2008 @ 6:55 AM

  68. Re inline response to #56 “there is a lot of interesting mathematics and physics in the question of the nature of the ensemble of trajectories you get when you represent all those unknown perturbations by stochastic noise. How sensitive are the results to the details of the noise? How much is the long term mean state affected by the presence of noise? What is the probability distribution of the ensemble of trajectories? All these are very deep and difficult questions, but very interesting ones. Good results are available for fairly simple systems like iterated nonlinear 1D maps, but things rapidly get intractable when you go further up the hierarchy of models.” –raypierre

    By “intractable”, I guess you mean analytically so? Do you know if anyone is working on simulations of systems that are intermediate between nonlinear 1D maps and real-world complexity? I believe statisticians use simulations to get distributions for analytically intractable stochastic processes.

    [Response: By “intractable” I meant it’s a hard thing to prove theorems about. It’s even hard to show that a probability distribution on some general attractor actually exists, let alone compute it. Nonlinear systems subject to stochastic noise are certainly amenable to simulation, though. –raypierre]

    Comment by Nick Gotts — 29 Apr 2008 @ 8:18 AM

  69. Re 65, Patrick, I’m not sure why the population of storms can be expressed as a series, although weather forecasters always like to relate storms to previous ones based on track, season, ENSO and many of the other factors you listed. But each successive storm has no relationship to the previous one except in the cases where there is some persistent effect. So although your model would exhibit the required chaotic behavior, I don’t think it has a basis in reality. I agree about the soil moisture prolonging the drought and about propagation downwind. Those should be quite easy to model. The end of the drought is what concerns me. How is the pattern change modeled when it has (AFAIK) chaotic origins?

    Comment by Eric (skeptic) — 29 Apr 2008 @ 8:33 AM

  70. Thanks, Raypierre, that helps some. I think my colleague’s main objection is that because climate is an “aperiodic nonlinear system” (his words), it’s impossible to quantify the extent of human contribution. If you (or the readership) can suggest some references for how the numbers are extracted, that would be useful. (For example, IPCC WG1, Feb. 2007, FAQ 2.1, Fig. 2 shows that total net human activities have a radiative forcing of ~1.5 +/- ~1 W/m2. How is that number determined?)

    [Response: Even aperiodic nonlinear systems vary within a limited range — call that the “natural variability.” When the observed variations are different in character and magnitude from the “natural variability” then you know they are caused by a trend in some control parameter. When, moreover, the trend agrees with the computed response to changing the control parameter (CO2) you have an even stronger argument. If you had an identical twin Earth with constant CO2, you could determine the natural variability without a model. We don’t really have that; recent paleoclimate provides some evidence, but there are problems with reconstructing past global temperatures and with correcting for past “forcings” like volcanic activity or solar fluctuations. Hence, it is necessary to use models of one sort or another to help understand natural variability, as well as to determine the response to changing forcing. Think of it as a “signal to noise” problem. You can send information by modulating a radio wave even though the radio transmitter generates a certain amount of chaotic, aperiodic noise. For climate, it’s CO2 that’s doing the modulation (and civilization which is doing the speaking). The IPCC report does a quite good job of explaining this, I think — look at their graphs of observed temperature, simulated temperature with natural forcings, and simulated temperature with both natural and anthropogenic forcings. Lee Kump is doing a very elementary book interpreting the IPCC results for the lay audience, and Dave Archer and Stefan Rahmstorf have a somewhat more advanced book coming out. Either of those will help for people who don’t want to slog through the whole IPCC report. By the way, the radiative forcing numbers you quote come from using observed trends in atmospheric constituents (long-lived greenhouse gases and aerosols) and running them through radiation models. There is little uncertainty in the radiation models. The main uncertainty comes from the effect of aerosols on clouds, and the high end of the possible range of that effect is what gives you the low end of your range of anthropogenic forcing. –raypierre]

    Comment by Mark — 29 Apr 2008 @ 2:02 PM

  71. Mark (70) — I suggest “Plows, Plagues and Petroleum” for W.F. Ruddiman’s popular account of his hypothesis regarding the climate of the past. His papers then go into greater detail with regard to the agreement with ice core, etc., data.

    Comment by David B. Benson — 29 Apr 2008 @ 3:46 PM

  72. re 70, (can’t resist some bad humor here)

    Shouldn’t the more important questions be:

    What is the total net butterfly activity radiative forcing?

    What caused the butterfly to flap his wings?

    If butterflies in Brazil are causing these Texas tornados then shouldn’t we spray the heck out of the Amazon?

    [Response: And please, let’s not forget the butterfly-albedo feedback! –raypierre]

    Comment by Dave Blair — 29 Apr 2008 @ 3:53 PM

  73. Re 69 –

    I’m not an expert on droughts specifically, but I would say:

    If a drought has a lot of trouble ending but was easily started, it probably occurred during the course of a shift in climate. In the extreme, if desertification was becoming likely, the butterfly effect would have some role in the exact timing.

    Otherwise, though, I would answer with this question: if lack of soil moisture gives inertia to a drought, why didn’t a previous higher level of soil moisture prevent the drought? I suspect the ending of a drought can be part of the same overall chaotic process that is ‘low-frequency variability’.

    Oh, and I wasn’t saying that each storm along a storm track is related to the others in the sense of a lineage (however, one could try to make that analogy); also, I was referring to a hypothetical subset of such storms, which may not be grouped in time. What I meant was just a population of similar phenomena – although timing might be used as a defining characteristic (winter storms (seasonal timing), El Nino storms, storms that started during the start of a blocking event at some distance x to the west or … whatever (relative timing)), the population need not occur in one continuous unit of time. It is a population that is a part of an ecosystem that occupies some region of the atmosphere’s (and possibly the ocean’s) phase space, and ecological succession occurs when trajectories enter or leave that phase space, and one way of looking at why that happens is that the ecosystem is not balanced – and possibly this is because fluxes of potential vorticity or other things are not balanced, and that may involve the population in question.

    The ‘fitness’ of such phenomena would just be their tendency to recur – that’s not to say they are reproducing themselves autocatalytically (although thunderstorms can seed new thunderstorms via gust fronts). One might also try fitting ‘carrying capacity’ and ‘niche’ to the situation… And coevolution occurs: one population affects the fitness of another.

    PS I wasn’t suggesting that the x(n+1) formula given earlier was to be used in this case (or any particular case that I know of); it was just a familiar example (and if one charts the x point attractors in a k-by-x graph, the pattern is fractal).
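
    A rough Python sketch of that last parenthetical, assuming the x(n+1) formula referred to is the familiar logistic map x(n+1) = k*x(n)*(1-x(n)) (that assumption is mine; the formula itself appears earlier in the thread): charting the long-run values of x against k gives the bifurcation diagram, whose structure is fractal.

    import numpy as np
    import matplotlib.pyplot as plt

    ks = np.linspace(2.5, 4.0, 2000)       # one value of k per column of the diagram
    x = np.full_like(ks, 0.5)

    for _ in range(500):                   # discard transients so only the attractor remains
        x = ks * x * (1.0 - x)

    k_pts, x_pts = [], []
    for _ in range(200):                   # then record the attractor itself
        x = ks * x * (1.0 - x)
        k_pts.append(ks.copy())
        x_pts.append(x.copy())

    plt.plot(np.concatenate(k_pts), np.concatenate(x_pts), ",")
    plt.xlabel("k")
    plt.ylabel("long-run values of x")
    plt.title("Logistic map: point attractors, period doublings, chaos")
    plt.show()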

    PS each storm along a storm track must have some effect on the conditions of the track; though some of those effects will be advected away from the next cyclogenesis, the previous storm will help shape the environment of the next storm.

    There are some other clarifications/add-ons I’d like to make to my previous comments:

    Earlier I suggested that very small perturbations might be somewhat or partly superfluous given quantum uncertainty. However, while quantum uncertainty makes the trajectories probabilistic rather than perfectly deterministic, the idea that the other very small perturbations would be superfluous would be like saying that the other butterflies’ flaps are superfluous.

    Another point from the second paragraph: the butterfly effect can be involved in the timing of a threshold being crossed (and on occasions when the threshold would otherwise only just be crossed or not, the butterfly effect could make the difference); on the small scale, icebergs may calve off a glacier at some rate, but the precise moment of calving at each instance may be affected by butterflies at some time in the past. On the larger scale, global warming has a longer term trajectory towards a sea ice-free Arctic summer, but interannual variability introduces a range of possible timing for the first occurrence of that state.

    One key point about the butterfly effect is that it may be possible to know it exists, but it is next to impossible to actually trace events back to butterflies. It is unnoticed because it doesn’t cause big surprises (or, big surprises are just rare – which I guess is circular, but anyway…).

    Another key point is timing. If the butterflies with significant effect on the details of a tornado at time t0 (small variations in timing, strength, track) flapped at time t1 before the tornado, then the butterflies with larger influences happened at t2, t2 before t1, and the butterflies which decided whether that tornado would occur happened before then, and the butterflies that had sizable influence on the entire 100-tornado outbreak happened before that, and the butterflies that affected whether there would be a tornado outbreak at all before that (they happened before the proximate recognizable causes, like wind shear, dew points, divergence aloft, dryline, etc.) … and the butterflies that had influence on whether parts of Scotland and Ireland would end up on the coast of North America or not must have occurred some time before that part of the break-up of Pangea…

    And when a large event does occur, it may use up some amount of energy or transport something somewhere, and could have a negative feedback regarding the likelihood of such an event happening again soon. This doesn’t shut down the butterfly effect – butterflies still have influence on when the next such event occurs.

    The shape of a strange attractor, and the velocity field of the phase space, can be such that, for given boundary conditions, there is an average tendency to be drawn preferentially into certain parts of phase space. This may or may not include the time average of all trajectories; the trajectories may loop around their center. There are positive, negative, and ‘sideways’ feedbacks, but outside of some region (but within the attractor’s basin), the feedbacks may be overall negative. This isn’t the same as negative feedback in climate change – changes in boundary conditions may cause by themselves some forcing of the climate, for example, by moving the center of the attractor through phase space, and then positive feedbacks may amplify that initial change; but if/when a new equilibrium is reached, negative feedbacks would tend to draw trajectories toward the new region of the attractor.

    PS the shape of the attractor and the velocity field in its region may also change. Climate is a multidimensional thing. Some people may then point out that keeping track of something like global average surface temperature is not of much value. However, that ignores tendencies for correlations between such simple variables and the more complex aspects of climate (although there are also differences in changes in shape that can be correlated to solar vs greenhouse vs various types of aerosol forcing – fingerprints of causes of climate change).

    On that note, examples of the potential for complexity of climate:

    The same average value of some variable, say x (x may be T, the spatial gradient of T, humidity, wind, or some higher-level variable like the ENSO index) may occur with different variation about the average – for example, the standard deviation is not specified. Even if the standard deviation is specified, the actual shape of the distribution is not specified – is it a bell curve or are there multiple peaks? And even if the distribution is specified for a given time interval, the arrangement is not specified. And that can have real effects: for example, if spikes in evaporation tended to follow immediately after spikes in precipitation, average soil moisture would tend to be low, whereas the reverse would be expected if spikes in evaporation tended to immediately precede spikes in precipitation. … Speaking of which, of course runoff will tend to peak after precipitation peaks, in uplands, but lower in a basin, runoff may initially be negative (water flows in from upslope)… anyway, runoff is one reason why shorter periods of more intense precipitation would matter.
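
    A toy Python sketch of the evaporation/precipitation point (the bucket model and every number in it are invented purely for illustration): two series with identical rain and evaporation totals, differing only in how the spikes line up in time, end up with different average soil moisture.

    import numpy as np

    def mean_soil_moisture(precip, evap_demand, capacity=100.0, start=50.0):
        """Step a trivially simple bucket: add rain, subtract evaporative demand,
        keep the store within [0, capacity], and return the time-mean storage."""
        store, history = start, []
        for p, e in zip(precip, evap_demand):
            store = float(np.clip(store + p - e, 0.0, capacity))
            history.append(store)
        return np.mean(history)

    days = 60
    rain = np.zeros(days)
    rain[::10] = 30.0                      # rain arrives as spikes every 10 days
    evap = np.full(days, 1.0)
    evap[::10] += 20.0                     # evaporation spikes; same total demand in both cases

    coincident = mean_soil_moisture(rain, evap)             # demand spikes on the rain days
    offset = mean_soil_moisture(rain, np.roll(evap, 5))     # demand spikes 5 days after the rain

    print(f"mean soil moisture, evaporation spikes on rain days:  {coincident:.1f}")
    print(f"mean soil moisture, evaporation spikes 5 days later:  {offset:.1f}")
    # Identical totals, different timing, different mean moisture.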

    PS I also had an argument with someone over whether or not all these statistics (climate) are real or just an abstraction (as if an abstraction like 2+2=4 would cease to be true?). But of course climate has real effects, as can be seen by the insensitivity of large lakes to short term precipitation/runoff/evaporation variations, or that a tropical rainforest won’t spring up overnight after a warm rainy day (nor will it spring up if all the precipitation came in 1 minute or if the temperature started at -50 and rose to 100 (deg C)), whereas longer term variations will have significant effects on such things.

    Anyway, finding a way to justify the above: The shape (and velocity field) of attractors in phase space are a handy way of thinking of those kinds of issues.

    I wanted to put all that out there because I won’t be able to get back to this for a few weeks at least. (And remember: The stars do affect people. Exhibit A: telescopes. And UFOs do exist – millions of them are in the Amazon rainforest. We’ve narrowed them down to insects, but beyond that…)

    PS thanks for the inline comment in my last post.

    Comment by Patrick 027 — 29 Apr 2008 @ 6:43 PM

    Mark, What is your colleague’s specialty in physics? Certainly not nonlinear dynamics. The naivete in his arguments belies his having looked into climate science (or nonlinear dynamics) in any detail. I think he is purveying organic matter of bovine fabrication, and as such will be reluctant to admit his mistake. Rather than backing him into a corner, the best strategy would be to provide a graceful way for him to back down.

    Comment by Ray Ladbury — 29 Apr 2008 @ 7:19 PM

  75. Most nonlinear systems tend to exist near equilibrium, where their behaviour is nominal and nothing special happens. However, push such a system out of equilibrium and interesting things can happen. In simple one-parameter or two-parameter systems, behaviour is unpredictable and can turn chaotic once the system is pushed far enough from equilibrium.

    The interesting behaviour occurs somewhere in between, but in terms of climate there are many drivers and hence many parameters, and a combination of factors can lead to so-called interesting climate behaviour, although that interesting behaviour could kill a few million of us, in terms of the four horsemen.

    Comment by pete best — 30 Apr 2008 @ 3:40 AM

  76. Re #73, Patrick, your rhetorical question “why didn’t a previous higher level of soil moisture prevent the drought?” is answered by the large scale factors that change the weather patterns. If the small scale factors were dominant, then the authorities should mandate lawn watering during a drought rather than ban it. The current negative NAO is not affected by the wetness of the mid-Atlantic, but could change based on one butterfly in one yard around here. But that change is just “low frequency variability”, by which I believe you mean a small change in timing compared to a large periodicity (with large variations in that periodicity), and it doesn’t matter to the model. Other effects are easy to model (soil moisture from the distribution of rainfall, gust fronts, local SST changes from storms, etc.), given sufficient model resolution. I’m still not sure that there is any tendency to be drawn into any part of the phase space. That seems more likely to be the psychology of weather forecasters (e.g. for storm tracks).

    We know from Ike that the waves coming off of Africa can be parameterized with a 3-5 day period without having to model the mountains and convection that are the ultimate source. However, the pattern changes that promote or inhibit convection on those mountains would have to be fed back into the period and distribution parameters for the waves. But going further, Ike asks if the fraction of energy that organizes itself into hurricanes will remain constant. I think not. The reason is the same as with the convection: measured or submodeled parameterization no longer holds in the new, higher energy environment. In fact the large scale patterns will be quite different, with positive or negative consequences for hurricane formation and strength.

    Likewise I disagree with the distinction that Ike quotes in #53, initial value problems (e.g. forecasting El Nino) versus changes in forcing. The forecast is never necessary in climate models (I realize they are talking about weather forecasts), but the modeling of El Nino as it may change with climate changes is necessary. That modeling is agnostic of any particular conditions (i.e. initial) at any time, but does require fidelity to the effect on El Nino by those conditions, and the effect of the climate model on those conditions. A lack of periodicity or some statistical distribution of periodicity due to amplification of chaotic changes does not invalidate models, but only forces modelers to do more real world measurements and submodels and create dynamic parameters from those.

    Likewise about #62, I don’t think that unpredictability due to chaos needs to be resolved (implicitly) by resorting to energy equations. Yes, energy is increasing, along with water vapor, etc. But the models can have fidelity to those changes by applying appropriate real-world measurements or results from detailed submodels. To me this seems to be a solvable problem.

    Comment by Eric (skeptic) — 30 Apr 2008 @ 8:17 AM

  77. Eric, I think you are missing the point entirely. There are irregular and regular components of the climate system, and there are also thermodynamic limits – the system is open, but that doesn’t mean conservation of energy doesn’t apply.

    Weather predictions of the first kind display sensitive dependence on initial conditions. The predictive ability of a model of nature is not some set quantity, but varies with the system itself and also spatially. Parts of the tropics are far more predictable than mid-latitude regions where frontal systems are mixing – but the limits are anywhere from a few days to a few weeks.
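
    A minimal Python sketch of that sensitive dependence, using the classic Lorenz (1963) system with the standard textbook parameters (a toy illustration of why forecasts of the first kind lose skill, not a weather model):

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One fourth-order Runge-Kutta step of the Lorenz (1963) equations."""
        def f(s):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])     # the "butterfly": one part in a billion

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    # The separation grows roughly exponentially until it saturates at the size of
    # the attractor: detail is lost, but the statistics (the "climate") are not.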

    Climate predictions of the first kind are similar in that they look a bit like ocean circulation forecasts – an impending El Nino or La Nina can be predicted some months in advance, for example. Discussions of the chaotic elements of El Nino have shown up in the normal “dynamical systems” journals: Power-law correlations in the southern-oscillation-index fluctuations characterizing El Niño – M Ausloos, K Ivanova – Physical Review E, 2001 (pdf)

    The ability to forecast El Nino over the short term goes back some time – see Science 1988, Barnett et. al: http://www.sciencemag.org/cgi/content/abstract/241/4862/192

    Three different classes of numerical models successfully predicted the occurrence of the El Nino of 1986-87 at lead times of 3 to 9 months. Although the magnitude and timing of predicted ocean surface temperatures were not perfect, these results suggest that routine prediction of moderate to late El Nino events is feasible.

    For current La Nina & SOI conditions: http://www.bom.gov.au/climate/enso/

    Now, what about climate predictions of the second kind? These are far more general questions, in that if we have a solid understanding of climate, then we should be able to estimate what the climate of any rocky planet with a known atmospheric composition and surface characteristics is. If we find a rocky planet around some distant star, and we know how much sunlight it gets, what its orbit is like, what its atmosphere is made of, and the distribution of landmasses and oceans, can we predict what the basic climate will be like? The answer there is yes – within limits.

    Take the hurricane situation. Can we accurately predict hurricanes a month away right now? Take a look at the surface temperature field on the satellite view of the Atlantic basin right now.

    Compare that to the very similar current sea surface temperature: http://weather.unisys.com/surface/sst.html
    (anomalies are at http://www.osdpd.noaa.gov/PSB/EPS/SST/climo.html but are not so useful).

    The basic element needed for hurricane formation is an SST of at least 26.5C – and the surface layer has to be warm to a depth of some 50 meters. If you look at current conditions from 10 to 30 N latitude, there is no way for a hurricane to form, no matter how many perturbations are introduced.

    However, if we increase SSTs and the depth of the warm mixed layer, we change the basic thermodynamics of the system, and we can predict, that if all other factors are held equal, that the warmer than 26.5C SSTs will appear earlier in the spring, will persist later into the fall, and will have a greater area of extent. We can also predict that the warm surface layer will be deeper – perhaps the most important factor, as that is really what allows a hurricane to reach maximum intensity.
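
    A toy Python sketch of that prediction (the sinusoidal seasonal cycle and the +1 C shift are invented for illustration, not taken from any observation or model): warming the same seasonal cycle uniformly makes the 26.5 C threshold get crossed earlier and stay exceeded longer.

    import numpy as np

    days = np.arange(365)
    seasonal_sst = 26.0 + 2.5 * np.cos(2.0 * np.pi * (days - 240) / 365.0)  # peaks in late summer

    def warm_season(sst, threshold=26.5):
        """Return (first day, last day, number of days) with SST above the threshold."""
        above = np.where(sst > threshold)[0]
        return above.min(), above.max(), above.size

    for warming in (0.0, 1.0):
        first, last, n = warm_season(seasonal_sst + warming)
        print(f"+{warming:.1f} C: threshold first crossed on day {first}, "
              f"last exceeded on day {last}, {n} days above 26.5 C")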

    It may very well be that there will be compensating changes in wind shear, but that’s highly uncertain – and it is very unlikely that such compensations would be constant. Thus chances are higher for stronger hurricanes in a warmer world.

    Unfortunately, Gray & Klotzbach at Colorado are still claiming that current trends are all due to a “positive phase of the AMO” which is resulting in more rapid northward spread of warm water – implying that we will see a cooling trend in SSTs “some 5-20 years from now”. That is basically nonsense, regardless of any AMO effect, but that’s what their latest hurricane predictions claim, as covered at ScienceDaily:

    Current conditions in the Atlantic basin are quite favorable for an active hurricane season. The current sea surface temperature pattern in the Atlantic – prevalent in most years since 1995 – is a pattern typically observed before very active seasons. Warm sea surface temperatures are likely to continue being present in the tropical and North Atlantic during 2008 because of a positive phase of the Atlantic Multidecadal Oscillation (AMO).

    There is some evidence of this AMO in climate models…
    http://www.agu.org/pubs/crossref/2006…/2006GL026242.shtml

    Here, we seek evidence of these links in the 1400 year control simulation of the HadCM3 climate model, which produces a realistic long-lived AMO as part of its internal climate variability. By permitting the analysis of more AMO cycles than are present in observations, we find that the model confirms the association of the AMO with almost all of the above phenomena. This has implications for the predictability of regional climate.

    So. . . the models are showing that the AMO is actually real and is having an influence – I hear applause from Pielke and Gray already – except that they say that the models are bogus and can’t be trusted, and that they have no “predictive skill.” – and this paper relies heavily on model runs as “calculated observations” – a procedure that should have the skeptics screaming bloody murder, I think.

    The AMO and its cousin the PDO are recently recognized “oscillations” (with periods of 5-20 years… on the basis of one century’s worth of time series analysis data? A “periodic oscillation” – with an apparently inexplicable oceanic mechanism driving it?) – but let’s look at how HadCM3 deals with El Nino – El Ninos show up, but frequency and timing are off, indicating some degree of sensitive dependence:

    http://www.agu.org/pubs/crossref/2007/2007JD008705.shtml

    One claim that you will see often is this: “The El Niño-Southern Oscillation (ENSO) phenomena are the largest natural interannual climate fluctuation in the coupled ocean-atmosphere system. Warm (El Niño) and cold (La Niña) ENSO events occur quasi-periodically.”

    Do they really? (Quasi-periodic functions are finite combinations of periodic functions.) Or is the incidence of El Nino events chaotic on the scale of years? Could El Ninos be triggered by whales, in other words? This is probably one of the more interesting questions in ocean science today – what is the scale of predictability for ocean circulation?

    The point is that in making any prediction of weather or climate, one has to be aware of whether it is a weather prediction of the first kind, a climate prediction of the first kind, or a climate prediction of the second kind. Climate predictions of the second kind do not appear to display sensitive dependence on initial conditions. Any discussion of climate predictions thus has to involve at least some understanding of this whole topic.

    Climate predictions, in this language, consist of efforts to predict how the regular components of the land-ocean fluid dynamic system will change due to changes in the overall structure of the system itself – changes in atmospheric composition, changes in ice cover, changes in forest cover, and so on.

    What skeptics appear to be doing is trying to muddy the waters by claiming that there are no regular or predictable components of the system over the long run – which is clearly not the case.

    Comment by Ike Solem — 30 Apr 2008 @ 11:49 AM

    Ray, (#74) well, my colleague’s PhD thesis title is “On Nonlinear Time Series Analysis”, and he has published on complexity of multichannel EEGs, and nonlinear time series analysis of sunspots. He certainly knows very little about climate science, and you are correct that giving him a graceful way out is more likely to be effective than direct confrontation.

    Thanks, Raypierre and David Benson for reading suggestions.

    Comment by Mark — 30 Apr 2008 @ 12:19 PM

  79. Mark (78) — I’ll also suggest “The Discovery of Global Warming”,

    http://www.aip.org/history/climate/index.html

    hoping that your colleague will take the time to read it.

    Comment by David B. Benson — 30 Apr 2008 @ 2:07 PM

  80. During all this discussion I’m not sure anyone asked whether the actual climate system is in fact chaotic, in the sense that it has at least one positive Lyapunov exponent. I have always been struck by a comment I once heard a pretty good numerical hydrodynamicist make. He did Galerkin simulations. With say N=N1 Galerkin modes at Reynolds number R=R1 he would see exponential divergence of nearby trajectories (chaos). Holding R fixed but increasing the number of Galerkin modes (from N1 to say N2) resulted in behavior that did not display exponential divergence of nearby trajectories. Increasing the Reynolds number from R1 to R2 would result in exponential divergence with N=N2 and so on. I have no idea if this stuff was ever published. I’ve never really known what to make of it. It isn’t clear that the higher dimensional but nonchaotic system is even more predictable than the lower dimensional chaotic one.
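
    For what it’s worth, here is a minimal Python sketch of how a largest Lyapunov exponent is estimated numerically; it uses the logistic map rather than a Galerkin fluid code (my own choice, since for r = 4 the exact answer, ln 2, gives a check on the numerics). The idea is to average the log of the local stretching rate along a long trajectory; a positive result indicates chaos.

    import numpy as np

    def lyapunov_logistic(r, n_transient=1000, n_sample=200000, x0=0.3):
        """Estimate the (only) Lyapunov exponent of the map x -> r*x*(1-x)."""
        x = x0
        for _ in range(n_transient):                    # let the orbit settle onto its attractor
            x = r * x * (1.0 - x)
        total = 0.0
        for _ in range(n_sample):
            x = r * x * (1.0 - x)
            total += np.log(abs(r * (1.0 - 2.0 * x)))   # log of the local stretching |f'(x)|
        return total / n_sample

    for r in (3.2, 3.5, 3.9, 4.0):
        lam = lyapunov_logistic(r)
        verdict = "chaotic" if lam > 0 else "not chaotic"
        print(f"r = {r}: estimated exponent {lam:+.3f}   ({verdict})")
    print(f"exact value at r = 4 is ln 2 = {np.log(2.0):.3f}")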

    Re 63 and 78: I occasionally fancy myself a dynamicist of sorts but I sure as hell wouldn’t be dissing these climate fellers for their lack of understanding of dynamics.

    Comment by John E. Pearson — 30 Apr 2008 @ 5:01 PM

  81. Re #77, Ike, there’s 26.5C water right now, but no hurricanes (the charts don’t show the depth of the warm water). In the other links, El Ninos are predicted, where possible, from related periodic factors. These oscillations may bounce between states without any particular periodicity, but it will always be an ocean weather (and related climate) prediction of the first kind. As for the second kind of prediction, I believe that’s mainly an abstraction, rather than a reality. One reason is that the second kind of prediction depends on accurate modeling (not predictions) of the first kind of effects. Without those models, the first kind of effects have to be parameterized, preventing them from changing as the climate changes. We can model El Nino in today’s climate, but what about a warmer climate?

    I don’t think a planet with water and land in a particular configuration and known atmospheric composition can have a predicted temperature range to any useful precision without a sufficient model of the first kind of effects. Earth’s history shows as much. Water is the problem: predictions can only be as good as the prediction of the distribution of water vapor in all dimensions, and of clouds. In current global models these are heavily parameterized based on detailed models and real world measurements to get decent accuracy. The good news for all concerned is that the fidelity and computer power (model resolution) are constantly increasing and will make this whole discussion moot.

    Comment by Eric (skeptic) — 30 Apr 2008 @ 7:50 PM

  82. Lorenz was certainly a very smart guy. However, for most people his “butterfly effect” creates the wrong impression. It implies the butterfly *causes* the tornado, which would be remarkable if true. But what if there are a thousand butterflies all in the same area? Which one causes the tornado?

    You can try and run a kind of counterfactual argument, whereby you say something like: “on all trajectories where the butterfly does not flap his wings, but all other boundary/initial conditions are the same, the tornado doesn’t happen; on all trajectories where the butterfly flaps his wings, but all other boundary/initial conditions are the same, the tornado does happen”, but that is unlikely to be true.

    The best you can probably say is: “if the butterfly flaps his wings exactly *this* way (for some precise specification of “this”), and all other boundary/initial conditions are equal, the tornado happens; if the butterfly does not flap his wings exactly *that* way (for some precise definition of “that”), the tornado does not happen”. But this is just comparing two single trajectories – it is a very weak notion of causality. For if this is what we mean by the “butterfly causing the tornado”, then we can equally find any other pair of trajectories, one with a tornado and one without that differ by a single event and claim that it is that event that causes the tornado. In other words, by this definition of causality, any number of other butterfly wing flaps “cause” the tornado, as do an uncountable number of other events ranging from my choice to eat cereal instead of toast for breakfast through to the individual decisions made by the cockroaches on my street.

    Most people take from Lorenz’s analogy that the butterfly causes the tornado. But on any sensible definition of causality it does not, hence I think the analogy does more harm than good.

    Comment by Jonathan Baxter — 30 Apr 2008 @ 8:06 PM

  83. Mark (#70), your colleague is correct, but only half-way. While there are all indications that the climate is indeed an “aperiodic nonlinear system”, it should be possible to quantify the extent of human contribution. But to do so, one needs a correct model of natural climate variations first. It is quite obvious that during 99.9% of available climate records (from various sediments and ice cores) there was no human contribution, because industry simply didn’t exist. These are purely natural variations. Yet climatology is quite short of delivering a model of the natural long ice ages and fast deglaciations. Given the absence of such a model of natural variations, the claim of anthropogenic influence on the current state of the climate has no scientific foundation so far.

    [Response: This is equivalent to arguing that since we don’t know the precise cause of every wildfire throughout history, we can’t convict an arsonist whose actions were caught on tape. It’s a complete fallacy. – gavin]

    Comment by Al Tekhasski — 1 May 2008 @ 12:51 AM

  84. Roger (#64):
    Part of your final construction reads: “However, this kinetic energy is dispersed over progressively larger and larger volumes such that it will quickly dissipate into heat as the magnitude of the disturbance to the flow at any single location becomes smaller”. This assertion fails to take into account the general and well documented effect of hydrodynamic instability, when, under certain (even uniform) stress conditions above a certain threshold of forcing, generic infinitesimal perturbations grow into a macroscopic global flow pattern by _drawing_ their energy from an otherwise smooth velocity (and/or temperature) field. If the flow is far from being stable and is already turbulent, there are several known effects of “coherent structures”, whose appearance resembles the process of the original instability in many respects.

    Comment by Al Tekhasski — 1 May 2008 @ 1:12 AM

  85. The original Saltzman equation system of 1962 does not account for the dissipation of fluid motions into thermal energy; see Equation (4) of the paper. The same approximation obtains for the Lorenz system of 1963. The original Saltzman 7-equation system does not produce chaotic response for the range of parameters and initial conditions investigated in the paper. Saltzman used more modes in the expansions, and then reduced these to 7 for calculations. Lorenz found a subset of these 7 equations and a range of parameters for which he investigated chaotic response. So far as I am aware, no Lorenz-like modeling approach has included viscous dissipation in the thermal energy balance model.

    Both the Saltzman and Lorenz systems are zeroth-order models for the onset of motion for a fluid contained between horizontal planes; the Rayleigh-Benard problem is a convenient designation. Similar systems of equations are obtained also for flows in closed loops; thermosyphons, also here. The fluid motions for the original Saltzman and Lorenz problem are driven by a temperature difference between the planes bounding the flow. The Boussinesq approximation is used in the momentum-balance model for the vertical direction.

    The temperature difference between the planes is required to be maintained in order for the flow to continue. In the presence of frictional losses, all fluid motions require that energy be constantly added to the system for the flow to be maintained. Likewise, in a very rough analogy with small perturbations and turbulence, a driving force is required in order for the small-scale motions to be maintained. In many flows, the driving potential is provided through the shear in the mean flow by the power supplied to maintain the mean flow.

    Periodic motions, not chaotic motions, have been observed for the Lorenz system for some values of the parameters appearing in the system; see Tritton, Physical Fluid Dynamics, for example, and the closer analyses such as that linked above. It is equally well-known that chaotic response can be obtained due solely to inappropriate numerical methods applied to PDEs and ODEs that cannot produce chaotic response in their solutions. When numerical solution methods are important aspects of any analysis, the effects of these methods on the calculated numbers require deep investigations so as to eliminate spurious effects that are solely products of the numerical methods.
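
    A standard textbook illustration of that warning, in Python (this example is mine, not from the Saltzman or Lorenz papers): forward Euler applied to dx/dt = -lambda*x with too large a time step produces a growing, oscillating “solution”, even though the exact solution is a smooth exponential decay. The erratic behaviour is purely an artifact of the numerics.

    import numpy as np

    lam = 10.0                     # decay rate; the exact solution is x(t) = exp(-lam * t)

    def euler_final_value(dt, t_end=2.0, x0=1.0):
        """Integrate dx/dt = -lam*x with forward Euler and return x(t_end)."""
        x = x0
        for _ in range(int(round(t_end / dt))):
            x = x + dt * (-lam * x)
        return x

    for dt in (0.01, 0.05, 0.25, 0.4):
        amplification = abs(1.0 - lam * dt)        # per-step factor for forward Euler
        label = "stable" if amplification < 1.0 else "numerically unstable"
        print(f"dt = {dt:4.2f}: x(2) = {euler_final_value(dt):+.3e}   ({label})")

    print(f"exact x(2) = {np.exp(-lam * 2.0):.3e}")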

    Changes in the parameters in the model system produce the same kinds of effects that are observed when the initial conditions are changed. There are some ranges of the parameters that show neither sensitivity to initial conditions nor chaotic response. The focus on sensitivity to initial conditions and chaotic response seems to always miss the fact that simply invoking ‘the Lorenz model’ is not sufficient to ensure that chaotic response is obtained.

    Dynamical systems theory predicts that the differences between the dependent variables will grow exponentially for two different values of initial conditions, no matter how small the initial difference. Statements about the time required for a perturbation to be effective over various ranges of temporal and spatial extent in the physical world should be addressed from the viewpoint of physical phenomena and processes; not by what a few calculations using a model/code indicate. Again, aphysical properties of calculated fluid flows can frequently be traced to improper numerical solution methods and the implementation of these into computer software.

    The demonstration linked to in the original post for this thread led to the introduction of more questions than answers. The effects of the size of the discrete time interval were an issue. As were the effects of the numerical solution methods. A comment over there mentions that the calculated response might in fact be controlled by the model equations and numerical solution method, in contrast to chaotic response. Additional discussions of step-size effects have been presented here for both NWP and GCMs. The presence, or absence, of chaotic response in the model/code cannot be ascertained by use of a few calculations. The possibility of causality linked to the model equations and/or numerical solution method should be determined first. The mere possible existence of the potential for chaotic response is not sufficient for concluding chaotic response.

    A demonstration by a calculation of an idealized model equation system never says anything about the actual response of the modeled world; especially when as in the case of GCMs the model equations are acknowledged to be approximations and simplifications of the complete fundamental equations. A calculation by a GCM demonstrates the properties of that GCM for that calculation and can provide no information relative to the chaotic response of the real world climate.

    Neither sensitivity to initial conditions nor non-linearity nor complexity, either separately or all together, provides necessary and sufficient conditions for an a priori determination of chaotic response. Analyses of the complete system of continuous equations, plus careful consideration of numerical solution aspects, combined with analyses of calculated results, are all necessary for determination of chaotic response. A model of a complex physical system, comprised of a system of nonlinear equations, the numerical solutions of which show sensitivity to initial conditions, and the calculated output from which ‘looks random’, does not even mean that the calculation exhibits chaotic response. ‘Looks random’ in itself is not a description that is consistent with chaotic response. And the properties and characteristics of the calculated results cannot ever be attributed to be properties and characteristics of the modeled physical system.

    Heuristic appeals to Lyapunov exponents should be verified by calculations of the numerical values of these numbers. So far as I know that has yet to be carried out for any GCM model equations. As the numerical values of the exponents generally require use of numerical methods, convergence of these methods for the exponents must also be demonstrated. The results of application of numerical solution methods are well known to be very capable of producing results that appear to have the same characteristics as positive Lyapunov exponents.

    There are many papers dealing with generalization of the Lorenz model equations, primarily by including more modes in the expansions. Three recent papers by Roy and Musielak (here, here, and here) provide a good review and summary of these investigations. The authors rightly conclude that some of the generalizations are not properly related to the original Lorenz model systems. As noted by Roy and Musielak, two important characteristics of the continuous Lorenz equations are that (1) they conserve energy in the limit of no viscosity (energy conserving in the dissipationless limit) and (2) the systems have solutions that are not unbounded. Some generalizations of the Lorenz model system that do not conform to these requirements show routes to chaos that are different from those for the Lorenz system. Is it clear that the continuous equations, expansions, and associated numerical solution methods used in NWP and GCMs are consistent with the original basis of the Lorenz-like models?

    It would seem that in order to say that GCMs are producing chaotic response in the sense of the Lorenz model system, the continuous equations should be investigated to ensure that each complete continuous system of model equations has the properties of the original Lorenz system. Hasn’t yet been done as far as I know.

    It seems to me that the hypothesis that GCM calculations are demonstrating chaotic response is not well founded. Especially whenever the original Lorenz model system is invoked as the template for chaotic response.

    As noted in response to Comment 39 and Comment 56 above, the analytical properties of equation systems comprised of PDEs plus ODEs plus algebraic model equations relative to chaotic response are for all practical purposes unknown (my summary). I think there is a proof of the existence of the Lorenz attractor by Warwick Tucker, dating from about 2000.

    I think maybe Comment #45 above is referring to papers and reports by D. Orrell on model error vs. chaotic response of complex dynamical systems.

    All corrections to the above will be appreciated.

    [Response: All GCMs as written can be considered to be deterministic dynamical systems. All of them display extreme sensitivity to initial conditions. All of them have positive Lyapunov exponents (though I agree it would be interesting to do a formal comparison across models of what those exponents are). All are therefore chaotic. Your restriction of the term chaotic to only continuous PDEs where it can be demonstrated analytically is way too restrictive and excludes all natural systems – thus it is not particularly useful. Since the main consequence of the empirical determination that the models are chaotic is that we need to use ensembles, I don’t see how any of your points make any practical difference. The GCMs can be thought of as the sum total of their underlying equations, their discretisation and the libraries used (and that will always be the case). It is certainly conceivable, nay obvious, that these dynamical systems are not exactly the same as the one in the real world – which is why we spend so much time on evaluating their responses and comparing that to the real world. But as a practical matter, the distinction (since it can never be eliminated) doesn’t play much of a role. – gavin]

    Comment by Dan Hughes — 1 May 2008 @ 6:30 AM

  86. Will you please take a look at this curious blog by an Italian negationist?
    http://omniclimate.wordpress.com/2008/04/24/realclimate-raises-the-bar-against-climate-models/
    It says “your” models are unfalsifiable, and therefore not science at all. So, “climate change” is an entity that can only become observable in the long, long term. And since there is little concern for the “specific trajectory”, there literally exists NO possible short-term set of observations that can falsify the climate models.
    Oh, and he ends with: “In further irony, the above pairs up perfectly well with RC’s ‘comments policy’ that can be summarized more or less into ‘we will censor everything we do not like’.”
    Maybe a short answer will silence him. Once and for all, although I doubt it.
    Thanks

    [Response: I doubt it too. But really, what is so hard with the concept of short term noise and long term signal? Does a single toss of a coin that lands on heads mean the coin is unfair? No. How about 2 heads in a row? No. However, if you get a really long series of heads the chances that it is fair coin become smaller. The same is true for models – a head here or a tail there are interesting but do not determine the long term accuracy of the model. It is the long term trends that do. I apologise if that’s inconvenient for people who think that one La Niña event implies the onset of a new ice age, but them’s the breaks. – gavin]
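
    A minimal Python sketch of the coin analogy (my own illustration; the 16-out-of-20 example at the end is hypothetical, not a real climate statistic): the probability that a fair coin produces a long run of heads, or a strongly lopsided count, shrinks rapidly with the length of the record, which is why short records say little and long records say a lot.

    from math import comb

    def prob_all_heads(n):
        """Probability that a fair coin gives n heads in n tosses."""
        return 0.5 ** n

    def prob_at_least(k, n):
        """Probability that a fair coin gives at least k heads in n tosses."""
        return sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n

    for n in (1, 2, 5, 10, 20):
        print(f"{n:2d} heads in a row from a fair coin: p = {prob_all_heads(n):.6f}")

    # The same logic applies to a lopsided but not perfect record: for example,
    # 16 or more heads out of 20 tosses is already a long shot for a fair coin.
    print(f"P(at least 16 heads in 20 tosses) = {prob_at_least(16, 20):.5f}")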

    Comment by Marco Ferrari — 1 May 2008 @ 7:23 AM

  87. Dan (#85): Your introductory premise is incorrect. The original Saltzman equation system was derived from an approximation of the Navier-Stokes system with finite viscosity, see Eq. (1)-(3), which means that the dissipation is accounted for, and therefore the resulting truncation represents a dissipative system.

    Comment by Al Tekhasski — 1 May 2008 @ 9:24 AM

  88. Gavin,

    In regards to your reply in #86, a long series of heads is NOT a sign of an unfair coin in itself. You have an equal chance of getting a sequence of heads as you do any other sequence. Reminds me of the Richard Feynman quote “You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”

    [Response: Agreed for the sequence, but I’m only interested in the distribution (the ratio of heads to tails). – gavin]

    Comment by Dave Blair — 1 May 2008 @ 12:30 PM

  89. Gavin, regarding your comment to #83: Isn’t it true that the dominant cause of wildfires is natural, from lightning? Then, if your goal is to restrict global fires, convicting one or two arsonists would accomplish nothing, correct? Also, who would you like to convict for wildfires a million years ago, some lemurs?

    Comment by Al Tekhasski — 1 May 2008 @ 6:12 PM

  90. There is a simple way to settle the falsifiability issue. Could anybody at RC please post a blog entry clearly stating what would falsify the climate models? Say (just by way of example) “if temperatures are cooler than today’s in 2020” or “if there is a sustained negative trend over the course of 25 years”. Those statements are simplistic: I am sure you can come up with something more sophisticated.

    Alternatively, if such a clear-cut answer has already been the topic of one of your blogs, could you please provide the link. thanks in advance.

    Comment by Maurizio Morabito — 1 May 2008 @ 7:11 PM

  91. The idea of a climate model being falsifiable strikes me as naive. The statement that “climate sensitivity to CO2 doubling is 3 degrees C per doubling” is certainly falsifiable. However, the evidence currently favors the proposition, and even if it were falsified, the climate model would persist more or less in the same form with a different sensitivity. New forcers might be discovered, but that would not change the form of the climate model dramatically.

    There is more to the philosophy of science than Karl Popper, folks.

    The hypothesis that humans are behind the current warming epoch is certainly falsifiable–all you have to do is 1)show that all the independent lines of evidence constraining CO2 sensitivity are wrong; 2)come up with a model that explains the current warming without this sensitivity; and 3)show why physics is wrong and somehow the greenhouse contribution stops magically at 280 ppmv. And do let us know how you progress with that.

    Comment by Ray Ladbury — 1 May 2008 @ 8:22 PM

  92. Which particular model are you talking about, Maurizio?
    Or are you talking about the known physics?
    Or the forcings?

    You can fix a problem in a component by understanding it, rather than deciding the entire engine is unusable because something is wrong with how it works.

    http://www.google.com/search?q=%22falsify+the+climate+models%22

    Comment by Hank Roberts — 1 May 2008 @ 8:44 PM

    Re: #89 Al Tekhasski wrote: “Isn’t it true that the dominant cause of wildfires is natural, from lightning?”

    In the United States, somewhere between 2 and 10 percent of wildfires are caused by lightning. The vast majority of wildfires are due to human causes, intentional or accidental. In some places, coastal California for example, lightning is relatively uncommon.

    http://www.bellmuseum.org/wildfire.html

    Comment by Jim Eaton — 1 May 2008 @ 11:35 PM

  94. #86 The models in general are extraordinary in precision, until chaos, as per Lorenz butterflies, takes over. I defy anyone who thinks otherwise to come up with a 4 day forecast having accurate temperatures for, say, 1000 locations simultaneously… Or for someone to come up with a better long range climate temperature projection than those from NASA GISS or other proven models. What some people call model “errors” are simply mathematical interpretation or application mistakes, one of which should be corrected in the near future. I give three letters, DWT, as a clue; whole atmosphere temperature trends work. For those who want to see proof, check out my website….

    Comment by wayne davidson — 2 May 2008 @ 8:07 AM

  95. Re #90: Somebody posted two ways to falsify the climate models on your web site.

    (1) Perform a lab experiment with the result that CO2 doesn’t absorb/emit the way the climate models assume.

    (2) Perform a lab experiment in which fluid flow is observed that contradicts the Navier-Stokes equations.

    I could go on and on. It’s trivial to come up with experiments that could falsify the climate models in the short term. It isn’t the fault of the climate models that they’re based on very well established physics.

    Comment by John E. Pearson — 2 May 2008 @ 8:28 AM

  96. Regarding #91 and #92: well folks it was the RC people that spoke in this blog about “predictability” and finding the “common signal that is a signature of a particular forcing”. Maybe that’s where the falsifiability lies. It’s not just a matter of changing a parameter’s value though, as it could very well be that not all parameters, and not all physical relationships have been identified yet.

    I am sure Gavin is clever enough to know that there must be a prediction made, and a measurement done regarding that prediction, otherwise it’s not “science”, it’s useless.

    Please don’t be “more loyalist than the King”…

    Comment by Maurizio Morabito — 2 May 2008 @ 8:52 AM

  97. Eric says: “One reason is that the second kind of prediction depends on accurate modeling (not predictions) of the first kind of effects.”

    That’s like saying that a prediction that June in the U.S. will be warmer than February depends on accurate modeling of atmospheric and oceanic weather systems in February! Obviously it does not.

    Breaking the problem into atmospheric weather forecasts of the first kind, ocean weather forecasts of the first kind, and climate forecasts of the second kind seems reasonable when discussing the overall ocean-atmosphere-land-ice fluid dynamics system. More generally, the overall system has regular and irregular components, and the regular components are predictable on far longer timescales than the irregular components.

    The big uncertainty here is carbon cycle feedbacks, which require an understanding of the biosphere and cryosphere responses – and that requires an ability to predict future carbon emission and uptake by soils, the biosphere, and oceans – and also by people.

    At this point, it seems fair to say that the greatest uncertainty in predictions of future climate change is now future human behavior.

    P.S. Regarding “causality” it helps to think about trying to recover the past behavior of a system from a measurement of its current state. This is really the paleoclimate perspective – and it turns out that the present state of the system isn’t of much use in recovering a record of its past state – for that, paleoclimate scientists turn to sediment records, ice core records, and the like. This is the other side of “sensitive dependence on initial conditions.” What does the Euclidean “proof” mean in such a situation?

    For example, if we take the current state of the atmosphere, can we say what it was like yesterday based on that alone? Only to a point, and the backwards limit dies out as fast as the forward one does. During the 18th and 19th century, many people believed that perfect knowledge of the universe would allow one to predict the future and know the past, completely. That notion is now only of historical interest. People are still working out the “philosophical implications” – meaning that there are a lot of people running around with 18th-19th century physical models of the world in their heads, unfortunately. Sad to say, most courses in differential equations at colleges today are largely taught from that archaic perspective.

    Comment by Ike Solem — 2 May 2008 @ 9:36 AM

  98. Maurizio, Why did your reading of philosophy of science stop with Popper in the mid 1900s? GCMs are complicated entities with many interacting parts. Yes you could try to falsify any one of those parts, but the new model will look very much like the old one–with perhaps slightly different parameters.

    You say that perhaps “not all parameters, and not all physical relationships have been identified yet.”

    Yes, and there could be a pink unicorn in my car trunk when I go to the parking lot today as well. Do you have a concrete suggestion for a missing parameter or relationship? If so, propose it. If not, then I’m not sure what your admonition adds to the discussion.

    So, let’s see, you don’t understand GCMs or philosophy of science. What else do you not understand…?

    Comment by Ray Ladbury — 2 May 2008 @ 9:50 AM

  99. Maurizio,

    I think your comment is an example of popular misconceptions about basic scientific issues. If you wanted a prediction, and went to your relevant expert, and you were told that your system was actually not predictable, or had a small degree of predictability, wouldn’t that be valuable information about the system?

    You seem to be claiming that studies of probability are not scientific, in other words? If so, you would really be saying that most of 20th century science is not actually science!

    What a certain class of skeptics are trying to do is to claim that since one cannot predict the irregular components of the climate system, one can’t predict the regular components either. This is a highly ridiculous claim, but an apparent lack of scientific knowledge among the general public allows them to get away with it.

    It’s true that some systems do have such a large irregular component that they are essentially random over any but the shorter timescale – “econometric” predictions really do seem useless, for example – but climate is clearly predictable. It’s also clear that the composition of the atmosphere affects the climate, and it’s also clear that we’ve added a lot of heat-trapping gases to the atmosphere, mainly from fossil fuel combustion, so yes, we are changing the climate, slowly on the human timescale, but incredibly quickly on the geologic timescale.

    Comment by Ike Solem — 2 May 2008 @ 9:57 AM

  100. From an outside observer’s perspective, climate science seems interesting in that you can make lots of predictions, but the time scale implies that validation extends over one’s career. I suppose it requires some patience…

    Comment by Rob Andre — 2 May 2008 @ 4:13 PM

  101. Re: #101 As a matter of fact, if you search on PubMed http://www.ncbi.nlm.nih.gov/pubmed/ there are four articles with my name. In two of them I appear as first author. And no, they are not first-rate, earth-shattering Science or Nature articles about climate science.

    But as we all agree now, that’s beside the point.

    Let me start again from a simple question. Hansen et al. did compare model results to observations.

    “Climate simulations for 1880–2003 with GISS modelE”, Clim Dyn (2007) 29:661–696 – DOI 10.1007/s00382-007-0255-8

    For example, consider fig. 9 (the PDF of the article is on the internet, apologies but I do not have time to search for it right now):

    “Fig. 9 Global maps of temperature change in observations (top row)
    and in the model runs of Fig. 8, for 1880–2003 and several
    subperiods. […]”

    Observations there are shown in periods respectively of 124 years (1880-2003), 54 years, 61 years, 40 years and finally 25 years (1979-2003).

    Presumably, this provides a first approximation of what time spans are needed to talk about climate (around 25 years). The actual shortest period may be 40 years or longer, as 1979-2003 was chosen primarily as “the era of extensive satellite observations”. Please correct me if I am wrong.

    Let’s now take a clear-cut example. The authors write: “All forcings together yield a global mean warming ~0.1C less than observed for the full period 1880–2003.” And that’s a remarkable result.

    But…may I ask this rather elementary question: say, if the global mean warming yielded by all forcings together had been much less, or much more than observed, what would have been the (absolute) threshold above which the climate simulations would have been declared a failure?

    Or has this question no meaning either? If not, why not?

    Once again, I am consciously simplifying things here but this is a blog…more a brainstorming session than a week-long workshop.

    Comment by Maurizio Morabito — 2 May 2008 @ 4:40 PM

  102. > this rather elementary question: say, if the global mean warming
    > yielded by all forcings together had been [grossly different]

    Well, you can’t go back earlier than Arrhenius’s work and he wasn’t far off the current numbers. So you’re into fantasy there.
    http://www.aip.org/history/climate/co2.htm

    You’re saying that in some universe, the original work would have been so far from fact that

    > the climate simulations would have been declared a failure

    About all we can say is, the first estimate in the field was already close enough that people found it interesting to investigate.

    People will investigate blind alleys — cold fusion, perhaps; Bode’s Law, perhaps; epicycles, certainly. But you’ve got to be “not even wrong” to get people to swear off thinking about an idea.

    It does happen.

    Comment by Hank Roberts — 2 May 2008 @ 5:09 PM

  103. Maurizio Morabito (103) — It is necessary to move beyond notions of ‘falsifiability’ à la Karl Popper. Here, some form of informal or formal Bayesian reasoning is required.

    http://en.wikipedia.org/wiki/Bayes_factor

    is one place to begin exploring the concepts.
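
    As a toy illustration of what a Bayes factor looks like in this context (an added sketch with made-up numbers, not part of the original comment; Python with numpy and scipy assumed): for two simple point hypotheses about a warming trend, the Bayes factor is just the ratio of the likelihoods of the data under each hypothesis.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      years = np.arange(30)
      true_trend = 0.02        # deg C per year -- an assumed value, for illustration only
      noise_sd = 0.1           # assumed interannual "weather" noise, deg C
      data = true_trend * years + noise_sd * rng.standard_normal(years.size)

      def log_likelihood(trend):
          # log-probability of the data if the underlying trend were `trend`
          return norm.logpdf(data, loc=trend * years, scale=noise_sd).sum()

      # Bayes factor for H1 (0.02 C/yr trend) versus H0 (no trend); for simple
      # point hypotheses this reduces to the likelihood ratio.
      log10_bf = (log_likelihood(0.02) - log_likelihood(0.0)) / np.log(10)
      print("log10 Bayes factor, H1 vs H0:", round(log10_bf, 1))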

    Comment by David B. Benson — 2 May 2008 @ 5:56 PM

  104. Maurizio, since we are talking climate here, the longer the period, the greater the signal-to-noise ratio. However, one can model the noise as well as the signal and look at how often fluctuations produce the given result – or how often noise alone would be expected to produce the signal.
    This is purely a swag, but I suspect that if predictions were consistently off by more than half a degree, the models would be judged incomplete. Note that this is not really Popperian falsification – that really doesn’t apply here. It is pretty certain that the basic physical processes are correct. Climate science is really pretty mature. Yes, there is much still to learn, but that does not diminish the importance of, or the confidence in, what is already known (e.g. CO2 sensitivity).
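
    A back-of-the-envelope version of the noise test described above (an added sketch with assumed numbers, not Ray Ladbury’s actual analysis; Python with numpy assumed): generate many realizations of red “weather” noise and count how often noise alone produces a 30-year trend as large as an assumed observed one.

      import numpy as np

      rng = np.random.default_rng(1)
      n_years, n_trials = 30, 20000
      phi, innov_sd = 0.5, 0.12      # assumed AR(1) persistence and innovation std (deg C)
      observed_trend = 0.017         # assumed observed trend, deg C per year

      years = np.arange(n_years)
      hits = 0
      for _ in range(n_trials):
          noise = np.zeros(n_years)
          for t in range(1, n_years):
              noise[t] = phi * noise[t - 1] + innov_sd * rng.standard_normal()
          slope = np.polyfit(years, noise, 1)[0]   # least-squares trend of noise alone
          if abs(slope) >= observed_trend:
              hits += 1

      print(f"noise alone matched the assumed trend in {100 * hits / n_trials:.2f}% of trials")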

    Comment by Ray Ladbury — 2 May 2008 @ 6:18 PM

  105. #86 Gavin: There should be a post dedicated to hit-and-run journalism. They take a swipe at AGW theory, run for cover, and then reappear at the next cold spell. Here are a few stellar examples:

    Forget global warming: Welcome to the new Ice Age

    http://www.nationalpost.com/opinion/columnists/story.html?id=332289

    Is Winter 2008 Making Climate Alarmists Question Global Warming …

    newsbusters.org/blogs/noel-sheppard/2008/03/02/winter-2008-making-climate-alarmists-question-global-warming

    Prepare for the Next Ice Age «

    michaelscomments.wordpress.com/2008/04/25/prepare-for-the-next-ice-age/

    Now that 2008 is turning hot, we should be glad to have a moment of respite from contrarian propaganda…..

    Comment by wayne davidson — 3 May 2008 @ 12:23 AM

  106. Re: 106

    Thank you, Ray. I can see some progress there (“incomplete” models rather than “wrong”). Perhaps some esteemed epistemologist will one day write a post on RC on how climate modelling differs, say, from cosmology. Or doesn’t: either way, there’d be something to learn.

    Thoughts spring to mind of classical physics being “really pretty mature” at the end of the 19th century, apart from the “noise” called “black-body radiation”… only for quantum physics to be discovered.

    Comment by Maurizio Morabito — 3 May 2008 @ 7:45 AM

  107. Maurizio, your reference to classical physics shows that you have an incomplete understanding – of the transition between classical and quantum/relativistic physics and of how science works. The failures of classical physics came when scientists tried to apply it to realms well outside those in which it developed – the very small for the quantum revolution and near-light-speed for relativity. Nothing like that is happening here. What is more, you will note that classical physics did not go away. Nobody solves the Schroedinger equation for an airplane or invokes relativistic foreshortening for a falling rock. In fact, the successes of classical physics required that the new theories look very much like the old – a requirement Niels Bohr formalized in the Correspondence Principle.
    So, even if you were to see a “new theory” of climate, it would look very much like the old–and since the old has been very successful and is not failing, I rather doubt you’ll see a new theory of climate.

    Comment by Ray Ladbury — 3 May 2008 @ 1:05 PM

  108. re: #87

    Yes, the momentum equations are based on a viscous fluid. I did not say that they were not.

    The energy equation, however, does not contain the term that accounts for dissipation of fluid motions into thermal energy. See Equation (4) as I noted in my comment.

    That is, I did not say the model was based on the Euler equations. My original statement is correct as it stands. If it is not, kindly point to the specific term in the energy equation model that accounts for the dissipation into thermal energy.

    [Response: There are other forms of dissipation besides that one dissipative term, and the Saltzman equations include dissipation. This is a pointless argument. Systems like Lorenz/Saltzman are without question and obviously dissipative in the sense used in dynamical systems. Otherwise they wouldn’t have attractors! A Hamiltonian system conserves volume in phase space, which means you can’t contract down onto an attractor. This is the whole difference between Hamiltonian chaos (treatable for many degrees of freedom by standard thermodynamic principles) and dissipative chaos, which is orders of magnitude harder to get a grip on by statistical methods. –raypierre]
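
    To make the phase-space volume argument concrete (an added sketch, not part of raypierre’s response; Python with numpy assumed): the divergence of the Lorenz-63 flow is a negative constant, so any small phase-space volume contracts exponentially, which is what allows an attractor, whereas a Hamiltonian flow has zero divergence and conserves volume.

      import numpy as np

      def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      def divergence(f, s, eps=1e-6):
          # numerical trace of the Jacobian of the vector field f at state s
          n = len(s)
          basis = np.eye(n)
          return sum((f(s + eps * basis[i])[i] - f(s - eps * basis[i])[i]) / (2 * eps)
                     for i in range(n))

      s = np.array([1.0, 5.0, 20.0])   # any point: for Lorenz-63 the answer is state-independent
      div = divergence(lorenz, s)
      print("divergence of the Lorenz-63 flow:", div)        # about -(10 + 1 + 8/3) = -13.67
      print("a small blob after t = 1 occupies", np.exp(div), "of its initial volume")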

    Comment by Dan Hughes — 3 May 2008 @ 1:35 PM

  109. Maurizio, your website claims you’re being silenced here, calls the site ‘Goebbelist’ and throws tantrum after tantrum about your being mistreated. You may want to check your perspective on life a bit.

    [Response: Seconded. I would like to firmly underline that the use of inappropriate Nazi analogies is deeply offensive and will at no times be tolerated here. I don’t care who does it, it is out of order. Get a grip. – gavin]

    Comment by Hank Roberts — 3 May 2008 @ 2:32 PM

  110. re: 110

    I did not say the systems are not dissipative. And of course the systems are dissipative as that term is used in dynamical systems analyses.

    I said, for the third time, that the energy equation model does not account for the dissipation of fluid motions into thermal energy. The usual positive-definite dissipation term on the RHS of the temperature form of the energy equation is not present in the Saltzman system.

    I said this based on Eq. 4 in the Saltzman paper.

    If no one can point to that equation and show the error in my statement, the statement stands as written.

    I hope RC will kindly allow this comment to be posted. I am only trying to make clear the term that I am pointing to.

    [Response: But what in the world is your point in making such a fuss about that one term? It’s not relevant to anything we’ve been discussing. –raypierre]

    Comment by Dan Hughes — 3 May 2008 @ 5:37 PM

  111. As a matter of fact, Morabito accused ME of behaving in “Goebbelite” fashion (whatever that means) because I could not reply to his arguments (not true, of course: I have my own blog, which he knows about, with a post on the very same topic – http://leucophaea.blogspot.com/2008/05/caldo-o-freddo-o-tutti-e-due.html, in Italian). As far as I understand, according to Morabito, asking RealClimate to counter his arguments with data, and calling him a negationist, means trying to silence him with physical threats. Tell me, Gavin, what have you done to scare the bejeezus out of Morabito?

    Comment by Marco Ferrari — 4 May 2008 @ 3:37 AM

  112. Gibbertarian?

    Comment by Hank Roberts — 4 May 2008 @ 1:02 PM

  113. Hi all,
    I have been reading this writeup on Ed Lorenz and the subsequent discussions and comments with great interest. It is very nice to be privy to the unfolding of profound new areas of knowledge such as chaos theory, and applications such as climate science. In the process I’ve also started to sense (and enjoy) a little of the personal aspects of those who work in the area – for example, gavin’s mature and knowledgeable responses to various queries, the subtle philosophical background to raypierre’s way of putting things across (I’ve read your rebuttal to Dawkins’s The God Delusion) and so on. I was reminded of Hermann Haken, whose name I came across in a book by Giuseppe Caglioti titled “The dynamics of ambiguity”. This book deals with the dynamics of human perception and I personally found many parts of it to be thought-provoking, for example this bit in the preface:

    “Caglioti proposes a return to the image of the scientist-philosopher, that is, of the man of science who doesn’t limit himself to the role of ‘Lord of Technology’, but who emerges as a thinker whose purpose is to investigate, albeit in technical-scientific terms, even those problems inherent to man’s psychic-cognoscitive equilibrium”

    Cheers,

    Madhu

    Comment by Madhusudan A. Padmanabhan — 5 May 2008 @ 5:18 AM

  114. Re #107

    I am preparing a relatively long commentary on what I am learning from this blog and its comments. For now let me clarify that I do not think that current climate models are based on incorrect physics.

    The black-body radiation analogy still holds, though: what looked like a relatively minor nuisance (“noise”?) to your average 19th-century physicist became the basis for a whole new understanding of physics.

    Think of genetics: yesterday’s “junk DNA” is (in part) today’s “gene switches”. Who knows what tomorrow will bring.

    As for the comments policy, in the past I have seen some thoughts of mine not published, for whatever reason. I am pleasantly surprised that nothing of the sort is happening this time around, and hopefully the situation won’t change.

    Comment by Maurizio Morabito — 7 May 2008 @ 7:03 PM

  115. Another clarification

    Re #109

    I have never categorized RealClimate as “goebbelite”.

    As a matter of fact, in order to maintain a proper perspective on life and climate and everything, I suggest that on RealClimate we all stick to what is written on RealClimate. No website can become the repository of all the discussions and comments made elsewhere.

    =======

    I posted a simple question in #101 but have so far received only one answer, from Ray Ladbury in #104, where he says he “suspect[s] that if predictions were consistently off by more than half a degree, the models would be judged incomplete”.

    Does anybody else think along the same lines?

    And am I right in assuming, based on the Hansen et al. paper cited in #101, that to talk of “climate” we must consider a period of 25 years or longer?

    It surely would be nice to know

    Comment by Maurizio Morabito — 8 May 2008 @ 12:14 PM

  116. Maurizio, just where do you think the incorrect physics lies, and how do you think you can figure that out without a thorough understanding of the physics?
    I should elaborate on my previous answer – it may be as important to know HOW the models are off as it is to know by how much. Is the problem in the “noise” in the system? In that case, there may be some source of variability not in the models. Are the predictions consistently high or low? That’s something else altogether.
    I would say that 22 years (a full solar cycle) is an absolute minimum for a climatic trend to be truly climatic. However, depending on the noise one is willing to accept, the required period could be longer or shorter.
    I used to have a calculus professor who said “What is epsilon? Give me a delta, and I’ll give you an epsilon.”

    Maurizio, you have a very incomplete idea of science. It goes way beyond Popper or Kuhn. I’d recommend reading Fischer and von Neumann and Bohr. But if you are going to criticize climate science, then for the life of me, I can’t understand why you wouldn’t want to study it systematically first.

    Comment by Ray Ladbury — 8 May 2008 @ 1:29 PM

  117. Maurizio Morabito (115) — Real Climate has a funky spam filter. I’ve been hit by it a few times. Also remember that the internet is ‘best effort’ only, no guarantees. As traffic grows by leaps and bounds, more messages may end up in the bit bucket.

    Yes, for ‘climate’, longer time periods are better. A standard of sorts is 30 years. I suppose that is the standard because it is what the National Weather Service uses. Better still would be 140 years, about 2 quasi-cycles of the PDO. But of course it depends upon what one is attempting to understand.

    Comment by David B. Benson — 8 May 2008 @ 2:01 PM

  118. Re # 114 Maurizio Morabito
    “Think of genetics: yesterday’s “junk DNA” is (in part) today’s “gene switches”. Who knows what tomorrow will bring.”

    Good question. But the realization that some of the so-called junk DNA codes for gene switches did not, in my humble estimation, bring about a whole new understanding of genetics. As far as I know, most genes still code for proteins (some code for RNA), and proteins still pretty much run the show inside cells. There is plenty to learn about these things, of course, but I doubt the basic concepts will be overturned.

    Comment by Chuck Booth — 9 May 2008 @ 11:10 PM

  119. I understand that GCMs account for the effects of CO2 on radiative forcing. What about the convective and conductive components, which would be greatly affected by turbulence? Is it justifiable to describe these effects simply as noise, and hence as affecting weather as opposed to climate? These perturbations would be non-linear and on different spatial scales. After all, the average global temperature should depend on the difference between the energy coming in (from the sun) and the energy going out, and this should depend on all heat-transfer phenomena. Would someone care to comment on the last paragraph in the linked essay by Prof. Tennekes? http://www.sepp.org/Archive/NewSEPP/Climate%20models-Tennekes.htm

    [Response: Turbulence is indeed a huge problem. But given that we will never have a full understanding of turbulence, or the capacity to model it in the atmosphere or oceans at all the relevant scales, Tennekes’ position would seem to be that there is no point in doing any modelling at all. This ‘a priori’ dismissal of climate models completely ignores the fact that they work for many aspects of the problem. His attitude presupposes that there are no predictable aspects of the problem that are robust to errors in the small-scale turbulence. However, he is simply wrong. The large-scale temperature and circulation variability is usefully predictable – look at the cooling following large volcanoes, or at the LGM, or the ocean circulation during the 8.2kyr event, or the teleconnections to El Niño events, or the rainfall patterns in the mid-Holocene, etc. All of these observed changes are a function of large-scale changes to the basic energy and mass fluxes and are robust to uncertainty in the details of the turbulence. – gavin]

    Comment by Gautam — 11 May 2008 @ 6:11 AM

  120. Re: #116

    Ray:

    (1) You ask “Just where do you think the incorrect physics lies?”. Actually, I have written: “I do not think that current climate models are based on incorrect physics”

    (2) Of course, science is not just about falsification. But there is no science without falsifiability (in the sense of the possibility of being falsified). And of course that should not be treated simplistically: I can forecast tomorrow’s temperature in London to be between -40 and +50C. Such a forecast is falsifiable, and it may turn out to be false if something very, very strange happens… still, with such a huge range, no one would call it a “scientific forecast”. It’s just a very, very safe guess.

    Re: #118

    Chuck: in truth, epigenetics is “overturning” some very basic concepts of old genetics, including the relationship between nature and nurture. You may think of it this way: in the past, “nature” and “nurture” were considered separate “forcings” on the organism, whereas now we know of ways for “nurture” to significantly influence “nature”. Separated-twin studies are likely to need a major rethink, apart of course from those based on “pure” Mendelian inheritance.

    May 2008’s SciAm has a great article on “gene switches”. But of course we’re getting far from climatology…

    Comment by Maurizio Morabito — 15 May 2008 @ 7:41 AM

  121. Ray, it was nice to talk with you in the HGS building. I read your article again tonight and enjoyed it very much.

    In an interview by Dr. Taba in the WMO Bulletin, Lorenz himself described how he found chaos. I think it appropriate to paste his words here, even though the story is well known, and I believe he will always be a legend to young scientists.

    ——————————-
    WMO Bulletin, 1996, Vol. 45, No. 2

    Some statistical forecasters claimed there was mathematical proof that linear regression was inherently capable of performing as well as any other procedure, including numerical weather prediction. I was sceptical and proposed to test the idea by using a model to generate an artificial set of weather data, after which I would determine whether a linear formula could reproduce the data. If the artificial sequences turned out to be periodic, repeating their previous values at regular intervals, linear regression would produce perfect forecasts. For the test, therefore, I needed a model whose solutions would vary irregularly from one time to the next, just as the atmosphere appears to do. I started testing one model after another and finally arrived at one that consisted of 12 equations. The 12 variables represented gross features of the weather, such as the speed of the global westerly winds. After being given 12 numbers to represent the weather pattern at the starting time, the computer would advance the weather in six-hour time-steps, each step requiring 10 seconds of computation. After every fourth step – or every simulated day – the computer would print out the new values of the 12 variables, requiring a further 10 seconds. After a few hours, a large array of numbers would be produced and I would look at one of the 12 columns and see how the numbers were varying. There was no sign of periodicity. At times, I would print out more solutions, sometimes with new starting conditions. It became evident that the general behavior was non-periodic. When I applied the linear regression method to the simulated weather, I found that it produced only mediocre results.

    At one point, I wanted to examine a solution in greater detail, so I stopped the computer and typed in the 12 numbers from a row that the computer had printed earlier. I started the computer again and went out for a cup of coffee. When I returned, about an hour later, the computer had generated about two months of data and I found that the new solution did not agree with the original one. At first, I suspected trouble with the computer but, when I compared the new solution, step by step, with the older one, I found that the solutions were the same at first and then differed by one unit in the last decimal place; the difference became larger and larger, doubling in magnitude in about four simulated days until, after 60 days, the solutions were unrecognizably different.

    The computer was carrying its numbers to about six decimal places but, in order to have 12 numbers together on one line, I had instructed it to round off the printed values to three places. The numbers I typed in were therefore not the original numbers but rounded-off approximations. The model evidently had the property that small differences between solutions would proceed to amplify until they became as large as differences between randomly selected solutions.

    This was exciting: if the real atmosphere behaved in the same manner as the model, long-range weather prediction would be impossible, since most real weather elements are certainly not measured accurately to three decimal places. Over the following months, I became convinced that the lack of periodicity and the growth of the small differences were somehow related and I was eventually able to prove that, under fairly general conditions, either type of behavior implied the other. Phenomena that behave in this manner are now collectively referred to as chaos. That discovery was the most exciting event in my career.
    ——————————-
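
    A rough re-creation of the experiment Lorenz describes above (an added sketch: it uses the later three-variable Lorenz-63 system as a stand-in for his original 12-variable model, with Python, numpy and scipy assumed): the run restarted from values rounded to three decimal places tracks the original at first, then drifts away until the two are unrecognizably different.

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      # Run the model for a while so the state sits on the attractor.
      spin = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0], rtol=1e-10, atol=1e-12)
      start = spin.y[:, -1]

      # Restart twice: once from the full-precision state, once from the same state
      # "typed back in" rounded to three decimal places, as in Lorenz's account.
      t_eval = np.linspace(0.0, 15.0, 301)
      kwargs = dict(t_eval=t_eval, rtol=1e-10, atol=1e-12)
      full = solve_ivp(lorenz, (0.0, 15.0), start, **kwargs)
      rounded = solve_ivp(lorenz, (0.0, 15.0), np.round(start, 3), **kwargs)

      separation = np.linalg.norm(full.y - rounded.y, axis=0)
      for t in (0.0, 5.0, 10.0, 15.0):
          i = int(np.searchsorted(t_eval, t))
          print(f"t = {t:4.1f}   separation = {separation[i]:.3e}")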

    Comment by Jianhua Lu — 1 Jul 2008 @ 9:30 PM

Sorry, the comment form is closed at this time.
