All your explanations make sense, and to a listener genuinely interested in learning, they should settle the matter. But sometimes people get ideas that just seem too good to give up, no matter how clearly you explain why they are wrong. This can lead to more and more contrived arguments to try to save the idea. I have done things like this in my mathematical work, and I’m sure that all scientists do it on occasion. But an honest scientist eventually has to give up when it becomes clear the idea can’t be saved. A non-scientist, however, not really understanding the subject, is not bound by such limitations and treats it more as a debate than a search for the truth. And of course sometimes scientists do the same. Thus are born denialists.
Another good way of explaining it may be to point out that climate scientists are not trying to predict precisely which future days and which locations will be warm; they are only predicting that the average temperature of all days combined will be warmer than it is currently. Maybe the cocktail crowd will intuitively understand that it is easier to say with confidence that the next decade will be warmer on average than the past one, than it is to say with confidence that next June 17th’s high in Los Angeles will be 97 degrees F.
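To put rough numbers on that intuition, here is a toy simulation (entirely made-up climatology: a 20 °C mean daily high with 6 °C of day-to-day weather scatter; nothing about real Los Angeles). It compares the typical error of predicting one day’s high with the climatological mean against the error of predicting a decade’s average with the same number:

```python
import random

random.seed(42)

def simulate_daily_highs(n_days, mean=20.0, sd=6.0):
    """Hypothetical daily highs (deg C): a fixed climatology plus weather noise."""
    return [random.gauss(mean, sd) for _ in range(n_days)]

# How far off is the climatological mean as a prediction of ONE day's high?
one_day_errors = [abs(simulate_daily_highs(1)[0] - 20.0) for _ in range(300)]
typical_day_error = sum(one_day_errors) / len(one_day_errors)

# And as a prediction of a DECADE's average high?
decade_errors = []
for _ in range(300):
    decade = simulate_daily_highs(3650)
    decade_errors.append(abs(sum(decade) / len(decade) - 20.0))
typical_decade_error = sum(decade_errors) / len(decade_errors)

print(typical_day_error, typical_decade_error)  # the decade error is tens of times smaller
```

The averaging over ~3650 days shrinks the weather noise by roughly the square root of the sample size, which is the whole cocktail-party point: the aggregate is far more predictable than any single day.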
Weather is like the hour-by-hour fluctuations in the Dow Jones index. Predicting that is hard. Predicting climate is more like predicting the longer-term trends, except even easier, since the forcings tend to be better understood! (i.e. interest rates vs. GHG forcing).
Here’s another possible perspective. The increase in hurricanes and hurricane strength is at least partly due to warmer SSTs. The other influences are global and regional weather patterns, which change the shear, dry layers, and other factors. It’s important not to oversimplify the mechanics, and thus not to make overly simplistic predictions like “more and stronger storms this year.” Otherwise an amateur skeptic might say “aha!” when there are fewer storms, as there have been so far this year.
The better approach is to emphasize the long term, and to emphasize that storm strength is only one result of climate change, and perhaps a minor one at that. Better to do that than to get front-page Time magazine stories about megastorms.
IMO, we can predict the weather beyond a couple of weeks. It is true that the prediction error increases, and that the initial conditions (the state of the system today) should get less weight than the local weather history, but that doesn’t mean we cannot predict at all.
I agree with L. Evens. D. Donovan makes a good point, except that no one can predict the long-term Dow with better than a 50% chance of success either. One of the difficulties is that the climate models do in fact have considerable holes and produce a significant amount of uncertainty. If one admits to that, the antagonist says, “SEE! I told you so!” But if you don’t, you become one of those denialists. Nonetheless, the basic premise of the treatise is correct (short-term specifics and long-term trends are completely different from an analytical and scientific view), and I think it is explained very well. Admitting to the modeling uncertainty may advance your credibility; beyond that I don’t think any “improvement” in the assertion will change the outcome at the party.
Beginning in the 1970s/1980s, the National Weather Service Climate Prediction Center began issuing one and three month outlooks for precipitation and temperatures, in relation to the latest 30 year averages (normal). http://www.cpc.noaa.gov/
Some people have said that there is only a 50/50 chance for outlooks calling for above or below normal to be accurate on temperatures. Not so for northern Minnesota in the future, with rapid climate change happening. The increasing Dec-Feb temperature trends are so obvious in northern Minnesota that I can predict right now, with a high degree of confidence, that the winter of Dec 2006 to Feb 2007 in northern Minnesota will once again be above the 1971-2000 temperature averages, and that winters will continue to be above the 1971-2000 temperature averages for hundreds of thousands of years to come.
Rasmus, here’s my rough outline. When predicting climate for the whole world you don’t need to know whether it will snow in Oslo on December 20th 2006. The climate record from Oslo tells us that it could, and more importantly, the climate model predictions show that it could. The climate models also incorporate the effects of forcings like man-made CO2 and are able to predict regional changes like the prevailing winds or ocean currents and their effects on local climate in Oslo.
I would conclude that a simple climate model without weather can do a fairly adequate job of estimating the average weather in Oslo for the period around next Dec 20th. Is that sufficient for accurate climate predictions? No. The reason is that the physics of the model depends greatly upon an accurate depiction of weather, because the primary warming mechanism, far beyond all others, is water vapor feedback, and water vapor is controlled by weather. Some of the lesser effects, like snow cover, are also controlled by weather.
So models like CCSM and ESMF incorporate weather by simulating its effects at coarse spatial and temporal resolution (e.g. 40 km and 30 minutes). The purpose is to establish realistic weather patterns from the climate parameters and use the resulting weather measurements to give realistic feedback to the climate model. Higher resolution might be required for tropical weather than for Oslo. The resolution required to adequately depict climate-affecting weather has been studied and explained in the CCSM papers.
Here’s a simple example. An average weather pattern for Oslo might be one day of snow followed by four days of partly cloudy, repeated. But it could snow for five days in a row. This weather phenomenon will clearly affect subsequent local climate (temperature, remaining snow cover, diurnal clouds, etc.), and all weather phenomena across the globe will affect global climate.
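That streaky-weather-but-stable-climate intuition is easy to demonstrate with a toy model (hypothetical numbers: a flat 20% daily snow chance over a 90-day winter, which real Oslo winters certainly don’t follow):

```python
import random

random.seed(0)

P_SNOW = 0.2  # hypothetical climatological chance of snow on an Oslo winter day

def winter(n_days=90):
    """One realization of a winter: 1 = snow day, 0 = not."""
    return [1 if random.random() < P_SNOW else 0 for _ in range(n_days)]

def longest_streak(days):
    """Length of the longest consecutive run of snow days."""
    best = run = 0
    for d in days:
        run = run + 1 if d else 0
        best = max(best, run)
    return best

winters = [winter() for _ in range(500)]

# Any individual winter can be streaky: several snow days in a row...
max_streak = max(longest_streak(w) for w in winters)

# ...but the long-run snow fraction across winters is very stable.
mean_fraction = sum(sum(w) for w in winters) / (500 * 90)
print(max_streak, round(mean_fraction, 3))
```

The specific sequence of snow days in any one winter is unpredictable (and multi-day streaks happen routinely), yet the fraction of snowy days averaged over many winters sits right next to the 20% that was built in.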
Good explanations but in these settings I always think that short and simple is the key. So I just agree that you can’t predict the weather all that well but say that you can predict the season, e.g., chances are it won’t snow in August, and predicting the climate is more like predicting the season than the weather. Then I leave it at that unless they want to continue.
The question is whether climate in principle is predictable. All the examples presented in the post used measured data and actual physical realizations; no mathematical models or computer programs are involved. These are not examples of predictions of either weather or climate in the sense of using models and codes for long-term time scales in the future. In fact, no calculated results (predictions) have been shown to compare with the measured data. In this sense the title of the post is very likely misleading.
Here are two issues associated with AOLGCMs that I find to be unique to the applications of these codes.
Firstly, it is my understanding that the calculated results from most large, complex AOLGCM codes cannot be demonstrated to be independent of the discrete representations of the continuous equations. That is, the results are functions of the size of the discrete representations (or truncated series) of the spatial and temporal scales used in the calculations. Independence from the discrete representation, so that the solutions of the discrete equations converge to solutions of the continuous equations, is the most fundamental concept taught in every numerical methods textbook, without exception. In the absence of this property the discrete equations have not been solved; the numbers printed do not represent solutions of the discrete equations. The lack of grid independence indicates that numerical errors are in fact present in the numbers. I think this situation is without precedent in all of science and engineering. If this is the correct situation, the calculated results are not solutions to the continuous equations and at best represent some kind of approximate “solution”; the phrase “solution to the continuous equations” cannot be applied to these calculations. No other science or engineering application of numerical solution methods tolerates this situation; it is always unacceptable. An example of an exception to my assessment will be greatly appreciated.
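For readers unfamiliar with the grid-independence check being described, here is what a convergence study looks like on a toy problem (illustrative only: forward Euler on dy/dt = -y, nothing resembling a GCM). For a first-order scheme, halving the step size should roughly halve the error; the ratios of successive errors should approach 2:

```python
import math

def euler_decay(dt, t_end=1.0):
    """Forward-Euler solution of dy/dt = -y, y(0) = 1, on a grid of spacing dt."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y)  # one explicit Euler step
    return y

exact = math.exp(-1.0)  # true solution at t = 1
errors = {dt: abs(euler_decay(dt) - exact) for dt in (0.1, 0.05, 0.025)}

# Grid-convergence ratios: should approach 2 for a first-order scheme.
r1 = errors[0.1] / errors[0.05]
r2 = errors[0.05] / errors[0.025]
print(r1, r2)  # both roughly 2
```

This is the kind of refinement study the comment says cannot be carried to completion for the full climate codes; whether that is so for real GCMs is the commenter’s claim, not something this sketch can settle.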
The closely related issues of consistency and stability are also very important.
Secondly, it is my impression that the use of ensemble averages of several computer calculations based on deterministic models and equations is unique to the climate-change community in all of science and engineering. I can easily be corrected on this point if anyone can provide a reference showing that the procedure is used in any other application. (The use of Monte Carlo methods to solve the model equations is not the same thing.) The use of ensemble averaging and the resulting graphs makes it very difficult to gain an understanding of the calculated results; rough long-term trends are about all that can be discerned from the plots. The calculated daily, seasonal, and yearly variations, some of which are used in the post, are seldom compared with measured data. Neither are calculated results from various spatial locations, also used in the post, compared with measured data.
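As background on why one might average deterministic runs at all, here is a toy chaotic system, the logistic map (purely illustrative; no claim that GCM ensembles are constructed this way). Two deterministic trajectories from nearly identical initial states decorrelate completely, yet their long-run statistics are robust, which is the rationale usually given for looking at ensemble or long-term means rather than individual trajectories:

```python
def logistic_run(x0, n=1000, r=4.0):
    """Iterate the deterministic (and chaotic) logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic_run(0.400000)
b = logistic_run(0.400001)  # initial state perturbed by one part in a million

# The two deterministic runs end up completely decorrelated...
late_gap = max(abs(x - y) for x, y in zip(a[-100:], b[-100:]))

# ...while the long-run mean (the "climate" of the map) barely moves.
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(round(late_gap, 3), round(mean_a, 3), round(mean_b, 3))
```

The late-time gap is order one, while both long-run means sit near the map’s invariant mean of 0.5. Individual states are unpredictable; the statistics are not.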
No other modeling applications in all of science and engineering will have these two characteristics. Bridges, elevators, flight-control systems, power generation systems, (add your favorite here); no simple or complex systems of any kind. Again, examples of exceptions to my assessment will be greatly appreciated.
Focus on the global average temperature calculated by the codes gives a false indication of the robustness of the modeling and calculations, because this is a meta-functional of the solution that maps everything calculated by the code into a single number, a process that easily hides an enormous number of potential problems.
[Response: Actually, the approach for climate prediction is very similar to that used for weather forecasting, where indeed the predictions are compared with the outcome. The criticism of the discretisation applies to numerical weather models (NWMs) as much as to general circulation models/global climate models (GCMs). Furthermore, ensemble forecasting is also used for weather forecasts. Despite the range of different scales involved, the models produce useful results (on which aviation depends). -rasmus]
Making climate projections is very limited by the input data, some of which come from sparsely sampled places on Earth and lack resolution; ultimately this gives GCMs bad weather prognoses. The same applies to climate models: without very high resolution inputs applied equally at every point on Earth, it’s nearly impossible to be correct. Thus probability methods are used, similar to quantum mechanics, with expected results. Dedication must be given to increasing data resolution; as this is done, climate and GCM forecasts will be greatly enhanced. But there are also experimental methods, which are ignored. I use one of them to make projections based on absolute or total atmospheric temperatures, independent of computer models, and get accurate results. These methods (there must be many more I don’t know about) will greatly increase accuracy as well. Perhaps there is a lack of experimentation in the field, a bit of conservatism, hurting, in the long run, the reputation of models more than anything else. I’ve heard that some long-range seasonal forecasts are done by taking the average of 11 or more climate projection runs; as Einstein might have said, there is no such thing as luck.
If my discussions are not helpful, please let me know and I’ll stop posting here.
Anyway, for 20 years in February and March of each year, I put together the NWS Spring Snowmelt Flood Outlooks for the Upper Midwest based on ‘normal’ temperature sequences for runoff in late March and April.
By the late 1990s it was clear that the timing of snowmelt runoff had changed. The melt was occurring earlier in the season, and the odds of having rain during the melt period were higher. I knew that climate was changing even before I knew that global warming was happening.
I have no experience with global climate prediction models, but it’s obvious to me that as climate changes more rapidly it begins to have significant effects on weather and hydrologic prediction, particularly if the models used for those predictions ignore the obvious climate change factors.
Re #12: Pat, thanks for fixing the link. I found your post on the twin cities site very helpful, as are your posts here. The snowmelt/runoff situation is essential to follow to understand and predict what is happening out west, where it appears that there have been fundamental changes.
You might want to concede that we cannot predict the climate 100 years from now, but point out that it is nonetheless important to have a practical understanding of the factors that influence climate. Then, give examples, starting with the “factors influencing the local climate.”
I make this suggestion after reading a related post, “Beyond the Mug’s Game,” on Prometheus (Pielke) about the value of economic models. It points to an interesting response (letter to the editor) to an interesting article on economic models in The Economist.
The gist of it is that complex economic models can be more useful at informing people than they are at predicting events. The letter and the article are readable and, to me, not as boring as their titles suggest.
Suppose that a ping-pong ball falls into a river. Due to the turbulent eddies in the river, predicting the ball’s EXACT position, e.g., within a few meters, is possible only in the short term (in this case, over the next few seconds), but not in the long term. However, if we knew the average rate of flow, we could predict reasonably accurately (within a few minutes) when the ball would pass under a bridge 10 km downstream. Furthermore, if the streamflow were increased by a known amount (due, say, to increased discharge from an upstream dam), I doubt that anyone would argue with a prediction that a second ball would take less time to reach the bridge, even though it may not be possible to say exactly under which part of the bridge the ball would pass.
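The ping-pong ball picture can even be simulated. A minimal sketch with invented numbers (2 m/s mean flow, turbulent kicks of ±2 m/s each second, a bridge 10 km downstream): the arrival time is tightly predictable even though the second-by-second motion is not, and a faster flow reliably shortens it:

```python
import random

random.seed(1)

def arrival_time(flow_mps, distance_m=10000, eddy_sd=2.0):
    """Seconds for a drifting ball to reach a bridge distance_m downstream.
    Each second the ball moves at the mean flow speed plus turbulent noise."""
    x, t = 0.0, 0
    while x < distance_m:
        x += flow_mps + random.gauss(0.0, eddy_sd)  # drift plus eddies
        t += 1
    return t

slow = [arrival_time(2.0) for _ in range(50)]
fast = [arrival_time(3.0) for _ in range(50)]  # the dam released more water

mean_slow = sum(slow) / len(slow)
mean_fast = sum(fast) / len(fast)
print(mean_slow, mean_fast)  # roughly 5000 s vs roughly 3333 s
```

The eddies make each second unpredictable, but they average out over the trip: the arrival time is set almost entirely by the boundary condition (the mean flow), which is the analogy’s point about forcings.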
The question is not whether predicting the climate for the next 100 years is a reasonable undertaking. The question is, can such a prediction have the absolute certainty on which we can invest our treasure with confidence that it will not be wasted? Put another way, has anyone ever successfully predicted a change in the climate of the earth over a similar 100-year period from our exact starting point (i.e. current concentration of greenhouse gases, state of human technology, current population of the earth, current land use, etc.)? You can show me a nuclear reaction over and over again, and you can show me that an atom is real over and over again; no faith in your predictive capability is required. Can you offer proof that you have successfully predicted the climate for 100 years? Can you offer proof that just adjusting CO2 in the atmosphere will control the climate the way you predict?
Short and sweet may be best, at least at the cocktail party level.
Which can you predict (to within, say, 10% or 20%): the number of times heads comes up on a single coin toss, or on 100 tosses? The total number of runs scored in a particular baseball (or soccer or whatever) game, or the average number over a whole season? The height of the next person to walk around the corner, or the average height of the next 100 people?
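The coin-toss version is a two-minute simulation (plain law of large numbers, nothing climate-specific). Predicting “50% heads” is useless for one toss and quite accurate for 100:

```python
import random

random.seed(7)

def heads_fraction(n_tosses):
    """Fraction of heads in n fair coin tosses."""
    return sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

# Error of the "50% heads" prediction for one toss vs. 100 tosses:
single = [abs(heads_fraction(1) - 0.5) for _ in range(2000)]    # always off by 0.5
hundred = [abs(heads_fraction(100) - 0.5) for _ in range(2000)]

print(sum(single) / 2000, sum(hundred) / 2000)  # 0.5 vs roughly 0.04
```

The single toss is always 50 percentage points away from the prediction; the 100-toss average is typically only about 4 points away. Same coin, same physics, vastly different predictability of the aggregate.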
Or this: you can’t predict where a particular leaf, or grain of sand, will end up after a wind storm, but anyone can easily predict that the entire planet Earth, leaves, sand, and all, will continue to spin on its axis once a day, and revolve around the sun once a year.
Folks may be amazed to realize how often they can predict big, complicated things much more accurately than small, simple things.
Predicting climate is like predicting what will happen when you boil a large pot of water. The modeler would need to know the heat input, the heat transport characteristic of the pot, the starting temperature of the water, the ambient temperature, atmospheric pressure, the contaminants in the water, etc. But anyone with a good knowledge of physics could predict approximately how long it will take to boil, what range of temps and what kinds of convections might arise.
Predicting weather is like trying to predict temperature and current flow in the pot at scales of 1 cm.
I think it’s clear you can predict that the water will boil (climate), and even approximately when, given a few good measurements. Predicting temperatures and current flow at small scales is obviously only possible for a few seconds at a time, and only with really good measurements of things exactly as they are.
[Response: A slightly different version of this analogy has been around for a long time (I don’t know the original source) and has been used by myself in talks and a couple of popular articles: predicting climate is like predicting that the boiling water in the pot has a temperature of 100 ºC at sea level, but only 90 ºC at a particular altitude in the mountains. Predicting weather is like trying to predict where the next bubble will rise in the boiling pot. This illustrates that the former is a boundary value problem – the mean temperature of the boiling water depends in a predictable way on boundary conditions, i.e. air pressure, just like the global mean temperature depends on the CO2 concentration. The latter, in contrast, is an initial value problem in a highly turbulent system and therefore a much harder problem. -stefan]
You haven’t followed your reasoning to its obvious logical conclusion. Suppose in fact that models can’t tell us with certainty the consequences of our ongoing experiment in increasing the concentration of greenhouse gases. Since there is certainly no other way to answer that question, it would appear foolhardy in the extreme to continue that experiment. I hope you are insisting to your governmental representatives that they take action immediately to limit greenhouse gas emissions.
My reply is to offer a bet: I will predict the average temperature of the earth for next year, and they will predict the temperature of the city we are in for that day next year. While I have heard a number of very creative reasons why they will not agree to the bet, I have yet to have anyone take me up on it.
Re: #17 -“Suppose in fact that models can’t tell us with certainty the consequences of our ongoing experiment in increasing concentration of greenhouse gases.”
The so-called ‘experiment’ is an agreeable metaphor, especially for the scientifically minded. But what is sometimes called an ‘experiment’ is actually a social adaptation, and since the adaptation is carried out by clever though not always rational bipeds instead of ants or termites, the components of the adaptation are immensely complex. Complex as these adaptations may be, anyone who has watched the long slow procession of late-afternoon traffic queueing onto I-395 in Virginia out of DC can easily conclude that social adaptations are not always rationally arrived at, even in Homo sapiens: Wise guy.
The problem with my scientific friends is that they assume that given a spectrum of choices, human beings will naturally choose the most ‘rational’, meaning, the most obviously adaptive for the greatest number. They won’t. They can’t. That’s why societies, like species, always have finite lifetimes.
Adaptation in a developed early 21st century society means maximization of profit. The reproduction of money takes the place of the reproduction of descendants in the lower orders.
Make it more profitable to supply energy without changing the climate, and you will have solved the problem of climate change. Either that, or wait for the first really horrific crisis and see what happens. Don’t count on a rational response, though.
Gavin, I have noticed in my own work that when having difficulty explaining a complicated scientific idea in easy to understand terms for a non-scientist (like my wife), often I do not have a good command of the concept myself. In these instances, I may need to do some internal clarification. It is my suspicion that this is where you are on this issue (be honest with yourself).
Regardless of how the IPCC defines the term “climate” or “climate system”, the issue is the actual manifestation of the system we are trying to model, and whether or not the real-world observations support our conclusions about its theoretical state. It is clear to many of us out here that you are laboring painfully to propagate the troubled hypothesis that the system you are trying to model is well-behaved (non-chaotic), and therefore inherently predictable over multi-decadal time. In this respect, #1 may have a good analysis of your continued strained attempts to simplify this matter.
The new OHCA data presented by Lyman et al. might at least give you the opportunity for retrospection. Many look forward to your coming analysis of this new paper, explaining the failure of the models to predict the (likely) 21% loss, in two years, of the heat that was accumulated in the system from 1955-2003. This occurred while you declared loudly in 2005 (from the model predictions of this well-behaved system) that the system was in a positive radiative imbalance. We will now see if these new results hold up under the coming tsunami of scrutiny. Return of the Jedi!!
Hey, Gavin. I agree, coming up with an “elevator pitch” reason for why folks should put some faith in the models is a good idea. I find that the one thing people are most rational about is money; everything else is up for grabs. For instance, laypeople have a hard time remembering that energy is conserved in all physical interactions (the first law of thermodynamics). Explain it to them in terms of money, and they get it instantly: “Oh, you could make free money by selling the energy you got for free – okay, that’s stupid then.” (Ironic, since money is decidedly *not* conserved at the level of governments.) This leads me to approve of the “bet” approach, #s 18 and 19 by John Cross. Another tactic is to point out the silliness of their argument: “Oh, you can’t predict anything about the weather more than a few weeks out, eh? I predict that in mid-January it’ll be colder than it is today. Think I’ll be wrong?” That works for me in Ottawa; your mileage may vary.
Comment by Steffen Christensen — 14 Aug 2006 @ 1:13 AM
I think any discussion of climate (to the lay) must include scale– both spatial scale and temporal scale. The current hot item is “global climate” at annual and decadal scales. But it is important to note that every point on the globe has a climate, and the points can be aggregated to regions, regions to continents, continents to hemispheres etc. And it is essential to note that you must “generalize out” (in the vernacular of cartography) some degree of detail every time you scale up. This is intuitively obvious to most but it never hurts to explain.
The temporal scale is another tricky item. People are currently hooked on annual temperatures, especially the annual mean daily temperature. This is the crudest information we can get over the course of a year. Providing slightly more detail is the mean high temperature in tandem with the mean low temperature. We can move to finer temporal intervals (seasons, to months, to days etc), and each time we do, we increase the detail of our understanding about the place(s) in question.
Also, it is helpful to recognize that we use climatic information to place weather into context. So, even though we want to avoid confusing weather with climate, we need to keep in mind that we require climatic information in order to identify daily weather anomalies. I only know that a certain day was “warmer” and “wetter” than average in Minneapolis because I have climatological baseline values for comparison.
And of course, everyone who works with climate has his/her own, idiosyncratic definition of climate that falls somewhere along this spatio-temporal continuum. For example, I work on “events” of local or regional scope (on the order of tens of km^2 to tens of thousands of km^2), lasting days at most, which are then placed in the context of the preceding 30-120 years. One member of my Ph.D. committee does dendroclimatological work, and is therefore often interested in large chunks of continents, with whatever is resolved at the annual level placed in the context of decadal, multidecadal, and centennial patterns of the past several hundred years.
In terms of predicting the climate, the very general (e.g., the mean annual global temperature for a given year) should always be somewhat easier than the more specific (e.g., mean temperature of New England for a given March), so we should have less confidence in predictions that have more detail.
Does that make sense?
Comment by Kenneth Blumenfeld — 14 Aug 2006 @ 3:08 AM
Your explanation is fine, but it still needs more of a common touch. Depending on whom you are talking to, you might be able to make an argument from sports. The point you are trying to make is that predicting averages is easier than predicting details. Thus: you can’t predict exactly who will get which hits in the all-star game, but you are pretty confident that the AL will win.
Robin #19, the types of convection in the pot and the temperature distribution don’t really matter because there is no greenhouse effect in the pot so the physics is much simpler. The earth’s convection does matter because it distributes the water vapor which is the primary GH gas. Perhaps your analogy could be extended to a pot with a dynamically changing lid, but I’m not sure how to express that concretely.
In the absence of comparisons of calculated results with measured data you cannot be sure that you can ‘predict’ anything. Additionally, the calculated and measured data must agree within some acceptable level of difference. While averages are somewhat easier to get nearly correct, these values can also be calculated incorrectly. Show the calculated results alongside the measured data, and then there is a possibility that you have shown that you can ‘predict’ physical phenomena and processes.
Here’s an analogy to consider for your next party or elevator trip. Ask the following:
“Do you invest in the stock market?”
For most, the answer is yes. Then say:
“OK… a financial advisor can’t tell you if the market is going to go up or down tomorrow, but you trust him when he tells you that the influences on markets are such that they will probably go up in the long run. And there’s historical precedent. That’s why stocks are such a popular investment.”
“It’s the same thing with climate prediction. Just because we don’t always get weather right means NOTHING about our ability to predict climate. Assuming we understand the influences on climate (past, present, future) as well as we understand market forces (we probably understand climate *better* for predictive purposes), we can provide general predictions about the climate’s future course.”
I like the analogy in #16. Here’s one I use: Why is global climate changing? Because there’s more heat in the earth system. Imagine you have a very wide and shallow pan of water on a burner; the pan sticks way out on all sides of the burner. The water in the middle of the pan will heat up first, then move toward the cold water at the edges–that’s how fluids such as air or water work. That’s how you get weather– “differential heating.” But, then, suppose you turn up the burner a notch. Now, there’s more heat in the “system.” Water will still heat up first in the middle of the pan and then move to the edges, but it will be hotter and move faster. That’s like global warming due to more radiation being absorbed by the system (“radiative forcing”).
‘Re: #17 -“Suppose in fact that models can’t tell us with certainty the consequences of our ongoing experiment in increasing concentration of greenhouse gases.”
The so-called ‘experiment’ is an agreeable metaphor, especially for the scientifically-minded…’
The numbering keeps changing which makes it difficult to keep track of the discussion. I hope my quote within above indicates the history.
My initial point was that arguing against making changes on the basis of uncertain models is fallacious. The error is in thinking that the status quo is stable. Even if that were the case from the point of view of social and economic forces, it is highly unlikely to be stable from the point of view of the radiative characteristics of the atmosphere. Whatever faults our models have, no one has been able to show conclusively that the large scale changes in greenhouse gas concentrations that are in the works are so unlikely that we may not concern ourselves about them.
I do agree that ‘we’, whoever that may be, are not conducting a purposeful experiment to see what the effect of greatly increased greenhouse gas concentrations will have. But the atmosphere doesn’t respond to our intentions; it responds to what we do. The question is whether or not ‘we’ can change what we are doing. Pavel, on the one hand seems to adopt a social/economic determinism which suggests it is all beyond anyone’s control, but then contradicts that by suggesting that making it profitable to do otherwise would resolve the problem.
That of course is also based on a metaphor. One can argue endlessly about whether or not ‘free will’, the ability to make choices, exists either on an individual scale or on the scale of societies. But whether or not social choices are truly free, it is a necessary metaphor to consider them so. Those of us trying to convince our fellow human beings that action is necessary may ourselves be acting as a consequence of impersonal social laws, but that doesn’t mean we should just stop trying.
The real issue, as a practical matter, is whether or not the human species can and will change its behavior in these matters. I think that is not settled yet. There are examples where short-term local motivations were successfully countered by long-term concerns. When I was young, you were an oddball if you didn’t smoke. Nicotine is highly addictive, and the sale of cigarettes became very profitable for tobacco companies. But in spite of that, medical evidence about how our bodies actually responded to smoking eventually led to change. Today, in much of the US, you are an oddball if you do smoke, and at least one cigarette company advertises the dangers of smoking.

The case of CFCs and ozone is also illuminating. Once Dupont’s chemists convinced their management there was really a problem, ‘we’ switched to other refrigerants, despite significant cost and some inconvenience. The same arguments were made against doing anything by Fred Singer and other skeptics. They tried to pick holes in the science and claimed that phasing out CFCs would be highly disruptive. In fact, it was done relatively easily. It is true that cutting back on greenhouse gas emissions is a much harder nut to crack, but it is certainly not clear at this point that ‘we’ can’t manage to do it.

And looking only at the ‘profit motive’ as the way it will be done assumes unproven theories about how humans behave en masse. Today, free market capitalism is certainly an extremely important principle in understanding how human societies operate. But the idea that all human behavior is reducible to such analysis is also a metaphor, and not a very good one at that. Consider, for example, why so many Americans drive low-mileage SUVs. It certainly can’t be because they make sense economically. The reasons are complex, but it is clear that fashion plays a significant role. The practical functions they provide to families were once provided by station wagons, which did much better in terms of gas mileage.
But one thing is pretty clear. Congress provided a loophole in mileage standards that car manufacturers could drive their ‘trucks’ through. Without it, we would be using relatively gas-efficient station wagons. The same thing didn’t happen in European countries, and I don’t think it was inevitable that it had to happen here.
It was argued above that climate models were exceptional among numerical models used in science and engineering, in that they can neither be shown to converge nor, if they do converge, be shown to converge to solutions of the equations they are meant to model. I think what the previous poster said is in fact wrong; this sort of thing is quite common in numerical modelling.
I would like to see a response from a bona fide climate modeller.
I am hardly an expert in numerical modelling. But I have at times in the past tried to understand something about the subject. It seemed to me that error analyses for numerical procedures, in order to make useful predictions, had to make assumptions which themselves could not be proved. In practice, the results of using the procedure were monitored to see if they were plausible in known situations and also to see if the original unproven hypotheses seemed to be holding. From the point of view of rigorous mathematics, the latter approach is clearly circular, but in practice it does seem to work. In the case of climate models, I think the situation may be even more complicated, because one doesn’t simply try to model a set of equations all of which fit into a specific mathematical/physical formalism. One tries to model the actual physical system, and that incorporates other features dependent on chemistry and biology. In addition, different, mathematically distinct physical theories are modelled. In principle, you could use quantum mechanics to describe the wave function of the atmosphere and try to model that. In practice, you use classical fluid mechanics and thermodynamics for some things and quantum mechanics for others. These are entirely different mathematical theories involving different equations about different physical entities. What holds them together is the belief that these are all different ways to describe physical reality. Hence, the ultimate measure of the success of a model is not necessarily its mathematical behaviour but the extent to which it is consistent with observations of that reality.
As a somewhat related aside, mathematicians have always been bothered by the cavalier way physicists argue, using series that don’t converge or integrals that are infinite. This doesn’t bother the physicists at all if they can by hook or crook pull out numbers which both match observations and fit into an overall conceptual understanding of what is going on.
Rasmus, you ask “So, my question is, do you think people get the message that I try to convey this way?”
I think that the answer is “No”, they don’t get the message. And I think that is why you are asking for help. You can see that they are not convinced.
I had thought that one reason they are not getting the message is because you are answering the wrong question, so the message you convey is irrelevant to their question. The question they are really asking is “How can we trust your predictions, when so often even the weather forecast is wrong?” Unfortunately that question is even more difficult to answer.
It is all very well saying that with more CO2 the world will get warmer, just as it does in summer. But you have been stabbed in the back by the oceanographers, who say Britain and Norway will cool as the world warms because the ocean currents will switch, just as they did in the Younger Dryas.
Moreover, those of us who were around during the seventies know that then there was a scare about an imminent ice age. See http://www.cpc.ncep.noaa.gov/products/outreach/proceedings/cdw29_proceedings/Reeves.pdf
Although Kukla and Matthews got it wrong, there is a widespread belief amongst younger scientists today that scientists are infallible. I would trust their views on scientific matters more than those of the Pope, but science is only true by definition. The utterances of scientists are no more true than those of the Pope until proved so, and perhaps not even after that, e.g. Einstein’s revision of Newton’s work. Young, and some not so young, scientists seem to think that if a paper is published in a peer-reviewed journal, then it becomes science and hence it is true. That is not the case, and now new evidence is showing that the breakout of Lake Agassiz did not coincide with the start of the Younger Dryas, and hence cannot be the cause, despite innumerable scientific papers claiming or assuming that it was.
So how can we trust the scientists to be correct about global warming? Well, although they may not know the cause of the Younger Dryas, they do know that it happened. They also know that the climate can change rapidly. That discovery was what scared Kukla and Matthews. In a cooling world they feared a rapid cooling. Now that we are in a warming world, the danger is of a rapid warming. That would disrupt agriculture almost as much as an ice age, and without food people cannot exist.
So the message should be, “We cannot predict the future climate. However, we do know that a rapid warming is likely and we do not know when it will happen. It could be soon, so we must take action now!”
I know that you will find it hard to admit that you are ignorant, and that your fellow scientists will find it even harder when they have to admit they were wrong. But the only way to convince people is to tell them the truth. Lying about the ice age scare only makes the public less trusting of scientists, when they know that it happened. Pretending that the models will give the correct results is equally false. We will only know that when the time comes, and they have proved that they give the right answer. (Since there is such a wide spread of results, surely one will be correct :-)
[Response: Thanks for your views. Rather than looking at the skill of the models (detailed predictions), I have looked at the principle of whether the climate is predictable (initially without involving the models). If you ask how skilful the models are, then one needs to define a metric for the skill, or ask what the model is used for. You’re right that in our case one important task is to predict the future climate. You can then look at hindcasts (forecasts made for the past, but performed as if they were made in the future, so that the observations against which you want to evaluate the model are not used in making the predictions). If you look at the global mean temperature, then you see a fairly good ‘re-construction’ of past trends, given the past emissions, volcanic activity and solar variability. I think it’s true that we cannot make a proper prediction for the future, as we don’t know what drivers such as economic activity (emissions) will be like in the future, nor do we know when there will be volcanoes, whether there will be further landscape changes, or if the solar activity will change. But we can still make useful scenarios to get a plausible idea about what to expect. One thing that has increased my confidence in the climate models is that features that are observed in the real world, such as El Niño (ENSO; some models do not give as good a representation as others though…), the Hadley Cell, the North Atlantic Oscillation, Kelvin waves, Rossby waves, and Tropical Instability Waves (TIWs), drop out of these models (if I remember correctly, the TIWs were first predicted in the models before they were seen in the real world). There are other aspects that are not so well predicted (e.g.
the south Asian Monsoon and the Madden-Julian Oscillation; other major uncertainties involve clouds, and this has received a great deal of attention in the IPCC TAR), but it is still impressive how the models represent these features. They arise purely from the physical equations, and are not prescribed… even if some mathematicians may think that physicists are cavalier about their application of equations. -rasmus]
Re Your response: Despite the involvement of the range of different scales, the models produce useful results (on which aviation depends). -rasmus
The weather models do not get the cloud base right, but because they can be tested each day, the meteorologists know that and can make the appropriate corrections. But no one knows what corrections need to be made for a climate model looking 50 years ahead, because we have never been there!
[Response: The corrections are for systematic biases, as the geography/topography representation in the models is fairly crude, and the model results represent a volume whereas observations tend to be more ‘point measurements’. I think it’s true that models do have biases in their representation of the vertical profiles in the boundary layer (I’m not really an expert on the topic of boundary layer representation in models…), and clouds tend to be crudely represented (as unresolved parameterisation schemes in climate models). Besides, a representation of the cloud base would be limited by how many vertical layers the model has and how dense these are. Another side of the story is that the cloud base height depends on the temperature profile and the humidity (convective systems), or the structure of weather fronts. Observations for cloud base on a routine basis presumably come from radiosondes, by the way? How big are the errors, you reckon? By the way, for the future we can use empirical downscaling to get an improved representation of local surface variables. These predictions can be – and are – tested in hindcast experiments on independent data. -rasmus]
Pat, I should have been more careful in my post 23. The link you gave in 14 includes this quote:
“18,000 scientists signed a petition saying the global warming model as presented in the popular media is just plain wrong,” said Pekarek.
That has to be referring to the 1998 petition drive he was involved in, as per my post 23. That effort was discredited years ago, not months ago.
Has he published anything in the peer-reviewed literature that supports his assertions? If not, I think the explanation to others is that “Pekarek is simply tossing out untested personal opinions that have not been evaluated by the scientific community.”
#26. Thomas, thank you for highlighting the fact that my attempt to lighten the mood with a little humor failed. All humor aside, the Lyman et al. results http://www.pmel.noaa.gov/~lyman/Pdf/heat_2006.pdf
in fact raise some profound questions about the predictive skill of the models, and thus relate directly to this subject.
In the absence of comparisons of model predictions with measured data you cannot prove that you can predict anything.
It is often stated that the GCMs solve the basic equations of mass, momentum and energy conservation. So long as the results cannot be shown to be independent of the discrete representations of the continuous equations, there are errors present in the numbers. All textbooks concerned with numerical analysis/methods will state this fact. The numbers do not represent solutions to the basic equations. Additionally, the predictive power of the models does not mainly reside in the basic equations of mass, momentum and energy conservation. The algebraic parameterizations carry the heavy work of getting the physical phenomena/processes nearly correct.
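The standard practical answer to this concern is a grid-refinement test: rerun the discretisation at finer and finer resolution and watch how fast the answers stop changing. Here is a toy sketch in Python, using a simple ODE rather than anything resembling a climate model, of what such a self-consistency check looks like:

```python
import math

def euler(dt, t_end=1.0):
    # Forward Euler for dy/dt = -y, y(0) = 1
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y)
    return y

exact = math.exp(-1.0)
steps = (0.1, 0.05, 0.025)
errors = [abs(euler(dt) - exact) for dt in steps]
for dt, err in zip(steps, errors):
    print(f"dt={dt:<6} error={err:.5f}")
# The error roughly halves each time dt is halved (first-order
# convergence), and the change between successive refinements is
# itself an estimate of the discretisation error. This kind of
# check is evidence, not a proof, that the numbers approach a
# solution of the continuous equation.
```

In real modelling the exact solution is unknown, so one watches the differences between successive refinements instead, which is exactly the circular-but-workable monitoring described a few comments up.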
RE: #39 and also the original question. I think one of the most basic problems the public has in understanding climate prediction is the question of whether global warming will cause the collapse of the Atlantic circulation and hence a new ice age, or whether temperatures will on average get warmer and warmer compared to what we have experienced. That is about as unpredictable as it can be for them, and I think it feeds the feeling that there is nothing that can be done about it, that it’s all “wild” speculation.
Re #44 The only way we are going to get the public, and through them the politicians, to take action is to scare them to death. It was fear that got the millennium bug fixed, and it was fear of the ozone hole that ended CFCs. It is fear that has meant the end of nuclear power, and fear that prevented a nuclear war. It is fear of terrorism that allowed GWB to take the US to war in Iraq.
The dangers from global warming are far worse than those from nuclear power stations or the ozone hole, but the scientists are not explaining that! How many people have died as a result of the ozone hole, or nuclear accidents, or from terrorist activity? The heat deaths in France during summer 2003 far outnumbered anything from even the worst terrorist attack. As Gavin wrote, scientists are treating global warming as ‘It’s serious (and interesting) but don’t panic’. Nothing will happen until they say ‘It IS serious and I AM panicking.’
They don’t have to make up ‘scary stories’. Just point to the hurricanes, droughts, floods, and fires around the world, and explain that they will only get worse, and lead to famine. There is no easy fix if you don’t have enough food. And they must point out that it is only 10,000 years since temperatures climbed by 20C in Greenland over a period of less than 40 years. If only they would have the courage to tell the truth!
#28. In a way, global mean temperature, at least in my case for the Northern Hemisphere, is a benchmark against which everything else transpires. It would in fact be very good to have one predicted for each coming month, aside from the usual GCM probability-inspired seasonal forecast map. I’d like to see if the models can in fact predict this simplest, but most important, parameter. If they do, they are on the right track; if they don’t, it would be quite revealing. I remember with great disgust, in October 2005, last fall’s forecast for a bitter cold coming winter. I was not sure if the radio announcer was on the same planet.
I’ll agree with L. Evans (#1); one of the critical aspects determining whether or not you’ll succeed in getting your point across, is whether or not the listener is open-minded.
The analogy I often use is the height difference between men and women. I point out that there’s no way to predict with much confidence who will be taller, the next man to enter the room or the next woman. But we can predict with *extremely* high confidence that the average height of men at the party is greater than the average height of women at the party.
When I tell this to the open-minded, they get that “oh – I get it!” look on their faces. When I tell this to the closed-minded they invent a reason to invalidate my analogy (sometimes very creatively, sometimes ridiculously silly).
I can usually tell, even before I offer the analogy, whether or not the listener is receptive. But even for the non-receptive, I offer the analogy anyway. It may not influence their opinion, but there are always other interested listeners, and they often seek me out later to find out more.
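For the listeners who want numbers, the height analogy can even be turned into a little simulation. This is an illustrative sketch with made-up population parameters (means of 178 cm and 165 cm, a common spread of 7 cm), not real survey data:

```python
import random

random.seed(0)

# Hypothetical population parameters (cm): men taller on average,
# but the two distributions overlap substantially.
MEN_MEAN, WOMEN_MEAN, SD = 178.0, 165.0, 7.0

def height(mean):
    return random.gauss(mean, SD)

# Individual prediction: guess "the man is taller" for random pairs.
trials = 10_000
wrong = sum(height(MEN_MEAN) <= height(WOMEN_MEAN) for _ in range(trials))
print(f"individual guess wrong {100 * wrong / trials:.1f}% of the time")

# Average prediction: compare group means at many 20-person parties.
parties = 1_000
avg_wrong = 0
for _ in range(parties):
    men = [height(MEN_MEAN) for _ in range(20)]
    women = [height(WOMEN_MEAN) for _ in range(20)]
    avg_wrong += sum(men) / 20 <= sum(women) / 20
print(f"average-height guess wrong {100 * avg_wrong / parties:.2f}% of the time")
```

With these assumed parameters the individual guess fails a noticeable fraction of the time, while the comparison of averages is almost never wrong, which is the whole point of the analogy.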
Re: #37 -“The case of CFCs and ozone is also illuminating.”
Most certainly. If I gave the impression of a sort of fatalism permit me to retract. In the case of stratospheric ozone depletion there was an obvious global emergency in progress, and a ‘fix’ that was not so disruptive as to cause exceptional distress, or for that matter a decline in profits.
There is an analogy here to the problem of GW greenhouse gases, but not an identity in terms of social scale, tempo or predictability.
As a Catholic I don’t hold that economic determinism is the main moving force in human societies.
Nevertheless, if you want change you must find the appropriate operational handle.
A good start for a discussion about AGW is by using examples and metaphorism.
One example can be found in James Hansen’s *Defusing the Global Warming Time Bomb* (published 2004, James Hansen Archive):
HUMAN-MADE climate forcings, mainly greenhouse gases, heat the earth’s surface at a rate of about two watts per square meter – the equivalent of two tiny one-watt bulbs burning over every square meter of the planet.
The full effect of the warming is slowed by the ocean, because it can absorb so much heat. The ocean’s surface begins to warm, but before it can heat up much, the surface water is mixed down and replaced by colder water from below. Scientists now think it takes about a century for the ocean to approach its new temperature.
If we are conducting an “experiment in increasing concentrations of greenhouse gasses”, it is also pretty clear that it is not a controlled experiment, and that the 0.6 degrees C warming over the last century has occurred in the context of one of the highest levels of solar activity in the last 8000 years.
Yes, climate has been relatively stable and predictable for a couple of centuries, and even the changes over the last few millennia, while locally significant, have not changed the overall pattern. Absent mode changes or tipping points, even the most sensitive models do not predict changes in climate patterns over the next century.
However, just because climate has order and modes, and thus is predictable, does not mean that we can predict it with current model technology.
Since the recent warming is apparently due to net globally and annually averaged heat fluxes of under 1 W/m^2, understanding and attribution of the relative causes of that warming, given the number of inputs that are changing, will require model accuracies of better than 0.1 W/m^2. Projections with any degree of precision for multiple decades may require sensitivities to the various forcings another order of magnitude more accurate.
We haven’t had observations accurate enough to validate models to such accuracy for more than a couple decades, if that.
The IPCC diagnostic subprojects have shown the models to have errors of multiple watts per meter squared (e.g. Roesch 2006), and to the extent that they manage to balance energy budgets to observations, there must be compensating errors of a similar magnitude at work in the adjustable parameters.
Yes, climate is predictable, but the IPCC diagnostic subprojects show that the predictive capability is not here yet, although the optimistic among us hope it is just a few years away.
All of the examples in this post would work well, but as usual it all depends on the audience.
What people want from weather forecasts are temperatures, precipitation, and wind speeds – these are calculated based on the location of high and low pressure systems, jet stream positions, and current land and sea surface temps. This is rather like calculating a trajectory; a decent analogy is the path a ball takes as it is dropped into a spinning roulette wheel (people are always interested in gambling). You might guess the first bounce position, but after that ‘sensitive dependence on initial conditions’ takes over; the same is true for weather systems. There is some interesting history here as well: in the 1950s, there were those who dreamed of controlling the weather on large scales (yes, military planners who wanted targeted blizzards, etc.), but the work of people like Ed Lorenz put an end to that. How can you control what you can’t even predict? (Cocktail parties must always include tasty tidbits.)
Climate study involves oceanic circulation and ice sheet/glacier dynamics, and the timescales involved are fairly long (not nearly as long as geologic-tectonic process timescales, however). The turbulent mixing of the atmosphere occurs on much faster timescales, which is really what limits the weather forecasts to a week or so. Weather models rely on a constant stream of data from satellites and weather stations to make their predictions. Even if ‘perfect’ knowledge of the current state of the atmosphere down to every cubic millimeter was available, weather prediction would fail in something like two weeks, according to the experts – there is in the study of chaotic systems a relationship between timescales, probabilities and predictabilities – beyond that, you’ll have to ask a mathematician for the details.
Climate is the averaged weather over long timescales, and again the roulette wheel analogy is useful. The house always wins in the end, because it gets one more slot than the players do; it relies on averages rather than specific events, and the same is true for climate models. So if we take the pre-industrial greenhouse effect as the level playing field, and anthropogenic forcing as the sum effects of deforestation and fossil fuel combustion, then we are playing with loaded dice. Keep in mind that averages can hide a lot of variability – there is always the risk that a random series of big wins will wipe out the house (Katrina, for example).
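The roulette point, that individual spins are unpredictable while the house average is reliable, is the same mathematical fact that separates weather from climate. A standard textbook toy for this is the chaotic logistic map; the sketch below is an analogy only, not a climate model:

```python
# Logistic map x -> 4x(1-x): a textbook chaotic system. Two almost
# identical starting states ("weather") diverge quickly, yet the
# long-run average ("climate") is stable and insensitive to the start.
def orbit(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.400000, 50)
b = orbit(0.400001, 50)   # a one-in-a-million difference at the start
print(f"after 50 steps: {a[-1]:.3f} vs {b[-1]:.3f}")

# But the time average settles near 0.5 regardless of where you start:
long_a = orbit(0.400000, 100_000)
long_b = orbit(0.137000, 100_000)
print(f"long-run means: {sum(long_a)/len(long_a):.3f}, "
      f"{sum(long_b)/len(long_b):.3f}")
```

The tiny initial difference is amplified until the two trajectories bear no resemblance to each other (the forecasting problem), while the long-run statistics are robust (the climate problem).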
“The outgoing IR radiation is strongly absorbed by clouds and by trace amounts of certain gaseous components of the atmosphere, notably water vapor, carbon dioxide, and methane. Those constituents reradiate both upward and downward. Remarkably, the surface receives on average more radiation from the atmosphere and clouds than direct radiation from the Sun. (my italics) The warming of the surface by the back radiation from the atmosphere is the greenhouse effect.”
Most people will be surprised by the italicized statement. Since the ‘atmospheric IR window’ is not saturated with respect to the greenhouse gases, adding more CO2 means more back radiation to the surface. I don’t know if trying to wow the audience with jargon is a good idea, but this does lead into quantum physics and ‘perturbations of the radiative-convective equilibrium state’.
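The italicized statement falls out of even the crudest multi-layer ‘gray’ atmosphere. The sketch below assumes each layer absorbs all outgoing IR and is transparent to sunlight, with a globally averaged absorbed solar flux of 240 W/m^2; it is a blackboard exercise, not real radiative transfer:

```python
# Toy n-layer "gray" atmosphere. In equilibrium, with n fully
# IR-absorbing layers that are transparent to sunlight, the surface
# must emit (n+1)*S while the lowest layer radiates n*S back down,
# where S is the absorbed solar flux. Illustrative only.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 240.0         # globally averaged absorbed solar flux, W/m^2

results = {}
for n in (0, 1, 2):
    surface_flux = (n + 1) * S      # surface must emit this much
    back_radiation = n * S          # lowest layer radiates this down
    t_surface = (surface_flux / SIGMA) ** 0.25
    results[n] = (back_radiation, t_surface)
    print(f"{n} layers: back radiation {back_radiation:5.0f} W/m^2, "
          f"surface temperature {t_surface:5.1f} K")
```

With zero layers you get the familiar ~255 K airless-Earth temperature; with two or more absorbing layers the back radiation already exceeds the direct solar input, which is the ‘remarkable’ fact quoted above.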
Climate models, just like weather models, are influenced by field data on ocean circulation and ice sheet dynamics – but the longer timescales involved mean a longer range of accurate predictions. A hundred-year climate forecast then seems reasonable; a million-year climate forecast would be viewed as nonsense (but not a million year plate tectonic forecast!). At this point in the party (hopefully you aren’t depressing everyone) one could bring up the melting ice sheets and tropical glaciers, the warming oceans, the melting permafrost, the destructive hurricanes, the ecological disruption, the record heat waves, the drowning islands and the waves of environmental refugees – essentially we’ve got a looming catastrophe of truly epic proportions.
However, the ‘explosive catastrophe’ portrayed in “Day After Tomorrow” really seems like the wrong approach; those who concern themselves with ‘image and message’ seem to think that people only respond to shock treatment. Consider Ebola virus (a terrifying ‘flesh-melting’ virus) vs. HIV – the slow virus is the more dangerous one, since it ‘sneaks along under the radar’; thus AIDS kills far more people and has had greater economic effects than all the ‘Hot Zone’ viruses. Likewise, global warming will do more damage than all the oil spills and industrial accidents put together. The real test of the adaptive value of human intelligence is whether or not we can respond to long-term problems that play out over many generations.
A common question is “Well, then why can’t scientists all get together and give us a straight answer?” – but that is exactly why the IPCC was formed – to gather data and opinions from a wide variety of climate scientists who work, as scientists do, on narrow problems – from those who collect ice cores all over the world to those who couple atmospheric models to ocean circulation models to those who study tropical disease epidemiology. However, if you follow the news you learn that the fossil fuel industries have funded a massive public relations campaign to confuse the issue and prevent any regulations that would limit CO2 emissions from being put in place – and part of that campaign has been a series of very public attacks on the IPCC reports.
You can point out that those vested interests could reinvest in renewables (and reforestation) on a very large scale – and then they would no longer have to foot the bill for the public relations campaign (they won’t do this without government involvement). This often leads into a discussion of whether renewables can actually do the job or not – which is another topic, but by this point, perhaps you’ve managed to convince the audience of the reality of the problem. Renewables are capable of doing the job, and you have to replace a global fossil fuel energy infrastructure that is valued at something like $10 trillion – and the technology does exist to do this. There are many corollary benefits – less pollution, lots of jobs, and fewer conflicts over scarce energy resources. If you are going to discuss the problem, it’s always good to end on a positive note – unless you want the party to dissolve in a fit of depression and despair.
Re #52 Pat, Thanks for that link. I think the answer that Rasmus needs is contained there. It ends
Because of the accumulating evidence, Dr. Wilson thinks we need to act now to slow or stop global warming.
– I don’t think you can wait until the ambulance comes around the corner until you take some corrective action, and that’s why I’m in favor of developing a balanced plan to do that now instead of waiting until it is too late.
Dr. Wilson feels the best way for people to take corrective action is as simple as trading a gas guzzling vehicle in for a smaller one.
He did that himself recently.
Rasmus, if you tell your party guests that you are an expert and that you have bought a smaller car, then they will believe you!
The 3 month lead forecasts done by the US NWS are an example where “weather” forecasting is really at the lower bounds (temporally) of climate forecasting. If I were a young and ambitious PhD candidate, I might well consider a study looking at the success record of such forecasts. I might then look at some of the errors and determine their root causes, and finally propose improvements to the GCMs to remedy them.
For perspective, my research is in carbon sequestration. So in addition to the problem you have I then have the problem of answering ‘what can be done’, ‘why do you think this is possible’, and (because I live in NYC) ‘aren’t hybrids the complete solution’. And worst of all this is inevitably at parties where I don’t really feel like explaining what carbon sequestration is to an audience that probably doesn’t care about the details nor feel like giving a mini-lecture. Anyone care to venture a stock paragraph covering climate change, carbon neutrality, and carbon sequestration?
My last thought is unfortunately a downer. How come it is perfectly acceptable to claim to Rasmus’ face that his job is ultimately a complete and total lie: ‘Oh, you predict climate? That’s not possible.’ But it would be considered a major faux pas to go up to a stock broker, lawyer, or anyone really and tell them you don’t believe their job is possible. I don’t imagine any other scientist would be so approached, nor anyone in any other job where presumably the person in the field has gone through the trouble of studying the issue in depth. Not that I ever raise such a thing in public, but what’s the take-home message for a young scientist on that particular issue?
Pat, I really do not understand the disconnect between NOAA and NWS, since I thought the latter was part of the former. I cannot imagine who Mike Stewart was speaking for, but he seems not to have read this page from NOAA, updated on January 11. Hmmm. A careful rereading of the page reveals that it never actually says that climate change is due to human activities, though one could easily make that deduction from the information presented. Puzzling…
One of them is that the climate changes are too slow to register with most people, even if the change is easily perceived by a focused expert. Something dramatic must occur – a new world record is perhaps a minimum requirement. In fact the “temperature record” claims have generated a lot of attention already.
Another point, in my opinion, is that the “global average temperature” is an abstract concept. It does not relate directly to anyone’s daily concerns. One degree this way or that, why should they care? Developing more meaningful descriptions of the impacts is a priority (and a difficulty, too). Precipitation changes are already much more significant; a great many people can see their impact on basic agricultural production, or the tap water supply.
1990s studies by scientists in NOAA’s National Climatic Data Center showed that rainfall intensity increases with climate warming. Based on my studies and career (1976-2005) in hydrologic modeling and river forecasting, rainfall intensities and flooding have been increasing in the Midwest. I think the day to day weather forecasts are already there, unless I don’t understand your terminology for ‘lower bounds of climate forecasting’.
Lyman observes a loss of 3.2 x 10^22 Joules of heat from upper ocean between 2003-2005.
Hank Roberts asks in 29: >loss of heat – loss of ice? So here’s a back of the envelope calculation:
How much heat does it take to melt one cubic kilometer of ice?
Latent heat to melt ice: 334,000 J/kg x 1,000 kg/cubic meter x 10^9 cubic meters/cubic km
= 3.34 x 10^17 joules to melt one cubic kilometer of ice.
And then how much ice is melting? Jianli Chen in Science thinks Greenland is losing 239 cubic km per year, and Antarctica is losing 150 cubic km per year according to Velicogna and Wahr. If total ice loss in 2 years was a round 1,000 cubic km, the total latent heat is: 3.34 x 10^20 joules.
This is about 1% of the heat loss that Lyman found. Then there is the heat to warm all that ice, first to melting point and then to the average temperature of the ocean.
How’s my arithmetic so far? And what is the temperature of the oceans that the ice melt mixes with?
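The arithmetic can be redone in a few lines. The sketch below uses the standard latent heat of fusion (about 3.34e5 J/kg), ice’s actual density (about 917 kg/m^3 rather than 1,000), and the reported loss rates instead of a round 1,000 cubic km, so the fraction comes out a bit lower but the conclusion is the same:

```python
# Back-of-envelope check: how much of Lyman's upper-ocean heat loss
# could be accounted for by the latent heat of melting ice?
LATENT_HEAT = 3.34e5          # J/kg, latent heat of fusion of ice
ICE_DENSITY = 917.0           # kg/m^3 (the estimate above used 1,000)
M3_PER_KM3 = 1e9

joules_per_km3 = LATENT_HEAT * ICE_DENSITY * M3_PER_KM3
ice_loss_km3 = 239 + 150      # Greenland + Antarctica, km^3 per year
total = joules_per_km3 * ice_loss_km3 * 2   # two years of melting

lyman = 3.2e22                # J lost from upper ocean, 2003-2005
print(f"latent heat of melt over two years: {total:.2e} J")
print(f"fraction of Lyman's figure: {100 * total / lyman:.2f}%")
```

Either way, the latent heat of the observed ice melt is on the order of one percent of Lyman’s figure, so melting ice alone cannot explain the missing heat.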
You can then look at hindcasts (forecasts made for the past, but performed as if they were made in the future, so that the observations against which you want to evaluate the model are not used in making the predictions).
The major problem with hindcasts is that people just select the ones that work. It is human nature. That means that hindcasts are biased, in a major way.
i.e.:
You build a model.
You configure it so that it fits the test set. (1)
You then test it as a hindcast against data after the test set.
You then throw away the models that don’t work in a hindcast. (2)
(1) For a financial model, this is fine. For a climate model, it’s wrong. The model should be based on physical constants first, then on observation if you can’t get that to work. Empirical models, in my opinion, are suspect.
(2) Throwing away the hindcasts introduces a bias and effectively makes the test data used in the hindcast part of the set of data used to fit the model.
[Response: That’s not how it works. The tunings are done on present day climatology (mean observations over the satellite period for instance). The tests are done on the transients (volcanic forcing, 20th Century etc.). The physics is not adjusted to make the transients fit better. – gavin]
Jonathan, your comment is thoughtful but misses the point. No one is claiming that the modelers’ job is a “lie” or that modelers do not have great value in this current revolution taking place in understanding the earth/atmosphere system. Many other science-educated people are merely issuing a caution to the most eager in the community not to promise too much, lest the entire scientific community lose credibility when the predictions fail to come true. This was articulated well recently by the Roger Pielkes. How can this statement possibly be controversial? It is indeed an extreme viewpoint to hold to the notion that we have all the answers to this amazingly complex system, when the science is only about 30 years old. If we cannot reach common ground here, carrying any discussion forward will be difficult. Regardless of the theoretical controversy about whether or not the climate system is chaotic by the strict technical definition, the point should be that climate is not now predictable across yearly, much less multi-decadal, timescales, as graded by the measuring sticks agreed upon by the climate science community at large. To perpetuate a belief that it indeed is (contrary to the evidence) falls more in the realm of faith than science.
“Masao Fukasawa of the Japan Marine Science and Technology Center in Yokosuka, Japan, and five other researchers discovered the temperature change by going to sea on three research vessels and measuring deep-water temperatures across the North Pacific. Then they compared those temperature measurements to measurements made by researchers in 1985.
“The warming isn’t caused by an impending El Niño, one of those quasi-cyclical Pacific warming episodes that turn California either into a floodplain or a drought land, experts say. That’s because El Niños concentrate farther south and in shallower water.
“… the Nature article reports that by some unknown means and much faster than assumed, the solar-heated water is finding its way to the ocean floor in decades, not centuries. In fact, the researchers estimate it reached the bottom within 50 years.
“The Fukasawa team’s findings show a warming in the deep North Pacific over a “shorter time scale and larger spatial scale than (has) ever been believed,” Fukasawa said in an e-mail. “As (far) as I know, our result is the first which shows such a large-scale temperature change in the global thermohaline circulation,” that is, in global heat and salt circulation via ocean currents.
Joseph Reid, a veteran oceanographer at Scripps Institution of Oceanography in La Jolla (San Diego County), not associated with the research, said the undersea heating “is surprisingly large over that period of time, some 20 years. It’s the first convincing evidence of warming at that depth. I can say I was surprised.”
“It’s much too soon to blame global warming for the deep-sea warming, Reid and other experts caution. “To go from this (observation) to say, ‘This is global warming,’ is just a guess,” Reid said. In any case, “this (Fukasawa result) is something that should be watched very carefully.”
“At the very least, the Fukasawa finding indicates that computer models of ocean circulation — which are vital for monitoring climate change — are badly in need of a tuneup. The discovery was not explicitly predicted by any known computer models of ocean circulation. “
62. reads: … Many other science-educated people are merely issuing a caution to the most eager in the community, not to promise too much, lest the entire scientific community lose credibility when the predictions fail to come true. … How can this statement possibly be controversial?
Bryan, there is nothing controversial about the fact that many of the world’s best scientists have observed that rapid climate change has been going on already. That needs to be acknowledged by everyone.
I’ve brought this up before, but there are also examples from social sciences. Sociologist Durkheim noticed that suicide rates stay fairly stable year to year, which led to his dictum “social facts (not psychological facts) cause social facts.”
So while each individual suicide may have its unique causes & have to do with psychological aspects & be quite unpredictable at the individual level, the fairly constant & predictable (year-to-year) suicide rate is affected more by social aspects (I won’t bore you with Durkheim’s theory of anomie).
It’s a matter of the level of analysis — climate & weather are 2 different levels.
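A toy simulation makes the level-of-analysis point concrete. All numbers here are made up for illustration (this is not real data, Durkheim’s or anyone else’s):

```python
import random

random.seed(0)

N = 100_000   # population size (illustrative)
p = 1e-3      # per-person annual probability of the event (illustrative)

# Each individual outcome is unpredictable, but the aggregate yearly
# rate barely moves: its relative spread is ~1/sqrt(N*p), here ~10%.
rates = []
for year in range(5):
    events = sum(1 for _ in range(N) if random.random() < p)
    rates.append(events / N)

print(rates)  # five yearly rates, each close to 1e-3
```

The same separation (unpredictable members, stable aggregate) is what the weather/climate distinction rests on.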
Comment by Lynn Vincentnathan — 14 Aug 2006 @ 5:17 PM
Re: #59. Good idea, but unfortunately I do not believe it is correct (Levitus et al., 2000). The change in ocean heat storage (OHCA) is equal to the non-equilibrium forcing (Pielke, 2003). Put more simply, the ocean heat change is approximately equal to the net radiative imbalance at the top of the atmosphere. What you are describing is a transfer of heat from one component of the system to the other, not a loss of heat to space.
A viable possibility, however, is that the heat is still in the system, like the deep ocean. Gavin Schmidt really knocked that idea on the head, however, in his critique of the maligned Bill Gray. Your viable options to explain this statistically significant global cooling are thus: 1) the heat has been transferred to the deep ocean like some kind of solid nuclear waste, 2) aerosols negatively influenced the amount of shortwave reaching the ocean surface, 3) a negative flux in shortwave reaching the top of the atmosphere, or 4) a strongly negative water vapor feedback.
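To put rough numbers on the statement that ocean heat change tracks the top-of-atmosphere imbalance, here is a back-of-envelope sketch. All values are round, assumed figures for scale only, not results from the papers cited:

```python
# Assumed round numbers, for scale only
imbalance_wm2 = 0.85     # net TOA radiative imbalance, W/m^2 (assumed)
earth_area_m2 = 5.1e14   # Earth's surface area
ocean_area_m2 = 3.6e14   # ocean surface area
secs_per_year = 3.15e7

# Annual energy gain implied by the imbalance
joules_per_year = imbalance_wm2 * earth_area_m2 * secs_per_year

# If it all went into the upper 700 m of ocean:
mass_kg = ocean_area_m2 * 700 * 1025    # seawater density ~1025 kg/m^3
cp = 3990                               # specific heat, J/(kg K)
dT_per_year = joules_per_year / (mass_kg * cp)

print(f"{joules_per_year:.1e} J/yr, ~{dT_per_year:.3f} K/yr in the upper 700 m")
```

A warming of order 0.01 K/yr in the upper ocean is small enough that redistributions between layers can mask it for a few years, which is exactly why a transfer of heat within the system is not the same thing as a loss of heat to space.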
A forecast has a likelihood; the likelihood of weather forecasting limits itself to ten days. Combining all the likelihoods of the input parameters and the uncertainties of the parametrisations for climate makes it impossible to make a forecast with a 70% likelihood for the next 50 years.
Let me list the uncertainties in climate for 50 years:
Solar: some say a new Maunder Minimum is imminent, others extrapolate the current increase. That alone accounts for an uncertainty of at least a full degree (+0.5 or -0.5) for the next 50 years.
Volcanic: every Pinatubo-like eruption causes a drop of 0.2 degrees over two years. We cannot predict plinian eruptions.
El Niño: we cannot predict El Niño at long range.
These are only the uncertainties in natural factors.
Nobody can predict economic growth; that is why there is a huge bandwidth of SRES scenarios, all optimistic. However, China is currently having a major boom; this will without doubt lead to a worldwide recession when it hits its first growth limitations, unaccounted for in all SRES scenarios.
The richer the country, the lower the emission rate; the younger the economy, the faster the technological adaptations (look at Japan, Korea and Singapore). So we cannot predict CO2 emissions for the next 50 years.
Emitted CO2 needs to remain in the atmosphere in order to exert a climate forcing. Sink modeling assumes either a sink saturation (Joos) or an increasing sink (Dietze).
Climate sensitivity uncertainties:
The range of climate sensitivity for CO2 is between 0.5 degrees (negative feedback) and 3 degrees (positive feedback) for CO2 doubling.
Which for the next 50 years means that the uncertainties in natural factors still outweigh the anthropogenic effect. Therefore we cannot predict climate for the next 50 years.
[Response: That is exactly why we (mostly) talk about projections, storylines and scenarios. -gavin]
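The uncertainty-stacking argument above can be turned into a toy Monte Carlo. Every range below is a placeholder taken loosely from the list, not a fitted estimate; what it illustrates is why one ends up with a spread of projections conditioned on scenarios rather than a single forecast:

```python
import random

random.seed(1)

# Toy Monte Carlo over the uncertainty sources listed above.
# All ranges are illustrative placeholders, not real forcing estimates.
def one_realization():
    solar = random.uniform(-0.5, 0.5)         # deg C over 50 yr
    volcanic = random.uniform(-0.2, 0.0)      # unpredictable eruptions
    sensitivity = random.uniform(0.5, 3.0)    # deg C per CO2 doubling
    doublings = random.uniform(0.3, 0.6)      # emission-scenario spread
    return solar + volcanic + sensitivity * doublings

runs = sorted(one_realization() for _ in range(10_000))
lo, hi = runs[500], runs[-500]                # central ~90% interval
print(f"central 90% of 50-yr outcomes: {lo:+.2f} to {hi:+.2f} deg C")
```

Each realization conditions on one draw of the unknowns; this is the sense in which “projections, storylines and scenarios” differ from a single prediction.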
Lynn, to pick a less depressing social analogy, the people who believe in AGW, dire consequences and the need to do something represent local warming. The ones who don’t know or care are steady climate, and the skeptics and denialists represent cooling. The global average temperature or global AGW sentiment can be added up by adding up all the people. But there are social forcings and feedbacks occurring. The broadest forcings are media stories, Al Gore’s movie, etc, which I would liken to CO2. The feedback comes from political changes brought about by general awareness of warming through more media buzz, discussion, education, etc. Denialists with a media outlet provide the opposite feedback.
I would maintain, as I have all along, that modeling such a complex system would require modeling prototypical people, just like prototypical weather, the lower layer. Without that modeling you really will have no accurate notion of how one person’s opinion, or the local weather, will change based on overall awareness or climate. But as some here will argue incessantly, people can just be parameterized and averaged for good enough model results.
RE#57 “‘Global average temperature’is an abstract concept. It does not relate directly to anyone’s daily concerns.”
A good analogy that people do readily understand is body temperature: It varies greatly depending on where, and how, you measure it. But, measured consistently and in the right location, it is an index of mean body core temperature and conveys meaningful information about your state of health. As I’ve noted on other threads, it takes only a very small change in body temperature to make us sick: A one degree C rise is classified as a fever, and is a sign that something is wrong. A three degree C rise is, for an adult, a severe fever that could put the patient in the hospital. And a 1-3 degree C temperature rise is in the ballpark for predicted temperature increases due to AGW; I like to point this out when skeptics scoff at such a small change being important to living organisms.
For an analogy that most people will “get” at once, use Las Vegas gambling. Even though everyone understands in advance how to play the games (the materials, physics, rules, odds, etc) people play exactly because of chance, and not knowing the outcome. This is like weather; even understanding every single issue, there is chance in the outcome. Climate has an element of chance, but is less like gaming and more like the house take. The house will *always* be profitable. We know this. The exact profit and how it will be made is unclear, but it will happen with utter certainty. That’s why there are gaming houses at all. We play the odds while they play the certainty. These co-exist within the same system. If they could not co-exist the game would be over. Weather uncertainty and climate prediction co-exist in much the same way, as they are about very different things.
It’s a BAD analogy. I didn’t say it was good, I said people would get it. Sometimes you gotta go with what you got.
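For what it’s worth, the house-take point can be simulated; an even-money bet at American-roulette odds is assumed here purely for illustration:

```python
import random

random.seed(2)

# An individual bet is pure chance ("weather"); the house take over many
# bets is near-certain ("climate"). Even-money bet, American roulette.
P_WIN = 18 / 38   # player's win probability (two green zeros)

def player_net(n_bets):
    return sum(1 if random.random() < P_WIN else -1 for _ in range(n_bets))

print(player_net(10))              # one short session: could be anything
n = 1_000_000
edge = -player_net(n) / n          # house's long-run take per dollar bet
print(f"house edge ~ {edge:.3f}")  # converges toward 2/38 ~ 0.053
```

The short session is unpredictable; the long-run edge is not. The two co-exist in the same system, which is the whole point of the analogy.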
Given the “we can’t predict weather past 5 days and you want to tell us what’ll happen in 2100?” argument, I usually say something like, “Don’t confuse weather and climate. I don’t know what the temperature will be in Cairo, Egypt, tomorrow, but I’m pretty darn sure it will be higher than the temperature in Stockholm, Sweden.”
Note that using the latent heat of melting as the basis for predicting melting glaciers misses the dynamics of ice sheets (they slide, in other words). If the ice ends up floating in the ocean, the heat balance is still there – but what if you have patches of dark ground where there was once white ice? Then you have to add in an albedo effect on the total heat budget. This is different from the Heinrich ice-rafting events during glacial maximums; I don’t know their causes ( http://en.wikipedia.org/wiki/Heinrich_event ).
We may be entering a climate regime that hasn’t been around for millions of years – see the Hank Roberts post above on the warming ocean depths – hard data to get as you can’t just look at sea surface temperatures. The reports of ‘dead zones’ (better known as oxygen minimum zones; anaerobic bacteria like these areas, not much else) off the Oregon coast and in other areas make one wonder if global ocean stratification will set in, resulting in a new era of oceanic deep-water hydrocarbon formation like the one some 90 million years ago that produced the Mideast oilfields. This kind of shutdown is quite unlike the proposed ‘cooling effect’ that would presumably weaken the Gulf Stream. (Please note that it would take many more millions of years of tectonics and heat/pressure to actually form deposits of oil).
RE:#7 Not so fast Pat. If you had looked at the trend from 2000-2005 in Alaska, you might have predicted a continuation of the warming in 2006. Alaska has cooled significantly in 2006 however, and the first half of 2006 was one of the coolest for the same period in the last 20 years. The climatology found at the Alaska Climate Research Center will support me on this. I also have noted that summer sea ice in 2006 is back near the 2000 level after a huge reduction in 2003. It is still diminished, but looks to be on the rebound. No predictions here, but I wouldn’t bet the farm on a warm 2006 Minnesota winter.
Re #62 and “It is indeed an extreme viewpoint to hold to the notion that we have all the answers to this amazingly complex system, when the science is only about 30 years old.”
Fourier discussed the greenhouse effect in 1824, Tyndall showed that carbon dioxide absorbed infrared light in 1859, and Arrhenius made the first quantitative estimate of global warming under doubled CO2 conditions in 1896. How do you get “only about 30 years old” from that? The year at present is 2006.
Re: Barton, thank you for challenging me on this. There has been a revolution in climate science in the last 30 years. I give a good deal of credit to the modelers, because they have challenged many of the antiquated ideas. Let’s give credit where it is due. So I think it is fair to say that the modern era of climate science really began in earnest around 30 years ago.
With these new tools (and better data collection), there have come a new set of paradigms, questions, and hypotheses (technical and ethical). Like scientific history has shown, often new insight raises more questions, and these new questions are different from the old ones. In my opinion, this is about where we are in this process.
Although I think I have a pretty good understanding on how NOAA and NWS operated until 2005, I have no connections to what’s going on at NWS now (not since I was handed a July 15, 2005 Memorandum to Remove me from government service). NWS managers and staff were not shy about their skepticism of global warming, not even when Gore was V.P. I think they were insulted by Al Gore’s comments about global warming in the mid 1990s, especially in that Gore was not even a meteorologist! I had a similar problem in talking about global warming, being a hydrologist without a meteorology degree. Things got worse for me after Oct 2003, after I did a press release on climate change in the Upper Midwest. I was told by my supervisor that John Mahoney, NOAA Administrator and head of the Climate Change Science Program (CCSP) for the administration wanted me fired because of the press release. I suppose that’s the real reason it happened. BTW, it used to be that federal agency heads, like the director of NWS, were allowed to run their own agencies. I think that all changed about 4 years ago. Now NWS is called NOAA’s NWS by the Vice Admiral and others in management.
I can’t explain what the disconnect might be now between NOAA and NWS because I’m no longer there and have no connections (no going away party for me). Perhaps someone more knowledgeable about the latest on NOAA/NWS can fill you in here.
RE: Ike: #75 Oregon coast and in other areas make one wonder if global ocean stratification will set in, resulting in a new era of oceanic deep-water hydrocarbon formation like the one some 90 million years ago that produced the Mideast oilfields.
Ike, I am glad you are finally talking about hydrocarbons. FYI, the Middle East hydrocarbon source rocks were deposited in the Tethys Sea. This was a shallow inland sea, that generated restricted anoxic marine source rocks. The massive carbonate reservoirs of the Middle East oilfields are mainly shallow water karsted and fractured dolomites. You are so far off base on this, that it is really hard to even begin to take what you said seriously. Wrong tectonic setting, wrong source rock environment, wrong reservoirs, wrong structural setting. Try again! Global ocean stratification forming oilfields? I have never heard of this in my career as a petroleum geologist. There is way too much junk science on this website.
I could only get the melting of the ice to account for 1% or so of Lyman’s lost heat (in #59), but I would call it a transfer of heat from the atmosphere and hydrosphere to the cryosphere, and as you point out it is not radiated to space. And the other 99% of the heat could be going deep into the ocean, as Hank suggests in #63 and #64.
Re 75: Ike – I agree that ice can slide off into the water as well as melt directly. I was only estimating the amount of latent heat from global ice melt as a way of finding some of Lyman’s lost heat. Just noting that if a lot of ice slides into the ocean it will suck heat out while it melts.
Re #76: I also have noted that summer sea ice in 2006 is back near the 2000 level after a huge reduction in 2003. It is still diminished, but looks to be on the rebound.
You might care to note, however, that if you take a slightly larger view (as the NSIDC do), the modest positive ice anomalies north of Alaska are more than offset by huge areas of negative anomalies, especially north of Russia.
More like sloshing around than a rebound. Or slushing around, perhaps.
[Response:The sea ice tends to blow around – i.e. moves with surface winds and ocean currents. In addition, there are freezing and melting processes. -rasmus]
It shows that last year’s July ice was a record low, and this year’s July ice was yet another record low. It seems that we have passed the tipping point for Arctic sea ice and it is only a matter of time until it disappears completely :-(
[Response: That is exactly why we (mostly) talk about projections, storylines and scenarios. -gavin]
This is what worries me. This statement comes across as spin. By not making a prediction with a confidence interval, there is no mechanism whereby the models or GW itself can be shown to be wrong.
Let’s say there is a prediction of average temperature +1.5 degrees ±0.2 degrees in 10 years. In 10 years, if the average temperature rise is only 0.5 degrees, then we have shown that the models are wrong, and we should not trust them.
There are lots of people laying the groundwork such that things can be plausibly denied if the ‘climate p*o*r*n’ predictions don’t pan out.
At the end of the day, the science is the science and there is a scientific method. Prediction and test is at the core.
[Response: I think there is a confusion here between the word ‘prediction’ as used in a scientific context (which is a tightly constrained concept) and prediction in the colloquial sense of what’s going to happen. A scientific theory is successful if it makes predictions in very prescribed conditions where as much of the experiment is controllable as possible. So for instance, radiative transfer theory predicts that the infra-red absorption will change in a specific way when CO2 is increased (holding everything else constant). This theory has been shown to be accurate over and over. On a larger scale, increasing the aerosol optical depth in the lower stratosphere following a large volcanic eruption can be predicted to cause cooling because over the short term that is relevant very little else is changing. These kinds of predictions were made in the immediate aftermath of Pinatubo and came in very accurate. But when you start going forward 50 years or so, you are no longer in the realm of a controlled scientific experiment. Scientists and policy makers recognise that and act accordingly. If any of the storylines end up being reasonable approximations of what happened to emissions, the projections for the climate response can be evaluated. This is exactly what has happened with the projections made by Hansen in 1989 – the scenarios closest to what happened gave climate responses closest to what was observed. Contrary to the belief of some, the scientific method is alive and well. – gavin]
Any suggestions on how to explain for laypersons not connected to the Internet?
I think that we have all been rather badly let down by the broadcast media. I am knocking on a bit so I can recall a time when hard scientists could command serious dedicated air-time. Not often, but much more commonly than now.
I can recall that the BBC once gave over a whole evening to particle physics! Quarks, gluons, spin, strangeness, charm, up & down, symmetry groups, the works. That was an extreme example. I can also recall Herman Bondi giving a series of lectures on Relativity and Cosmology.
The media were, it seems, more prepared to risk enlightening at the hazard of boring some or many. However it could be argued that those that turn off are not the ones that you need to communicate with.
At a guess, many of the people in positions to move the debate forward are of a similar age to me. Baby boomers, one-time protesters and hippies. People with strong, self-assured opinions who are commonly disillusioned with science. For children that grew up with Sputnik, it can seem that much was promised and so little delivered.
Media output that is preachy or scare-mongering cuts very little ice with us. Many of us are simply too jaded or wise to take any of it at face value. What is required here is depth and doubt. Doubt is important. Appeal to our scepticism! The expression of the doubts gives weight to the stuff that we know lots about, like the propagation, absorption and scattering of EM radiation by gases.
Presentations that say that it is a done deal cut no ice with people when they have heard it all before. You should be advised that I was in junior school when we were being prepared for the next ice age. Catastrophic global freezing was looming. Global warming was strictly Sci-Fi, as in “The Day the Earth Caught Fire” (film, 1961).
So what should one do? Firstly, it would be good if the media could take it all a bit more seriously. Regular slots, climate strands, one-off science productions dealing with the hard science. The hard science ought to be a strength; thermodynamics has a good track record: think of all the things that rely on it (cars, power stations, jets, steam engines). The basis for the effect of CO2 on IR radiation was illuminated by Einstein (1917) in work that also gave us the framework for masers and laser light, and we all know that works. The precise way that absorption bands are built up is detailed and complicated and is described in quantum-mechanical terms. Now we should know that quantum mechanics works (this message is being brought to you by solid-state electronics).
If, with all our knowledge and technology we can not get the basics across at varying levels of depth to match the receptive powers of the audience segments then one must wonder if we can tackle the greater challenges of fixing the problem itself.
If the mass media is no longer interested in getting serious points across without burdening them with a lot of spin, then perhaps the situation is hopeless. For, in one sense, the alternative, (the internet) is the last field on which one would wish to fight this battle. It is too democratic in its most pragmatical partial sense. Those that have the most time, patience, money and sheer bloody-mindedness will win the argument by simply out-shouting (out-Googling) the rest.
Finally, I would say that, in one sense, predicting the future climate is irrelevant. Sorry about that! The basic science tells us that if we progressively block one or more of the channels by which the earth cools itself, then the system will evolve in ways that compensate for this. This is enough to shout “Climate Change”. If you poke a frog hard enough it will jump; it is not really necessary to know where it will end up. There is a strong correlation between the amount of energy that you put into a system and the increased dynamism of that system. What it will do precisely is a complicated question. That it will not be the same again is basic science.
You might disagree, but for me, it is not that the future will be warmer in a globally averaged sense by 1C, 6C or 10C. It is that I have no idea what the parochial effects will be. The parochial, anecdotal evidence is that things have changed and that the changes are accelerating. The scale of these parochial changes is, seemingly, at odds with the limited increase in global temperatures. The climatic system is operating at a higher, or at least different, level, akin to when an economy is over-heating or a reactor moves outside its design parameters. We are operating the climatic economy using energy budgets that are beyond our reference points. What precisely it will do is irrelevant provided we can accept that the climate system must rapidly evolve and that we might not like the consequences. Are you feeling lucky?
Here I would depart and say that future climate might have some shocks in store, just like future weather does. There could be some serious regime change; the system could start to behave in unanticipated ways with consequences sufficiently sudden and dramatic that we would simply be along for the ride.
I think that effort should be concentrated to getting over enough basic science and a few home truths about what could happen. Arguing over whether it is X degrees C or X+1 degrees C is almost akin to counting angels on a pin head. That we have embarked on a global climatic experiment that could soon spiral out of control is a strong message. Worrying as to whether one is going to burn, or freeze, drown or dehydrate, starve or be annihilated by a dominant world power during a water war is not the issue, that these outcomes are becoming increasingly likely is.
I think that only the mass media organisations can get the message over in a consistent fashion. If they are no longer prepared to tackle hard subjects squarely and fairly then perhaps we have got what we deserve and have only ourselves to blame.
I apologise to any scientists or people of a certain age that I may have offended with unfavorable generalisations.
[Response: By waiting for the evidence – the outcome – why then bother with making predictions? The ability to make predictions, or at least plausible scenarios for the future, is in my opinion one of the reasons that we can call ourselves a highly civilised society. -rasmus]
Comment by Alexander Harvey — 15 Aug 2006 @ 8:41 AM
I do not believe you can assume that just because someone cannot explain something, they do not have a good understanding of their subject. Complex and non-linear systems can be notoriously difficult to explain, sometimes even impossible to explain to some people. Many people have minds that can only deal with linear concepts.
My brother, for example, is a Doctor of Mechanical Engineering. He correctly identified the flaw in the mathematical models used by the finite-element analysis software packages that many truck makers use to design their chassis. He actually wrote his own modeling software that correctly factored in the non-linear inputs, and it was so accurate that it could identify damaged chassis rails (to the surprise of the truck owners). And yet, he has great difficulty describing his ideas simply.
I guess most languages do have limitations in dealing with non-linear relationships.
RE; #84 and #83. Good points. I should have been more precise in what I was talking about (Alaska and the region). Your points about the overall trend of NH summer sea ice are noted and acknowledged.
Overall, it can be seen that the current high-latitude SST anomalies are also very positive. Since they are negative for a good swath of the southern ocean, this is no doubt an important regional effect. Because the sea ice extent is diminished, the albedo to incoming shortwave is also reduced over the high-latitude oceans. This makes the overall heat loss for the entire ocean even more interesting.
52: … “Bottom line is we’re not really sure what is causing global warming or if it is even going on,” … (NWS chief in Duluth, May 2006)
65: … there is nothing controversial about the fact that many of the world’s best scientists have observed that rapid climate change has been going on already. That needs to be acknowledged by everyone. (My Aug 14 post at RC)
68: Agreed, I do not think there is much controversy here. (Bryan Aug 14 post at RC)
Question: What do you think needs to be done regarding comment in 52?
#88. Thank you for your response. I may be wrong to apply this broadly, but it is certainly true of me. The important point is my response in #62.
Regardless of the theoretical controversy about whether or not the climate system is chaotic by a strict technical definition, the point should be that climate is not now predictable across yearly, much less multi-decadal, time scales, as graded by the measuring sticks agreed upon by the climate science community at large. To perpetuate a belief that it indeed is (contrary to the evidence) falls more in the realm of faith than science.
I predict that there will now be a crusade by some to destroy these measuring sticks they had already agreed upon. Gavin Schmidt’s response to Roger Pielke Sr. on the Climate Science website sure indicates that this may be the case.
[Response: You are reading way more into this than is warranted. Please keep focussed on scientific discussion though. – gavin]
[Response: I’d like to add that chaotic systems tend to have a ‘strange attractor’, i.e. their states tend to be confined in some way. The limit to predictability typically refers to a given state on this attractor (e.g. weather). But it is possible that the shape or the location of this attractor is affected in a systematic way by external forcing (a climate change). To illustrate this, let’s consider the following thought experiment: imagine that you have a very powerful spaceship and some very strong, long cables. You fix the Earth to the spaceship and tow it away from the Sun. What do you expect, that the surface mean temperature stays constant as the distance between the Sun and the Earth increases? I don’t think so, but I’d like to see an argument to the contrary if somebody has one to offer. In this thought example, systematic forcing changes the system’s (Earth’s weather’s) strange attractor. This is more about a principle than about the prediction skill of a model. Now, the next question is: how well can we predict the climate? Let’s start at the shallow end – the annual cycle. Don’t forget that most climate models reproduce the annual cycle of the temperature rather well. The seasonal variations in the received solar energy at a given latitude and the temperature response represent a basic test of how well the models handle variations in the forcing, at least on a local scale. However, there are a number of issues relating to the field of model evaluation, and this is just one of them. It is interesting to note that with the state-of-the-art very-high-resolution climate simulations coming out of Japan, it’s hard to see the difference between the model results and those from observations (satellites).
I haven’t had the chance to carry out objective assessments/evaluations of these simulations (but I have for other climate models), but these arguments follow the spirit of the post – to argue in a hand-waving style at the party… Then there is the difference between a scenario and a forecast. A scenario can be regarded as a conditional forecast: given that the forcings behave in a certain way, the predictions tell us what to expect. -rasmus]
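The attractor point can be illustrated with the classic Lorenz-63 toy system, where the parameter r stands in for an external forcing. Individual trajectories are unpredictable, but the attractor’s long-term mean shifts systematically when r changes. This is a crude Euler-integration sketch, not a climate model:

```python
# Lorenz-63: chaotic trajectories, yet "forcing" (here, the parameter r)
# shifts the attractor's long-term statistics in a systematic way.
def lorenz_mean_z(r, steps=200_000, dt=0.005):
    x, y, z = 1.0, 1.0, 1.0
    total, n = 0.0, 0
    for i in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (r - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= steps // 10:     # discard spin-up, average the "climate"
            total += z
            n += 1
    return total / n

mean_28 = lorenz_mean_z(28.0)
mean_35 = lorenz_mean_z(35.0)
print(mean_28, mean_35)   # the long-term mean of z rises with r
```

Towing the Earth away from the Sun in the thought experiment above is the same idea: you cannot predict the state, but you can say something about how the statistics of the states respond to the forcing.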
Actually Pat, I think it is more than just that many of the world’s best scientists have observed rapid climate change. They have observed, documented, and published the results in peer-reviewed scientific journals rather than the grey literature or places such as the WSJ op-ed pages. Which should carry much more weight than someone just giving the “bottom line” off the top of his/her head per the NWS Duluth chief comment. Sadly it often appears that it does not, due in part to the lack of science leadership from the top. Such as in the White House. Just consider who the President’s past science advisor was, their background and where they are now.
Re: #91. Gavin, if I read too much into this, I apologize. It may be a really good idea for me to step back and let this thing run its course (as it inevitably will) before drawing too many conclusions. Thank you for the clarification.
There is too much emphasis on one paper, the AGU paper about a possible cooling in the ocean. It’s like a “reverse hockey-stick” strategy. Contrarians threw all of their efforts to cast doubt on one study to bring down all climate science. The emphasis on this one AGU paper is supporting one study to cast doubt on many parts of climate science.
Any one paper needs to examined carefully and examined in relation to all of the research before a valid conclusion can be made.
Comment by Joseph O'Sullivan — 15 Aug 2006 @ 11:21 AM
RE: #76 – In fact, the entire West Coast had a really cold Spring and a cool early Summer. We had an unprecedented “Siberia Express” pattern (due to its lateness) right into mid June. Only at the actual Solstice did the Pacific High finally assert itself in the manner which normally occurs 6 weeks earlier. Late June was a bit warm but nothing remarkable – the temperatures were what would normally be expected late April through late May (e.g. prior to the normal Summer marine layer – offshore flow cycle setting in). July started with a heavy onshore push with rain in the north. Only during the second half of July did this break. When it did, a setup more typical of September or early October took hold, with a triple-barrel High (Rockies, Four Corners, East Pacific) increasing heights and resulting in the infamous “heat wave.” This is of interest, for rather than expressing some “expansion of the tropics,” what it really meant was a synoptic regime typical of early Fall at 40 deg N latitude happening 2 months early. Now, we are in the midst of a rather cool August, with the jet dipping well south. The rains are knocking on the door and we have a squeegee-effect-driven 2,500-foot-thick marine layer on the southern half of the coast. One long-range forecast actually states a chance of showers as far south as the Central California Coast both this coming weekend and again the following week. Interestingly, the NWS called for above-normal temps and below-normal precip for California for Aug-Sep-Oct. I am closely observing what actually happens. In September I will make my own call regarding the likelihood of an early start of the Gulf of Alaska storm / rainy season.
RE: #81 – As a degreed geologist myself, I concur with the Tethys Sea mechanism for the formation of the petroleum deposits of Southern Asia and extensions of that belt. The association with the karst topo is also how I learned it. UC, ’84. ;)
Re: #95. This will be my last entry on this for now. John, the purpose of pointing out this new paper, or any other serious technical work, is that it helps us better understand how this climate system actually works. It is not at all my intention, or the intention of most other real scientists, to discredit climate science. To the contrary, there is tremendous scientific progress being made. As the science matures and is refined, it will have great benefits for society in general. In science, however, new understanding occurs through professional, spirited debate and discussion of the technical work. This process works, and is a beautiful thing. To dismiss other truly outstanding scientists who have a different viewpoint, and to try to chill discussion by calling them names like “contrarians,” is not at all productive if you really want to get to the right answer. If one’s goal is something else, that is a discussion I am not interested in.
RE: #83 – Here is something for yet another young, ambitious PhD candidate: a study looking at the changes in salinity in the Arctic between 10 and 100 deg E longitude owing to the massive diversions of north-flowing rivers by the USSR (and maintained by the CIS). That would be an amazing study.
Sink modeling assumes either a sink saturation (Joos) or an increasing sink (Dietze)
There’s a huge difference here. Dietze asserts that CO2 has a short half life in the atmosphere, based on a 1st order model and a few cherry-picked years of data. Since his model is linear, the sink feedback has a constant gain, and sink uptake rises with increasing atmospheric CO2. Such a model neglects chemistry (partial pressure of CO2 rises faster than concentration in seawater) and biology (CO2 is not the only limiting nutrient etc.). In fact, it doesn’t even conserve carbon. Worse, Dietze’s parameters aren’t even correct; reestimating the model making full use of available data leads to longer time constants.
The saturating sinks in carbon cycle models like Bern and ISAM aren’t simply assumed; they are based on structure that conserves physical quantities, experimentally verifiable behavior (buffer chemistry), and calibration to a much wider range of data (including bomb isotope distributions). If one wants to assess uncertainty in CO2 concentrations for a given emissions trajectory, it makes sense to start with a good model, then look for alternate parameters or structure that remain consistent with the data and first principles. If one chooses instead to believe that Dietze’s deficient model reflects genuine uncertainty, one might as well also believe that perpetual motion machines will solve our emissions problems.
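The structural difference between the two sink assumptions can be shown with a deliberately crude toy model (my own sketch: the rate constant, the saturation form, and the emissions number are invented for illustration and are not calibrated to Bern, ISAM, or Dietze’s figures). Under constant emissions, a constant-gain linear sink caps the atmospheric excess, while a sink whose efficiency declines as it fills lets the excess keep climbing:

```python
# Toy comparison of a linear (constant-gain) sink vs. a saturating sink.
# All numbers are illustrative only.

def simulate(years, emissions, sink):
    """Integrate atmospheric excess carbon (arbitrary GtC-like units)."""
    excess, cumulative_uptake, history = 0.0, 0.0, []
    for _ in range(years):
        uptake = sink(excess, cumulative_uptake)
        excess += emissions - uptake
        cumulative_uptake += uptake
        history.append(excess)
    return history

def linear(excess, _taken):
    # 1st-order sink: uptake always proportional to the excess (constant gain)
    return 0.02 * excess

def saturating(excess, taken):
    # uptake efficiency declines as cumulative uptake grows; a crude stand-in
    # for buffer chemistry and nutrient limits (form and numbers invented)
    return 0.02 * excess / (1.0 + taken / 1000.0)

lin = simulate(200, 8.0, linear)
sat = simulate(200, 8.0, saturating)
print(f"excess after 200 yr: linear {lin[-1]:.0f}, saturating {sat[-1]:.0f}")
```

With the linear sink the excess approaches the fixed point emissions/gain and stops growing, which is why a constant-gain model implies a short effective lifetime; with the saturating sink the same emissions drive the excess ever higher.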
# 73, the same would apply to insurance companies (climate) vs. individual events, such as a car accident (weather).
Re the gambling analogy: I went early in the morning & saw them “fixing” the slot machine at which I had won pretty good the night before. That’s like human GHG emissions. They said they were only fixing the print-out tape (denialists), so hoping they were right I (regular shmucks) poured down money in the same slot & lost all my winnings, plus some.
Comment by Lynn Vincentnathan — 15 Aug 2006 @ 12:58 PM
Re #93. Dan, what you said is true: many scientists have observed, documented, and published results showing that rapid global warming has been taking place, with many years of good work behind them (e.g. Lonnie Thompson at Ohio State). I’ll ask my question from #52 again: what action do people here think should be taken regarding the statement the NWS Duluth MIC (Meteorologist In Charge) made to TV media? He said:
“Bottom line is we’re not really sure what is causing global warming or if it is even going on,” … (NWS chief in Duluth, May 2006)
With regard to how the government agencies work these days, what we have is a government attack on basic science – a “stick your head in the sand” approach. The demise of the National Biological Survey is one example; the bloodletting at the USGS in the mid-90s is another. NASA is now refusing to include Earth science study as part of its research program, and minor functionaries within NOAA and probably the NWS have been attempting to censor senior scientists in those organizations. (Thanks to Pat Neuman for the discussion of the hydrology of the Western US.) This attack on data collection is incredibly short-sighted, but depressingly familiar.
As far as petroleum formation goes, if you want a good explanation, read “Hubbert’s Peak” by Kenneth Deffeyes ( http://www.princeton.edu/hubbert/ ); he includes a fascinating discussion of oilfield formation and is a highly respected petroleum geologist. Oil is a biotic product, but it is not enough to have marine sediments rich in organic matter; the geological history must also be correct. An example of how tricky this can be is the Mukluk, Alaska story: a $2 billion dry hole drilled into a ‘perfect formation’ in the ’80s – except that it was full of salt water, and no oil.
Why is there no oil being formed today? With a few exceptions (such as the Santa Barbara sill area), anoxic bottom waters don’t exist, so the algal detritus is recycled back to CO2. If you want to read a non-‘junk science’ explanation of eastern Mediterranean sapropel formation, take a look at http://www.geo.unimib.it/Conisma/FinalSummary.htm ; you will see that increased nutrient fluxes and bottom-water anoxia played important roles in producing the organic-rich sediment layers that are the source of the Mideast oil deposits. My comment was intended to show the drastic changes taking place in the world’s oceans, and as I said, it was ‘purely speculative’. The main sequences for Mideast oil were laid down some 90 and 120 million years ago, according to Deffeyes, when climate conditions were quite different. Similarly, the world’s coalfields were created in terrestrial environments hundreds of millions of years ago under very different climate conditions. You can read about that here: http://www.uni-muenster.de/GeoPalaeontologie/Palaeo/Palbot/ewald1.htm . We are raising CO2 to levels not seen for millions of years; we should expect drastic changes as a result.
In the modern world, human nitrogen fixation and agricultural runoff produce high nutrient fluxes to areas such as the Gulf of Mexico and off of South Carolina (pig farms are the culprit here); that’s one driver of dead zones. By the way, including nitrogen is very important in any analysis of biological responses to increased CO2 and global warming. If you add a slowdown of ocean circulation (halting bottom-water formation around the globe), then deep-ocean anoxia is a real concern, along with ocean acidification. The dead zone off the Oregon coast appears to be due more to low oxygen levels in upwelled sub-Arctic water than to high terrestrial nutrient fluxes.
Finally, I have a hard time accepting climate change skeptics who have a vested interest in continuing all-out fossil fuel use for the obvious reason that economic interests tend to collide head on with scientific accuracy. If the head of Shell (or of Hewitt Minerals) says ‘there’s nothing to worry about’ – well – that’s not the most disinterested source, considering that they are bidding on new oil leases and would hate to see a drop in demand for their product. If they want to stoop to personal insults – well, that’s just a sign of desperation. To be fair, I am very interested in seeing renewable energy companies take over the energy market – but the science supports my position.
#102, That meteorologist should post his opinions in a detailed and convincing way, since his “bottom line” summing-up came with very few bits of science explained. Bottom line for me: statements from people leaning on their title don’t convince unless they come from a strong scientist who really knows what he is talking about. I found the Russian cooling of last winter an important point which needs to be discussed one day….
I am not sure that this will help, but I shall give it a try. I will take some liberties; this is only meant to be illustrative, not rigorous. Please feel free to correct anything I get really wrong.
Chaos can be displayed in systems that are totally deterministic, that is one of the things that makes it so intriguing. Systems that are totally described by state equations can be chaotic but are not random.
Return the system to its initial state and it will evolve in the same way. Return it to its initial state but then change that state only slightly, and it will exhibit its chaotic nature by evolving through diverging routes in its state space. (This is totally true of deterministic models, and less so for systems that suffer from uncertainty (quantum mechanics), but it should be understood that the chaos does not arise from uncertainty; it results from completely deterministic behaviour.)
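A minimal sketch of this point, using the logistic map as a stand-in for any deterministic system (the map and the numbers are my own choice, purely for illustration, not anything drawn from a climate model):

```python
# The logistic map x -> r*x*(1-x) is fully deterministic: the same start
# always gives the same orbit. With r = 4 it is chaotic: two starts that
# differ by one part in a million end up on completely different orbits.

def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000, 50)
b = logistic_orbit(0.400001, 50)   # initial state perturbed by 1e-6

# Same rule, nearly the same start; the later states bear no resemblance.
print(f"gap at step 5: {abs(a[5] - b[5]):.2e}")
print(f"largest gap, steps 30-50: {max(abs(a[i] - b[i]) for i in range(30, 51)):.2e}")
```

Rerunning with the exact same starting value reproduces the orbit perfectly; perturbing the sixth decimal place destroys all predictability within a few dozen steps. Deterministic, yet chaotic.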
Now we have to be careful how we define the state space. If we take a gas and define its state as the complete list of the position and momentum of every molecule, then the states will diverge very quickly; the behaviour will be highly chaotic at the nanosecond scale.
On the other hand, if we define the state of exactly the same system in terms of temperature, density, pressure, and volume, we get the gas laws, and the behaviour will be highly predictable (within certain limits).
It is tempting to equate the former with weather and the latter with climate.
The weather is chaotic, but it never strays too far from its norm. Its state describes a non-overlapping path in its state space, but it does not often stray into the realms of the highly unlikely. Also, on occasion it will very nearly repeat itself (come close to some previous part of its path) and then diverge again. This is chaos, folks, but perhaps not quite how we imagined it. Chaos refers to the inability to predict where in state space the weather will be in ten days; but whatever weather we have in ten days, it will be just plain old regular weather, like many days that have passed and many that will follow. Also, a divergence that gives rise to rain instead of blue skies in ten days will be seamless; there will be no discontinuity. Rain, shine, or snow, we will have weather: never boiling oceans nor vacuums.
The time it takes for its state to loop around state space and once again approach a prior state is not fixed but has an expectation. Over time intervals greater than this expectation, the system can be viewed using a set of statistics (based on some type of averaging) of the original state parameters, and these will have that layer of chaos smoothed out of them. This is not cheating; it is the very complexity of the system that makes this not only possible but inevitable. See the gas laws above. (Strictly speaking, it is the timescale before the system returns to a state that is merely similar; it does not matter if one exchanges all the particles, since one can and must regard all the O2, etc., molecules as indistinguishable.)
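Sticking with the same toy system (again my own illustration, not a climate model): the orbit of the chaotic logistic map is unpredictable in detail, yet its long-run average is a stable, reproducible number. That is the weather-versus-climate distinction in miniature:

```python
# Individual orbits of the chaotic logistic map (r = 4) are unpredictable,
# but time averages over long stretches are stable: every generic starting
# point yields nearly the same mean, close to 0.5 (the mean of the map's
# invariant distribution).

def long_run_mean(x0, steps=200_000, r=4.0):
    x, total = x0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        total += x
    return total / steps

for x0 in (0.123, 0.456, 0.789):
    print(f"start {x0}: long-run mean = {long_run_mean(x0):.3f}")
```

The averaged statistic plays the role of the gas laws here: it is built entirely out of the chaotic fine-grained motion, yet it is itself predictable.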
So far so good. The fact that the gas laws exist even though (or because) the underlying system is chaotic does not mean that a gas cannot behave chaotically at a macroscopic level. Clearly it can: weather is chaotic, and airflow around a body can be chaotic. But again, a change of state parameters can restore low-chaos order. This is what is believed to be possible for climate modelling. Changing the state parameters, and particularly the timescale, can make a world of difference to what one can say about a system.
It does seem that climate has a very low level of chaos, at least on any timescale relevant to the current argument.
There is, however, a small fly that might be in the ointment. As long as the climate is not evolving, it may have a tightly confined path in its state space. Cause it to evolve, and it may stray into regions that are more chaotic, or worse still, contain cusps (remember catastrophe theory?).
If the first is the case, then climate may become increasingly difficult to predict (more chaotic); if the second is the case, the climate may suddenly transfer to a different part of its state space, move away from the cusp, and commence to behave again in a well-ordered but markedly different way.
I believe that the current position is that gases are molecularly chaotic (position, velocity) but normally non-chaotic (pressure, density). The atmosphere is chaotic (weather) but non-chaotic averaged over long timescales (climate).
The climate seems to have tracked in a closely confined, non-overlapping path in its space for some time, and there is no good reason to suppose that this will not continue, nor that any chaotic climatic behaviour would fail to become evident from running the models repeatedly, just as we would do for weather, to test for its chaotic nature.
The one thing that we cannot so easily know for certain is whether there are cusps in the system.
Finally, I have to say that although the gas laws arise from the statistical treatment of molecules, they have a very different form from the basic laws of motion. I do not know if there exists a similar set of climatic laws based on the statistical behaviour of weather: laws that are above the vagaries of the weather but totally due to it. Ideally one should be able to model climate with confidence without reference to weather, at least in the long term (millennial or greater); perhaps this is the case. We may be unlucky in that there is not a sufficient jump in orders of magnitude in time between weather and near-term climate to see the bigger picture and operate at that level. At that level there would be no argument about the predictive power of climatic laws, no more than there is for the gas laws. Unfortunately, the relevant state parameters might be temperature, pressure, density, and humidity as averaged over 10,000 years. It is all a matter of timescales, statistics and convenience.
Comment by Alexander Harvey — 15 Aug 2006 @ 3:04 PM
ike solem in #103 wrote: “To be fair, I am very interested in seeing renewable energy companies take over the energy market – but the science supports my position.”
Well, here’s something hopeful — according to these two energy experts, we are on the verge of a “revolution” in which the availability of cheap photovoltaics will fundamentally change the way we produce and distribute electricity:
Solar Cells Change Electricity Distribution
By Dave Freeman and Jim Harding
The Seattle Post Intelligencer
Thursday 10 August 2006
In separate announcements over the past few months, researchers at the University of Johannesburg and at Nanosolar, a private company in Palo Alto, have announced major breakthroughs in reducing the cost of solar electric cells. While trade journals are abuzz with the news, analysis of the potential implications has been sparse.
We approach this news as current and former public electric utility executives, sympathetic with consumer and environmental concerns. The South African and California technologies rely on the same alloy – called CIGS (for copper-indium-gallium-selenide) – deposited in an extremely thin layer on a flexible surface. Both companies claim that the technology reduces solar cell production costs by a factor of 4-5. That would bring the cost to or below that of delivered electricity in a large fraction of the world.
The California team is backed by a powerful team of private investors, including Google’s two founders and the insurance giant Swiss Re, among others. It has announced plans to build a $100 million production facility in the San Francisco Bay area that is slated to be operational at 215 megawatts next year, and soon thereafter capable of producing 430 megawatts of cells annually.
What makes this particular news stand out? Cost, scale and financial strength. The cost of the facility is about one-tenth that of recently completed silicon cell facilities.
Second, Nanosolar is scaling up rapidly from pilot production to 430 megawatts, using a technology it equates to printing newspapers. That implies both technical success and development of a highly automated production process that captures important economies of scale. No one builds that sort of industrial production facility in the Bay Area – with expensive labor, real estate and electricity costs – without confidence.
Similar facilities can be built elsewhere. Half a dozen competitors also are working along the same lines, led by private firms Miasole and Daystar, in Sunnyvale, Calif., and New York.
But this is really not about who wins in the end. We all do. Thin solar films can be used in building materials, including roofing materials and glass, and built into mortgages, reducing their cost even further. Inexpensive solar electric cells are, fundamentally, a “disruptive technology,” even in Seattle, with below-average electric rates and many cloudy days. Much like cellular phones have changed the way people communicate, cheap solar cells change the way we produce and distribute electric energy. The race is on.
The announcements are good news for consumers worried about high energy prices and dependence on the Middle East, utility executives worried about the long-term viability of their next investment in central station power plants, transmission, or distribution, and for all of us who worry about climate change. It is also good news for the developing world, where electricity generally is more expensive, mostly because electrification requires long-distance transmission and serves small or irregular loads. Inexpensive solar cells are an ideal solution.
Meanwhile, the prospect of this technology creates a conundrum for the electric utility industry and Wall Street. Can – or should – any utility, or investor, count on the long-term viability of a coal, nuclear or gas investment? The answer is no. In about a year, we’ll see how well those technologies work. The question is whether federal energy policy can change fast enough to join what appears to be a revolution.
Dave Freeman has been general manager of multiple utilities, including the Tennessee Valley Authority, Los Angeles Department of Water and Power and New York Power Authority. Jim Harding is an energy and environment consultant in Olympia and formerly director of power planning and forecasting at Seattle City Light. Also contributing was Roger Duncan, assistant general manager of Austin Energy in Austin, Texas.
Thanks, Alexander, for the very informative comment on chaos as applied to climate and weather.
There is another reason why global deep-ocean anoxia (stratification was probably the wrong word) wouldn’t result in oil formation: plate tectonics and seafloor spreading. The oldest seafloor is some 100 million years old and most is younger, due to the formation of new seafloor at the mid-ocean ridges and seafloor burial at continental margins. The only place that a geological record of ancient ocean anoxia would be preserved is in the scraped-off sediments on continental margins (I believe the term is ophiolite sequences). A discussion of modern deep-water circulation with images can be seen here: http://oceanworld.tamu.edu/resources/ocng_textbook/chapter13/chapter13_01.htm
The original atmosphere of the Earth is thought to have been high in CO2; photosynthetic fixation of that CO2 and burial of the resulting organic carbon explains the statement that ‘free oxygen in the atmosphere is a debt against buried carbon’. Happily, we don’t have to worry about stripping the O2 out of the atmosphere (I think) due to the fact that only a tiny fraction of buried carbon is in the form of oil and gas deposits.
Consider oil and gas formation in California’s Central Valley, which used to be a shallow inland sea (similar to the Tethys sea). The continental location allowed for relatively shallow burial of sediments, and heat and pressure ‘cracked’ the organic carbon down to oil and gas. You can look at a pdf image of California oil and gas fields here:
The northern areas (red) are gas fields, and the southern areas (green) are oil fields; this reflects the different geological history of the two regions (higher temperatures and pressures produce methane instead of oil due to more complete cracking; still higher temperatures produce black carbon). The coastal zones are somewhat different; a great history of those regions can be found at http://www.mms.gov/omm/pacific/enviro/seeps1.htm
Now, California produces about 30% of the hydrocarbons that are needed for in-state energy demand. By replacing all hydrocarbon energy imports (including foreign oil and out-of-state electrical transmission from the Southwestern coal-fired power plants) with renewable solar, wind, and biofuels, we’d have the ~70% reduction in fossil fuel CO2 emissions that is needed to slow global warming. There are a number of pending bills in California (AB 32 and Proposition 87) that will help meet this goal.
Not so fast, Bryan. I followed your non-link to the Alaska Climate Research Center, climate.gi.alaska.edu, and find it does not support your suggestions in Comment 76.
If you look at the trend for 1945 to 2005 for Alaska mean annual temperature, climate.gi.alaska.edu/ClimTrends/Change/TempChange.html, you can clearly see how noisy the annual averages are compared to the 5-year moving average, with deviations of 2.5 deg F up or down being quite common. If you look at the time lines for individual cities (also available on the site), they are even noisier. If you look at the monthly summaries for January through July of 2006, you find some months warmer than usual and some colder. In the warmer months, usually a couple of the six cities summarized will have been cooler than usual, and vice versa in the cooler months. There is nothing in the data for the first 7 months of 2006 at all inconsistent with the high variability of Alaskan (and indeed all northern high-latitude) weather/climate. Seven months of Arctic temperatures do not a trend make.
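The point about noisy annual averages can be made with entirely synthetic numbers (the trend and noise levels below are invented for illustration; this is not the Alaska data): single years swing far more than the underlying trend does, while a 5-year moving average suppresses most of that swing.

```python
import random

# Synthetic series: a modest warming trend (+0.03 deg/yr) buried in
# year-to-year noise with a standard deviation of 1.25 deg, so individual
# years routinely sit a couple of degrees off the trend line.
random.seed(42)
years = 60
trend = [0.03 * yr for yr in range(years)]
annual = [t + random.gauss(0.0, 1.25) for t in trend]

# 5-year moving average: each value summarizes five annual means.
smooth = [sum(annual[i:i + 5]) / 5 for i in range(years - 4)]

def spread(series, reference):
    """Root-mean-square departure of a series from the reference trend."""
    return (sum((s - r) ** 2 for s, r in zip(series, reference)) / len(series)) ** 0.5

print(f"RMS departure of annual means from trend:  {spread(annual, trend):.2f} deg")
print(f"RMS departure of 5-yr averages from trend: {spread(smooth, trend[2:]):.2f} deg")
```

Any single year, or even seven months, tells you almost nothing; the averaged series hugs the trend far more closely, which is why trend claims rest on smoothed, multi-year statistics.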
Others have noted the inaccuracy of your comment about the sea ice. The ACRC web site also has a link to the U. of Illinois cryosphere snapshots, arctic.atmos.uiuc.edu/cryosphere/IMAGES/arctic.shade.3.jpg. The current snapshot, 15 August 2006, shows clearly how far back from the Arctic coastline the sea ice has retreated, from Norway east to Banks Island, Canada.
Someone in the current thread seemed to be suggesting that the Arctic was becoming more saline because of diversion of the large Siberian rivers that feed the Arctic. I seem to remember a recent paper establishing exactly the opposite: increasing freshening of the Arctic due to increased inflow of fresh water from Siberia. I’ll hunt for the citation.
Wayne, it’s not people like you who need convincing. Bottom line: there are 120 NWS offices in the US, each with its own chief (MIC). Maybe just a few of them talk publicly like that, but their off-the-record comments to media and to people in other government agencies are just as harmful to building any kind of public support for reducing GHG emissions, probably even worse than the up-front ones. Thus, unless they crack down on them all, I wouldn’t want to see them go after one guy who spoke out; but I agree that they should at least say what they’re basing their conclusions on. Making general statements which are false is very harmful; if they know they’re false and say them anyway, I’d call that criminal behavior of the worst kind.
Pat, scientists asking people to have faith in them should practice religion. It’s the facts, and the interpretation of those facts, that persuade, even for laypeople, whose intelligence is too often underestimated. It’s encouraging that RC has such a high readership; it may just be that people are hungry for correct answers.
Today’s climatologists remind me of some historians. After something has happened, they write their histories as though no other outcome was possible, that the actions of individuals counted for nothing, events were driven by large social, geographical, economic, etc. forces.
If climatologists were confident enough in their predictions, they could make millions of dollars playing the commodity markets. After all, how tough is it to forecast drought? Every time we have a drought (somewhere) they say: See, we told you so. Climate change.
Or, how about hurricanes? If you could really predict bad hurricane seasons, you could make millions. How? Wall Street. Go long or go short on insurance companies with Florida exposure, depending on your prediction of the hurricane season coming up. Sounds easy.
Many pragmatic people, like me, notice that the climatologists are best at predicting the past. Like those historians.
Climate is what you expect. Weather is what you get.
Re #99 – RE: #83 – “Here is something for yet another young, ambitious PhD candidate. A study looking at the changes in salinity in the Arctic between 10 and 100 deg E longitude owing to the massive diversions of north-flowing rivers by the USSR (and maintained by the CIS). That would be an amazing study.”
This is a new one on me, since I thought those river projects had been defeated in late Soviet times by a Soviet ecological movement led by Siberians.
Here are a couple of the papers you may be thinking of:
Nature. 2003 Dec 18;426(6968):826-9.
A change in the freshwater balance of the Atlantic Ocean over the past four decades.
Curry R, Dickson B, Yashayaev I.
Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA. email@example.com
The oceans are a global reservoir and redistribution agent for several important constituents of the Earth’s climate system, among them heat, fresh water and carbon dioxide. Whereas these constituents are actively exchanged with the atmosphere, salt is a component that is approximately conserved in the ocean. The distribution of salinity in the ocean is widely measured, and can therefore be used to diagnose rates of surface freshwater fluxes, freshwater transport and local ocean mixing–important components of climate dynamics. Here we present a comparison of salinities on a long transect (50 degrees S to 60 degrees N) through the western basins of the Atlantic Ocean between the 1950s and the 1990s. We find systematic freshening at both poleward ends contrasted with large increases of salinity pervading the upper water column at low latitudes. Our results extend a growing body of evidence indicating that shifts in the oceanic distribution of fresh and saline waters are occurring worldwide in ways that suggest links to global warming and possible changes in the hydrologic cycle of the Earth.
Science 17 June 2005:
Vol. 308. no. 5729, pp. 1772 – 1774
Dilution of the Northern North Atlantic Ocean in Recent Decades
Ruth Curry, and Cecilie Mauritzen
Declining salinities signify that large amounts of fresh water have been added to the northern North Atlantic Ocean since the mid-1960s. We estimate that the Nordic Seas and Subpolar Basins were diluted by an extra 19,000 ± 5,000 cubic kilometers of freshwater input between 1965 and 1995. Fully half of that additional fresh water, about 10,000 cubic kilometers, infiltrated the system in the late 1960s at an approximate rate of 2,000 cubic kilometers per year. Patterns of freshwater accumulation observed in the Nordic Seas suggest a century time scale to reach freshening thresholds critical to that portion of the Atlantic meridional overturning circulation.
However, the story may be more complicated:
Science 16 September 2005:
Vol. 309. no. 5742, pp. 1841 – 1844
Influence of the Atlantic Subpolar Gyre on the Thermohaline Circulation
During the past decade, record-high salinities have been observed in the Atlantic Inflow to the Nordic Seas and the Arctic Ocean, which feeds the North Atlantic thermohaline circulation (THC). This may counteract the observed long-term increase in freshwater supply to the area and tend to stabilize the North Atlantic THC. Here we show that the salinity of the Atlantic Inflow is tightly linked to the dynamics of the North Atlantic subpolar gyre circulation. Therefore, when assessing the future of the North Atlantic THC, it is essential that the dynamics of the subpolar gyre and its influence on the salinity are taken into account.
The ability of making predictions, or at least plausible scenarios for the future, is in my opinion one of the reasons that we can call ourselves a highly civilised society.
“Civilized” is only part of it. Passively waiting for the future to push us this way or that is fatalism, and destructive. I don’t think global warming skeptics are actually fatalists: they’ve made a choice and simply want their choice to stand unexamined. Or at least unchallenged.
RE: #113 – According to Russian salinity maps, there is a pronounced intrusion of higher-salinity waters from the Atlantic, constituting a vast tongue that corresponds remarkably with the seemingly growing ice-free area north of Europe. However, by the same token, it would appear that higher precip in Eastern Siberia and far northwestern North America has resulted in higher runoff and subsequent increases in sea ice extent, especially north of western North America. Looking at the imagery of the annual minima over the past 27 years, one will note that the ice-free area in this latter region at the minima has tended to decrease, and in fact this year is almost nonexistent. Bottom line – salinity matters.
#113 (Chuck Booth) & #108 (Jim Dukelow)
RealClimate compared these studies in the “Saltier or not” post.
#102 (Pat Neuman)
I would ask who the “we” is when the NWS chief in Duluth says “we’re not really sure.”
#98 (Bryan Sralla)
I understand the scientific process. You seemed to be over-interpreting the importance of one paper. You seemed to be trying to use one paper to prove your doubts about the ability to predict climate and to model climate. It may be an important study or it may not be important. It is too early to tell.
Who are the truly outstanding scientists I referred to when I used the word contrarian? I think you are over-interpreting my comment ;)
Comment by Joseph O'Sullivan — 16 Aug 2006 @ 11:25 AM
you wrote “You fix the Earth to the space ship and tow it away from the Sun. What do you expect, that the surface mean temperature is constant as the distance between the Sun and the Earth increases? I don’t think so, but I’d like to see an argument to the contrary if somebody has one to offer.”
In fact I would say that under normal circumstances, i.e. those which have prevailed over the last 4 billion years, it would be constant. During that time the sun has increased its intensity by about a third and global temperatures have remained broadly constant. So if we dragged the earth out to where solar radiation is about a third less, then the temperature would remain constant. Beyond that, at some point, we would suddenly get a Snowball Earth. This “broadly constant temperature” means one suitable for life, but not necessarily human life.
Of course, it will be argued that it was high levels of CO2 that kept the earth warm when the sun was cooler, and for my “normal circumstances” I am assuming that the level of CO2 is close to the average over those 4 billion years. That may seem like cheating, but if you take current conditions, then the Milankovitch cycles provide a sort of tow rope making solar conditions oscillate. For the last 5000 years those cycles should have caused the planet to cool but until now the temperature has remained fairly constant.
Of course we are not in normal conditions; we are in an ice age, and if the planet had been towed away before the Industrial Revolution, then it would have cooled because of the positive feedback from the growth of the ice sheets. But even with that effect, so far during this ice age, the earth has not become a Snowball Earth. So we could move the earth some way away before the whole planet became uninhabitable.
The point to realise is that the climate is a dynamical system, in which catastrophes can happen. There is a tendency to equate dynamical systems with chaotic states, and to think that a strange attractor is a dynamical system, but it is not. It is only one state of such a system. A dynamical system can change from one stable state to another, in other words from one strange attractor to another. While this is happening the system is in an unstable chaotic state – a catastrophe. As Woodcock and Davies explain in “Catastrophe Theory”, page 42: “the passage from the initial state or pathway [strange attractor] is likely to be brief in comparison to the time spent in stable states.” You can see this in the D-O cycles.
This is of course related to what Andrew Harvey posted in #105. If we take a volume of gas as a system, then we can see that as we raise the temperature we move monotonically from one stable state to another. However, if that gas is a mixture of hydrogen and oxygen, when the temperature reaches 500°C there is an explosion (catastrophe) and we then have a new stable state of a gas called water vapour. Knowing the gas laws it is not possible to predict this catastrophe, so knowing the laws of weather does not mean that you can predict a catastrophic change in climate.
Responding to my post #38 you wrote “One thing that has increased my confidence in the climate models is that features that are observed in the real world, such as El Nino (ENSO – some models do not give as good a representation as others though…), Hadley Cell, the North Atlantic Oscillation, Kelvin waves, Rossby waves, and Tropical Instability Waves (TIWs) drop out of these models …” But I am saying that the models cannot reproduce rapid climate change. You appear to be saying that the glass is half full while I am saying it is half empty! But you seem to be claiming that because it is half full it is (more or less) full, while I am saying that because it is half empty it is not full.
Your inquisitor at the cocktail party wants to know: if unknowns exist with weather, how can you be so sure about climate, with its unknown unknowns? Honesty is the best policy :-(
[Response:Right, honesty is the best policy! You’re also right that the models are not yet perfect, and they are still evolving. Still, they are the best tools we have for making scenarios for the future. I do not agree with the half-full/half-empty analogy, though. The question of rapid climate change is still not very well understood, at least to my knowledge, and so it’s difficult to judge whether the models are bad. The reason for this is that we don’t know whether it was due to changes in the forcings. Is it due to unstable small-scale (up-scaling) processes not yet resolved (we don’t know if the models would predict them if the spatial resolution were higher)? Or is the interpretation of the empirical data right (are we talking about rapid changes a long time ago, or very recently, i.e. 1976/77)? The models still do have certain systematic biases (eg ENSO, sea-ice, sea surface temperature) – could those inhibit the models? My motivation for this post was to argue that I believe from empirical evidence that climate is in principle predictable – at least to some extent (the question of degree of skill is the next step) – when I am confronted with statements saying it’s useless. If you look back 50 years, the numerical weather models (NWMs) were not as good as they are today, and they have improved slowly but steadily over time as computers have become more powerful, numerical algorithms and observational networks have improved, and as we have learned more about the Earth’s atmosphere. There are still limitations to the NWMs as well as global climate models (GCMs), and there is some debate about whether a higher resolution may improve them. There are approximations and limitations associated with the description of complicated small-scale processes not resolved by the models (eg clouds, which I did mention, but which was omitted from the quotation) and the representation of, for instance, sea-ice and snow cover (I mentioned the Monsoon, didn’t I?). 
Still, the models give a very realistic picture: hence me bringing up examples such as ENSO, the NAO, the Hadley circulation, Kelvin & Rossby waves, and TIWs. They are not prescribed in the models. You may also examine the annual cycle, which, for the local climate at least, represents a case where the forcing undergoes periodic changes: the models do quite well, although not perfectly. Hence, the GCMs give a realistic picture, but not a 100% exact solution. I’d say pretty impressive, and very useful for making scenarios (thus the reason that the IPCC stresses the word scenario). But, to wind up, my original purpose was to try to avoid referring to models, and instead to find some convincing arguments based on empirical observations (a bit like the blue sky proof for atoms and the sun for nuclear reactions, really). Finally, I’d like to thank everyone for their thoughtful comments! :-) -rasmus]
Oh, and I will not be backing down from what I said in #7: the increasing Dec-Feb temperature trends are so obvious in northern Minnesota that I can predict right now with a high degree of confidence that the winter of Dec 2006 to Feb 2007 in northern Minnesota will once again be above 1971-2000 temperature averages, and will continue to be above the 1971-2000 temperature averages for hundreds of thousands of years to come. I suppose if I’m wrong in #7 I’ll be wrong about 2100.
Re #115: Steve S., I have to say I get a bit peeved when on the one hand you demand greater precision in climate science, then say things like: “Looking at the imagery of the annual minima over the past 27 years, one will note that the ice free area in this latter region at the minima has tended to decrease and in fact, this year is almost non existent.” This statement is nonsense since we are as yet a month away from this year’s minimum. Quite a bit of ice will go away between now and then. If you want to do a comparison of current sea ice levels, try July of this year versus July of last year.
Re #117 and comments – there is always the possibility of a big catastrophic event that is linked to the geological and astronomical systems – meteorite strikes and so on. My favorite area to visit is the Eastern Sierra, and when one drives by the Long Valley Caldera, one can’t help but wonder what the effect of suddenly injecting 500 cubic kilometers of material into the atmosphere would be (as happened ~800,000 years ago). I only mention this because of the original cocktail party theme of this thread.
Maybe this has been mentioned somewhere here. I think the simplest explanation is that there are many more factors affecting specific local weather, producing relative weather chaos, than affect global averages.
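This point can be made concrete with a toy statistical sketch (purely illustrative numbers, not a climate model): individual "daily" values drawn from a noisy distribution are essentially unpredictable, yet their long-run average is tightly constrained.

```python
import random

random.seed(42)

# Toy model (illustrative only): each "day" is a baseline temperature
# plus large random weather noise. No single day is predictable, but
# the mean over many days converges on the baseline.
BASELINE = 15.0   # assumed mean temperature, deg C
NOISE = 8.0       # assumed day-to-day variability (std dev), deg C

days = [BASELINE + random.gauss(0, NOISE) for _ in range(3650)]

decade_mean = sum(days) / len(days)
print(f"one random day: {days[0]:.1f} C")   # could be almost anywhere in ~ +/- 24 C
print(f"10-year mean:   {decade_mean:.1f} C")  # very close to the baseline
```

The spread of the 10-year mean shrinks like one over the square root of the number of (independent) days, which is why a statement about the average is far easier to defend than a forecast for any particular day.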
Thanks for your long reply. I know how time consuming preparing these can be. Also on behalf of the other poster that you thanked can I say how nice it is to be acknowledged. It is nice to know that our posts are being read even when they do not get a response.
You asked us for arguments to be used at a cocktail party to persuade a fellow guest that the climate models can predict the future climate. Here’s a rhetorical question – What arguments do you suggest I use to persuade serious scientists that their models contain a fatal flaw, and they should listen to my suggestions to correct it?
I have tried honesty, and claimed that I have come up with the answer to rapid climate change, but that only makes me vulnerable to the accusation of being arrogant.
I have argued that the current models do not reproduce rapid climate change, but you argued that it may be caused by an unknown forcing. But all the forcings are known and have been shown to be either not present or on too small a scale to produce the effect. The surface temperature of the earth depends only on solar flux, albedo, and the greenhouse effect. We know from geological investigations that the forcing agents for these factors did not change sufficiently to cause the rapid warmings, thus it must have been the feedbacks that did it: water vapour and clouds. The forcing agents were mainly unchanged, viz:
1) Solar flux did not change because the beryllium-10 isotope record remained constant.
2) Although there may have been changes in albedo due to changes in snow cover, these would not have been abrupt enough to cause rapid climate change.
3) Changes in sea ice cover could have been abrupt because of the ice albedo effect, but the areas affected were not large enough to produce the effects seen.
4) The evidence shows that the “fixed” greenhouse gases carbon dioxide, methane, and ozone did not change.
That leaves water vapour, and it means that the strength of the feedback from water vapour is being underestimated.
Agreed that the models are performing well at present, just as the gas laws work well for a mixture of hydrogen and oxygen below 500°C.
But you say you don’t want to discuss the models. I can see your eyes glazing over in disbelief as I try to get my point across. Oh well, I am not alone. You have the same problem at cocktail parties :-)
[Response:I think you bring up an important point, but I don’t know the answer to why the models cannot reproduce past rapid climate changes. There is an article by Wally Broecker (Was the Younger Dryas Triggered by a Flood?) which questions which mechanisms could be responsible for the initiation of the Younger Dryas episode (a ~1000-year cold spell, characterised by abrupt changes, after the last glacial period had ended). If such events were caused by episodes such as accumulated melt water suddenly pouring into the ocean, then climate models with very crude ice representation may not capture them. Nor if they were triggered by meteorites or volcanoes. If this is the case, would you still think that the models contained a fatal flaw? (By the way, I don’t think you can be accused of being arrogant for asking this kind of question.) I’d call it a limitation, as the models obviously can describe other climatic aspects with realism. Mind you, we may have different definitions of ‘flaw’ and ‘limitation’, but it really depends on what you want to use the models for. If the climate models are being used to predict rapid climate changes, such as in the past, you’re right that they are flawed. But if you use them for making scenarios for the future given a slow change in the forcing, then the models are OK as long as no rapid climate change takes place (also, the Thermohaline Circulation could break down, but the oceanic models probably do not have a sufficient resolution to provide an accurate description of the ocean currents – which can be narrow – which play a central role). Sure, there are caveats associated with the climate models, and the future has many surprises in store. Climate researchers are aware of this – see for instance the discussion on thermohaline circulation and perception of risk. -rasmus]
Is there a governing principle for climate and or weather?
Something along the lines of:
Climate evolves (tends) towards a state that maximises/minimises measurable climatic parameter(s) X, e.g. minimising temperature or maximising entropy.
Or of similar importance:
A deterministic climatic model evolves (tends) towards a state that maximises/minimises measurable climatic parameter(s) X(s).
In the latter case, I presume, when climate models are allowed to run with constant forcings, etc., some collection of parameters stabilises in the long term. (Is this so?)
If it is, then it may be possible to construct governing equations that describe climatic tendencies in terms of the controlling parameters (by running the model with incremental changes of parameters to determine the governing equations), and to apply these governing equations to predict stable climates without the tedium of running the model each time. (I presume that something akin to this is done, at least in the sense that micro-events like light showers cannot be modelled individually but must be handled in terms of their regional significance.)
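This "probe the model at a few parameter settings, then fit cheap governing equations" idea is essentially what modellers call an emulator. A minimal sketch, where the "expensive model" is a made-up stand-in function (not a real GCM), fitted here with simple quadratic interpolation:

```python
# Toy emulator sketch: sample an "expensive model" at a few parameter
# values, fit a cheap quadratic through three of them, then predict
# without re-running the model. expensive_model is a hypothetical
# stand-in, not any real climate model.

def expensive_model(forcing):
    """Pretend expensive model: equilibrium response vs. forcing (made up)."""
    return 0.8 * forcing + 0.05 * forcing ** 2

# Probe at a handful of forcing values.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [expensive_model(x) for x in xs]

# Fit y = a + b*x + c*x^2 through the first three samples
# (Newton divided differences, rearranged to monomial coefficients).
x0, x1, x2 = xs[0], xs[1], xs[2]
y0, y1, y2 = ys[0], ys[1], ys[2]
c = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
b = (y1 - y0) / (x1 - x0) - c * (x0 + x1)
a = y0 - b * x0 - c * x0 ** 2

def emulator(forcing):
    """Cheap surrogate for expensive_model."""
    return a + b * forcing + c * forcing ** 2

print(emulator(3.7), expensive_model(3.7))  # agree for this toy case
```

Real emulators face exactly the worry raised above: the fit is only trustworthy where the response surface is smooth, and it says nothing about cusps or multiple stable states outside the sampled range.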
I haven’t the time to think hard enough about this just now, but I think this is the same as saying that the fabric of climate space is relatively “smooth”, e.g. free of cusps, deep holes, etc., and so would always tend to a unique point regardless of how far away the initial conditions are (is this true of the models?). Or do the models contain metastable states, or are they multistable?
This is kind of important to know, I think.
If they do not tend towards a unique point, is it true to say that a similar law has local validity?
Climate and its model evolve (tend) towards a state that locally maximises/minimises measurable climatic parameter(s) X(s).
This is, I think, the same as saying that climate is locally stable in the long run. It might take a complex and chaotic path towards stability, but the tendency is clear and inevitable.
Can anyone say what is known of the models in these kinds of terms – not whether they produce particular results or are useful, but whether they stabilise in a unique way?
Comment by Alexander Harvey — 17 Aug 2006 @ 7:47 AM
Re #117 and “if we dragged the earth out to where solar radiation is about a third less, then the temperature would remain constant.”
No, it would drop precipitously, as some simple radiation calculations show. The time scale for adjusting to changes in Solar radiation is extremely slow, at the very least in the millions of years. You could probably glaciate Earth completely by suddenly dragging it out to where there was a third less sunlight.
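The "simple radiation calculations" can be sketched with the zero-dimensional energy balance S(1-A)/4 = σT⁴. The albedo value is an assumption, and this gives only the effective (no-greenhouse) temperature, so it is a rough bound rather than a surface forecast:

```python
# Zero-dimensional radiative equilibrium: S * (1 - A) / 4 = sigma * T^4.
# Gives the effective (no-greenhouse) temperature only; real surface
# temperature would also depend on greenhouse and ice-albedo feedbacks.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(solar_const, albedo=0.3):
    """Equilibrium effective temperature (K) for a given solar constant."""
    return (solar_const * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_now = t_eff(1361.0)          # present-day solar constant, W m^-2
t_far = t_eff(1361.0 * 2 / 3)  # a third less sunlight
print(round(t_now), round(t_far), round(t_now - t_far))
```

With these numbers the effective temperature falls from roughly 255 K to about 230 K, a drop of about 25 K even before the ice-albedo feedback (which would push it further toward a Snowball state) is considered.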
If climate is inherently unpredictable, it means that no matter how much we struggle we cannot know which of two or more significantly different futures is in store for us. More than that, it implies that the future is indeterminate: that it is impossible to know anything meaningful about future climate, that there is nothing in the current state of affairs that can determine it, and nothing that we can do to influence the outcome in a desired direction.
I think that this alternative is nonsense (opinion), and even if it were so it is irrelevant (opinion).
It would imply that increasing CO2 might make the world warmer, or cooler, or do nothing (not simply that we cannot currently predict what it would do, but that it is impossible to predict what it would do; starting from here it could take divergent paths, and starting from here again in a re-run it could do something quite different). Also, even if it did one of the first two (heat up/cool down), decreasing or stabilising CO2 could either remove, enhance, or do nothing to that trend.
Now if the sun went out, would the planets get warmer or cooler?
Cooler I think.
So, climate is predictable to some degree at least.
Someone needs to advise me here. The climate modelling of current interest is presumably for a future where CO2 forcing is varied. Over the short run different initial conditions may lead to different results. When these models are run with stationary forcings do they each converge in a meaningful sense to their individual stable points?
Is there anything inherent in the process that says that they must or that they won’t uniquely stabilise? If models are inherently divergent (within each one, not in comparison) then they can not predict the future in the often accepted sense.
This is a smaller but, I think, relevant consideration. Before we can answer questions about the predictability of the real climate, one needs to know whether the model predictions are themselves predictable (i.e. whether each model individually provides unique outcomes insensitive to plausible changes in initial conditions).
Do the models predict anything? In the sense that multiple runs with different initial conditions produce a single cluster with a recognisable probability distribution. I presume the answer is yes, and I believe a mass trial of a weak model (climateprediction) with varying CO2 is in progress, does it confirm that the model is inherently capable or incapable of predictions?
I should like to know if the big models stabilise if all other things are left unchanged and particularly how quickly they stabilise. The stability time constant might tell me something about the difference in timescales between weather and climate.
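These questions can be illustrated with the simplest chaotic system, the logistic map, used here purely as a toy stand-in for a model run: two nearly identical initial conditions yield completely different trajectories ("weather"), yet their long-run statistics converge to the same value ("climate"):

```python
def trajectory(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    xs = []
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.2, 100_000)
b = trajectory(0.2 + 1e-9, 100_000)  # almost identical start

# "Weather": individual steps decorrelate within a few dozen iterations;
# the difference at step 50 is typically order one, vastly larger than
# the 1e-9 initial perturbation.
print(abs(a[50] - b[50]))

# "Climate": long-run means of both runs converge to the same statistic.
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(round(mean_a, 3), round(mean_b, 3))
```

This is of course only an analogy, but it shows the sense in which an inherently divergent system can still "uniquely stabilise" in its statistics, which is the relevant sense for climate prediction.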
Comment by Alexander Harvey — 17 Aug 2006 @ 8:53 AM
#123 Alexander. There are numerous papers and reports about modeling the climate, and other characteristics of planets, using a principle of maximum entropy production. Google the phrase “maximum entropy production” with ‘climate’. Bejan, Lorenz, Paltridge, and others have papers and reports on this subject.
[Response: While this theory is aesthetically pleasing, it isn’t terribly well thought of in the community. For one, it doesn’t consider rotational effects at all, and doesn’t match lab experiments when rotation is included. And when it’s used to predict anything, it gives the same answer as was known already (Kleidon et al, 2003). But for another view read Ozawa et al (2003). – gavin]
Are you saying that the large literature on application of entropy maximum to analyses of the properties of planets has been dismissed by the climate change community?
It is my understanding that while GCMs do rotate, neither the daily nor the seasonal variability is considered to be reliable. If this is not correct will you kindly point me to published results for these quantities.
[Response: As far as I am aware, MEP theory has not produced any useful results, possibly because the climate fluctuations of entropy production are actually a very small fraction of the total entropy production and so are not usefully constrained by the theory (Stephens and O’Brien, 1993; O’Brien 1997). Think of it this way – if the uncertainty in the climate sensitivity to doubled CO2 is a function of the micro-physics of clouds, how can a theory which contains neither greenhouse gases, clouds, nor micro-physics succeed in predicting that quantity? What about regional climate changes from the addition of aerosols? etc.
With respect to the diurnal or seasonal cycles in GCMs, why do you think they are not well-modelled? They’re not perfect, but the main features are well captured. I will however read the paper you point to and report back… – gavin]
#127. Gavin, and why should a model not predict what is already known? The GCMs do not predict, at various spatial and temporal scales, what is already known. This latter characteristic (not predicting) seems to be a major problem, in contrast to the former (correct prediction), which many consider to be a major requirement of successful, useful modeling. The lack of the former in the GCMs is the focus of much discussion and debate.
We can take this offline, but I can’t find your e-mail address on RC. You have not correctly characterized some of the MEP models. And I hope you will provide corrections online. Additionally, I asked for publications in which the daily and yearly variability from a GCM is reported. You merely stated that ‘it is good’. Kindly send me reference citations and/or report them on RC. And, kindly send reference citations to predictions of regional climate and comparisons with measured data.
One of the issues that plays into any discussion of climate physics is this: many people unfortunately still have a nineteenth century view of physical theory (this is not directed to anyone on this thread; but it is a fact). The notion that physics is an ‘exact science’ is at odds with 100 years of quantum theory and chaos theory research; that is why the term ‘constrained’ is used in place of ‘determined’. The primary culprit here is likely basic science education, which really should introduce these concepts at an early stage.
Those constraints can be very, very accurate, but as gavin and alexander note the timescales are of fundamental importance. Imagine that we were coming out of a glacial period, and that vast pools of liquid water existed all over the Northern Hemisphere continents, held back by flimsy dams consisting of ice sheet remnants. At some point the walls would be breached and catastrophic floods would occur, but we should not expect a climate model to accurately predict the timing of such events – perhaps they would be triggered by an earthquake (try predicting those!). This relates to Alastair’s comment in #123, which includes the statement “but all the forcings are known” – but those forcings may be quite random, or be the products of systems with a high order of chaos. I would instead think of a situation in which a system builds up large potential energy, which can then be converted to kinetic energy by a little trigger (for example, Atlantic hurricanes are triggered by easterly waves, or in other words, “a subcritical bifurcation of the radiative-convective equilibrium state”).
In that case, an important aspect of constraining rapid climate change is to carefully examine the system for such buildups of potential energy (i.e. warm sea surfaces and warm upper ocean profiles in the case of hurricanes). Then ask what mechanisms would result in a rapid and violent dissipation of that energy vs. a slow and gentle dissipation. At the end, you come up with a probabilistic assessment, not a deterministic one.
One other thing is chaos – which alexander brings up in #124. Experimental study of chaos is usually carried out in tiny ‘macroscopic’ systems – chaos in a test tube. This is quite tricky itself and one issue that arises is why reversible but chaotic microscopic processes lead to irreversible behavior – and applying this to something as large and complex as the climate system using the notion of ‘states’ and so on seems a big reach, but is perhaps a good guiding principle.
Re #127 and the notion of maximum entropy production: Thermodynamics is often more useful in local situations where all the likely mechanisms can be included. Free energy combines enthalpy (heat content) and entropy (disorder). I’ve been rereading Kerry Emanuel’s article over and over; he says it much better, so here you go: http://www.physicstoday.org/vol-59/iss-8/p74.html :
“The relatively high surface temperature also means that atmospheric radiation exports entropy to space. The reason is that the atmosphere is heated at approximately the surface temperature, but it cools at the much lower effective emission temperature of Earth. In equilibrium, the planet must generate entropy, and the vast majority of that entropy is produced in the atmosphere, mainly through the mixing of the moist air inside clouds with the dry air outside them and through frictional dissipation by falling raindrops and snowflakes. Were it not for moisture in the atmosphere, the entropy would have to be produced by frictional dissipation of the kinetic energy of wind. The resulting air motion would be too violent to permit air travel.”
This concept is then applied to the local situation of hurricanes. The balance of power generation and dissipation is then used to get an expression for the maximum wind speed a hurricane can generate, using the thermodynamic concept of the Carnot heat engine. As gavin points out a global use of entropy (maximum entropy production) would likely hide too many details to be useful. Cheers!
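Emanuel's Carnot argument can be turned into a back-of-envelope number. The sketch below is illustrative only: the exchange-to-drag ratio and the air-sea enthalpy disequilibrium are assumed plausible values, not measurements, but the sketch shows how the sea surface and outflow temperatures bound the maximum wind speed:

```python
import math

def potential_intensity(sst_k, outflow_k, dk, ck_over_cd=0.9):
    """Rough Emanuel-style potential-intensity estimate (m/s).

    v_max^2 ~ (Ck/Cd) * (Ts - To)/To * dk

    sst_k      : sea surface temperature (K)
    outflow_k  : outflow temperature aloft (K)
    dk         : air-sea enthalpy disequilibrium (J/kg) -- assumed value
    ck_over_cd : ratio of enthalpy-exchange to drag coefficient -- assumed
    """
    efficiency = (sst_k - outflow_k) / outflow_k  # Carnot-like factor
    return math.sqrt(ck_over_cd * efficiency * dk)

# Plausible tropical numbers (assumptions, for illustration only):
v1 = potential_intensity(300.0, 200.0, 12_000.0)  # warm SST
v2 = potential_intensity(302.0, 200.0, 14_000.0)  # 2 K warmer, moister air-sea gap
print(round(v1), round(v2))
```

With these inputs the bound comes out around 70–80 m/s, i.e. the strongest observed hurricanes, and it rises with SST, which is exactly why warmer sea surfaces raise the ceiling on intensity without guaranteeing more storms in any given year.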
“At some point the walls would be breached and catastrophic floods would occur, but we should not expect a climate model to accurately predict the timing of such events – perhaps they would be triggered by an earthquake (try predicting those!). This relates to Alastair’s comment in #123, which includes the statement “but all the forcings are known” – but those forcings may be quite random, or be the products of systems with a high order of chaos.”
The breaching of a dam wall is not a climate forcing. It may trigger a climate forcing by creating sea ice in the North Atlantic, which changes the albedo, but all those forcings which could be triggered have been measured. The change in water vapour as a result of the sea ice has not been measured, and I am arguing that the climate modelling of that feedback is wrong. That is why the past cannot be simulated correctly by the models.
Of course an eruption of a super-volcano could occur, which would invalidate all the models which predict a rise in temperature of 3°C. You are confusing the testing of the models with past data, which can be done, with using them to predict the future under all circumstances, which can never be done. We can only be sure they are correct when the future arrives!
But what I am saying is that the models are predicting a monotonic rise in temperature as a result of global warming. We know from the ice cores that the climate does not change that way. It switched out of the Last Glacial Maximum to the warm B-A interglacial, then switched into the cold Younger Dryas stadial where the temperature was fairly steady for 1000 years, then it switched out of the Younger Dryas into the Holocene interglacial, and temperature has remained stable for the last 10,000 years. Without Man’s intervention it would have switched back to glacial conditions, but now that we are warming the planet the next switch will be similar to two of the last three, i.e. to a hotter climate.
A tipping point seems to have been reached where everyone believes that rapid climate change is the Younger Dryas stadial. The only people who still seem to have any sense are Seager and Battisti. See R. Seager, D. S. Battisti, in The General Circulation of the Atmosphere, T. Schneider, A. S. Sobel, Eds. (Princeton Univ. Press, 2005), and “Is the Gulf Stream responsible for Europe’s mild winters?”, R. Seager, D. S. Battisti, et al., Quarterly Journal of the Royal Meteorological Society 128, 586, pp. 2563–2586. http://lysander.ingentaconnect.com/vl=4597195/cl=12/nw=1/rpsv/cgi-bin/linker?ini=rms&reqidx=/cw/rms/00359009/v128n586/s1/p2563
Even if the THC stops, Europe will still stay warmer than Newfoundland. Its climate is kept warm by winds blowing over the Gulf Stream, which is a western boundary current and will not stop flowing provided the Earth continues to rotate!
[Response:I believe there are plans for a post on the Seager & Battisti paper in the near future, so I won’t comment on it here. -rasmus]
There seems to be a slight inconsistency in your argument that I’ve seen over and over again when it comes to the notion of an oncoming ice age as a result of a stable glacial-interglacial cycle. On one hand, you state that “You are confusing the testing of the models with past data which can be done, with using them to predict the future under all circumstances which can never be done. We can only be sure they are correct when the future arrives!”.
This is a common problem in modelling; the preferred scientific approach since the Renaissance has been to conduct experiments under controlled conditions (Joseph Priestley is just one of the originators of this approach – now there was a great scientist!). Unfortunately, we don’t have a parallel Earth in which we could have kept things in a pre-industrial state and observed the resulting effects – we would actually have needed at least three such parallel Earths, and two more industrialized Earths, to make a minimal statistically valid comparison. Thus, the best we can do is detective work, and the only possible model tests are to compare the model output with the actual data (which again is why comprehensive data collection is so important).
However, you then go on to say that “Without Man’s intervention it would have switched back to glacial conditions”; that’s a pretty definitive statement. You may be right, but the evidence for that is very slim – far less certain than the evidence for global warming. The last 8,000 years of climate have been remarkably stable compared to the previous glacial-interglacial cycles, according to the geologists. Why should this long stable period have dipped back into another round of glacial-interglacials? In fact, the only evidence I’ve seen for that is anecdotal reports of the horrible blizzards of the late nineteenth century, the local “Little Ice Age” that sent the Scandinavians southwards on ocean exploration voyages, and the ‘hockey-stick’ climate record that has come under so much attack in Congress.
So, how do you resolve that contradiction? Shouldn’t we base our actions on the known data and models, and if so, shouldn’t we take drastic steps to meet that ~70% reduction in fossil fuel CO2 emissions that the models say might stabilize the climate?
MEP is just the sort of governing principle I was grasping for. So thanks again.
It will take me a while to get up to speed but already I feel that the reservations as to how it can be used predictively (Gavin: “how can a theory which neither contains greenhouse gases, clouds or micro-physics succeed in predicting that quantity?”) could be its great strength.
By that I mean that the principle (OK, hypothesis) cares not about the mechanism. It will exploit whatever means are available to maximise work in the system. It might however fail to predict the nature of the weather in general or the climate in fine detail.
This is to say that it might not by itself produce better ways to model climate but it is something that is clear enough to hold in one’s head, and think about in order to see its consequences, which I shall try and do.
Comment by Alexander Harvey — 17 Aug 2006 @ 6:15 PM
RE: try July of this year versus July of last year.
Been there, done that: both July and the current extent have a negative anomaly with a lower amplitude than last year. I stand by my prediction.
FYI …. report from yesterday from the NWS Anchorage ice desk …. highly dependable, the “Deadliest Catch” gamblers depend on it:
THE MAIN ICE EDGE LIES FROM 71.4N 140W TO 72N 146W TO 70.5N 145.5W TO
71N 150W TO 72.4N 153W TO 71.4N 160W TO 72.6N 160W TO 71.5N 171W
TOWARD WRANGEL ISLAND. THE EDGE IS MAINLY 3 TO 6 TENTHS YOUNG…FIRST
YEAR THIN…AND NEW ICE.
FORECAST THROUGH MONDAY…FROM BARROW TO PRUDHOE BAY…NORTH WINDS
WILL PROBABLY PUSH ICE CENTERED AT 148W SOUTH 10 TO 15 MILES BY
SATURDAY. MOST OF THE THIS ICE WILL BE 1-3 TENTHS CONCENTRATION AND
FROM PRUDHOE BAY TO KAKTOVIK…LITTLE NET CHANGE IN ICE EDGE IS
EXPECTED. WEST OF BARROW…NORTHEAST THEN EAST WINDS WILL PUSH ICE CENTERED AT
160W TOWARD THE SOUTHWEST THROUGH THE PERIOD.
Re #133 Re However, you then go on to say that “Without Man’s intervention it would have switched back to glacial conditions”; that’s a pretty definitive statement. You may be right, but the evidence for that is very slim – far less certain then the evidence for global warming.
Here are the CO2 and temperature records from the Vostok core that the Russians drilled in Antarctica. You can see that over the last four hundred thousand years there have been short periods, roughly 100,000 years apart, when the temperature was as high as it is today. They did not last long, so it is reasonable to suggest that this interglacial would be similar, and that Antarctica would soon descend slowly back into glacial conditions. http://www.grida.no/climate/vital/02.htm
But it is more complicated than that. This interglacial has lasted longer than the two preceding ones, but not longer than the one 400,000 years ago. Why is that? Is it because the Milankovitch cycles are in a similar state now to those of 400,000 years ago, or is it because Man has kept the climate warm as a consequence of the development of farming? We don’t know, but I am assuming that if Man were not here, the planet would return to another glacial period sooner or later.
The “Little Ice Age”, which happened 400 years ago, would not show up in the Vostok ice core even if it were there. The current Ice Age began 2,000,000 years ago, and it contains glacial periods, when it is cold, and interglacial periods, when it is warm as it is today and was during the Little Ice Age. In other words, the Little Ice Age was not an ice age; it was not even a glacial period. It was just slightly cooler.
Those two pages showed what happened at the South Pole, but the vast majority of people live in the Northern Hemisphere. The ice cores from Greenland only go back to the last interglacial (note this chart has time flowing from right to left): http://upload.wikimedia.org/wikipedia/en/6/66/Ice-core-isotope.png
But they show much more violent changes in temperature than in the SH. Unfortunately it is not clear what happened at the end of the last interglacial in Greenland, because the ice at the bottom of the core may have been distorted.
It seems that these violent changes were due to the formation and rapid disappearance of sea ice sheets triggering runaway changes in water vapour content, because of the Clausius-Clapeyron effect. The sea ice no longer stretches down to the Irish coast as it did during the Younger Dryas, but when the Arctic sea ice disappears, then we will have another rapid warming. It is too late to prevent that happening by reducing CO2 levels. We, or at least the USA, will just have to put a sun shield into space, but they had better do it damned quick!
re 103. A thank-you note to ike solem for expressing appreciation. My research on earlier snowmelt runoff dealt with the Red River (ND/MN), the St. Louis River (near Scanlon/Duluth MN) and the St. Croix River (MN/WI). I haven’t done my own research in the Rocky Mtns, but I have read articles as I post them to my yahoo group called ClimateArchive. I thought the series in 2004 by Jim Erickson, Rocky Mountain News was good.
Re: Alastair in #117 and Rasmus earlier – towing the Earth.
If you tow the Earth out today a significant distance, it’ll freeze. Explaining the very long-term climate stability of our little home here is tricky, and not currently well understood – save that the answer certainly has to do with CO2 concentration, and quite possibly some of: methane, availability and abundance of continental shelves to evaporate lots of H2O into the air, relative amounts of geological energy going down over geological time as the U235 decays off, placement of continents near the poles or near the equator, and possibly a few others I can’t think of off the top of my head. Google “wikipedia supercontinent cycle” and related links for a plausible theory of how the Earth froze solid a couple of times while staying right in its present orbit. However, past performance of the Earth is no guarantee of future performance. Absent a plausible physical model (towing asteroids around?), about the best you can do is plug the numbers into a GCM and see what happens. As Barton pointed out around #125, certainly you’ll get a proper Ice Age as the drop in incoming solar flux overwhelms the current deglaciation of the planet that our little adventure in carbon burning has caused. In a feedback-free Arrhenius model of the Earth, towing the Earth out to 1.225 AU – where the sunlight is 2/3 of its present strength – drops the global mean temperature by some 6.9 degrees Celsius. Feedbacks should amplify that effect somewhat. In GCM terms, that’s a global longwave forcing of -22.2 W/m^2. I don’t know how easy it is to reconfigure a GCM, but if that doesn’t freeze the oceans, I’m not sure what will. Also you’d have to move the moon, else we lose tides and such. ;)
Comment by Steffen Christensen — 17 Aug 2006 @ 9:52 PM
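The inverse-square arithmetic in Steffen’s comment is easy to check. Below is a minimal Python sketch (the 1361 W/m^2 solar constant and 0.3 albedo are assumed round numbers) that confirms the ~2/3 flux at 1.225 AU and computes the feedback-free *blackbody* effective-temperature drop. Note this is an even cruder estimate than the Arrhenius-model figure of 6.9 C quoted above, since it ignores the greenhouse effect entirely, so the numbers differ.

```python
def flux_fraction(distance_au):
    """Inverse-square law: solar flux relative to its value at 1 AU."""
    return 1.0 / distance_au ** 2

def effective_temp(flux_wm2, albedo=0.3):
    """Blackbody effective temperature of a planet (no greenhouse)."""
    sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    return ((1 - albedo) * flux_wm2 / (4 * sigma)) ** 0.25

S0 = 1361.0  # assumed present-day solar constant, W/m^2
frac = flux_fraction(1.225)   # ~2/3, as in the comment above
dT = effective_temp(frac * S0) - effective_temp(S0)
print(f"flux fraction: {frac:.3f}, effective-temperature change: {dT:.1f} K")
```

The flux fraction comes out at 0.666, matching the comment; the blackbody temperature drop is a few tens of kelvin, illustrating why the sign of the conclusion (a frozen Earth) is robust even when the model details differ.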
On the issue of ice ages: I remember reading, about a year ago, a report I am no longer able to locate.
Some French astronomers had re-computed the Earth’s orbit using the latest and best values for the sun, the planets and all their moons, for both the past and the future 25 million years.
Their conclusion was that there would not be a next ice age, or any other major variation driven by celestial mechanics. Earth’s orbit has settled into relative stability for some considerable period of time.
You state, “but when the Arctic sea ice disappears, then we will have another rapid warming. It is too late to prevent that happening by reducing CO2 levels.”
I fear you are correct, but the magnitude of that rapid warming may be reduced by drastic actions – a massive reforestation program and a massive renewable energy program, and a total ban on fossil fuel combustion (that’s about as drastic as you can get, I suppose).
Warming at the poles is agreed by all to be the most intense, so people who say that undersampling the Arctic ocean is not important are way off base. Shutting down thermohaline circulation seems unlikely to cause a cooling in Britain because of the compensating increased polewards heat transfer. The thing to look at will be the salinity. Increased evaporation vs. freshwater dilution? Hard to say, I suppose.
If I had to bet a large stretch of coastline on your statement being correct, well – just look at insurance company policy these days on coastal areas. They seem to agree with you as well.
This paper, on calculations of orbits, states that chaos limits the accuracy of the calculations. The last sentence in Section 4 states, “The ultimate obstacle to achieving higher accuracy is chaos and not simulation errors, assuming that we have a sufficiently accurate physical model.” I read this to say that the presence of chaos ultimately limits what can be predicted and the accuracy of the predictions.
All corrections to my understanding are welcomed.
[Response: This is an initial value problem (just like weather), and just like weather, the accuracy through time is limited by the size of the Lyapunov exponents which describe the divergence of two close initial states. You might want to check what the time-scales for that to be felt in orbital calculations and then report back as to whether you think it’s relevant on the ten-to-fifty thousand year time scale….. – gavin]
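For readers unfamiliar with Lyapunov exponents, here is a toy illustration in Python – using the one-dimensional logistic map rather than anything orbital or climatic, purely because its exponent is known exactly (ln 2 at r = 4). Two nearby states separate roughly as exp(λn), and λ can be estimated by averaging ln|f′(x)| along a trajectory:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100_000, transient=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging ln|f'(x)| = ln|r*(1 - 2x)| along a single trajectory."""
    x = x0
    for _ in range(transient):       # discard the initial transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(f"lambda ~ {lam:.3f} (exact value for r=4 is ln 2 ~ {math.log(2):.3f})")
```

A positive λ means initial-condition errors grow exponentially, which is Gavin’s point: the relevant question is the e-folding time compared to the time scale of interest.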
re:145. “Oh my goodness.” ;-) There is so much information on this web site about climate modeling that it is astonishing and sad to read comments like that. Unless of course it is simply another in the long line of “drive by” postings with no scientific support. Guess we shall see about that. Meanwhile, since around 1970, natural forcings alone cannot explain global temperature trends over the same period. In short, your “boundary values” (natural forcings) have been overwhelmed by man-made forcings, which have been modeled quite well within reason (i.e. there is always some scientific uncertainty). Future modeling is based on various scenarios, for a variety of reasons. The results produce a range of possibilities that are consistent with one another.
RE: #145 – Do you understand what he meant by boundary values? Why is the mention of them sad? Are not boundary values the essence of getting to solution sets for complex thermal scenarios? This is basic stuff, basic upper division calculus, basic upper division heat flow. Did you ever take such courses?
I’m reporting back re: chaos, initial values, boundary values, and Lyapunov exponents. The question is not what the Lyapunov exponents are for the calculations of orbits. The question is what their values are for the continuous equations used in long-range climate calculations: are they such that accuracy is not limited over the time scales of interest, or such that it is? In the sense that climate is chaotic, these are questions that demand rigorously determined answers such as those provided in the linked paper. The Lyapunov exponents for orbit calculations are not those obtained for AOLGCM calculations.
A major part of the paper linked in #143 is devoted to Verification of the accuracy of the numerical integration methods. And because the basic equations are ordinary differential equations (ODEs), the Verification can be rigorously determined. The calculation of orbits is an initial value ODE problem because the boundary conditions were set by the physical model mentioned in the quoted sentence.
Climate models are partial differential equations (PDEs) and require initial and boundary values. Unlike the case of ODEs, determination of the accuracy of numerical solution methods for PDEs is more nearly an art; not a mathematical science as for ODEs. Numerical approximations for boundary conditions, by the way, can destroy the most carefully-laid plans of numerical solution methods. So far as I know, the Verification of the numerical solution methods for the PDEs used in AOLGCM models has not been demonstrated. The order of the accuracy of the numerical approximations in the asymptotic range has not been numerically demonstrated. The order of the as-implemented truncation errors for the entire model and its applications procedures is also unknown. And, additionally, the Lyapunov exponents for the continuous equations have not been determined. If this statement is not correct, let me know. Additionally, any papers in which the Lyapunov exponents have been rigorously estimated for the case of numerical integration of a system of non-linear PDEs would be appreciated. The specific case of any AOLGCM is especially of interest.
One successful method that is usually applied to numerical solution methods for PDEs is demonstration that the calculated results are independent of the sizes of the discrete temporal and spatial scales used to integrate the equations. This exercise will allow the as-implemented-and-applied order of convergence and truncation errors to be estimated. If this exercise has been conducted for any AOLGCM model and code and calculation I have not been successful in locating it. Any assistance along these lines would be appreciated.
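The grid-refinement exercise described above can be illustrated on a trivially small problem. The Python sketch below (a hypothetical stand-in, not a claim about any AOLGCM) computes a derivative with a second-order central difference at three successively halved step sizes and recovers the observed order of accuracy from the ratio of successive differences – the essence of a Richardson-style verification study:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
vals = [central_diff(math.sin, x, h) for h in (0.1, 0.05, 0.025)]
# Observed order of convergence: p = log2((v_h - v_{h/2}) / (v_{h/2} - v_{h/4}))
p = math.log2((vals[0] - vals[1]) / (vals[1] - vals[2]))
print(f"observed order of accuracy: {p:.2f}")  # theoretical order is 2
```

For a verified implementation, the observed order should approach the formal order of the discretization as the grid is refined into the asymptotic range; a mismatch flags an implementation or boundary-condition error.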
Equally important is this sentence from the conclusions of the paper, “The present results, therefore, need to be checked by researchers who work independently of our group and use different simulation methodologies, if possible.”
re: 147. Oh yes, graduate degree and 23 years experience in fact. Did you? Seems unlikely. I am not sure why this has to be repeated but here goes: Natural forcings alone can not explain global temperature trends since around 1970. When added, anthropogenic forcings do. Natural forcings alone among the set boundary conditions do not reproduce the warming. The anthropogenic forcings are now stronger. This is basic modeling stuff about boundary conditions so do try to understand, take a fundamental modeling course again, and follow along.
How will we know when we have an adequate climate model? What is the criterion for knowing when to stop and agree that we have what we need?
I ask this as it seems that modelling the climate goes back over 100 years, and predictions for a doubling of CO2 have ranged from about 0 C to 6 C from the earliest days. The models have got better and better, but the range seems to remain in that ballpark. We must surely know a good deal more about rates of change, but the range does not seem to narrow consistently.
Given the progress so far, when is it predicted that we will have a consensus that the range has converged to the limit of what can be known or sensibly asked?
This would provide a time limit to a watch-and-wait (for the models to agree) strategy.
Given that we can put a date to this, what is the best-guess temperature rise by that date?
That is to say, will the models achieve accepted results before they are overtaken by events?
Unpromising answers to these questions would lead one to ask whether the modelling has not already done enough, and quite possibly whether the primary goal was reached many years or decades ago.
It worries me that pursuing the question of “how bad” relentlessly only gives rise to a “wait and see” reaction: that we must sit back and wait for the modellers to sort their models out. If the real answer is that it is going to be bad, but “how bad” is unanswerable in a useful timescale, then a whole excuse for doing nothing evaporates.
I worry that we are behaving a bit like a liner steaming full-ahead in an endless night through fog, in a region known to be prone to icebergs; whilst waiting for the guys in the back-rooms to perfect radar in the hope that it will enable us to accurately see the hazard that is looming up in that fuzzy outline dead-ahead in time to take avoiding action.
Comment by Alexander Harvey — 19 Aug 2006 @ 10:05 AM
Re # 149. The numerical solution can only be independent of the grid resolution/numerical accuracy for linear PDEs. In the case of nonlinear PDEs (as in turbulence and in ocean and atmospheric flows) the solutions will always diverge, no matter how fine the grid or how accurate the numerical method. That’s why weather forecasts will always have limited predictability. I work on simulation of turbulent flows. Two different numerical codes, or the same code with different resolution, will always give different solutions after some time (given the same initial conditions). However, the statistics (average velocity, spectra, variance, PDFs) are the same if the resolution and the numerical accuracy are high enough (and agree with observations). That is what I (can) demand of my code and what I am interested in, in the case of a chaotic system – not that the solutions are exactly the same after some time. I guess the same demands apply to GCMs: the probabilities of the projected temperatures should be more or less independent of resolution and numerical accuracy, not the temperature itself.
Comment by Geert Brethouwer — 20 Aug 2006 @ 4:19 AM
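Geert’s point – individual solutions diverge while the statistics converge – can be seen in miniature with the Lorenz-63 system. The Python sketch below (forward Euler with an assumed step dt = 0.005; a toy, not a GCM) runs two trajectories whose initial conditions differ by one part in a million: the states end up completely decorrelated, yet the time-mean of z is nearly identical for both runs.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def compare(n=100_000, skip=2_000, eps=1e-6):
    """Run two trajectories from nearly identical initial conditions;
    track their maximum separation and the time-mean of z for each."""
    a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + eps)
    za = zb = max_sep = 0.0
    for i in range(n):
        a, b = lorenz_step(a), lorenz_step(b)
        sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        max_sep = max(max_sep, sep)
        if i >= skip:
            za += a[2]
            zb += b[2]
    m = n - skip
    return max_sep, za / m, zb / m

max_sep, mz_a, mz_b = compare()
print(f"max separation: {max_sep:.1f}; mean z: {mz_a:.2f} vs {mz_b:.2f}")
```

The separation grows from 1e-6 to the full size of the attractor, while the two z-averages agree to within a small fraction of their value – the “climate” of the system is reproducible even though its “weather” is not.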
RE: #153. Linear versus non-linear is not the issue; the basic, not-averaged Navier-Stokes equations are actually quasi-linear, I think.
Your example of fine-scale resolution of turbulence is the same as I mentioned in #149. I tried to contrast that with modeling and calculations of mean flow fields. I’m certain that you expect calculations of the mean flow field, with whatever closure approach is employed, to be both (1) the same between different codes and (2) the same between two runs of the same code with different spatial and temporal resolutions. The latter case I have also tried to summarize as a necessary process for Verification of the calculations performed with AOLGCMs. Until the point of a resolution-free solution is attained, the discrete equations have not been solved.
Let me know if I have not correctly characterized what you would expect from code(s) designed to model and calculate the mean flow field of a turbulent flow.
I agree with your summary of the approach that is used whenever the small scales in a turbulent flow are resolved. Direct solution of the Navier-Stokes equations produces the local-instantaneous small-scale motions of the fluid. The mean flow and stats of the turbulence will agree both between codes and with experimental data.
And here is where I am having difficulty. I understand that the AOLGCM models and codes do not resolve the small-scale fluid motions. The basic mass, momentum, and energy equations are written to describe the motions of the mean flow. Turbulence is modeled by only algebraic closure equations. The spatial scales used for numerical solutions of the equations are enormous; on the order of kilometers, I think. And the calculated numbers are taken to represent the motions, and temperature, of the mean motions of the fluid; not the stats of the numbers as when turbulence is calculated directly from un-averaged Navier-Stokes equations.
That is, the temperature of the mean motions is not rolled up from applying statistics to the calculated numbers. So, it seems to me that the calculated results from AOLGCMs are sometimes implied to be equivalent to what is calculated by direct solutions of the Navier-Stokes equations, yet the calculated numbers are not handled in a manner consistent with what is required by that approach.
If what I am thinking is not correct, I hope Gavin and others will shed some light on where I have messed up. It would be very interesting to see that the spectra, variance, PDFs, etc. of the fluid temperature calculated with different codes, and with different runs of the same code – especially runs with slightly perturbed ICs – do in fact demonstrate more or less independence of resolution and numerical accuracy.
Thank you for your information and I hope we can continue to explore these topics here on RC.
The topic seems to have shifted significantly from the beginning, so let me try to go back to the original issue: “Short and simple arguments for why climate can be predicted”. If I may, let me offer a simple analogy that I often use in my own field of Solar System dynamics.
Most of us have played pinball. The path the ball follows through the playing field is chaotic because of the bumpers set up to make the game exciting. The location of the ball becomes increasingly difficult to predict with time, as tiny differences in initial conditions make it nearly impossible to predict exactly how the ball will hit each bumper. In this manner, predicting the “weather” is a lot like predicting the future trajectory of the ball: short predictions are much more accurate than long-term ones.
When we consider the long term evolution of the ball, however, the slant of the table means the *secular* trend is downhill, particularly if we do not use the paddles at the bottom. If we shoot 100 balls through the game, each one will eventually end up at the bottom; the only real issue is timescale. Climate predictions fall into this category: climate is much easier to predict over long timescales than weather is.
Global warming can be considered an important factor in determining the slant of the table and how fast the balls roll downhill. The greater the slant, the faster the secular trend and the shorter the overall timescale needed to reach the bottom. There are short term perturbations to the system that can produce the opposite trend (i.e., the ball hits a bumper and goes uphill), but they do not seriously affect the secular trend.
Thus, our goal in dealing with global warming should be to try to make the table much flatter than it is now and reduce the secular trend.
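The pinball analogy lends itself to a toy simulation. In the Python sketch below (every number invented purely for illustration), each bounce adds unpredictable noise – the “weather” – while a small constant drift plays the role of the table’s slant. No single ball’s path is predictable, but the average final position of many balls tracks the drift closely:

```python
import random

def pinball(drift=0.1, noise=1.0, steps=1000, rng=None):
    """One ball: each bounce adds random noise ('weather');
    the table slant adds a small constant drift ('climate')."""
    rng = rng or random.Random()
    pos = 0.0
    for _ in range(steps):
        pos += drift + rng.gauss(0.0, noise)
    return pos

rng = random.Random(42)
finals = [pinball(rng=rng) for _ in range(500)]
mean_final = sum(finals) / len(finals)
print(f"mean over 500 balls: {mean_final:.1f}; the slant alone predicts {0.1 * 1000:.0f}")
```

Any individual ball can wander far from 100, but the ensemble mean lands very close to drift × steps – the secular trend dominates exactly as the analogy says.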
Ike Solem reminds us that volcanos can have a large effect on climate.
There was an impressive swarm of earthquake activity along the southern edge of the Long Valley Caldera during mid November, 1997. The activity reached an initial peak of some 800 quakes per week, then about January 1998, another period peaked at around 900 quakes per week. A few quakes had intensities of near M5. The activity then receded over a period of a few months. Last I heard, the lava dome had also “deflated” to some degree.
“So, my question is, do you think people get the message that I try to convey this way? ”
No. I don’t get it, and I’m a scientist. The things you are talking about “predicting”, such as winter or cold air at high altitude, are simply “climo” and not a forecast, which implies a temporal component.
This doesn’t mean that I don’t understand the difference between forecasting with climate models and weather models, I just don’t understand the analogies made here.
BTW, where can I find information about actual climate forecasts made in the past twenty years? I can never seem to find this information.
[Response: Climatology is about time variations and about expectations (forecasts) due to changes in the forcings. Furthermore, predictions do not necessarily have to be ‘just a weather forecast’; you can also use models to make a prediction of the surface temperature on Venus. -rasmus]
“I can predict right now with a high degree of confidence that the …temperature averages, and will continue to be above the 1971-2000 temperature averages for hundreds of thousands of years to come.”
Perhaps you should be a little more cautious. I’ve made my living by forecasting and I have learned to be careful of the assumptions I make. The future is a strange place and there are plenty of things in the next few hundreds of thousands of years that can hose your forecast.
My reply is to offer a bet: I will predict the average temperature of the earth for next year, and they will predict the temperature of the city we are in for that day next year. While I have heard a number of very creative reasons why they will not agree to the bet, I have yet to have anyone take me up on it.
Comment by John Cross — 13 Aug 2006 @ 9:01 pm
The standard deviation of annual temperature fluctuations of the earth is roughly 20 times (just a guess) smaller than the standard deviation of the daily temperature fluctuations at a random city.
Your bet says nothing about the skill of climate forecasting. In fact, daily temperature forecasts have far more skill than annual earth temperature forecasts when measured by the percent of periodic variability which is explained by the forecast.
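That “roughly 20 times” guess is easy to play with on synthetic numbers. The Python sketch below assumes everything: a 5 C day-to-day spread, ~10 effectively independent regions, and statistical independence that real weather does not have. It only illustrates the averaging effect: a mean over many days and places has a far smaller standard deviation than any single day at a single city.

```python
import random
import statistics

rng = random.Random(1)
daily_sigma = 5.0            # assumed day-to-day spread at one city, deg C
n_days, n_regions = 365, 10  # crude: ~10 effectively independent regions

# Spread of "one day at one city" vs "annual global mean", many realizations
city_days = [rng.gauss(0.0, daily_sigma) for _ in range(2000)]
annual_means = [
    statistics.fmean(rng.gauss(0.0, daily_sigma) for _ in range(n_days * n_regions))
    for _ in range(500)
]
print(f"city-day std: {statistics.pstdev(city_days):.2f} C")
print(f"annual-global std: {statistics.pstdev(annual_means):.3f} C")
```

Under these (oversimplified) assumptions the annual-global spread shrinks by roughly the square root of the number of independent samples, which is why the two bets in the comment above are not comparable tests of skill.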
Is there a simple explanation someone can give me as to why calculating climate sensitivity as the warming due to GHGs divided by the forcing is not accurate? That is, 33 C and 148 W/m2 respectively, for a sensitivity of 0.22 C/W/m2.
[Response: For one, this assumes a linear response over the whole of climate history, and would clearly fail the ‘faint young sun’ test. Secondly, where does 148 W/m2 even come from? The forcing number you would want to use (assuming that you would erroneously want to use a linearised estimate) is the net radiative change at the tropopause when you remove all greenhouse absorbers (H2O, CO2, clouds etc.) while keeping the albedo the same. I don’t know what that number would be, and I don’t think it’s particularly relevant in any case (see first point). As far as I can tell this ‘calculation’ originates from Douglas Hoyt (and doesn’t include feedbacks – which is the whole point), and has no back up anywhere in the literature. In contrast, estimates of the climate sensitivity derived from observational constraints that have plenty of support in the literature give numbers around 0.75 C/W/m2 (see our previous posts on the topic: most recently here). Pulling numbers out of thin air is no substitute! – gavin]
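To make the contrast in that response concrete, one can apply each sensitivity to the standard simplified expression for CO2 forcing, ΔF = 5.35 ln(C/C0) W/m² (the widely used Myhre et al. 1998 fit, which gives about 3.7 W/m² for a doubling). A quick Python sketch:

```python
import math

def co2_forcing(concentration_ratio):
    """Simplified CO2 radiative forcing (Myhre et al. 1998 fit), in W/m^2."""
    return 5.35 * math.log(concentration_ratio)

f2x = co2_forcing(2.0)  # forcing for doubled CO2, ~3.7 W/m^2
for label, sens in [("naive 0.22 estimate", 0.22),
                    ("observationally constrained", 0.75)]:
    print(f"{label}: {sens} C/(W/m^2) -> {sens * f2x:.1f} C per CO2 doubling")
```

The two sensitivities imply roughly 0.8 C versus 2.8 C per doubling – the entire difference being the feedbacks that the naive division leaves out.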
I agree it is important to get assumptions right and that users need to understand the assumptions.
For example, people who use NOAA’s NWS hydrologic prediction products for preparedness need to be aware that the NWS hydrologic predictions assume global warming is not happening. Therefore, floods and droughts are likely to be more severe than predicted by NWS and will probably come with less warning time.
Another example – Hank made an assumption (60. of RC fact-fiction-fiction thread) that I should get a good attorney to work on my employment issue. Hank assumes that a competent attorney will tell me that my public complaint is not only easy to ignore but has to be ignored legally. Hank also assumes that publicly posting my story over and over makes me appear disgruntled and makes me appear like I believe that NOAA administrators and NWS did nothing wrong legally.
Hank assumes that the reason I’ve been posting my complaints about NOAA’s NWS supervisors having fired me in 2005 for my efforts on climate and hydrologic change is me, and that my actions aren’t helping me. Hank has it all wrong, though. My reason has not been for myself but to bring attention to the need for us to reduce our greenhouse gas emissions. Oh well, sorry if my bringing this to you bothers you too. It’s not always easy to figure out where people are coming from that post here. Sometimes we can’t be too cautious or we miss the life boat.
Regarding Dan Hughes’ comments (e.g. #10 and #43 above) about numerical convergence:
In post #149 above, Dan refers to a Sandia NL report on validation and verification (V&V) in codes. The codes Sandia is primarily interested in are radiation-hydrodynamics codes used for weapons simulations. This is actually a bit ironic, because these codes are ALSO (just like climate models) applied to systems with large Reynolds (Re) number, and they therefore also of necessity fail the very same convergence tests that Dan seeks for climate models, in the very strict sense of convergence that I believe Dan is using here.
In fact, the very same criticisms that Dan levels at GCMs could equally well be levelled at just about anybody who does computational fluid dynamics (CFD) applied to high-Re systems. Go ask the engineers simulating the airflow over an airbus A380 what the Lyapunov exponent of their physical system is. They don’t know, either!
The notion of a Lyapunov exponent, while interesting, is not as useful in this context as it is in the case of orbital simulations, which have a very, very small number of degrees of freedom. The number of degrees of freedom in a high-Re CFD simulation is astonishingly huge, whether you are simulating the turbulence on the Sun, the climate, or a nuclear weapon. You ain’t getting convergence, in the strict sense of the word, in our lifetimes.
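The “astonishingly huge” count can be roughed out with the textbook Kolmogorov scaling: fully resolving 3-D turbulence down to the dissipation scale takes on the order of Re^(9/4) grid points (a standard estimate, not a statement about any particular code). The Reynolds numbers below are loose orders of magnitude, not measurements:

```python
def dns_grid_points(reynolds):
    """Kolmogorov estimate: ~Re^(9/4) grid points to resolve 3-D turbulence."""
    return reynolds ** 2.25

# Rough orders of magnitude: lab flow -> aircraft -> atmosphere
for re in (1e4, 1e6, 1e8):
    print(f"Re = {re:.0e}: ~{dns_grid_points(re):.1e} grid points")
```

Even the modest Re = 1e4 case already demands on the order of a billion points, which is why every high-Re code, GCMs included, must model rather than resolve the small scales.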
This is a big problem. But, it’s a problem for ANYBODY doing high-Re CFD.
This problem does not, in and of itself, however, negate the possibility of prediction, depending upon what one means by convergence. That is, the answer depends upon what you mean by asking that the results be independent of the numerical resolution. In fact, all of the texts on CFD that I know of devote a hefty amount of space to turbulence modeling, and as any CFD researcher knows well, the results of any simulation depend upon what turbulence model you use. This is why it’s always a good idea to use more than one turbulence model if you can. And why are there models? Because it is not only impossible to get strict convergence for an extremely high-Re problem, it is, more importantly, beside the point. So, if you want to complain about lack of convergence in climate models, please define your terms precisely.
I must concur with Geert (#143). And, Dan, in #144, you are also incorrect here. The N-S equations are indeed nonlinear and that is the problem. The advective term, the v dot grad v term, causes these problems, whether you are simulating flow over an F-18, predicting the weather, simulating a nuke, or doing climate simulations. Everybody knows this. Yes, it is also true that the N-S equations are quasilinear, but the class of quasilinear PDEs is not disjoint from the class of nonlinear PDEs.
re: 165. Absolutely. I could not agree more with “Numerical approximations for boundary conditions, by the way, can destroy the most carefully-laid plans of numerical solution methods.” However, if the boundary conditions in any model are not significant with respect to domain emissions over time (especially if we are talking orders of magnitude), they are not likely to be significant forcings in comparison. I use a model every day in my work. It is highly dependent on the boundary conditions because they “set the stage” so to speak for the model to run and provide background levels. Those boundary conditions are quite significant relative to the conditions within the model domain. In other words, the forcings are similar magnitude. Not so much the case now with climate models where man-made forcings have been shown to have greater influence on global temperatures since circa 1970. I would love for a skeptic/denialist to cite any climate model work in the peer-reviewed literature which, when run solely with natural forcings, can replicate recent global temperature trends.
I do apologize and was absolutely wrong for completely misreading something very important in post 145: The forcings listed were “sun, volcanos, emissions”. I read “emissions” solely in the context of natural emission sources such as influences of the sun and volcanos, as they were linked together parenthetically. Obviously, if man-made emissions are included in the boundary conditions, my comments that followed did not make much sense at all as I was only considering natural forcings in the boundary conditions. Again, my apologies for my misread. I do disagree that solar influences, volcanic emissions and influences, and man-made emission estimates for the future are “unknown” for “future development” of boundary values. For example, various man-made emission estimates (scenarios if you will) can be and are quite useful. They have been used in regional air quality modeling for quite some time with quite reasonable and accurate results.
But again, my apologies to all for my obvious mistake.
re: #167. Kindly point me to text in the Sandia report in which it is stated that convergence of the numerical solution methods for models/codes for applications to large-Re turbulent flows cannot be attained. Include the book by Roache, too, if you need additional material for backup. If you do not supply such a pointer, then I will take that to mean that you cannot find it.
Also, kindly point me to text in any CFD textbook in which it is stated that the convergence of the discrete model equations for the mean flow field cannot be attained; no matter the time given to perform the demonstration. If you do not supply such a pointer, then I will take that to mean that you cannot find it.
Although I will take a lack of response to mean the requested text cannot be located, it would be very useful to see a straightforward summary of the information located posted here on RC.
Oh, anyone reading this is invited to supply the requested pointers.
“The Seven Deadly Sins of Verification:
Assume the code is correct.
Use of problem-specific settings.
Code-to-code comparisons only.
Computing on one mesh only.
Show only results that make the code “look good.”
Don’t differentiate between accuracy and robustness.”
[Response: I’m not going to get into generic CFD questions, but with regard to your seven deadly sins, GCMs come out looking quite good. 1) No one has ever said GCMs are absolutely correct – at GISS we’ve found at least three coding errors since the IPCC AR4 simulations were done, none of which thankfully appear to make any noticeable difference. I’m sure there are others. 2) Comparisons can be as quantitative as you like – download all of the IPCC AR4 simulations from the server at PCMDI and have away. 3) Model parameters are tuned to improve the simulation of present day climatology. All applications are done using the same settings. 4) Model to reality comparisons are done all the time. 5) Resolution is an easy thing to change and is done frequently within model groups and across them. 6) With all the IPCC AR4 data available, we are not in control of what comparisons are done and what results are shown. 7) Robustness of the climate sensitivity across models is pretty much the whole point of the AR4 comparisons and that is quite distinct from the accuracy of any particular model. You can see my (long) paper on the subject for more details. – gavin]
My posts #s 10, 31, 32, 43, 127-130, 143, 149, 150, and 154 addressed the AOLGCMs, not general CFD.
All I’m asking is for someone to point me to papers and reports in which it is shown that any AOLGCM applied to a fixed application has the property that, as the discrete temporal and spatial increments are decreased, the calculated results for the mean flow field approach fixed values; that is, the changes in the calculated values between runs decrease and approach machine precision (or some appropriate small number). I can provide a more detailed prescription, but it should not be needed in this setting. All I need is one specific citation.
I think it is the case that code-to-code comparisons are accepted as good practice under only very restrictive conditions. I can supply citations if anyone wants to see them.
Thank you in advance for the citation(s) that I’m looking for.
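For what it’s worth, here is what such a refinement study looks like in the simplest possible setting – a toy sketch in Python using forward Euler on dy/dt = -y, which is of course no substitute for the AOLGCM citation being requested. The step size is halved repeatedly and the between-run differences are checked against the scheme’s formal order:

```python
import math

def solve(h):
    """Forward-Euler solution of dy/dt = -y, y(0) = 1, integrated to t = 1."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * (-y)
    return y

# Successively halve the time step, as a convergence study would.
hs = [0.1 / 2**k for k in range(6)]
ys = [solve(h) for h in hs]

# The differences between successive refinements should shrink toward zero...
diffs = [abs(a - b) for a, b in zip(ys, ys[1:])]

# ...and their ratios reveal the observed order of accuracy (1 for forward Euler).
orders = [math.log2(d1 / d2) for d1, d2 in zip(diffs, diffs[1:])]

print(ys[-1], math.exp(-1))   # settling toward the exact value 1/e
print(orders)                 # estimates approach 1.0
```

For a real model the same bookkeeping would have to be applied to the time-mean flow field rather than a single scalar, which is where the hard part lies.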
I did not say that convergence of the model equations to the mean flow cannot be attained. Do not put words in my mouth.
It appeared (because of your comments re Lyapunov exponent) that you wished, not that the numerical methods be formally convergent to the model equations, but that there be demonstrated numerical convergence to the exact solution. I wished only to point out that it is impossible to converge in practice to the exact solution of a high-Re flow – that is to say, it is impossible to accurately simulate each and every eddy down to the Kolmogorov scale.
My apologies if I misunderstood you. Did I?
In any case, it is impossible to proceed further without a formal mathematical definition of “convergence.” Pick one you like and let me know what it is.
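For the record, one conventional choice – offered only as a possible starting point, not as a definition anyone above has committed to – is Lax-style convergence of the discrete solution to the exact solution of the continuous model equations:

```latex
% u_j^n : discrete solution on a grid with spacings (\Delta x, \Delta t)
% u(x,t): exact solution of the continuous model equations
\lim_{\Delta x,\,\Delta t \,\to\, 0}\;\max_{j,n}\,\bigl|\,u_j^n - u(x_j, t_n)\,\bigr| = 0
```

For statistics of a chaotic flow one would instead ask that a time mean computed on successive refinements settle down, i.e. that the difference between the mean on one grid and the mean on the next refinement tends to zero as the grid is refined.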
That doesn’t seem like such an onerous burden offhand, and a numerical convergence study along those lines would seem reasonable to me. I would guess for a number of reasons that convergence in GFD might be more difficult than in other areas of CFD, but I’m not a GFD person so I’ll keep my musings to myself here, other than to point out that defining a mean solution is a bit tricky given that we are talking about a time-dependent solution and there’s an ensemble with only one member.
I’d also guess that it’s already been done; in fact I’d be really surprised if it hadn’t been done.
Anyway, I’m a CFD/fluids person, not a climate person, so I’ll leave finding the references to somebody else. Sorry.
Speaking again as a CFD/fluids person (astrophysics), not a climate person:
In some of your other posts I’ve come across, you’ve raised some good points about V&V, SQA, etc. In my experience, the engineers really do seem to be taking the lead on establishing code standards. I’ve seen a lot of publications in the literature in my field with bad code V&V.
However, (1) the climate system is not as nice and self-contained as a laboratory experiment. This obviously complicates the process of V&V. You can’t just take engineering concepts of V&V and slap them on climate models without taking into account the unique aspects of the problem. There’s a whole host of epistemological issues here that you don’t get in the lab. (2) Again, not my field, but I feel pretty sure that the GFD people don’t have anywhere near the budget you get at the national labs when you’re working on, say, some big ASCI project. Those guys can do all the V&V/SQA they want because they have CPU cycles to burn and the money to hire all the comp sci / engineering / science people they need.
I’d be curious to know how convergence works out for GCMs. But, on the other hand, I don’t take my lack of understanding of the codes &c as an excuse not to pay attention to the results. We can put more funds into improving the sims; in fact, I think it’s imperative that we do. Then, they might even be able to hire people like you, Dan, so that instead of critiquing their methods from the outside, you could work to solve the problems from the inside. Constructive criticism is of course always needed; inflammatory rhetoric is not.
I think you’d be a fool or in some serious self-denial, though, to think that somehow there’s some basic flaw in CFD applied to the GFD in the GCMs, and that everything’s going to be hunky-dory if we just keep on with business as usual. We do that and we’re scr#%$d. The evidence is pretty dang overwhelming.
[Response:I prefer explaining it here so that we can keep the discussion at one site, rather than jumping back and forth (please be a gentleman, Roger, and keep the discussion that we started, here at RC…). For academic reasons, there may be many different ways of defining climate, but for all practical purposes, it’s useful to define it as the ‘typical weather patterns’. Or, we could call it ‘etamlic’, and use this new term to describe what we mean: the typical weather pattern, such as described by a distribution function of the important variables. It is then etamlic that is of greatest interest for the society, although the complex and intricate processes are themselves interesting, at least for the scientists and the model developer.
Yes, it’s trivial to predict the seasonal cycle of the etamlic – here we are in complete agreement – but this trivial knowledge is more profound than it seems. It leads to the next step: is it possible to push the limit a bit further? If you look at the distribution function for the temperature in winter and summer at high latitudes, you see that the distribution function (probability distribution function, or p.d.f.) of the temperature is systematically affected by external forcings. The same observation can be made for the diurnal cycle, and the time horizon for the prediction isn’t the main issue here, because it’s true for any day in the future (in a hundred and one years, the night-time temperature will still tend to be lowest). Thus, these are clear effects of the natural cycles in etamlic. They do not arise spontaneously, but are driven by periodically varying conditions (‘external forcings’). These periodic variations can be taken as empirical evidence that the local etamlic response is *not* insensitive to (natural and periodic) external forcings.
To progress beyond the trivialities, we must ask: how much better can we do than just using the seasonal variations as our predictions? Seasonal forecasting over Europe is notoriously difficult, but if we use the 1961-90 period as a base line, almost all 3-month-temperature averages are above the base line. Thus, skill is boosted by the fact that there is a trend in the data! Precipitation is harder, and if you look at the seasonal cycle in precipitation, e.g. for Norway, you see a much less marked annual cycle. In the tropics, the situation is different, with the wet and dry seasons being the main features. But, don’t forget that predictability from initial conditions and predictability from changes in boundary conditions are two different things. While it is true that predictability from a given state of the system (initial conditions) is very limited and seasonal forecasts have little skill beyond a few months, the models can still predict that winter temperatures are generally lower than summer temperatures at higher latitudes even after hundreds of years. To re-iterate this point, climate models also predict that afternoon temperatures generally are higher than temperatures at dawn, even after many years and long after the model has lost the ‘memory’ of its initial state (due to chaotic behaviour), given that the diurnal cycle is provided for the forcing.
The next crucial question is: what about the global response? Is there any reason to think that it is insensitive, in spite of the fact that the local etamlic undergoes such marked annual variations? The effect of volcanoes – such as after Pinatubo – suggests otherwise. And then, the BIG question: is there any reason to think that the sensitivity stops abruptly beyond the seasonal forcings or volcanoes? A more reasonable proposition, if we have no additional information, is that the sensitivity is a slowly varying function of the strength of the forcing. This is a good null-hypothesis, but it is not entirely implausible that etamlic hypothetically could undergo abrupt changes (‘tipping points’). Is there any reason to think that the response/sensitivity is different for other radiative drivers, such as changes in the total solar irradiance, landscape, or greenhouse effect? We can look to paleoclimatic (empirical) evidence, such as the glacial cycles, and what do we see?
Now we are ready for the next question: how skilful are the models and what fidelity do they have? In the IPCC TAR, a comparison between simulations for the past and empirical data suggests that at least some climate models are able to provide a close description of the historic evolution of the global mean temperature, given estimates for the historic GHG emissions and known natural forcings. I think that models which reproduce past trends and past climate realistically are useful for making scenarios for the future. -rasmus]
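The point above – that the forced cycle survives long after a model has forgotten its initial state – can be illustrated with a deliberately crude toy: chaotic ‘weather’ from a logistic map riding on a prescribed seasonal cycle. All of the numbers here are invented for illustration; nothing below comes from a real climate model:

```python
import math
import statistics

def weather_noise(x0, ndays):
    """Chaotic 'weather' from a logistic map, scaled to roughly +/- 5 degrees."""
    x, out = x0, []
    for _ in range(ndays):
        x = 3.99 * x * (1.0 - x)
        out.append(10.0 * (x - 0.5))
    return out

def year_of_temps(x0):
    """Daily temperature = prescribed seasonal cycle + chaotic weather term."""
    noise = weather_noise(x0, 365)
    return [15.0 - 12.0 * math.cos(2.0 * math.pi * d / 365.0) + noise[d]
            for d in range(365)]

a = year_of_temps(0.4)
b = year_of_temps(0.4000001)   # almost identical initial state

# 'Weather': months in, the two runs bear no day-to-day resemblance.
print(a[200], b[200])

# 'Climate': the forced summer-minus-winter contrast survives in both runs.
summer_a, winter_a = statistics.mean(a[180:210]), statistics.mean(a[0:30])
summer_b, winter_b = statistics.mean(b[180:210]), statistics.mean(b[0:30])
print(summer_a - winter_a, summer_b - winter_b)
```

The two runs lose all memory of their (nearly identical) initial states, yet both reproduce the roughly 23-degree contrast imposed by the forcing.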
Pete Williams, Validation of any models/codes designed to be applied to analyses of inherently complex phenomena and processes is a very difficult undertaking. Almost all real-world problems of interest cannot be studied in a laboratory; take the case of a supersonic aircraft carrying external stores and firing/releasing some of them. Yet models and codes designed to analyze complex engineering and natural situations and equipment do in fact get Validated. It will very likely be difficult and expensive for most codes applied to real-world analysis objectives. By the same token, many existing codes would never have been designed and built if the difficulty of the task had been a deciding factor.
BTW, Gavin, you have measured the successes of one AOLGCM by comparing it against the seven deadly sins themselves. This is just backwards from generally accepted practice in scientific and engineering software. The accepted practices avoid the seven deadly sins like the plague and instead focus on the ‘best of the best’ good practices. After all, aren’t sins to be avoided?
[Response: I don’t understand your point. We do try to avoid sinning and I gave you a list of how we avoid temptation. What is so objectionable? – gavin]
In reference to abrupt climate change, the concept of geoengineering to counteract the global warming trend has been widely discussed. In particular, particulate injection into the atmosphere has been suggested as a way to mitigate global warming. This might not be a good idea, based on the following scenario:
The increasing freshness of the North Atlantic has been tagged as a primary component of a process that could turn off the oceanic heat conveyor. However, it is not well understood how this could occur in an abrupt manner. My idea is that a volcanic event could, as has happened in the past, cause a multi-year rapid cooling that would freeze the fresh water on the surface of the North Atlantic. This could cause an “instant” shutdown of the conveyor current, which would create a “snowball” cooling effect that might trigger another Ice Age.
Thus, the suggestion that an artificial atmospheric cooling effect should be implemented is premature until all factors are examined.
RE: #177 – My take is as follows. Here is my statement to the would-be “geoengineers”: Keep your grubby mitts off of our Earth. We can certainly debate the current and historical abuses laid upon her. But this talk of truly destructive meddling on an unprecedented mass scale makes me livid! Utterly livid!
Re your response to #169: I hope you don’t mind me pointing out that your response to the seven deadly sins is a beautiful example of the sixth sin – ‘Show only results that make the code “look good.”’ In each of your defences, you only describe where the models fail to sin, and never mention the odd lapse, except in the case of sin 1.
But your reply is a misinterpretation of that first sin. When it says assume the code is correct, it means that the algorithms of the code are correct, not that there may be a few trifling coding errors. By highlighting those, you have shown that you believe the algorithms are correct, yet the abstract to your (long) paper says “Data-model comparisons continue however to highlight persistent problems in the marine stratocumulus regions.” So even though you know there are problems you still assume the code is correct :-(
Sin 2 is to make qualitative comparisons, which you invite us to do, but that is the sin! When quantitative comparisons are made we find on as simple a matter as climate sensitivity the values produced range from 1.5C to 5.6C and greater! No wonder you prefer qualitative comparisons :-)
At least you admit you are guilty of sin 3: problem-specific settings used to improve the simulation of present day climatology, but then you try to spin your way out of it :-(
Sin 4 is more difficult to pin on you. You certainly compare code, since it is now all open source. And you compare it with reality, for instance with the results of the temperatures measured in the upper troposphere using radiosondes and satellites. It is just a pity that it is the results that are changed to fit the models, not vice versa :-(
It is true that the modellers are not guilty of sin 5, using one mesh only. The paleoclimatologists have been trying to get the models to replicate abrupt climate change by using smaller and smaller grids, but still have not managed it. Why does it remind me of Einstein’s remark: “Insanity: the belief that one can get different results by doing the same thing.” :-)
In trying to defend yourself from sin 6, which I have already shown you to be guilty of, you argue that you are not in control of what results are shown. I think you meant the results of the comparisons. However, you do inevitably have control of the results you publish, on which the comparisons are made. One case where results were not published was the Climate Prediction experiment. Models which crashed were left out of the published data, and there was some discussion of leaving out the highest sensitivities, although this did not happen, as far as I know :-?
Finally sin 7: not differentiating between accuracy and robustness. I’m not sure I understand that, or should I say I am sure I don’t understand it :-( Do you mean precision and robustness? To me the model results are neither accurate (sensitivities of 3C +/- 50%) nor robust (sensitivities of 3C +/- 50%). :-(
All I can say is that with 7 out of 7 as a score for sins, it is just as well for you that my name is not St. Peter, otherwise I know where you would be heading ;-)
[Response: Well, I’m pretty glad you are not St. Peter, because your judgment may be impaired. You have assumed a whole bunch of things I never said. For instance, find one place anywhere on this site, or in any of my papers, where I even hinted that I thought that the model code was a ‘correct’ emulation of the real world. I kind of assumed that it went without saying that everyone understood that climate models are only approximations of reality – so when asked whether I think the code is ‘correct’, I assumed that this was a question about its bugginess. But in either sense, I have never assumed the code is ‘correct’.
With respect to 2, you are quoting a quantitative comparison as proof that we don’t do quantitative comparisons. I fail to see your logic. Number 3 you have completely misinterpreted. The ‘sin’ is to keep changing the parameters to fit whatever latest application you find. That is precisely what we don’t do. Once the parameters are set for the climatology, they are set for all subsequent tests. We use the same parameters for the 20th Century runs as for the 8.2 kyr experiments, and you know what, it works. Your other points are pretty trivial. – gavin]
Re #177 If a volcanic eruption could cause cooling which would be amplified by the North Atlantic freezing over, then surely the melting of the Arctic ice will have the opposite effect and cause a warming?
In the time available, geo-engineering is the only answer, otherwise the climate of the entire northern hemisphere will be altered on a similar scale to that which happened at the end of the Younger Dryas.
Re #178 It is a bit too late to say “Keep your grubby mitts off our Earth.” Basically the whole planet is scarred from our engineering, not least the atmosphere which has been severely altered by the byproducts of the engines we have used to engineer our lifestyles – the American Dream which is heading for a nightmare!
I also discussed the issue of climate as an initial value problem in Pielke, R.A., 1998: Climate prediction as an initial value problem. Bull. Amer. Meteor. Soc., 79, 2743-2746 [http://blue.atmos.colostate.edu/publications/pdf/R-210.pdf]. Thus the topic was not first introduced on Real Climate but was presented to the community some time ago. Unfortunately, international assessments have chosen to ignore this perspective.
I would like your conclusion on what can the models skillfully predict on multi-decadal time scales. You state
“In the IPCC TAR, a comparison between simulations for the past and empirical data suggests that at least some climate models are able to provide a close description of the historic evolution of the global mean temperature, given estimates for the historic GHG emissions and known natural forcings.”
What else can the models skillfully predict on this time scale?
[Response:Thanks for your response, Roger, and your interest in this post. You are absolutely right – RC did not first introduce the concept of predictability of the first (initial value problem) and second (boundary value problem) kind; this topic goes back to the 1960s and the work by Lorenz. One prominent scholar on this topic is Tim Palmer (at the ECMWF), who wrote about this years ago. I must admit that I have not had the time to look at the model results in detail, but work on fingerprint methods suggests that the models are successful at capturing the main structures of the atmosphere. And, of course, there is the trivial observation that they capture the geographical distribution and large-scale circulation patterns. When it comes to more local details, the models fall short, and one must apply downscaling techniques to make scenarios for the local climate. On the local scale, there are greater differences amongst the models on historical evolution, even for the same model with the same forcing. But, the main point here is the global temperature, which is an indicator of the state of our climate (like the temperature giving an indicator of a fever, but doesn’t tell you where the infection is…). -rasmus]
Re #181 It is a bit too late to say “Keep your grubby mitts off our Earth.” Basically the whole planet is scarred from our engineering, not least the atmosphere which has been severely altered by the byproducts of the engines we have used to engineer our lifestyles – the American Dream which is heading for a nightmare!
Comment by Alastair McDonald — 22 Aug 2006 @ 2:42 pm
I’m amazed at the people who hold the view that the earth and its ecosystems are incredibly sensitive and on the verge of collapse. To me, environmental changes seem small, the planet seems remarkably robust and its creatures very adaptable.
I could be wrong about all this but keep in mind, the burden of proof in the public debate will always rest with the doomsayers.
RE: #183 – While it may be done off line, there is no public, in-depth verification of the NWS 90 day (30 day lead time) precip and temp forecasts. The best they do is post achieved maxima and minima for a very small number of recording stations at metro area airports. It sure would be nice to see a more thorough scorecard. I would love, for example, to see a scorecard, month-by-month, for the NWS 90 day Aug-Sep-Oct temperature forecast that was pushed out in July. The scorecard itself could also be a map of variance or sigma versus the forecast map. Or something like that. Put those computers to work looking at how well the prognostications came in. Do it for all climate forecasts. Publish results. Hide nothing.
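For what it’s worth, the arithmetic of such a scorecard is trivial once paired forecast/observation values are in hand. A minimal sketch – the monthly numbers below are made up, not actual NWS data:

```python
import statistics

# Hypothetical paired (forecast, observed) 90-day mean temperatures, deg C.
records = {
    "Aug": (22.1, 23.0),
    "Sep": (18.4, 18.1),
    "Oct": (12.7, 14.2),
}

errors = {m: f - o for m, (f, o) in records.items()}
bias = statistics.mean(errors.values())
rmse = (sum(e * e for e in errors.values()) / len(errors)) ** 0.5

for month, err in errors.items():
    print(f"{month}: forecast error {err:+.1f}")
print(f"bias {bias:+.2f}, RMSE {rmse:.2f}")
```

The hard part is not the computation but getting the paired data published in the first place.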
# 175 Response says: … For academic reasons, there may be many different ways of defining climate, but for all practical purposes, it’s useful to define it as the ‘typical weather patterns’. …
‘typical weather patterns’ terminology for climate in early spring was used in narrative Upper Midwest Spring Snowmelt Flood Outlooks in the 1980s and 1990s by the NWS North Central River Forecast Center.
During that time, I was not aware that climate in the Upper Midwest was changing rapidly. Thus, initial numerical snowmelt flood outlooks issued then were often not high enough, and not timely, due to earlier-than-expected snowmelt runoffs and heavier than ‘typical’ rainfall while the melt was going on.
Defining climate as ‘typical weather patterns’ is misleading now that we know the climate in the Upper Midwest is changing rapidly, especially during the winter – early spring period.
It would be interesting to update Figure 1 with today’s temperatures. It is not clear which data sets were used for the red line, and whether other global temperature records would give different results.
[I could be wrong about all this but keep in mind, the burden of proof in the public debate will always rest with the doomsayers. ]
Yes, you are wrong about all this.
Read up on glacial meltback in the Andes and Himalayan mountains and Arctic ice meltback.
Then convince the hundreds of millions of farmers and villagers dependent upon river water flowing from those mountains that [these environmental changes seem small, the planet seems remarkably robust and its creatures very adaptable.]
Comment by John L. McCormick — 22 Aug 2006 @ 7:58 PM
Re #179 et seq
Sheesh. Look, there is much to be learned from the ideas of V&V which have been developed over the last few decades. But on the other hand, I’m going to jump to conclusions here and guess that the people ranting about this stuff are used to well-controlled experiments, lots of data, and known physics. Air flow over a wing is just the Euler eqn, plus some turbulence models that were developed with a lot of hard work. Nuke codes are verified thanks to ca. 1000 nuke tests, plus constant verification with hydro tests, etc etc. The physics is pretty much all known, thanks to an absolutely immense expenditure finding cross-sections, opacities, etc; the initial conditions are all defined; the gov’t spends a gazillion bucks on a proton radiography system so you can see what’s going on, and so on.
When push comes to shove, you use these same LANL/LLNL legacy codes and apply them to a supernova, and it fizzles. Why? Good question. The real world ain’t so simple.
So don’t complain that the models still show big discrepancies/uncertainties with long-term climate forecasts, unless you know what you’re talking about. Being a CFD/fluids person, by the way, does not count as being an expert, in this regard.
RE: #189 – What specifically do you mean by “Arctic ice melt back?” Do you mean changes in the mass or area of continental glaciers? Or are you referring to reputed “long term trends” (based on a rather recent baseline) in Arctic sea ice extent minima? And now for the trick question: what is the Cpk of said areal sea ice extent minima? What is the demonstrated confidence interval of such a figure? Share your wealth of knowledge!
and share Dr. Bill Chapman’s wealth of satellite images and data.
Go ahead, Steve, knock yourself out. Spend an afternoon viewing nearly 30 years of evidence that the warming in high latitudes is real and having an impact.
Comment by John L. McCormick — 23 Aug 2006 @ 6:27 AM
RE: #195 – I am well familiar with Cryosphere Today. Now that you also know about it, Google to learn about Cpk and start to do some real work. Who knows what you might find? Of course, one major challenge you will encounter is the fact that the satellite data don’t go very far back in time. That’s all I’ll write.
Hi steve, I googled Cpk. First hit; California Pizza Kitchen. No help there.
Satellite data don’t go very far back in time? How far back would satisfy you? I’m more interested in where the melt is going to take the NA climate in the future (next 30 years).
See: Disappearing Arctic sea ice reduces availale water in the American west; Sewall & Sloan. GRL, Vol 31, L06209, doi:10.1029/2003GL019133,2004.
That’s all I’ll write.
Comment by John L. McCormick — 23 Aug 2006 @ 11:45 AM
RE: #197 – Well, if you are interested in understanding what would be considered innate variation versus abnormality vis-a-vis Arctic sea ice mean annual extent and annual minima time series, a precursor would be to have a large enough sample of past data to be able to know, at, say, a 90% confidence level, what the “warning limits” and “spec limits” are. The fewer data you have, the wider the warning and spec limits need to be to prevent a false alarm trigger. Certainly, you can expect a certain amount of variation. The degree and sign of the variation may actually be modulated by some sort of overall factor. I believe that to be the case. That would be sort of a meta variation. So, we have a big problem here. We indeed have only 30 years of data. We therefore have a poor understanding of the innate level of variation and of the supercycles of extent. We also do not really know what to make of events like the 1999 minimum. Is it within innate variation, an outlier, what? If the 2006 minimum is not as pronounced as 2005, what, if anything, does that mean? What if, over the next few years, the minima continue to become less pronounced – what, if anything, does that mean? Need data … lots of it.
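The point that 30 data points pin down such limits poorly can be checked directly: estimate the standard deviation of a known process from samples of 30 versus 300, many times over, and compare how much the estimates themselves scatter. A toy Monte Carlo, assuming a unit-normal process purely for illustration:

```python
import random
import statistics

random.seed(0)

def sd_estimates(n, trials=2000):
    """Estimate the sigma of a unit-normal 'anomaly' series from n samples,
    repeated many times to see how much the estimate itself wobbles."""
    return [statistics.stdev(random.gauss(0.0, 1.0) for _ in range(n))
            for _ in range(trials)]

spread30 = statistics.stdev(sd_estimates(30))
spread300 = statistics.stdev(sd_estimates(300))

# With only 30 samples the sigma estimate scatters roughly three times as
# much as with 300, so any 'warning limits' built on it must be that much wider.
print(spread30, spread300)
```

None of this says anything about what the sea ice is actually doing, only about how wide the limits have to be with a 30-year record.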
re 182. [Response … But, the main point here is the global temperature, which is an indicator of the state of our climate (like the temperature giving an indicator of a fever, but doesn’t tell you where the infection is…). -rasmus]
It seems the infection is more serious than previously figured, do you agree?
I’m really not qualified to comment on the scientific debate above, but I do have a suggestion in answer to your original request, how to explain the predictability of climate when you can’t necessarily predict weather. The reason I have bothered to post is because the more I think about this idea, the more interesting it becomes. I hope it helps.
The Golf analogy.
Predicting the weather is like playing a stroke on a golf course. I know where the ball is, how far away the hole is, what obstacles may be in my way, how to stand, hold the club, address the ball and swing. But I don’t know where the ball is going to end up. This is because there are some things I cannot anticipate perfectly; the exact bounce of the ball, a slight ‘twitch’ in a muscle at the crucial moment, a sudden gust. But I can still get the ball closer to the hole. How can I do this? I take into account the known variables, apply the principles I have learned over many years’ practice, and concentrate. Sometimes, on a good day, I’ll strike the ball perfectly and it will end up right by the hole. The next hole, I could play just as good a shot and end up 30 yards out. On a bad day, I’ll shank the ball and end up in the rough.
Predicting the climate, on the other hand, is like trying to win a Major tournament. A lot depends on it, including the well-being of my ‘family’. What I can predict is that I will complete the round, even though I don’t know where each shot is going to go. I also know that, if I am good enough, I should get round in a number of strokes closely related to my handicap. When I start the round, I’ll look at the course, the conditions and my skill, and choose the clubs, clothing and, probably, the strategy I intend to adopt to maximise my potential to hit birdies, and minimise the risk of bogeys. (If it’s a windy day on a links course, I might choose clubs with less loft, for example).
If someone asks me what score I’m going to make today, I can consider all of these things and give an answer, which I can expect to be close to my final score, because of what I know about golf. I can do this, even if I don’t know where my first drive on the first hole is going to end up, or in fact, where any one shot will land, exactly.
Then, I realised, that this analogy can be extended to cover much of the debate between Realclimate and Climate Science. Realclimate: I’ve practised hard, I’m on the course, I have to do my best; people are depending on me. I know I can play a good round. Climate Science: You are not good enough; you are playing off the wrong tees; you don’t have the right equipment…
So, are climate models as good as Tiger Woods – would you bet on them? Or are they the gifted amateur with a dream, and a chance? Whichever the answer, we have to play the tournament – the future. But both sides of your debate have important points to make about the outcome of the event, and the ability to predict the outcome.
Like I said; it’s all comparable to golf. Trouble is, the stakes we are playing for are high, and even Tigers sometimes get beaten.
Steve I have gotten past Google first hit of Cpk [California Pizza Kitchen] and now cramming on Cpk – Process Capability Index. Doesn’t mean I will [do the real work]. Admittedly, that is beyond my pay grade.
But, I will read what I can understand. And, I will consider variation in sea ice coverage including inputs of Asian north-flowing rivers, the shallow profile of the Arctic and the influence of the Polar low. Then, I will see if I understand arctic meltback while using more data and return to this thread.
Comment by John L. McCormick — 24 Aug 2006 @ 3:31 PM
Someone may have suggested this above– I haven’t searched all the comments– but I often find a familiar analogy will carry more weight than an explanation that continues to focus on the topic at hand. For example, you could point out that predicting how an individual gambler in Vegas does on a particular day is difficult, but predicting how the average gambler does on that day is easy. Since climate really is average weather (in a sense that gets technical in many ways), we can predict it without needing to be able to predict the actual weather…
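A quick simulation makes the gambler analogy concrete; the 5% house edge and the bet counts below are invented purely for illustration:

```python
import random
import statistics

random.seed(1)

HOUSE_EDGE = -0.05   # assumed average loss per $1 even-money bet
N_BETS = 100         # bets each gambler places in a day

def one_gambler():
    """Net result of a day of $1 even-money bets with a 5% house edge."""
    return sum(1 if random.random() < 0.475 else -1 for _ in range(N_BETS))

days = [one_gambler() for _ in range(10_000)]

# One gambler's day is all over the place...
print(min(days), max(days))

# ...but the average across many gamblers sits near the house edge.
print(statistics.mean(days), HOUSE_EDGE * N_BETS)
```

Individual days swing wildly in both directions, yet the casino can budget on the mean; that is the weather/climate distinction in miniature.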
So are you saying that you could predict when the next ice age would come and when it would end, or if you lived before the last ice age you could have predicted when it would start and when it would end?
If you’re speaking to someone who likes to play Poker or other card games (and many people do, these days), here’s one way to explain it: If you deal out a billion hands, you can come up with extremely precise probabilities for how frequently certain cards, or certain combinations of cards, will come up. This is what makes card-counting in Blackjack possible. But no matter how accurate your probabilities are, it would be virtually impossible to predict with any accuracy exactly which cards would reveal themselves at any given moment in the game. So if you know the math and bet accordingly, you can ride the big waves to profit, but you’ll never be able to predict the small chop that you encounter in between those big waves.
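The card analogy is easy to check by simulation: the long-run frequency of, say, an ace is pinned down precisely, while nothing in that statistic tells you what the next card will be:

```python
import random

random.seed(2)

def deal_top_card():
    """Shuffle a fresh 52-card deck and look at the top card."""
    deck = [rank for rank in range(13) for _ in range(4)]   # 0 = ace, ..., 12 = king
    random.shuffle(deck)
    return deck[0]

n = 100_000
aces = sum(deal_top_card() == 0 for _ in range(n))

# The long-run frequency converges on the exact probability 4/52...
print(aces / n, 4 / 52)
# ...but nothing above says whether the NEXT top card will be an ace.
```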
My experience is that people, on average, understand probabilities extremely poorly. If they did not, there would be no games of chance in the world.
To my mind, predicting future climate is analogous to predicting tomorrow’s weather. While it is not possible to announce beforehand the exact minute rain will come tomorrow, it is possible to be fairly sure that it will rain tomorrow evening.
Re #206: Vagelford, I am but an amateur with regard to paleoclimate, but I believe the answer is yes. Well, yes provided you are willing to accept predictions with +- 2000 year error bars.
The paleoclimate records from ice cores, for example, clearly indicate the importance of what is called ‘orbital forcing’, that is, changes in the solar insolation in the far north due to slow, small changes in the Earth’s orbit around the sun. These changes can readily and accurately be computed far into both the past and the future.
If the change is enough, ice ages will start or end. For example, without AGW the next major coolings are predicted for 50,000, 100,000, 600,000 and 650,000 years from now. Any of these could have been enough to trigger an ice age. But with AGW?
If you meet someone at a party who asks you, with a snide smile, “How can you predict the climate in the future when you can’t even tell me what the weather will be like tomorrow?”, give them this to think about.
“You have been imprisoned in solitary confinement, for twelve years, for the new crime of Climate Change Denial. For twelve years you have lived in a small cell, with no window, and have been deprived of any contact with the outside world. You haven’t seen the sky for twelve years; you have lived entirely indoors in an air-conditioned facility. At the end of your twelve years, on the day of your release, a guard comes in with some civilian clothes for you – a nice pair of Bermuda shorts, tropical shirt, straw hat and sandals.
The only problem is, that you are imprisoned in Chicago and it’s January 20th.
Re #209: David thanks for your comments. They give me the chance to comment further on the subject.
So David, what you are saying is that you can’t make a prediction, since you can’t “model” AGW, but that, given the history of ice ages and the fact that they are related to orbital forcing, they should, though might not, happen at the intervals you are giving?
What I’m trying to say is that there must be models with a number of parameters, such as the Earth’s orbital elements, the solar insolation, or the volcanic activity, that model the climate on timescales of thousands of years. Are you suggesting that, given a small uncertainty in one or more of those parameters, you can guarantee that the result after a couple of runs of the model will be the same climate (ice age vs. no ice age) in the same periods of time, within reasonable errors?
In the article titled Chaos and Climate, the author claims that the Lorenz model will not show a change in climate if there is a perturbation in the r parameter (since climate is not chaotic), and uses the mean value of z to demonstrate it. That is a bit of cheating, since the Lorenz model is 3-D, and the x and y variables would show dramatically different behaviour. First, the Lorenz model has a natural climate variability that is evident in the full 3-D picture as the characteristic two wings (information suppressed by the choice of z). Second, a plot of y would show that a small change in r gives a completely different oscillation between the two climates than the plot with no change (characteristic of the chaotic nature of the Lorenz system). http://www.geocities.com/vagelford/Science/Lorenz_comparison.gif
Are you saying that the case of climate is not analogous to the Lorenz system?
And since we are talking about the Lorenz model, it is a well-known model with well-known equilibrium points between which the climate oscillates. My question is, how well do we know the terrestrial system and the equilibrium points of its climate?
[Response: The ‘climate’ of the Lorenz model includes the two lobes and the time spent on average in each (think of ENSO in the real world). A change in climate in that system would be a change in the location of the lobes, or in their relative importance, or something similar. The point being that I would trace out essentially the same butterfly wherever I started, given enough time. Thus the ‘climate’ of the Lorenz system is stable. – gavin]
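This stability can be sketched numerically (my own illustration, using the standard parameters sigma=10, r=28, b=8/3 and a crude forward-Euler integration): two runs started a millionth apart diverge trajectory-by-trajectory, the “weather”, yet a long-run statistic such as the fraction of time spent on the x > 0 lobe, the “climate”, comes out nearly the same.

```python
def lorenz_run(x, y, z, steps=500_000, dt=0.001,
               sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz system; returns the x series."""
    xs = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xs.append(x)
    return xs

def wing_fraction(xs):
    """Fraction of time spent on the x > 0 lobe of the butterfly."""
    return sum(v > 0 for v in xs) / len(xs)

a = lorenz_run(1.0, 1.0, 1.0)
c = lorenz_run(1.0 + 1e-6, 1.0, 1.0)   # tiny perturbation: different "weather"
print(round(wing_fraction(a), 2))      # lobe occupancy of the first run
print(round(wing_fraction(c), 2))      # similar occupancy despite divergence
```

Comparing the two x series point by point shows they decorrelate completely after a few dozen time units, while the lobe-occupancy statistics stay close, which is exactly the sense in which the Lorenz ‘climate’ is predictable and the ‘weather’ is not.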
Re #211: Vagelford, my current amateur research project has led me to take a great interest in the paleoclimate of the Alaskan Peninsula and the Gulf of Alaska coast during the period 50,000 to 40,000 years ago. This led me to read three books on paleoclimate, as well as several papers. In this reading I noticed the strong effect of orbital forcing upon macro features of climate.
In my readings, four different papers offered the four different dates I mentioned as potential tipping points into the next ice age, assuming no AGW. If the authors of those papers offered probabilities, I don’t recall them. What I do recall is that each one in the series seemed to me more likely to lead to an ice age than the previous one.
If you like mathematical models, I suggest thinking more along the lines of Markov chains, wherein the current system state does not lead definitely to just one fixed successor, but rather assigns probabilities to the various transitions. To the limited extent that I know about climate modeling, it appears that this approach is not often used.
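A toy version of that suggestion can be written in a few lines (the two states and the transition probabilities here are invented purely for illustration, not taken from any paper): the current state only assigns probabilities to its successors, yet repeated application of the transition matrix converges to a definite long-run distribution.

```python
states = ["interglacial", "glacial"]
# transition[i][j]: probability of moving from state i to state j per step
transition = [
    [0.95, 0.05],   # interglacial -> interglacial / glacial
    [0.02, 0.98],   # glacial -> interglacial / glacial
]

def step(dist, t):
    """One step of the chain: new_j = sum_i dist_i * t[i][j]."""
    return [sum(dist[i] * t[i][j] for i in range(len(dist)))
            for j in range(len(dist))]

dist = [1.0, 0.0]          # start in an interglacial with certainty
for _ in range(1000):      # iterate toward the stationary distribution
    dist = step(dist, transition)
print([round(p, 3) for p in dist])   # [0.286, 0.714]
```

Any single step is probabilistic, but the stationary distribution (here 2/7 interglacial, 5/7 glacial) is exactly determined, which is again the weather-versus-climate contrast in a different guise.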
To get back to Rasmus’s original question, I think you may be able to make the same point with less technical detail (or at least withhold it until requested).
Here’s the analogy I used to distinguish between predicting climate and weather: you can predict with some confidence and a narrow margin of error the average height of the next 100 people who will walk past the sidewalk cafe where you are holding forth — but you can’t know the height of the next person to go by (except to say that if it’s an adult human, it’ll be between 120 and 240 cm). Any other example that built on the listener’s personal experience would likely work as well.
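The sidewalk-cafe analogy is easy to check numerically (a sketch of my own; the mean of 170 cm and standard deviation of 10 cm are illustrative numbers, not census data): the average of 100 passers-by lands in a narrow band, while any single height varies far more.

```python
import random

rng = random.Random(42)

def passerby():
    """One person's height in cm, drawn from an assumed normal distribution."""
    return rng.gauss(170, 10)

def spread(xs):
    """Range of a sample: max minus min."""
    return max(xs) - min(xs)

averages = [sum(passerby() for _ in range(100)) / 100 for _ in range(1000)]
singles = [passerby() for _ in range(1000)]

print(round(spread(averages), 1))   # averages cluster within a few cm
print(round(spread(singles), 1))    # individuals range over tens of cm
```

The standard error of a 100-person average is ten times smaller than the spread of individuals (sd/sqrt(100)), which is why the confident claim about the average is safe while the claim about the next individual is not.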
“Short and simple arguments for why climate can be predicted”
Does anyone think that any future IPCC assessment report will actually issue predictions for methane atmospheric concentrations, CO2 emissions and atmospheric concentrations, and resultant globally averaged surface temperature and lower tropospheric temperature?