It seems to me a little hasty to state that the ITCZ is an emergent phenomenon. What about the ITCZ is emergent? Is it the organization of small convective elements over the large scale? If so, how do you quantify that organization in climate models? And is it really emergent beyond the self-organization of systems at the mesoscale? Or are you referring to the dynamics (convergence and confluence of the wind) that result from the organization of the convection at the larger scale?
[Response: The point is to distinguish physics that is coded for in a GCM – such as the dependence of evaporation on wind-speed, and large scale organised features that emerge from the interaction of all the physics. There is no subroutine for ‘doing’ the ITCZ in other words. -gavin]
Comment by Catherine Gautier — 3 Jan 2007 @ 9:08 PM
RE: Climate is instead a boundary value problem
I would make a distinction here. Boundary value problems of this type that deal with heat flow in solids have reasonably well-behaved boundaries. Take for example the classic bar of uniform solid with a specific isotherm presented to one end. In climate, the mere task of determining just what the boundary value should be for a particular parameter at t0 is perhaps not quite so straightforward. How stable do we assume things to be prior to t0? How uniform is the value presented at the boundary? And so on. Is there not, in point of fact, a continuum between climate modelling and weather prediction?
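The textbook bar example can be sketched in a few lines (my own toy illustration, not from the thread): however the bar starts out, it relaxes to the same steady state, which is fixed entirely by the boundary values. That is the sense in which the "boundary value problem" claim is usually meant.

```python
# Toy illustration: 1-D heat conduction in a bar with fixed end temperatures.
# Whatever initial profile we start from, the solution relaxes to the same
# steady state set entirely by the boundary values.
import numpy as np

def relax_bar(initial, t_left=0.0, t_right=100.0, steps=20000, alpha=0.4):
    """Explicit finite-difference solution of dT/dt = d2T/dx2 (nondimensional)."""
    T = np.array(initial, dtype=float)
    T[0], T[-1] = t_left, t_right          # Dirichlet boundary conditions
    for _ in range(steps):
        T[1:-1] += alpha * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

n = 21
random_start = relax_bar(100 * np.random.rand(n))   # arbitrary initial state
uniform_start = relax_bar(np.full(n, 50.0))         # different initial state
# Both converge to the same linear profile between the two boundary values:
print(np.allclose(random_start, uniform_start, atol=1e-6))
```

The commenter's point survives the sketch: here the boundary values are trivially known and constant, whereas for climate the analogous boundary conditions are themselves uncertain.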
At first blush, it would seem that the cumulus parameterization scheme (CPS) would have to provide fairly accurate moistening and warming rates in order for a realistic ITCZ to show up. On the other hand, maybe the climate models are relatively insensitive to the CPS used? If so, the appearance of an ITCZ in climate models is not so impressive.
[Response: Fair point – without some coding for moist convection, an ITCZ is unlikely to emerge. But all GCMs since the 1970s have ‘had’ an ITCZ even though the parameterisations range from the very basic to the very complex – so the existence of the ITCZ is robust. Of course, there are specific aspects of tropical convection that aren’t, and that is where the evaluation of any specific scheme is focussed. -gavin]
I did not register to review the GCM that you provided a link to in your article, but I did read the article. Part of it prompted me to go in search of an earlier work that complemented Dr. Hansen’s work with a model. I went looking for a study published about two years ago regarding the BSRN at ETHZ (Swiss Federal Institute of Technology, Zurich) on radiative forcing, and ran across a reference to the ECHAM5 GCM:
Based on the descriptive update, I am impressed with the level of effort on the radiative forcing adjustment in the shortwave band, and I think the increased resolution may have benefited the tracking accuracy.
The point is that I do not know whether enough of the downwelling radiative energy, through the entire spectrum, is represented yet. The ECHAM5 GCM does appear to match up fairly well in the shortwave bands, though.
I still think there is work to do on the adiabatic change at altitude, as the ground-based lidar in Colorado seems to indicate there are significant changes in the altitude, temperature and water vapor content of the tropopause and stratosphere that do not appear to be addressed. The recent paper by Dr. P. Chuang at UCSC:
appears to represent a major change in the latent heat release at altitude. I suspect this may be related more to the saturation level than to the aerosol “seed” size or concentration; apparently the jury is still out in this regard.
Of greater interest is the apparent evidence relating radiative shading to marine biological health. If the work of Dr. M. Behrenfeld at OSU is correct, there is the possibility that, though the radiative flux does not indicate a change, warming is having a negative effect:
This takes me back to the 2004 article about the Swiss resort that preserved its snow base over the summer by shielding it around the snow-molding equipment with a simple tarp, not even a thermal blanket. That brings me back to the question of the drivers for the possible reduction in precipitation. Why is there less precipitation when the apparent vapor content of warm air is higher and the condensation temperature at the troposphere is ‘cooler’ (provided I understand the Spring 2006 study saying that GHGs reduced the longwave energy reaching the 250 mb range)?
The most recent work I have been attempting, as a layman, is trying to break down the major oscillation patterns to see whether their drivers can be described. I believe that if we can get to the point where we can decipher the drivers of phenomena such as the NAO or ENSO, the long-term models might improve dramatically.
I think the piece that Dr. Benestad posted earlier this week, “Mid-latitude Storms”, is beginning to get close to describing the drivers. Mapping cause and effect among the interrelationships between large-scale phenomena does not seem to work very well; it almost seems like using bamboo cane fishing poles as chopsticks. Is there any work on, or signals of, possible drivers for these large-scale events in any of the models or recent studies you have seen?
I, too, find the statement about current models being non-chaotic surprising. I once saw a suggestion that lemming populations were following an attractor and painstakingly created a cellular automaton to look at population behaviours — no attractors that I could see in two or three dimensions, but the little bugs did strange and unexpected things. Perhaps, as the climate models become better, we are in for surprises.
The article (timely and extremely useful) mentions that the microphysics of clouds and aerosols needs research — the latter is an area where statements such as ‘this work is difficult to carry out in the field’ occur frequently, so let’s not hold our breath.
Might I ask about a couple of other points?
Do the models make allowance for variation (other than, obviously, temperature) in the condition of the ocean surface, such as variation in evaporation rates (not temperature related), variation in aerosol production (not related to windspeed), variation in CO2 incorporation (not related to either)?
The idea that mankind’s small percentage of the carbon cycle could be the cause of the Mauna Loa graph seems, to this layman, to be unlikely, the attribution owing more to man’s overweening vanity than scientific measurement, like claiming the Earth as the centre of all things. It seems more likely that we are disrupting one of the feedback mechanisms — my guess is that surfactant and oil sheen pollution of the ocean’s surface has led to reduced aerosol production, less biological CO2 pulldown and reduced mechanical incorporation of atmospheric gases. These effects will be subtle in highly productive waters — natural pollutants may mask the effect — and not present in shallow or coastal waters, but in the blue desert areas they will be highly significant and should be measurable. It might be possible, by piggybacking on Latham, Salter et al’s calculations (I’m afraid that I do not have access to a public domain version of their proposal for cooling using aerosol production vessels but a subscription, if required, is well worth it for the illustration alone) to predict how much albedo reduction we are suffering because of lower oceanic strato-cu cover.
I like numbers. I hope someone is out there with sample bottles.
[Response: Maybe this could have been clearer, but it is the climate within the models that is non-chaotic, not the model itself. All individual solutions in the models are chaotic in the ‘sensitivity to initial conditions’ sense, but their statistical properties are stable and are not sensitive to the initial conditions (though as I allude to in the article, I don’t know whether that will remain true as we add in more feedbacks).
As to the mechanisms you mention, evaporation generally depends on humidity, temperature, wind speed, and atmospheric stability. In models with interactive aerosols, ocean sources for sea salt are generally wind speed dependent, and for DMS or MSA they are now starting to put in biological activity feedbacks – but these are very uncertain as yet. There may be important secondary organic aerosol precursors that are climate dependent – but incorporating these effects is very much a hot research issue at the moment. -gavin]
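For readers wondering what "evaporation depends on humidity, temperature and wind speed" looks like in practice, here is a hedged sketch of the standard bulk aerodynamic formula. The coefficient values are typical textbook numbers, not those of any particular GCM, and the saturation-humidity fit is a simple Tetens-style approximation.

```python
# Illustrative bulk aerodynamic formula for ocean evaporation, showing the
# wind-speed and humidity dependence discussed above. Coefficients are
# textbook values, not any specific model's.
import math

def evaporation_flux(wind_speed, sst_c, air_q, rho_air=1.2, c_e=1.3e-3):
    """Moisture flux (kg m-2 s-1).
    wind_speed: 10-m wind (m/s); sst_c: sea surface temp (C);
    air_q: near-surface specific humidity (kg/kg)."""
    # Saturation vapour pressure at the sea surface (Tetens-style fit), Pa
    e_sat = 611.2 * math.exp(17.67 * sst_c / (sst_c + 243.5))
    # Saturation specific humidity at standard surface pressure, kg/kg
    q_sat = 0.622 * e_sat / 101325.0
    return rho_air * c_e * wind_speed * max(q_sat - air_q, 0.0)

# Doubling the wind speed doubles the flux, all else equal:
f1 = evaporation_flux(5.0, 28.0, 0.015)
f2 = evaporation_flux(10.0, 28.0, 0.015)
print(f2 / f1)  # 2.0
```

Real schemes add stability corrections and make the exchange coefficient itself wind-dependent, which is part of why the parameterisation, rather than the basic form, is where the uncertainty lives.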
I don’t understand in what sense climate models solve “boundary-value problems.” They’re just dynamical models, which means they’re initial-value problems. This is just a matter of what sort of equations you’re solving. The fact that you run lots of trajectories to collect statistics doesn’t change the fact that those trajectories are solutions to an initial-value problem.
An unrelated quibble is that I have no idea what you mean when you say “Emergent qualities make climate modeling fundamentally different from numerically solving tricky equations.” Emergent behavior is something that may or may not arise in your dynamical model, but it doesn’t change the fact that your job as a modeller is still to properly define and accurately solve numerically tricky equations. I also don’t see the distinction you’re trying to draw between climate modelling and weather forecasting, here.
[Response: The distinction occurs precisely because I am interested in the statistics of the problem, not the individual trajectories. For instance, take storm tracks in the North Atlantic – I can be interested in the path of an individual storm (an initial value ‘weather’ problem), or I can be interested in the statistics of all such storms. The second does not depend on the initial values – I can perturb them to my heart’s content – yet the statistics of the storms once I’ve gone out long enough will converge to a ‘climatology’ of storms. This is true for even a perfect model (should any exist). Now, it is certainly my job to numerically solve tricky equations, but the point I was trying to make was that the emergent properties of dynamical systems make those solutions much less a priori predictable than simply a ‘tricky numerical problem’. – gavin]
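Gavin's distinction between individual trajectories and their statistics can be illustrated with the classic Lorenz-63 system standing in for 'weather' (my own toy example, not a climate model): two runs from nearly identical starts decorrelate completely, yet their long-run statistics agree.

```python
# Lorenz-63: sensitive dependence on initial conditions for trajectories,
# but stable statistics (a 'climatology') over long integrations.
import numpy as np

def lorenz_run(x0, n=500_000, dt=0.002, s=10.0, r=28.0, beta=8.0 / 3.0):
    """Forward-Euler integration of Lorenz-63; returns the z time series."""
    x, y, z = x0
    zs = np.empty(n)
    for i in range(n):
        dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        zs[i] = z
    return zs

run1 = lorenz_run((1.0, 1.0, 1.0))
run2 = lorenz_run((1.0 + 1e-9, 1.0, 1.0))    # tiny perturbation of the start
print(np.abs(run1 - run2).max() > 10.0)      # trajectories diverge completely
print(abs(run1.mean() - run2.mean()) < 1.5)  # yet mean z, a 'climate' statistic, agrees
```

The first print is the 'weather' problem (hopeless beyond a short horizon); the second is the 'climate' problem (insensitive to the initial values), which is the converging-climatology point in the response above.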
With regards emergent behaviour, it was just one of the issues that changed my mind and defeated my scepticism with regards the models.
Some time back in my ‘studies’ I realised that the models are able to create things like the ITCZ, or to reproduce the total column water vapour response to Pinatubo, despite these being emergent phenomena of the underlying modelled processes. I just couldn’t see how, if the models were wrong, they could produce virtual analogues of real phenomena without being coded to explicitly model those phenomena. The models aren’t perfect, but to dismiss them because of their flaws really is to “throw the baby out with the bathwater”.
Figure 7.5 and fig 7.9 at http://www.ux1.eiu.edu/~cfjps/1400/circulation.html make the ITCZ look like the simplest and most predictable weather phenomenon on Earth, until one realizes that the real-world ITCZ line (fig 7.9) deviates from the line of maximum downpour or convection. So my question is whether the ITCZ always behaves as simply as this, or whether it can sometimes split into two when passing obstacles like cold SSTs (East Pacific/La Niña) or dry areas like the Sahara desert.
Gavin: I realize that the boundary values vs initial values discussion is probably getting tired to you, but please indulge me.
It is my understanding (from the literature) that climate prediction takes two forms. Climate predictions of the first kind certainly are initial value problems; predictions of the second kind are solutions controlled by boundary values. I am unclear because some of the papers I have read reporting on AOGCM experiments of the first kind (with anthropogenic GHGs kept at pre-industrial levels) have shown very poor skill at predicting regional climates (high sensitivity to perturbations of initial conditions), and for the most part a saturation or loss of skill at short lead times.
Since things like ENSO prediction (among others) are problems of the first kind, and these are superimposed on predictions of the second kind (boundary values) when adding external forcings like GHG increases, is it not true that relevant climate predictions can really be thought of as a combination of both? In my way of thinking, it still seems like one could consider these initial value problems.
Since the statistical properties of predictions of the second kind are stable, does this not imply that we have smoothed out some of the climate metrics that are relevant to where people actually live?
My mathematics background in chaos, attractors, and differential equations is limited to about one or two undergrad classes a long time ago, so please clarify my thinking on this.
[Response: Practically you can distinguish between the two types just by seeing whether the initial conditions matter. In a ‘perfect’ system – i.e. using a model to predict what another run of the same model produced – you can show that there is useful information in ocean initial conditions for about a decade – mostly based on the North Atlantic. However, in the presence of strong forcing (rising greenhouse gases, volcanoes etc.), the predictions become much less dependent on the start. So for the 20th century trends – where we have essentially no information about the ocean initial conditions – it is easy to see that the global trends even over decadal to multi-decadal time scales are robust to the starting fields. Skill in predicting regional changes however, even in a perfect set up, is not very high – mainly because of the amount of unforced ‘weather noise’ which neither kind of prediction can capture. – gavin]
Re “The idea that mankind’s small percentage of the carbon cycle could be the cause of the Mauna Loa graph seems, to this layman, to be unlikely, the attribution owing more to man’s overweening vanity than scientific measurement, like claiming the Earth as the centre of all things.”
From 1750 or so to 2005 the ambient CO2 concentration rose from about 280 parts per million by volume to 380 ppmv. We know it’s from fossil-fuel burning because we can measure the fraction of C14, and that has been declining. There’s virtually no CO2 left in old (“fossil”) fuels simply because their age is many times greater than the C14 half-life. This is the smoking gun that shows the CO2 is anthropogenic and not from some natural process. All the CO2 in the biosphere has roughly the same mean level of C14.
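The age argument can be checked on the back of an envelope. The 50-million-year figure below is an illustrative, deliberately conservative (young) age for fossil carbon; the half-life is the standard value.

```python
# Why fossil fuels contain essentially no C14: their age is vastly greater
# than the C14 half-life, so the surviving fraction is effectively zero.
half_life = 5730.0            # C14 half-life, years
age_of_fossil_carbon = 50e6   # years; a conservative (young) illustrative age

surviving_fraction = 0.5 ** (age_of_fossil_carbon / half_life)
print(surviving_fraction)     # underflows to 0.0; the true value is ~1e-2600
```

So any C14-free carbon accumulating in the atmosphere is, to an excellent approximation, fossil carbon.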
Also, the amounts of fossil fuels used in the last century are quite well known – you can look up the numbers in various economic & statistical sources – and from that you can easily compute how much CO2 was produced. Add that up, compare it to the observed increase, and you’ll find the numbers match. (Actually the amount of CO2 from fossil fuels is somewhat greater than the observed atmospheric increase, because some gets dissolved in the ocean and so on.)
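A rough version of that budget check, using round numbers that appear elsewhere in this thread (~305 GtC cumulative fossil emissions, a rise from ~280 to ~380 ppmv) and the standard conversion of about 2.13 GtC per ppmv of atmospheric CO2:

```python
# Back-of-envelope carbon budget: cumulative fossil emissions versus the
# observed atmospheric rise. Figures are the thread's round numbers.
GTC_PER_PPMV = 2.13                      # GtC per ppmv of atmospheric CO2

emitted_gtc = 305.0                      # cumulative fossil carbon, GtC
observed_rise_ppmv = 380.0 - 280.0       # preindustrial to mid-2000s
observed_rise_gtc = observed_rise_ppmv * GTC_PER_PPMV

airborne_fraction = observed_rise_gtc / emitted_gtc
print(round(airborne_fraction, 2))       # ~0.7
```

As the comment says, emissions exceed the observed rise: only about 70% of what was emitted (on these round numbers) remains airborne, with the rest taken up by the ocean and land.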
Thinking of fossil fuel CO2 as “man’s small percentage” is the wrong way to look at the problem. The key word there is “cycle”: all the rest (with small exceptions like volcanic sources and geological sequestration) is in a cycle that keeps going around and around. The fossil fuel CO2 is an addition. That addition may be comparatively small in any one year, but it doesn’t go away: it keeps adding up.
One question. Have models taken into account the increase in the number of thunderstorms that will result from warming? I read once that thunderstorms keep the earth several degrees cooler than it would otherwise be. If more storms happen as it gets hotter, will they have a moderating impact?
[Response:Equilibrium requires a balance between downward and upward transport of energy within the atmosphere. Thunderstorms should be thought of as simply one of many modes of behavior by which the atmosphere attempts to achieve this balance, in this case largely through the vertical transport of latent heat associated with condensation of water vapor within storm clouds. Obviously, individual thunderstorms cannot be represented at the coarse spatial scales resolved by GCMs. However, their principal role in terms of energy balance, as described above, is represented in models through the parameterization of convective instability in the atmosphere. -Mike]
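The flavour of the parameterization Mike describes can be sketched as a simple convective adjustment. This is a deliberately cartoonish stand-in, not any GCM's actual scheme: wherever the lapse rate exceeds a critical value, heat is mixed upward until the column is neutral, conserving column-integrated energy.

```python
# Cartoon convective-adjustment scheme: relax unstable layer pairs toward a
# critical lapse rate while conserving the column's total energy.
import numpy as np

def convective_adjustment(temps, dz=1000.0, critical_lapse=6.5e-3):
    """temps: temperatures (K) at equally spaced levels, index 0 = surface."""
    T = np.array(temps, dtype=float)
    for _ in range(100):                       # sweep until converged
        for k in range(len(T) - 1):
            lapse = (T[k] - T[k + 1]) / dz
            if lapse > critical_lapse:         # unstable pair of layers
                excess = (lapse - critical_lapse) * dz / 2.0
                T[k] -= excess                 # move heat upward...
                T[k + 1] += excess             # ...conserving the pair's mean
    return T

before = np.array([300.0, 288.0, 276.0, 264.0])   # 12 K/km: convectively unstable
after = convective_adjustment(before)
print(np.isclose(after.sum(), before.sum()))       # column energy conserved
lapses = (after[:-1] - after[1:]) / 1000.0
print(bool((lapses <= 6.5e-3 + 1e-9).all()))       # column now stable
```

Real schemes work with moist static energy, latent heat release and cloud processes rather than a fixed critical lapse rate, but the energy-balance role Mike describes is the same.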
Re 5. Thank you for your reference — I’ve not replied to thank you until now because I’ve been thinking.
Your reference is dated 1996, and as such the author had no access to the work (was it by Morel et al? I’ve looked at so many abstracts today my gyros are toppled) about C4 and beta-carboxylation pathways in phytoplankton. Under stress some phytos change from C3 metabolism, and the isotopic fractions they sequester change — the heavier molecules are not discriminated against so strongly. So, my ocean pollution hypothesis is up against this objection: reduced upwelling and lessened entrainment of the surface by wind result in depleted zinc and cadmium levels. Stressed, the plankton switch to C4/beta-carboxylation and begin to rain out C13-enriched detritus, depleting (relatively) the upper ocean of C13 in an isotopic refinement of the biological pump. The ratio of C12 rises. The normal process of mixing ensures that the air and water concentrations of the newly isotopically balanced CO2 mix and match. The atmosphere exhibits depleted C12. Surface pollution is worse in the Northern hemisphere, and continued healthy upwelling in the Southern hemisphere — helped by being less complicated by the constraints of narrow seas — means that the phytos there are less stressed and can continue their usual C3 way of life.
“How then should an oceanic CO2 source cause a simultaneous drop of 13C in both the atmosphere and ocean?”
Not a source. A relative sink for C13.
ref 13: ‘the numbers match’. Have I got this wrong? I’ve seen references that half of the anthropogenic carbon is sequestered. Half is not what I call a very good match. However, I’d be grateful for references which tie this down a little more strongly as I’m sure the science does not depend on a simple post hoc argument and I’d like reassurance.
ref 12: The smoking gun indeed. If a plant switches to C4 metabolism — a technique which apparently is less discriminatory between C12 and C13, would it also sequester unexpected amounts of C14? Presumably the differences are purely mechanical which would indicate this was so. Does anyone know?
If C14 is being used up unexpectedly, what does this do to the smoking gun? Maybe I need to think some more.
There should be some testable predictions from this. C14 levels in deep sediments should show increases from around 1850 as the stressed oceans began to react to the outward spewing of the nascent petrochemical industry. Plankton samples, if dead — maybe frozen — should show a lack of zinc and cadmium indicating that they have changed to different metabolic systems. Is the data already out there?
I’ve been wondering where to look for the surprises. One example, much belabored, is methane hydrates.
I see that only a few years ago geologists were still debating the origin of pingos (not the Linux penguin). This is fossil carbon too, presumably depleted of C14?
Is there any clear idea how and when the stuff is formed? I can imagine it could be either a cyclical process associated with glaciation — lower temperature at a given depth below sea level? or contrariwise, when the planet is warmer, CO2 higher, and sea level much higher — because pressure is greater at that same location.
I wonder if the models state an assumption about this stuff — either
— it’s been locked down by past extremes so is stable, or
— it’s at an equilibrium state, so released as warming proceeds (in local spikes as brief temperature extremes occur in the location).
I’d think that some would have bubbled out over the last geologically brief warmth at end of last ice age — but that would have been ‘almost done happening’ as the planet slowly cooled back toward the next ice age. (The big old ones look like hills, tree-covered.)
I wonder if the Navy has mapped the polar sea floor well enough to count undersea pingos — presumably secret if so, as detailed maps, but if so perhaps summary data on size and location would be interesting, to try to quantify what’s been happening.
As of 2003 the origin of pingo structures was apparently still being debated — lots of good pictures for example here.
After reading this month’s quick study on climate modeling in Physics Today, I’ve been unable to push this topic from my mind, especially with it being nearly 70 F here in New England yesterday. As a physicist with only a casual understanding of the issues surrounding global climate change, I’m drawn to the simple idea that humanity’s demand for energy is the central issue. As the global population increases and technology spreads, the demand for energy generation will also increase. For our great-grandchildren, other factors like thermal pollution and the effects of large-scale solar and wind farms may be the climate issues of their era; CO2 emissions are only the beginning of what will be a constant need to consider our actions and their impact on Earth’s climate.
Sorry, typo in 17 — in spite of lots of thinking. It should read, of course ‘the atmosphere exhibits depleted C13′ not C12. Sorry about that.
It has me wondering about the mismatch of CO2 rising after temperature in the historical records — do the dating techniques depend on C14 levels and would C4 reduction of C14 pull the dates back into line?
Julian, Gavin’s reference was to a FAQ produced by Jan Schloerer of Ulm University, and hosted on the web by Dr Robert Grumbine of NOAA http://www.radix.net/~bobg/. Although the article is now over ten years old it directly answers your question, one that has been asked by global warming skeptics for all those years.
Jan wrote “From its preindustrial level of about 280 ppmv (parts per million by volume) around the year 1800, atmospheric carbon dioxide rose to 315 ppmv in 1958 and to about 358 ppmv in 1994″. If you go to the Mauna Loa site you can see that it has continued to rise since then to about 375 ppmv in 2004. http://cdiac.ornl.gov/trends/co2/graphics/mlo145e_thrudc04.pdf
There is no doubt that the increase is due to fossil fuel burning. You only have to think about 100 million Americans taking their cars onto the road and burning on average one gallon of fuel each day. That is roughly a million tons of CO2 they are producing per day. Globally, we have put over a trillion tons (Tt) of CO2 into the atmosphere. See http://cdiac.ornl.gov/trends/emis/tre_glob.htm – 305 Gt of C equals 305 * 44 / 12 ≈ 1.12 Tt. Well, that’s what 6.5 billion little unassuming people can do if you give them enough time :-(
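Checking that arithmetic (the figure of roughly 8.9 kg of CO2 per gallon of gasoline is a standard estimate; the rest are the round numbers used above; the 44/12 factor is the CO2-to-carbon molar-mass ratio):

```python
# Sanity-checking the comment's numbers.
CO2_PER_GALLON_KG = 8.9                  # approx. CO2 from one gallon of gasoline

# 100 million cars, one gallon each per day, in tonnes of CO2 per day:
daily_us_cars_t = 100e6 * CO2_PER_GALLON_KG / 1e3
print(round(daily_us_cars_t / 1e6, 2))   # ~0.89 million tonnes/day

# Cumulative global emissions: 305 GtC converted to teratonnes of CO2:
cumulative_carbon_gt = 305.0
cumulative_co2_tt = cumulative_carbon_gt * 44.0 / 12.0 / 1000.0
print(round(cumulative_co2_tt, 2))       # ~1.12 Tt of CO2
```

So the daily-driving figure is about a million tonnes of CO2 per day, and the cumulative total is indeed of order a trillion tonnes.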
C12? C13? C14? All three contribute:
“… The 14C/12C and 13C/12C ratios ….” SCIENCE VOL. 279 20 FEBRUARY 1998 1187
Atmospheric Radiocarbon Calibration to 45,000 yr B.P.: Late Glacial Fluctuations and Cosmogenic Isotope Production
In your text (Gavin), you explain that both equation-approximated physics (e.g. radiative transfer) and empirically-based physics (e.g. evaporation) need parameterization. That’s not really clear to a non-expert: do you mean, for example, that the coefficients of the equations for these phenomena are to be regularly corrected from real-world measurements and retro-validation?
You say on the one hand that large-scale behaviors of the climate are robust, but on the other hand that they emerge from small-scale and more chaotic features. Does this mean that the chaotic small-scale behaviors of the climate are, after all, without real consequence for the accuracy of a 2100 projection?
Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr. I wonder why we assume that the C accumulating in the atmosphere is actually ‘our’ C. Why is there this tiny fraction of the overall flux which is not consumed? The smoking gun did indicate that it’s ex fossil fuel carbon because it is depleted in C14 as one would expect. If C4 metabolism is disturbing the expected C14 levels then the bets are off.
The deep sea reservoir of C is 380,000 Gt. Thinking non-anthropocentrically, why does our little flow go straight into the atmosphere? Does it? Obviously not, because some of it gets lost. Our flow is dwarfed by natural processes and we need to find a way of pointing at our emissions and proving that the trouble is what we’re up to. Otherwise we’re like a little boy peeing into a lake and taking the blame when the dam bursts.
There’s a sawtooth daily pattern to the Mauna Loa graph. Is there any difference in the gas make up between day and night? Does the isotopic makeup vary?
Re: #10 Gavin, thank you for responding.
Since a climate prediction problem of the first kind (controlled by initial values) will involve numerous non-linear feedbacks, which ultimately affect the degree of free variation in the climate trajectories, it seems well founded (to me) that many classes of climate prediction problems are controlled by initial values. Do you agree with my statement? Also, greater numbers of non-linear feedbacks added to the AOGCMs will most likely increase the degree of free variation in modeling experiments, right? It then seems feasible that improving the models (making them better resemble the real thing) may make skillful multi-decadal predictions more controlled by initial values, not less (since the amount of noise in the system is greater relative to the forced variations). Since a boundary values problem is superimposed on the initial values problem in real climate prediction, it seems necessary to obtain solutions which satisfy both types of equations (for many climate prediction problems). I think for some casual observers it might seem non-intuitive that improving the models adds to the uncertainty in predictions. The more I read and study the problem, the more I am persuaded you modelers have a tough problem on your hands. Good luck.
Those single-celled organisms are releasing carbon that was only recently fixed. Even in the short term (1 year period) it is within 1% of being in equilibrium with carbon fixed by life.
The C4 pathway is used mainly by grasses. Other plants and all bacteria and protozoa use the older C3 pathway. An organism cannot just choose which pathway it wants to use. Even if land-use patterns have changed the ratio of C4 to C3 plants (probably increased it, because so much crop and livestock land is planted with grasses), the effect will be quite minor, as the carbon is released back into the atmosphere in short order.
As far as the deep sea reservoir of C goes, don’t forget that it takes thousands of years for the deep ocean to turn over and that while it is a reservoir, it was more or less in equilibrium at 285 ppm CO2 and thus that carbon wasn’t going anywhere. With current CO2 levels the deep ocean is actually a net sink of CO2, absorbing a fairly significant share of our emissions.
With regards to the sawtooth daily pattern at Mauna Loa, it’s due to photosynthesis during the day and respiration during the night. While there’s probably a difference in the isotopic makeup, it’s probably pretty small?
“Thinking non-anthropocentrically, why does our little flow go straight into the atmosphere? Does it? Obviously not, because some of it gets lost.”
Quite easy – just look at a smokestack or exhaust pipe. The CO2 literally goes straight into the atmosphere. As far as what gets ‘lost’, that’s mostly the ocean absorbing about half of it. It isn’t gone for good and if CO2 levels drop, the process will reverse, releasing the CO2 back to the atmosphere.
Re #24: “Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr.”
Rather misleading language: they don’t “push out” that much CO2, except in the sense that you “push out” a certain amount of CO2 every time you exhale. A better way to put it would be to say that they _cycle through_ that much carbon: it comes in, it goes out, but only if there is a change in the mass of soil organisms (and their corpses, etc) does all that activity produce a net change in CO2.
This does bring up a question I’ve sometimes wondered about, but have never seen numbers on: suppose we could ignore the political obstacles, and make a serious attempt at re-vegetating areas – the western US, North Africa, the Mideast, Australia – that have been desertified by human activity. How much CO2 could we expect that to sequester?
“… One of the most striking results that 13C data (and now O2/N2 ratio data) unveiled is the existence of a very large repository of anthropogenic CO2 in Northern Hemisphere ecosystems during the early 1990’s when the atmospheric CO2 growth rate had diminished to only one third of its normal value. Still, the long term trend and interannual fluctuations of 13C at one given monitoring station is at the limit of detection of mass spectrometers, on the order of 0.01 per mil for 13C in CO2. Thus, even a very slight bias in the isotopic data would translate into different inferred magnitudes of the global land and ocean uptake of anthropogenic CO2….”
Biogenic aerosol formation in the boreal forest (BIOFOR)
Anomalous (or Not Strictly Mass Dependent) Isotope Variations Observed in Important Atmospheric Trace Gases
“… Variations in stable isotope ratios in the environment have generally been well understood and put to good use. However, the atmosphere appears to be the scene for a host of isotope effects that we do not yet understand. The prime example is ozone, whose anomalous enrichment has repeatedly defied correct interpretation.
“….Atmospheric studies also benefit from stable isotope variations. An illustration is the ongoing decline of the 13C/12C ratio of atmospheric carbon dioxide, largely in consequence of the increasing fraction of fossil fuel-derived carbon dioxide. This isotope effect is thus directly related to the isotopic composition of an important source of the gas. Fossil fuels have about 2% less 13C than atmospheric carbon dioxide. This in itself is obviously not a source effect (ambient CO2 is the carbon source for plants), but rather an isotope fractionation effect of photosynthesis. Plants favor 12CO2 slightly over 13CO2, so the assimilated carbon is depleted in 13C relative to the atmosphere. In isotope applications of interest to atmospheric chemistry, source signatures and fractionation effects in chemical reactions are both relevant….”
“… It has taken years to unravel the secrets of the anomalous isotope fractionation of ozone, perhaps the most extensively studied reactive atmospheric trace gas. In regard to molecular symmetry, 17O and 18O in an ozone molecule are identical (they are simply different from the abundant 16O isotope)…. However, theories based on symmetry have been challenged by the latest experimental data.
“… After ozone, it was found that carbon dioxide in the stratosphere exhibits MIF [Thiemens et al., 1995, Gamo et al., 1989]. A chemical mechanism was proposed by Yung et al. , who showed that the observed 17O excess in CO2 could be explained by transfer of the enrichment present in ozone to CO2 via the excited oxygen radical ….”
“There’s a sawtooth daily pattern to the Mauna Loa graph. Is there any difference in the gas make up between day and night? Does the isotopic makeup vary?”
Is there a daily sawtooth pattern to the Mauna Loa graph? There is certainly an annual sawtooth pattern, caused by the northern hemisphere forests kicking in with photosynthesis every spring. That is probably rather simplistic but I think it is what Charles Keeling set out to measure initially. The carbon flux is affected by all photosynthesis, terrestrial and aquatic. There would be a daily flux however, as plants respire all the time but only photosynthesize during light conditions. Again this is a simple explanation and does not take into account the processes that go on in the dark to produce sugars.
I propose we invest massive resources into making a large array of petaflop-scale computers dedicated to the study of global warming. This way, if the simulations show that Global Warming isn’t an upcoming threat, we’ve ensured that it really is, by churning out more than enough CO2 while making those computers. ;)
Re 24 30
Go to http://meteo.lcd.lu/today_01.html to see real live CO2 concentrations et al. in Luxembourg. Variations of over 50%, with readings as high as 500+ ppm.
I agree that the patterns are complex. Just changing the water vapor content (& rainstorms) plays havoc with the CO2 concentrations, as does the daily commute (vehicle exhaust) near the measuring station.
In general it is my understanding that the daily fluctuations and even the yearly cycle have a relatively small impact on the global warming consequences. I think there is a Gavin comment to this effect from a year or so ago somewhere in the archives.
re 28: thanks for the sites — even more thinking needed. I read a recent CO2 metabolism paper for a particular species of phytoplankton and began to wonder whether they are less consistent than science currently understands. If the knowledge that certain marine plants can swap between C3 and C4 metabolism is only about 5 years old, I’d rather like to see whether their different fixation routes do odd things to oxygen isotope ratios. There is a rather disturbing illustration of the degree of confidence in the science of global warming — “low”, “low” and “very low” occur too often to be reassuring.
Governments make a lot of noise about global warming — how odd that they don’t throw large amounts of money to the only people who can demonstrate, by scientific measurement, exactly what’s happening. And yet monitoring sites are being abandoned — why is that? Do governments know something or is it merely complacency?
30: thank you. There’s a daily wiggle rather than an annual sawtooth I believe, but you’re right, I was conflating the two. I’m wondering if a signal of isotopic fractionation could be teased out of the daily signal, pinning it down to either worldwide or near-ocean effects.
If the ocean surface pollution hypothesis of global warming is correct then this is the sequence: the petrochemical industry kicks into life around 1850. Surfactant run-off and oil spills begin to reduce stratocumulus cover; the whole surface of the ocean warms, reducing nutrient upflows and encouraging the phytoplankton to kick into C4 metabolism. C4 phytoplankton increase in numbers as their relative advantage improves over C3 plants. C4 plankton sequester more 13C than expected by conventional models, expected levels of 13C fall, producing a false anthropogenic signal.
Surfactant and oil pollution effects increase CO2 levels by reducing biological pull-down and mechanical mixing, and by reducing solubility through warming of the surface. (What happens to the albedo of a polluted surface? I don’t know.)
Deep water warms slightly. Methanophages begin to emit more light isotope CO2 as the clathrate deposits become more accessible. CO2 warming increases. The ocean surface becomes even more stable and the cycle continues. Eventually the clathrates boil off.
Result: large temperature spike and collapse of civilisation.
I wonder if there’s anything in the fossil record which would fit this scenario, triggered perhaps by the breaching of a large, light oil reservoir by coastal erosion? I wonder what signals a scenario like that would leave for us to interpret?
(Well, I like it better than a convenient volcano boiling off a carbon deposit!)
Re # 27: “This does bring up a question I’ve sometimes wondered about, but have never seen numbers on: suppose we could ignore the political obstacles, and make a serious attempt at re-vegetating areas – the western US, North Africa, the Mideast, Australia – that have been desertified by human activity.”
Most of those areas have not been “desertified by human activity” — they’re naturally deserts, part of the global desert belts at low latitudes. Vegetating them on a large scale would probably require significant energy expenditures (and accompanying CO2 release!) for fertilization, water transport, and building the necessary infrastructure. (This is not to say that it’s a bad idea to reverse desertification in local areas where it’s recently occurred, of course.)
From the various Hansen et al. and other papers, the computer calculations of global warming show that adding CO2 results in the added CO2 molecules delaying (not trapping!) the transport of energy out of the earth system, and the subsequent global warming of the air near the ground and cooling near the top of the atmosphere, the TOA (see the cartoon in figure 2e of Hansen et al. 2005). This is the Greenhouse Effect. The global warming effect results in a positive energy imbalance or disequilibrium at the TOA, where more solar energy is coming in than is being sent out, because the TOA temperature is colder than the equilibrium. This apparently lasts for years if not forever per the GCMs (Hansen, Nazarenko et al. 2005).
However, on a daily basis the Earth goes through a rotating solar cycle where at night we have a negative imbalance, with more energy out than in; then as the sun rises we warm up, pass through the equilibrium point to a positive imbalance, and stay that way until the sun starts to set, the energy-in again passes through the equilibrium point, and we return to a negative imbalance. The same process happens with the yearly/seasonal temperature cycles and also with the approximately 11-year solar cycles, which change the energy-in amounts. At all times, the amount of energy being transported out of the Earth system is calculated by the Stefan-Boltzmann Law (SBL) based ONLY on the temperature of the Earth system, be it at the ground or at the TOA. The SBL, which says that the energy transported from an object is proportional to its temperature raised to the 4th power (ie hotter air rises faster (convection), and hotter objects radiate more energy per unit time (radiation transport)), is forever trying to reestablish the earth to its equilibrium energy-in-equals-energy-out conditions, by either warming or cooling it, and it is perfectly successful exactly twice each and every day. This contradicts the conclusions of the Global Computer Models (GCMs), which require a permanent or multi-decade disequilibrium or energy imbalance to create the global warming.
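For scale, the T^4 dependence being argued about here can be put into numbers. A minimal sketch, assuming a unit-emissivity blackbody (real surfaces and a line-absorbing atmosphere are not, as a later commenter points out, so these figures are illustrative only):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(temp_k):
    """Blackbody flux sigma*T^4 in W/m^2 at absolute temperature temp_k."""
    return SIGMA * temp_k ** 4

# Flux at the global-mean surface temperature, and the extra flux from 1 K of warming:
print(radiated_flux(288.0))                          # ~390 W/m^2
print(radiated_flux(289.0) - radiated_flux(288.0))   # ~5.4 W/m^2 per kelvin
```

That ~5.4 W/m^2 of extra emission per kelvin is the restoring tendency this comment calls the “SBL feedback”; whether it cancels a forcing, as claimed, is exactly what is disputed in this thread.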
So the question is: WHY doesn’t the Stefan-Boltzmann Law feedback also automatically compensate for the greenhouse effect? Why doesn’t the feedback from the SBL automatically return the air to its equilibrium-to-(solar)-energy-in conditions imposed by the daily solar cycles? Why isn’t the delay in energy transport from adding GHGs compensated for by the speedup in radiation transport, at the speed of light, caused by the increase in temperature and the SBL response? The SBL has no way of differentiating between a GHG-caused and a solar-caused warming. According to the GCMs, we already know that the CO2-caused global warming results in hotter air rising faster (convection) at ground level, which Gavin says is included in the GCMs as a part of the water vapor feedback effect (@ How not to attribute climate change, comment #182, 12:57 pm; see also #126 et seq. & #208). Why doesn’t the hotter air at ground level also cause more energy to be radiated out faster per the SBL, so that the greenhouse-caused warming energy is returned to the TOA, where the extra energy will cancel out the CO2-caused imbalance at the TOA? ie to return the Earth to its equilibrium conditions, which it actually does – twice a day?
The conventional wisdom that greenhouse gases cause global warming is based on the identification of the greenhouse effect (GHE) in the Svante Arrhenius 1896 paper (see Wikipedia), “On the Influence of Carbonic Acid in the Air Upon the Temperature of the Ground”. However, my simple reading of the paper shows that Arrhenius calculated the warming effect of CO2 energy absorption in academic isolation (CO2 absorbs energy) without considering the real-world effects of the Stefan-Boltzmann Law feedback.
Also, a review of the GCM results shows that the amount of global warming calculated varies depending upon the duration of the period modeled, e.g. 1950-1997, 1880-2000, 1750-2000, or from the bottom of the ice age and bottom of the CO2 level (20,000 years ago) until 2000, etc. This is to be expected, except that the computer model also requires that the temperature imbalance at the TOA be equal to and opposite the warming, to maintain conservation of energy. So just what is the temperature at the TOA in the year 2000? It HAS to be a single value, not the infinite number of options that can result from the GCMs. Again, the GCMs are calculating results that are not possible. BUT if the SBL feedback returns the TOA to the equilibrium conditions, then we have a single value, but then there is no energy imbalance. However there also is no global warming due to the GHE!
Sorry Gavin, but I feel that the GCMs are giving such fundamentally incorrect/inconsistent results that they can NOT be valid. They seem to be missing some of the SBL feedback. If you can explain these discrepancies please do.
Which brings us back to what you previously stated 15 months ago,
“… [Response: … You refuse to relax your (incorrect) assumption that the flux from the surface is the same as the flux from the top of the atmosphere, which is equivalent to assuming that there is no GHE at all. So you assume the result you wish to prove. … -gavin]
Comment by John Dodds – 30 Oct 2005 @ 8:04 pm ” (click on the time stamp to go to Gavin Schmidt’s RealClimate.org location)
To which I now respond, Yes Gavin, I intuitively assumed constant equilibrium flux, but it is not my assumption, it is the constant flux equilibrium IMPOSED by the Stefan-Boltzmann Law, & Mother Nature’s Laws of Physics, and seen on a daily basis. The SBL feedback (which is dependent ONLY on the temperature and not the CO2 concentration) cancels out the GHE temperature effect, basically eliminating it as it occurs. No GHE means no GHG/CO2-induced warming, as you said. The ever-increasing GHG forcing curve (Planetary energy imbalance? May 2005 & IPCC) is effectively a flatline. Any research based on the increasing GHG forcing and the GCMs is invalid. The estimates of the temperature impact of doubling the CO2 levels in the next century are just plain wrong.
This does not mean that global warming does not exist or that adding CO2 does not cause problems such as ocean acidification. The evidence in melting polar ice caps, glaciers and measured temperatures etc. is too obvious. However the increases must come from an external increase in energy-in (or maybe, if the solar increases do not explain it, some of the decreasing Earth magnetic field flux energy is leaking into the ground/air??? Are we seeing increases in the northern lights effects?). The solar increases documented by the IPCC approximately account for the observed temperature increase from 1700-2000. For example, the solar insolation increases by about 4 W/m2, to 1364, since ~1700. This increase is 4/1364 or 0.3%, which is about the observed increase of 0.3% of 288, or 0.84 degrees absolute.
This “CO2 does NOT cause warming” conclusion does however mean that any efforts to control Carbon/GHG emissions (Kyoto Treaty, carbon taxes, carbon emissions trading, carbon sequestration research, lobbying, lawsuits etc) are totally worthless, a waste of resources, and can be eliminated since they will have no effect on global warming other than increasing taxes and getting me hot under the collar. :) However, for solving the problem of CO2 caused global warming and “saving the world” from the British/Stern estimate of 1 to 5% of annual global GNP, Companies and Governments are encouraged to send a fraction of their subsequent cost/tax savings to the John Dodds Foundation USA 94123-3404. :)
Re #38: Ever park your car in the sun on a cold day? It gets quite a bit warmer inside, doesn’t it? Now if I understand the physical model you’re using in your description, it should instead be in equilibrium with the outside air. I’d take that as a strong hint that your physics is wrong :-)
And re #36: “Most of those areas have not been “desertified by human activity” — they’re naturally deserts, part of the global desert belts at low latitudes.”
While such areas are indeed natural drylands, I think there’s quite a bit of evidence showing that much of the area has in fact been turned to desert as a result of human activity. See for instance the early explorers’ descriptions of the American west compared to conditions today. There are prairie/steppe plant communities that, once established, do quite well with little rainfall. Destroy that community by over-grazing or farming, though, and it does not readily reestablish itself.
A car has a physical barrier – the roof – that prevents, or actually just slows down, the reestablishment of equilibrium to the rate that energy can transfer through the roof. Just like a glass greenhouse has a barrier that prevents the reestablishment of the equilibrium, BUT go into a glass greenhouse at 4 AM and it IS as cold as the outside: the equilibrium is reestablished. The atmosphere has NO barrier to the flow of energy; the question is just how fast the equilibrium is reestablished, and radiation of energy is as fast as the speed of light.
The question you should be looking at is: if there is a GHG-caused energy imbalance per the GCMs, and more energy than what the GHGs generate is continuously added, then why haven’t we overheated already? CO2 has been increasing for 20,000 years according to Hansen (& I agree), so the so-called imbalance should have been there for 20,000 years. Besides, if there is an imbalance, just why would it result in that extra energy going into the ocean (per Hansen) instead of just going into the air & eliminating the imbalance like it does on a daily basis? Hansen’s model does not make sense.
The Stefan-Boltzmann law does NOT apply to the Earth-atmospheric system. The SBL is a result derived from Planck’s radiation law, under the assumption of frequency-independent emissivity. This is flagrantly not true for a system with absorption lines.
If I can paraphrase, you are saying that the earth may heat up to above-normal temperatures, because of GHGs, during the daytime. Each part of the earth will have lost all of the excess energy it may have gained (due to GHG) by dawn.
I think that this may be true but it would still result in an increase in mean temperature as the temperature has risen for some time (during the day) above that which it would have been if there were no GHG and the temperature will return to the usual ambient at night (and won’t go below it).
So, by your theory, we would get hotter days and no change in the dawn temperatures. I have no idea if this is the case or not.
I believe that there are more uncertainties in climate science than this (and I’m not the only one).
“There is international consensus that human activities are increasing the amounts of greenhouse gases in the atmosphere and that these increases are contributing to changes in the earth’s climate. However, there is scientific uncertainty regarding the sensitivity of climate to these increases, particularly the timing and regional character of climate change.”
Having engaged with Roger Pielke Sr. and cohorts on this issue, I begin to think that their definition of boundary value problem and mine is different. They are talking about the extreme limits of the system, whereas I (and I think most of the folk writing here) think of the hypersurface on which the trajectories evolve as being the boundary. Perhaps it is because they are locked to a three dimensional picture.
Re #41: “The atmosphere has NO barrier to the flow of energy…”
Huh? How did you reach that conclusion? Going back to basic physics, there are three modes of energy transport: conduction, convection, and radiation, no? Conduction is not significant in gases, so we’re left with two. Incoming solar radiation in the visible spectrum heats the ground, and can be transported away by convection or radiation.
Convection pretty much stops at the stratosphere, so that leaves radiation. That radiation takes place in the infrared, to which CO2 is not transparent, so the outgoing radiation is stopped. The atmosphere _is_ the physical barrier that stops re-radiation.
“….and radiation of energy is as fast as the speed of light.”
Humm… So you’re saying that a hot object placed in a vacuum should cool instantly? I don’t think it works that way :-)
Looking at the earth-atmosphere system, an infrared photon leaving the ground does travel at the speed of light – until it hits something, say a CO2 molecule. Then the energy it carried can either stay with the CO2, making it hotter, or it can re-radiate. If it re-radiates, it can go either up or down. The net effect is that the system stays warm longer. Left alone, it would eventually cool back down, but the sun comes up before that can happen, so the next day the system gets a little warmer still, and that keeps happening until a new, higher, equilibrium temperature is reached.
But then humans keep adding more CO2, which shifts the equilibrium still higher…
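James’s “delayed, not trapped” picture can be caricatured as a toy photon random walk through absorbing layers. This is a counting exercise under invented assumptions (discrete layers, equal up/down re-emission, a ground that re-emits upward), not a radiative-transfer calculation:

```python
import random

def mean_escape_steps(n_layers, trials=20000, seed=1):
    """Average number of absorption/re-emission hops before a photon's
    energy escapes past n_layers layers; the ground (level 0) re-emits up."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        height, steps = 0, 0
        while height < n_layers:
            height += 1 if rng.random() < 0.5 else -1
            height = max(height, 0)   # reflected at the ground
            steps += 1
        total += steps
    return total / trials

print(mean_escape_steps(5))   # ~30 hops (expectation is n^2 + n for this walk)
print(mean_escape_steps(10))  # ~110 hops: more absorbers, longer delay
```

The energy always gets out in the end; adding layers only stretches the delay, which is the sense in which extra CO2 delays rather than traps outgoing radiation.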
Regarding your statement that “the net effect is that the system stays warm longer”: do you have an indication of this? Have you a clear data table showing that the range between the daily high and low temperatures has diminished over the last 35 years under similar conditions? How about the typical surface temperature decay curve in a clear night sky, under similar input potentials and similar weather conditions?
That the CO2 increases, hence creating a greater opportunity for CO2 vibrations to maintain the energy, or to delay the terrestrial release into space, should be detectable, don’t you think? I have not seen any indications of this. Have you?
Mr. Cooke, when you use phrases like “opportunity for CO2 vibrations to maintain the energy” you are making it hard for yourself. You won’t find that with Google — let’s check:
search – +opportunity +”CO2 vibrations” +”maintain the energy” – did not match any documents.
Yep, you’d find no evidence of it.
But look using the terms you find in the physics books, eh?
This is well documented as one of the basic predictions — greater warming in the nighttime, because less heat is being radiated off the planet by an optically clear sky.
Here, for example, just to give you a start looking into this:
“ABSTRACT Two sky brightness monitors – one for the near-infrared and one for the mid-infrared – have been developed for site survey work in Antarctica. The instruments, which we refer to as the NISM (Near-Infrared Sky Monitor) and the MISM (Mid-Infrared Sky Monitor), are part of a suite of instruments being deployed in the Automated Astrophysical Site-Testing Observatory (AASTO). The chief design constraints include reliable, autonomous operation, low power consumption, and of course the ability to operate under conditions of extreme cold. The instruments are currently operational at the Amundsen-Scott South Pole Station, prior to deployment at remote, unattended sites on the high antarctic plateau.”
James and L., the CO2 molecule absorbs a photon. It gains vibrational energy. This energy rapidly (~ a few tens of collisions, within nanoseconds) is collisionally transferred to other atmospheric molecules as translational energy (V–>T transfer in the argot of the field), mostly N2 and O2. This heats the atmosphere.
So how do CO2 (and H2O) molecules gain energy to radiate? Well, the unexcited CO2 molecules continually collide with other molecules in the atmosphere and a small percentage of the collisions leave the CO2 vibrationally excited (T–>V transfer). The average fraction of CO2 molecules that are vibrationally excited in the bend is something like 2*exp(-1000(K)/T(K))
Eye-crossing stuff: 1000 K multiplied by Boltzmann’s constant is roughly the energy in Joules of the CO2 vibrational bend; the bend is two-fold degenerate, which accounts for the factor of 2.
The bottom line is that about 6% of the CO2 in the atmosphere is vibrationally excited at any time and can radiate; almost none of the molecules that absorb the IR emit directly.
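Eli’s Boltzmann estimate is easy to check numerically. A small sketch of the 2*exp(-1000 K / T) fraction he quotes (the 1000 K level spacing and twofold degeneracy are taken from his comment):

```python
import math

def excited_fraction(temp_k):
    """Fraction of CO2 with the (doubly degenerate) bending mode excited,
    for a level ~1000 K worth of energy above the ground state."""
    return 2.0 * math.exp(-1000.0 / temp_k)

print(excited_fraction(288.0))  # ~0.062: about 6% near the surface
print(excited_fraction(220.0))  # ~0.021: colder air, far fewer emitters
```

So the ~6% figure follows directly from the numbers quoted, and the fraction drops quickly with altitude as the air cools.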
Actually, I have a number of links that will clearly demonstrate that any accuracy resolution finer than 5 W/m^2 in a clear-sky nighttime measurement is not currently available, regardless of your 1st link’s suggestion of 2 watts.
The tools available can be up to 50 W/m^2 in error from calculated values, depending on the environmental conditions. The recommended improvements suggested to reduce the level of error range from lidar (to pin down the water vapor and aerosol participation) to different detectors altogether. I could share these, though you can find them yourself at the arm.gov site if you search for “down welling clear sky night”.
The point is even with the latest devices such as the SPECTRA AERI tools there remains a question. Either the frequency band or the temperature range of the equipment is exceeded by the real world values to be measured.
For example, even the IRT and the PIR that have been deployed as recently as Oct 2006 are both subject to ground IR skew. Though the IRT is more accurate at night, the bottom of its range is around 220 K, when the actual downwelling values can be 180 K or less. In addition, the participation of water vapor and CO2 in the 900 nm to 2.3 um range is missed by the IRT, as it is focused on the 9-11 um range.
This only goes to repeat the earlier concern that we have to obtain a set of reliable, discrete frequency band detectors designed with the necessary operating temperature/conditions range. A major concern is that it is unlikely that we will achieve the sensitivity necessary with a well shielded detector head or a constant temperature detector oven in the ambient temperature range that measures differentials between the calibration source and the down welling night time sky.
To address your post, the presentation by Dr. Long (found in the above arm.gov search as well) is much better as an introductory into atmospheric radiative measurement data. The follow ups that have occurred in relation to the equipment and data qualities also found in the above search leave major opportunities in relation to future improvements.
(Note: Putting ideas into simple language, such as “CO2 vibrations”, is intended to help with communications (one of my personal challenges I am attempting to overcome) and is not intended to assist Google searches.)
Comment by L. David Cooke — 17 Jan 2007 @ 10:29 PM
I’m afraid this is just word salad to me, and apparently also to Google:
Your search – “down welling clear sky night” – did not match any documents.
* Make sure all words are spelled correctly.
* Try different keywords.
* Try more general keywords.
Okay, here’s something — when I drop the quotes from your example and limit to recent Google Scholar hits, some papers turn up. Perhaps if you focus on one article, the authors might care to reply. Some of these are familiar names.
I guess you’re saying that this is an area not well understood (agreed).
OK people we are NOT all talking about the same thing or even from the same educational starting point. (responses to #38) I am going to explain where I am coming from & try to answer the comments made above as I go thru.
According to a Comment by Eli Rabett – 13 Jul 2005 @ 11:32 am (read the whole thing to get more info! Eli, I truly thank you for taking the time for that piece of education 18 months ago & as I just read, for the update in #50- JD)
“Briefly put, the process can be defined as a CO2 molecule absorbing a ~650 cm-1 photon (equivalent to a thermal energy of about 900 K), and losing that energy to the surrounding bath of atmospheric gases….Because collisional energy transfer to and from the excited molecules is rapid, the chunk of energy (650 cm-1) rapidly degrades into the heat bath of the atmosphere.” (Hank, this says that the laser excitation/deexcitation is only a trivial part of the energy transport process; the major part is absorption, collision with other air molecules, full-spectrum radiation from the warmer air at the specified temperature, reabsorption, etc. etc. IF the only process were absorption and radiation to another CO2 molecule, then there would be no way for the other air molecules in the bath to heat up; you would end up with 900 K CO2 and 287 K air, which is not possible for very long.)
BUT this does not account for warming, because for every absorption (or capture) there is an equivalent loss (or release): conservation of energy. SO where does the energy for the warming come from? Answer: when you add extra CO2 & increase the number of absorptions, and if each absorption/release/collision takes a few to ten microseconds (per Eli), then the energy being transported out to space takes LONGER due to the added absorptions, which translates to more energy in the air at the lower levels BUT less energy being transported out to space over the same time interval, in order to conserve energy. ie we get the energy disequilibrium with height, or imbalance (TOA cooler than at the pre-CO2 equilibrium with the energy-in), that is pictorially shown in Hansen et al 2005 fig 2e: ground warming, TOA cooling, by an equivalent total amount of energy. ie The GHE does cause warming!! (Now this raises another contradiction: Hansen shows the equilibrium point (the temp where energy was formerly released to space) has moved DOWN, whereas James (#46) says it goes higher, and Dave Cooke (#47) asks for evidence. Well Dave, perhaps the evidence is in my proposal that the GHE exists, but the higher temperature from the absorption and delay results in a larger driving force (SBL feedback: hotter air rises faster, or hotter objects radiate faster) to bring the earth system back to equilibrium just as fast as it warms up, in which case there would be NO evidence, other than differences caused by changes in solar energy input (sunspot cycles) and any other source of variable input energy. Given the variability of sunspots/solar insolation I personally doubt/guess that you could measure the differences well enough: too much noise, as Eli said.)
Which brings me back to “English’s” paraphrasing/understanding (#41). NO, your paraphrase of what I said is NOT what I meant. First, on your reference (at the pco-bcp site) & challenge philosophy – YES we are adding CO2, yes adding CO2 causes more warming, BUT I do not agree that there is consensus that this results in significant global climate changes. Only the GCMs say that this is the cause. I personally think there is an error or two in the GCM calculations (not counting the uncertainties in parameters), and I am challenging this rather than trying to challenge the validity of the various parameters etc.; everyone else is already doing this & getting nowhere. I am trying to point out that the results generated by the GCMs are internally inconsistent, or wrong, or do not agree with observed reality, & I am trying to point out what I think is the error. As I have learned, painfully, most of the scientists doing this research are VERY VERY good: if they put it into the GCMs then it is probably valid, maybe with some uncertainties, but valid in direction and pretty close to the correct magnitude, so I look for what they missed.
In one sentence, the proposal I am trying to make is that the GHE causes warming at the ground level and cooling at the TOA level, per Arrhenius and Hansen 2005, BUT by this very act it creates a temperature-differential-dependent driving force, calculated by the SBL, that results in a feedback that returns the atmosphere at all heights to the former equilibrium-to-energy-in conditions. ie When the GHE causes warming (& cooling at the TOA – adding GHGs can NOT create energy, it has to come from somewhere, ie up near the TOA), then these new temperature differences from equilibrium (even when it is only the warming caused by a single absorption) create the driving force that returns the atmosphere to equilibrium conditions. This (the SBL feedback, for lack of another term – I wouldn’t dare call it the John Dodds effect, you get too much grief; besides, Stefan identified it first :) ) is a self-adjusting NEGATIVE feedback that will automatically exactly negate the GHE & WV positive feedback effects while adjusting to changes in energy-in caused by solar, albedo, volcanoes etc. The GCM says that the warming creates a permanent (or at least multi-year, until the CO2 disappears) energy imbalance at the TOA that will “permanently” add (solar) energy to the Earth. SO why wouldn’t adding this solar energy just eliminate the GHE-caused energy imbalance at the TOA? This is the reality every morning as we warm up. Besides, if the energy imbalance is permanent with higher CO2 (ie global warming), then HOW do we get to equilibrium conditions twice a day? (Visualize the annual average equilibrium temp as being 288 K / 15 C. At night we cool below this, since the earth is still hotter than the energy-in, which is zero at night (CO2 absorption operates 24 hrs/day).)
Then during the morning/day the solar energy-in is more than the energy-out, so we warm up to try to keep up with the steadily increasing equilibrium, BUT we HAD to pass through the equilibrium point; BUT the GCM says that due to GHE warming we are always warmer (or cooler @ TOA) & hence have a multi-year energy imbalance (ie the GCM conditions are contradicted by reality). During the daily cycles, whenever there is an imbalance (be it caused by the GHE or the sun), the driving force to return to equilibrium is the amount of energy transfer calculated by the SBL. The SBL canNOT differentiate between GHE warming & solar warming, just the temperature, & Eli’s transfer mechanism/collisions guarantees that all the air adopts the same temperature, subject to weather variations. The return to equilibrium with energy-in is a fact observed twice a day. It is very fast: we can go from the daily high temp past equilibrium to the daily low temp in a few, or 12, hours at most, but we are limited to the amount of energy transported out as calculated by the SBL. (So, James #46, an object in space will NOT instantaneously cool; it will radiate the SBL-calculated energy over time (SBL units are Joules/(sec-m^2)) & keep cooling until it is at the equilibrium temp, because IF it keeps radiating past equilibrium with energy-in, then the SBL says it will radiate less than what is coming in & it will warm back up to equilibrium. The SBL strength and direction varies depending on the temperature differential from equilibrium.)
The SBL feedback is a self-regulating equilibrium enforcer that happens to operate much, much faster than the CO2 warming does (CO2 is a few degrees over a century). It forces equilibrium with energy-in twice a day regardless of whether we are warmer or cooler. I think the GCMs ignore/forgot the radiation part of the SBL feedback. Back in “Tom Fiddaman – 3 Nov 2006 @ 1:36 pm”, Tom identified that convection and conduction are small compared to radiation as a transport mechanism (but, James (in #46), conduction in the air exists – think thunder & lightning! – & all three combine to get energy to the TOA where it is radiated to space), AND Gavin agreed that there is a convection feedback due to the higher GHE-caused temperature (ie hotter air rises faster, & it is included in the GCMs as a lapse rate change within the water vapor feedback – see 12:57 pm). SO FAR I have not seen anyone address the radiation part of the SBL feedback: hotter objects radiate faster??? BUT it has to work, since that is what causes the loss of energy at night. AND it returns to the equilibrium with energy-in (which is constantly moving), not to several degrees above (ground) or below (TOA) per the GCM. Also, if hotter air @ ground radiates more and faster, and cooler air @ TOA radiates less and slower (both up & down), then won’t they meet somewhere in between, which is the equilibrium from before the slower GHE started? Since the rate of energy transfer is dependent upon the temperature differential (imbalance), the SBL feedback will self-adjust to even out the balance of both solar and GHE over a day, and as I visualize it, it will adjust for the GHE as soon as the first little delta-T is created. ie Arrhenius’s GHE exists but is neutralized by the SBL feedback virtually instantaneously. Gavin said that if we have equilibrium then we (effectively) do not have a GHE, & I agree; BUT I now think we still have a GHE (Arrhenius was right), it is just that the SBL feedback cancels it within seconds.
Which leads to Julian’s (#39) point: no warming, but we can keep adding CO2, at least until the CO2 ruins the ocean pH or the burning of hydrocarbons depletes the O2 in the air so we can’t breathe (i.e., CH4 (the smallest hydrocarbon) + 2O2 (from air) = CO2 (to air) + 2H2O), which actually yields a trivially lower air density and trivial global cooling from the added CO2 per the ideal gas law, which applies when there is no net energy transport into or out of the system. (Sorry, Gavin, I can’t resist quoting your reasons/responses back to you; I apologize.)
OK, I hope this explains what I’m trying to say. More comments/questions, please.
Thank you for the general physical process you shared. As you indicated, the photon release is generally non-directional and will be emitted over the full 360 degrees, though the input is primarily the reflected or radiated ground radiation/upwelling at between 15 and 20 um. I don’t know about your 6% actively emitting energy, though. I suspect any temperature above the solid-state temperature of CO2 would suggest nearly all of the molecules would be active emitters. To some degree, even after the solid form of CO2 is achieved, some percentage on the boundaries would be significantly active as well.
I guess my major concern with radiative measurement is that there has been significant work recently on ECHAM5 to account for the short-wave physics, and it seems we really do not have a good understanding of the radiative physics yet. That does not mean that what you describe is not valid; however, it would appear to occur more often at surface altitudes than higher in the air/grid column. This is part of where the adiabatic processes and radiant-energy physics, or convective along with radiant processes, interact and confuse the mechanics.
I have been spending time on the UKweatherworld site, and in discussions with a PeterH there I have come to see that the convective processes actually have greater influence on heat transport than radiant transport does. If this is accurate, I suspect that the radiant influence of CO2 is much smaller than I have seen described in most studies, and it may require a review of the model constructs, as the radiative interaction may be overrated. Hence my desire to determine a direct baseline value of real-world, night-time, clear-sky radiant energy that can be associated with GHGs.
(Note: As I related above, though professional language is more precise and descriptive, it reminds me of taking newbies out in my sailboat. If I have to jibe or tack, I am supposed to call “Jibe ho” or “Helm’s a-lee” to warn everyone that the boom is about to swing across the boat. The first time I took a newbie out, he got bumped in the head and said, “Why didn’t you yell ‘Duck!’? That I understand.”)
Instead of the statement that 6% of the CO2 actively emits energy, would it be more accurate to say that 6% actively emit energy in the spectral range that can be absorbed by CO2 in the next absorption?
That way, ALL the molecules in the air-soup packet will actively emit energy, just at different wavelengths (or perhaps via more collisions?); 94% of that energy will go to space directly or be absorbed by water vapor etc., and only 6% of it will see the next CO2 molecule?
I think I asked that right. or I may not fully understand Eli’s physics.
Re #54: This seems so wrong that I’m not sure where to start. Let me begin by correcting a couple of misunderstandings. First, what I said about an object instantaneously cooling was meant as a sarcastic (see the smiley?) response to your statement that “…radiation of energy is as fast as the speed of light.” I really don’t understand what you’re trying to say here. In radiative cooling, the photons of course _leave_ at the speed of light, but that says nothing at all about the _rate_ of cooling, which depends on the number of photons leaving and their energies.
Second, when you say “…conduction in the air exists- think thunder & lightning…”, I think you’re being confused by language. Neither of those involves heat conduction. While air (and gases in general) does conduct some heat, the amount is very, very small. That’s why most insulating materials are just ways to hold air in pockets small enough that it doesn’t form convective cells.
I think your major problem is in thinking that the Stefan-Boltzmann law applies to the Earth. It applies to black bodies, and the Earth most definitely is not a black body; if it were, there’d be no greenhouse effect of any sort. Beyond that, about all I can do is repeat the simple first-order explanation: the atmosphere is transparent to visible light but partly opaque to infrared. Energy comes in as visible light and warms the ground, which re-radiates in the infrared. But the infrared can’t get out easily, so that raises the temperature. Eventually it increases enough that a new equilibrium is formed.
It’s not that different from putting on a sweater when you’re cold: the insulation slows the rate at which heat leaves your body, so you get warmer. Same with the greenhouse effect. Imagine you have two planets at the same temperature, say 280 K. A has CO2, B doesn’t. Now take away the Sun. Both planets will cool, but B will cool faster. After 12 hours, it might be at 275 K while A is at 276 K. Bring back the Sun, give both planets the same energy input (say enough to raise their temperature 5 K), and what happens? B is back at 280 K, but A is now at 281 K.
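For what it’s worth, the two-planet thought experiment above can be sketched numerically. This is a toy radiative-cooling calculation with made-up numbers (the heat capacity and the emissivities are assumed for illustration; treating the greenhouse effect as a reduced effective emissivity to space is a crude stand-in, not how a GCM works):

```python
# Toy sketch of the two-planet cooling example (assumed, illustrative numbers).
# Each planet radiates eps * sigma * T^4; planet A's greenhouse gases are
# modelled crudely as a lower effective emissivity to space.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
HEAT_CAP = 1.0e7   # assumed heat capacity per unit area, J m^-2 K^-1

def cool(T0, eps, hours, dt=60.0):
    """Radiative cooling with no sunlight, simple Euler time steps."""
    T = T0
    for _ in range(int(hours * 3600 / dt)):
        T -= eps * SIGMA * T**4 / HEAT_CAP * dt
    return T

T_b = cool(280.0, eps=1.0, hours=12)   # planet B: no greenhouse gases
T_a = cool(280.0, eps=0.8, hours=12)   # planet A: reduced emission to space
print(f"B after 12 h: {T_b:.2f} K, A after 12 h: {T_a:.2f} K")
```

The planet that radiates less efficiently to space ends the night warmer, which is the qualitative point of the sweater analogy.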
See? Global warming made simple :-) Beyond that, I think I have to let the professionals try to explain, if they like.
First statement goes to Hank: good sir, if you want to see the data that I have been referencing for the past 7 years, feel free to go to ARM.gov and perform the search as I suggested. Google is likely not going to be of any assistance in trying to collect information that is published specifically on a site. Sorry, but if you want the data you have to go to the well; there is no way you are going to prime the pump from the faucet.
Secondly, Mr. Dodds, a great treatise; however, a photon is not aimed at any specific molecule. If a molecule releases the energy, it will do so as a point source, and as a wave front it will be detectable provided there is not a shield or reflector between the detector and the emitter. Put another way, energy emissions generated in 3D from a point source are going to be detectable unless the density of the material between the source and the detector absorbs the total emission and subsequent emissions are directed away from the detector. It appears you are confusing radiant photon transmission (at the speed of light) and convective transmission (at roughly the speed of conduction) with the transport of the higher thermal content away from the detector.
(Keep in mind that the expansion of fluid, or the differential expansion of clad dissimilar metals, in most thermometers is based on the interception of part of the localized energy that is radiated. In short, temperature is the food coloring and the tank or bowl of fluid is the medium. Your detection of the temperature is going to rely on detecting a difference in the physical properties of at least two materials, and the accuracy of the measurements is going to depend on how those materials react to the same stimulus frequency and energy band of the sampled physical phenomenon.)
>48, 51, Dave Cooke wrote: “I have a number of links that will clearly demonstrate that any accuracy resolution greater than 5 W/m^2 in a clear sky night time measurement is not currently available regardless of your 1st link’s suggestion of 2 watts….”
“Clear-Sky Longwave Radiation: Clear-sky longwave radiative transfer appears to be largely a solved problem…. the uncertainty in the calculated longwave flux at the surface is better than 2 W m-2 for the range of measured precipitable water vapor values (Turner et al. 2003b). Because there are no major spectral errors in the flux calculation, we expect that model calculations will yield accurate cooling rate profiles in the spectral interval from 4 to 20 µm.”
—— end quote—–
I can’t find what you say you found — at the sites and using the search terms you say should support your info. What I find contradicts what you say you found somewhere in the same set of documents.
Can you find your source again, and point specifically to it? Knowing what you read, its date, and its cites would help.
Look, all the word salad comments on radiative physics “not being well understood” are just nonsense, not to put too fine a point on it. Take this physical example: the sun has set, and it is a cloudless night. What happens next? The earth’s surface was heated during the day, and is now emitting infrared radiation as it cools off.
The infrared radiation interacts with the matter (gas molecules) in the atmospheric column. The optical absorption cross-section varies between molecules; this means, for example, that O2 and N2 do not interact with infrared radiation, but CO2, CH4 and N2O all do. See the absorption spectra here (infrared to the right). As a result of that interaction, those ‘greenhouse gas’ molecules are excited to higher energy levels, and as they fall back down to the ground state they emit infrared radiation in all directions. This has all been very well understood for quite some time!
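The attenuation side of this picture can be sketched with the Beer-Lambert law, which says the directly transmitted fraction of a beam falls off exponentially with optical depth. The optical depths below are invented purely for illustration, not measured values for any real band:

```python
import math

def transmitted(tau):
    """Beer-Lambert: fraction of a beam surviving optical depth tau."""
    return math.exp(-tau)

# made-up optical depths, purely to illustrate the shape of the law
for gas, tau in [("non-absorber (N2/O2 in the IR)", 0.0),
                 ("weak absorption band", 0.5),
                 ("strong absorption band", 3.0)]:
    print(f"{gas}: {transmitted(tau):.1%} transmitted directly")
```

A gas with no IR bands (tau = 0) lets everything through; a strongly absorbing band passes almost nothing directly, with the absorbed energy re-emitted in all directions as described above.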
So, the ground, which would otherwise have cooled more rapidly at night, is receiving back-radiation from the sky, and when morning comes it is warmer than it otherwise would have been. Now the sun comes out… and if you have solar panels on your house, the solar radiation excites the energy levels in the N,P-doped silicon structure (or in some other material), and due to the electronic structure of the semiconductor system, excited electrons can only return to the ground state by flowing through an electrical circuit, providing you with a useful power source. Algae in the oceans and plants on land do something very similar with their highly complex photosynthetic apparatus: they store the energy primarily as ATP and NADPH, and get their electrons by tearing apart water molecules with the aid of sunlight.
In any case, energy that would have been released to space is instead stored up in the land, oceans, ice sheets, and atmosphere. The oceans and atmosphere are convective fluid systems, so the heat is transported here and there, and evaporation rates increase, which puts more H2O in the atmosphere, and the radiation-trapping effect grows stronger. To quantify all that and make future predictions, various real-time observations are made using satellites, etc., and very complex mathematical models are developed for the oceans, ice sheets, and atmosphere.
Now, what climate models don’t do is attempt to make numerical projections of future CO2 responses; rather, they work off given scenarios for CO2 emissions. The CO2 emission rate depends on things like human behavior as well as biosphere responses, but it has been steadily increasing over the past few decades, both the rate and the amount. This factor, along with N2O and CH4 emissions, seems to represent the greatest uncertainty in future predictions, with the IPCC projections being on the conservative side. It’s also a factor that humans can control by switching from fossil fuels to renewable energy sources.
Some people complain that “the models are constantly being tweaked, so how can you rely on them?” but that’s how weather models work – the weather stations constantly send data out, which is incorporated into the models for the latest round of predictions. Old ice sheet models are similarly being replaced by more accurate modern versions which incorporate new observations about dynamic ice sheet responses, as I understand it.
Endless technical jargon often serves no other purpose than to confuse an issue and exclude ‘outsiders’ from the discussion – which seems to be the intent of some of the above posts.
(This is the reason that down-welling measurements could be questioned: unknown aerosols can offset the predicted values. With night-time expected values as much as 30 W/m^2 lower than projected, for reasons unknown, questions arise about the current equipment’s ability to make these measurements. These two issues appear to complicate the ability to confirm clear-sky night-time radiant measurements.)
These were in the long list, with only a few links requiring you to register; hopefully these will help. Pay particular attention to the date stamps, as the more recent ones indicate that the issues remain, along with the planned means of dealing with them. Hopefully I have not misrepresented anything. Good luck!
The point is that your night-time radiation of GHGs does not appear to be detectable by the current radiative detector systems. If you have a reliable source that demonstrates this over time, on a daily basis, and can document the clear-sky nightly radiative down-welling curves, it would be very welcome. (Please keep in mind that we need a minimum of hourly discrete full-spectrum measurements, with a detector that is stable within 3 dB across the desired bandwidth and a range of 175 K to 350 K, over the local 12-hour period on the night side of the planet.)
What’s your basis for that? Who says “we need a minimum of” — again, I can’t find your source on the site you point us to. Is that your requirement, or one from the research?
“Clear-sky longwave radiative transfer appears to be largely a solved problem…. the uncertainty … is better than 2 W m-2 … (Turner et al. 2003b)…. we expect that model calculations will yield accurate cooling rate profiles in the spectral interval from 4 to 20 µm.”
Why do you say that’s an impossible level of accuracy to achieve? You still aren’t giving specific sources, just pointing to large papers about many instruments, which of course take work to put together into a monitoring system.
Let’s start with your first question. A description of the curve of radiant decay is a graphic that is easy to overlay for comparative purposes. To achieve high resolution and meet standard-deviation or statistical-confidence levels, the desire would be to collect 30 samples nightly.
Collecting 1 sample per hour meets the requirement for describing the mean statistically, and because there are solar influences on the upper atmosphere while the ground is in darkness, the 10 hours of samples in darkness best meet the minimum requirement, as long as you are measuring winter clear-sky night-time values near the Vernal or Autumnal Equinox. So yes, a sample rate of 1 per hour is my minimum requirement for a minimum level of statistical confidence in describing the mean of a known environment/population. (Note: if you read the papers I have provided, you will find that ARM makes measurements every 15 minutes, while the AERI spectra are sampled every 30 minutes. Both meet the minimum required sample rate; the ARM rate meets the minimum desired.)
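The sample-count reasoning here follows the usual 1/sqrt(n) behavior of the standard error of the mean. A quick sketch, using an assumed 5 W/m^2 per-sample noise figure (the number from the earlier quote, used here only as an illustration):

```python
import math

def sem(sigma, n):
    """Standard error of the mean for n independent samples of noise sigma."""
    return sigma / math.sqrt(n)

per_sample = 5.0  # assumed per-sample instrument noise, W/m^2
# hourly over 10 h of darkness, the rule-of-thumb 30, and 15-minute sampling
for n in (10, 30, 40):
    print(f"n={n:2d}: standard error of the mean ~ {sem(per_sample, n):.2f} W/m^2")
```

This is only the random-error side of the story; a systematic instrument bias would not shrink with more samples, which is why the calibration papers being discussed matter.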
As to your second question, you have to read these papers to see the discussion of the error and the calculated-versus-measured deviations. In virtually every reference I have annotated as such, there is a discussion of data-quality questions that supports my conclusion. The specific data contained within these papers will point to the limitations of the detectors with regard to frequency range, absolute temperature range, and measurement conditions.
Do you have sources that deny the issues raised in these papers, where I have noted specifics? Do you have sources that have sufficient accuracy and sample rate required to define the full spectrum downwelling energy for a clear sky night time measure?
(Your reliance on a 2003 paper, when there are clearly more recent papers that question the “monitoring systems”, causes me concern. Your motives appear suspect; do you wish to clarify your point?)
Funny, you appear to purposefully ignore data that is contrary to your position or has been superseded. On that basis I guess there is no reason for further conversation. Good luck to you in your endeavors.
Comment by L. David Cooke — 18 Jan 2007 @ 10:11 PM
I don’t have a ‘position’ and don’t understand what you’re arguing with — I followed your pointers, read the 2004 summary that said “Clear-sky longwave radiative transfer appears to be largely a solved problem…. the uncertainty in the calculated longwave flux at the surface is better than 2 W m-2” — that’s not the new stuff, that’s the foundation work.
Do you disagree with the report there? That’s the first part of the program, done on a few sites with the first set of instruments. That was the beginning work.
They got that — they say — down to a level of accuracy you don’t believe. I can’t say why they claim one thing and you claim another — but I am pointing out that you’re talking about their more recent work, with different instruments, in more locations — and that of course they will be reporting less accuracy for a while in new locations with new instruments.
I don’t see an argument here. Look, they reported getting that high accuracy level and cited a 2003 paper. That’s not the same instrument that you’re looking at for 2005 or 2006 — it’s the first stage work. They got to that level — then said their next step is rolling out a greater variety of instruments in mobile labs and going to many more sites.
So — with new equipment in new locations — they’re reporting more variability. That’s what you’d expect from the new mobile labs and new sites — what’s to argue about there? New instruments and new locations are going to give more variable data. Publishing the problems is how progress is made.
Did you read the reports tracing the Space Shuttle launch plume a few weeks ago? (At Head in a Cloud, here: http://atoc.colorado.edu/headinacloud/ , where he talks about “PUMA, a field campaign for making measurements of the chemistry/microphysics of the space shuttle plume from aboard the NASA WB-57”.)
That’s not final publication, that’s a scientist blogging his daily work — including the uncertainties about the data collection flight. This is _exciting_ for an amateur to watch. It’s not something that proves the science is wrong, eh?
Don’t you think these folks are doing something wonderful, and right out in public? We can cheer them on because the facts that emerge are what’s real. We’re the audience, we need to be an inviting one if they’re going to talk to us. Same for the folks whose work you’re pointing us to — if they’re going to want to discuss their work, we need to understand it enough to talk about it. So — one accuracy level in 2003 at the first stage; and later in 2005-6, a variety of different accuracy reports from the second stage. It’s not that either is wrong, it’s a question of what were they doing, where, with what tools. So let’s charm them into talking to us. Why not?
And that’s really far too much from me. It’s not about me. Back to listening and reading.
So, for those who want to enter the highly technical world of atmospheric radiation calculations, the best place I’ve found to start is with Spencer Weart’s excellent website based on his book, “The Discovery of Global Warming” – and having read a lot of introductory books on global warming, that one seems to be the best – if you want to introduce someone to the topic, buy them that book! The supporting website has this essay on Basic Radiation Calculations in the Atmosphere.
As far as measuring the downwelling radiative flux (RE #64)… you’d have to measure it over the whole planet, wouldn’t you? Recall that the reason Charles Keeling set up his CO2 measurement station in the middle of the Pacific was to avoid local fluctuations in CO2 concentration. Then you’d have to worry about separating out the CO2 signal from the water vapor signal and from other signals (CH4 – is there a gas flare nearby, or a swamp? An old leaky fridge putting out CFCs? N2O generation? Aerosol content?). Of course, you can tell that it’s colder at night in the desert than by the coast, because there’s little water vapor in desert regions, so the surface cools off much faster. What if a cloud passes overhead? It suddenly gets warmer… but what about the atmospheric column above the cloud? Even if you got simultaneous data from all over the planet at high resolution, how would you know what was due to CO2? You’re not in a lab; you’re out in a field, on the ocean, on a glacier, and so on.
So, what do you do? The first thing is to take the temperature of the atmosphere and monitor it over time. That’s what the radiosonde balloon network attempted to do, followed by the microwave sounding units on satellites, which at first appeared to show that the atmosphere was not warming at the rate predicted by the climate models – the radiative-convective models of the atmosphere, terribly complicated beasties that they are. That whole topic has been put to rest, and the details of the issue are explained with the usual clarity by RealClimate: Et Tu, LT?
Given a clear theoretical reason why the atmospheric temperature would increase, plus an actual measurement of that temperature increase, it seems rather obvious that the conclusion is solid. You can’t rule out invisible aliens from a far-away galaxy heating the atmosphere with giant ray guns as part of an elaborate practical joke… nor can you rule out deities placing fossils in the earth to test the faith of true believers – but come on, now.
As with many other aspects of climate science, the observations of atmospheric temperature are now matching the modelled predictions of atmospheric warming. The water vapor feedback effect is also matching the modelled predictions of an increase in atmospheric water vapor.
Those who feel the need for more information on why radiative models are so complicated could start with Richard Feynman’s short text, QED, on the physics of the interaction of light and matter. Happily, someone recorded those lectures on video at http://www.vega.org.uk/video/subseries/8 , so you can read along and watch at the same time. Then go back and read the above “basic radiation calculations” link. Repeat this process several times… and that’s just the front end of the climate models. The fact that they are reproducing observations is good evidence of their success; these are the most complex computer models ever created, as far as I know.
I’m a physicist, and new to this, so pls bear w/ me. I understand that due to computational considerations, it isn’t possible to generate error statistics for climate predictions in the “usual” manner (e.g. Monte Carlo simulations or what have you). Can anyone point me to some references that deal w/ these questions: how ARE error stats derived and how are they rigorously shown to be equivalent to the “usual suspects”? Thanks.
Jeff, I’m not sure what the ‘usual manner’ is for computing errors in models, though there are many… not being a modeller, but someone who has used models (QM-classical mixed models and protein structure prediction models), it seems that the issue is indeed addressed in some detail in climate modelling studies; hope these two references help:
1. Re conduction: Energy transport is by three mechanisms: convection (movement of hotter atoms), conduction (movement of hotter electrons), and radiation (movement of photons). Heat is a form of energy. In the air, both conduction and convection are relatively SLOW heat/energy movement processes, but they do transport energy from the ground to space. Radiation, since it consists of a photon at the speed of light, followed by absorption, then another photon, another absorption, etc., is very fast in going through the atmosphere to space; according to Eli Rabett, a few microseconds for each absorption, so we are talking less than seconds for radiation to transfer energy from ground to space. So most energy transport is by radiation. Conduction exists in the air because electrons get carried from the ground to the tops of clouds and are also created in the air, which is why we have charged clouds that result in lightning; but we can ignore it since it is small.
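One caveat worth checking on the “less than seconds” claim: in an optically thick column, a photon’s energy is absorbed and re-emitted in a random direction many times before escaping. A toy random-walk sketch (the optical depths are assumed, dimensionless illustration values) suggests the number of hops grows roughly as the square of the column’s optical thickness, so “speed of light per hop” does not by itself make the overall transfer fast:

```python
import random

random.seed(2)

def steps_to_escape(depth):
    """Count absorption/re-emission hops before a photon leaves the column."""
    pos, steps = 0, 0
    while pos < depth:
        pos += random.choice((-1, 1))  # each hop: one mean free path up or down
        if pos < 0:
            pos = 0                    # re-emitted upward from the ground
        steps += 1
    return steps

averages = {}
for depth in (5, 20):
    trials = [steps_to_escape(depth) for _ in range(500)]
    averages[depth] = sum(trials) / len(trials)
    print(f"optical depth {depth}: ~{averages[depth]:.0f} hops on average")
```

Quadrupling the optical thickness multiplies the average hop count far more than fourfold in this sketch, which is why the effective speed of radiative transfer through an absorbing atmosphere is much slower than the speed of any individual photon.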
2. Re Stefan-Boltzmann: Yes, the SBL applies to black bodies, where the emissivity constant e = 1, but for non-black bodies the constant just gets adjusted; for the Earth it is 0.95, per a lecture on global warming from Columbia University. So the principle still applies: the energy flux (watts/m^2) IS calculated by the SBL. If you know the peak temperature of the radiating energy spectrum, then you can calculate the outgoing energy flux; if you know the flux, you can calculate the temperature. Hotter bodies radiate more; look at the Sun.
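As a sketch of the gray-body arithmetic being invoked here, using the 0.95 figure from the comment (note the later reply in this thread objects that this single-constant shortcut only holds when emissivity does not vary with frequency):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T, eps=0.95):
    """Gray-body flux, F = eps * sigma * T^4 (eps = 0.95 as quoted above)."""
    return eps * SIGMA * T**4

def temperature(F, eps=0.95):
    """Invert the gray-body relation to recover T from the flux."""
    return (F / (eps * SIGMA)) ** 0.25

F = flux(288.0)  # roughly Earth's mean surface temperature
print(f"flux at 288 K: {F:.1f} W/m^2")
print(f"temperature recovered from that flux: {temperature(F):.1f} K")
```

The two functions are exact inverses of each other, which is the flux-to-temperature bookkeeping the comment describes; whether a single eps is physically adequate is the separate question taken up below.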
When the GHE for added CO2 raises the ground level temperature then the ground/air will radiate MORE back up than what was being radiated pre GHG addition. IF the GHE lowers the TOA temperature, then the Earth at the TOA will radiate LESS energy both up to space and down. If we started pre-CO2, at equilibrium conditions, then there will be a negative energy imbalance at the TOA meaning that the Earth system will absorb more solar energy than it radiates in addition to the GHG part that is recycled. We will warm up.
3. Next, your simple GW model: it is fine up to the last sentence, “Eventually it increases enough that a new equilibrium is formed.” Equilibrium for the Earth is defined as energy-in equals energy-out; in the case of our GW computer models, energy-in is ONLY what comes from the Sun. If you do not change the energy-in, then changing the energy-out by CO2 absorption can NOT create a NEW equilibrium. What has been created in the computer models is a non-equilibrium condition where the CO2 absorptions delay or slow down the rate of transmission of energy to space. The longer transit time results in higher ground temperatures (global warming), and lower TOA temperatures to conserve energy. See Hansen et al. 2005 for these results.
4. In your planet A and B example, the fallacy is in the “B will cool faster” statement. IF the temperature is the same, then the planets will transport energy out at the same rate, because the only thing that affects the transport rate is the temperature, per the SBL. A conceptual analogy: if 100 gpm of water is pushed through a 10-ft pipe, and you add a recycle loop that takes 1 gpm from near the outlet and runs it back to near the inlet, the water will still flow in and out at 100 gpm, because that is what you are adding at the front end; only the friction resistance has changed, and it takes more power to push the water through. It WILL take a few extra seconds for the water to transit the extra recycle loop, but AT EQUILIBRIUM the 100 gpm will flow through because we did not change the input. (This analogy will fail if you try to model the real environment.)
5. My QUESTION for the experts is can the atmosphere stay in this non-equilibrium situation in 2/3 above for 20-30 years per the GCMs, or 20,000 if the CO2 keeps increasing as it has since the CO2 low at the bottom of the ice age?
6. The GCMs say YES. Even with the TOA temperature being lower than equilibrium, they say that the excess energy that accumulates goes into the ocean, and the TOA temperature remains unchanged below equilibrium for years (I assume until the CO2 is removed to the ocean?). So why don’t the daily changes in solar energy flow just eliminate the imbalance and return to equilibrium?
7. The SBL feedback theory says NO: equilibrium is restored every day. It postulates that because the very real GHE raises the ground temperature, the raised temperature results in more pump power, or SBL feedback, to push the energy back out, with MORE energy being radiated (and convected and conducted) back up to the TOA where it came from. This larger pumping will push up enough energy to return to equilibrium and accommodate the extra energy that is returned to the ground by the added CO2. The question is, will this return the air to equilibrium? In parallel, the SBL feedback is the same mechanism that returns the Earth to its solar equilibrium on a daily basis, i.e., hotter air rises and radiates faster as the temperature warms every morning. The SBL can NOT differentiate between GHG-caused and solar-caused temperature increases. It is postulated that the SBL effects that return the system to equilibrium are not properly included in the GCMs. The conclusion is that GHG warming is negated by the SBL feedback: CO2 can increase without changing the temperature. (See 112, 118, 126, 148 in the section on Al Gore’s movie distribution to schools.)
John, consider why the heat at the core of the Sun doesn’t rush straight out into space:
“… total Solar energy is determined by the temperature of the Sun’s visible surface, or photosphere, which is about 6000 deg K. This in turn is determined by the Sun’s core temperature of about 15,000,000 deg K, which arises from a balance of inward pressure from gravity and outward pressure from the inner nuclear reactions, and the radiative and convective transports of energy from the core to the photosphere. “
Thank you. Now I’m going to go one step further and continue to display my ignorance. By the “usual” methods I mean something along the following lines: in a particle physics experiment one accumulates a large # of events and ultimately derives error bars for whatever you’re trying to measure. The more events you have, the better (generally speaking) the result (the smaller the error bars). I gather that it’s not feasible to do this w/ climate models, so how does one assign error bars to, say, sea level rise? (10m +- whatever). Anyway, I’ll check these refs, and any other info is greatly appreciated. Thanks again.
No, the SBL does not apply, with an adjusted constant, to frequency-dependent emissivity. If you have a gray body, with non-frequency-dependent emissivity, that constant emissivity can be multiplied by sigma*T^4 to give the SBL. But it just doesn’t work if the emissivity is not constant as a function of frequency.
Don’t believe me? Just go straight to the Planck radiation law and try to integrate it over frequency. If there is a non-constant emissivity factor, you can’t do the integral, so you can’t get to SBL. End of story.
The fact that hotter bodies give off more radiation doesn’t mean they satisfy the SBL, which is a specific dependence.
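This point can be checked numerically by integrating the Planck law over frequency. The banded emissivity below is a made-up toy (a single assumed absorption edge at 3e13 Hz), purely to show that a constant emissivity factors out cleanly while a frequency-dependent one gives an effective "gray" constant that drifts with temperature:

```python
import math

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8  # Planck, Boltzmann, light speed (SI)
SIGMA = 5.670e-8                           # Stefan-Boltzmann constant

def flux(T, eps):
    """Integrate pi * eps(nu) * B_nu(T) over frequency (simple Riemann sum)."""
    n, nu_max = 20000, 5.0e14
    dnu = nu_max / n
    total = 0.0
    for i in range(1, n + 1):
        nu = i * dnu
        b = 2 * H * nu**3 / C**2 / (math.exp(H * nu / (KB * T)) - 1.0)
        total += eps(nu) * b * dnu
    return math.pi * total

gray = lambda nu: 0.5                             # constant emissivity
banded = lambda nu: 0.9 if nu < 3.0e13 else 0.1   # assumed toy band structure

results = {}
for T in (250.0, 300.0):
    results[T] = (flux(T, gray) / (SIGMA * T**4),
                  flux(T, banded) / (SIGMA * T**4))
    print(f"T={T} K: effective gray eps={results[T][0]:.3f}, "
          f"effective banded eps={results[T][1]:.3f}")
```

The gray case recovers its constant (0.5) at every temperature, confirming eps * sigma * T^4; the banded case yields a different effective constant at 250 K than at 300 K, because the Planck spectrum shifts relative to the absorption band. That is exactly why no single adjusted constant makes the SBL exact for a frequency-dependent emitter.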
That gets to the heart of the problem, because in climate science there is only one experimental event – the ongoing evolution of the land-sea-ice-atmosphere system, with no controls or ability to repeat the experiment. Thus, all experiments must be done ‘in silico’ – imagine if in your particle physics experiments, you just had one chance to collect data from one single event – you’d want to collect as much data as possible, of course.
The chief problem with climate models, as with other models and experiments, is distinguishing between systematic errors and random errors (say you had a tiny piece of metal in your cloud chamber; that’d create the systematic skewing of results, even though the random variance might be very low after hundreds of observed events). That 2nd reference seems to use multiple independent models to test for systematic errors in the climate models; random variance is apparently tested by running the same model over and over.
This is why we should have far better data collection systems for monitoring the Earth system – more satellites and more ocean sensors (bottom moored ones deployed on a global basis to measure subsurface ocean temps and currents) and more in-situ measurements of ice sheet dynamics in the Antarctic and Greenland, as well as measurements of the permafrost behavior- it’d be a better use of resources than another ‘race to the Moon’. The only real way to test the models, after all, is to compare their predictions with actual data on an ongoing basis.
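The random-versus-systematic distinction a couple of paragraphs up can be sketched with a toy stand-in for a model (all numbers here are assumed, and the "model" is just a noisy biased function, not a real GCM): an ensemble of perturbed runs measures the random spread, but a built-in structural bias survives averaging no matter how many runs you do.

```python
import random
import statistics

random.seed(1)
TRUTH, BIAS = 3.0, 0.4  # assumed "true" response and structural model bias

def toy_model(perturbation):
    """One run: biased response plus internal variability (all made up)."""
    return TRUTH + BIAS + 0.5 * perturbation + random.gauss(0.0, 0.2)

# ensemble of runs with perturbed initial conditions
runs = [toy_model(random.gauss(0.0, 1.0)) for _ in range(400)]
mean = statistics.mean(runs)
spread = statistics.stdev(runs)
print(f"ensemble: {mean:.2f} +/- {spread:.2f}  (truth is {TRUTH})")
# averaging narrows the random part, but the mean still misses by ~BIAS
```

Comparing structurally independent models, as the second reference apparently does, is the only way to expose the BIAS term, since no amount of re-running the same model will reveal it.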
Re 74 Hank,
Your quote about the Sun contains the key point: there is a BALANCE of inward pressure from gravity and outward pressure from the nuclear reactions….
This is the key for CO2-caused global warming also. Nature/physics requires a balance: energy-in equals energy-out. The equilibrium exists (UNLESS you are adding or subtracting energy, and adding CO2 does NOT add energy to the Earth system).
The SBL forces a balance in the atmosphere and at all points within the atmosphere. IF it gets warmer the SBL radiates more energy out to return to the balance; if it gets cooler the SBL radiates less out to return to the balance. The SBL is mother nature’s equilibrium enforcer.
The Earth on an annual average is at this balance point. It passes thru this point twice a day as the solar energy-in increases and decreases.
The GCM computer programs require that the ground level air be warmer than the equilibrium AND the TOA air be cooler (per Hansen). THIS IS NOT a balance. Even Hansen says that the earth is at an energy imbalance that lasts for years. The GCMs are wrong. Apparently the GCMs do NOT account for the return of the CO2-absorbed energy to the TOA.
In fact the SBL (hotter air radiates more) will force the balance to return the CO2 greenhouse effect warming to the TOA and return the atmosphere to equilibrium balance. ie NO NET CO2 caused warming. You only get warming if you increase the energy-in, because the SBL will return you to a balance with the energy-in.
[Response:John, your many comments’ dogmatic statements about what is or is not in GCMs continue to amuse. I appreciate your enthusiasm, but rather than make loud declamations, why not just check it out? There are multiple GCM codes available for download (GISS, NCAR, EdGCM etc.), so try looking at them and searching through to find places where SB is used (and you’ll find it in all of them). Long-wave radiation is indeed the main ‘balancer’ against warming effects, as has been known and used in all models of the climate (even energy balance models) for decades already. No more nonsense please. -gavin]
Sorry to drag this out, but I THINK Mr Dodds has made two main points.
One is that dawn temperatures should be largely unaffected by CO2 as there should be enough time at night for any extra heat to go away. (There would still be an increase in mean temperature but this is not the whole story, at least not for Mr Dodds.) Can anyone say if this is as measured?
His other point seems to be that the lower atmosphere IS heating but this extra heat will eventually be passed into the upper atmosphere by conduction and radiation. That is, if we stopped putting any more CO2 into the atmosphere, the mean temperature would eventually fall (perhaps slightly) after some indeterminate time. There would be some overshoot in the lower atmosphere temperature for a step change in CO2 concentration. Does anyone know if the models agree with this statement?
re: #70. “… these are the most complex computer models ever created, as far as I know.”
I’m certain that AOLGCMs are not the most complex computer models ever created. How would you like to start comparing the complexity of computer models of inherently complex physical phenomena and processes that occur in complex physical geometries? Close to 100 thousand lines of code is my estimate of the entry-level computer model for such applications. Truly complex codes consist of a few million lines of code. The AOLGCM codes fall closer to the former estimate than to the latter.
Gavin, Thanks for acknowledging.
I keep asking because you know better what is in the GCM (would I ever find that there are 365.0 days in a model year?). I do not want to become an expert.
AND philosophically, IF my view is to prevail then I have to convince the experts (ie you) that the GCM is wrong. It does me no good to argue about parameter values like temperature hockey sticks, as Monckton & Junkscience and most of the other deniers do, because I can’t win that argument. I HAVE to show that the model is wrong or inconsistent or missing something, OR accept that it works. (But a ray of hope for you: you & readers have guided me thru the entire process of the model, from WV feedback to the actual absorption & energy return to the air (per Eli) process. I am NEAR THE END, in that I am now challenging whether Arrhenius (& the GCMs) included the SB process to return to equilibrium or not. My conclusion is STILL that something is wrong. You have NOT refuted my points – just ignored them.)
Why can’t you just respond to the physics itself?
I do not understand why the energy imbalance (ground warmer, TOA cooler) would not be equalized by the SB forces. After all, the SB forces MUST change and be larger because the ground temp is warmer. It seems to me that the SB forces to equalize would have to be faster than the CO2 absorptions, because they act on all the air, not just the GHGs, and they act on daily temperature changes that are much larger than GHGs – AND we still return to equilibrium. The SB forces & especially what I refer to as SB Feedback is automatic to return the system to equilibrium. The only reasons I can see to explain them is that somehow they are missing (misapplied?) in the GCMs, hence I offer that explanation. – I will stop asking if the GCMs missed them, as you requested.
BUT please explain the apparent GCM inconsistencies that I identified in #38. Where are they wrong? To paraphrase you: you are dogmatically insisting that the GCMs are correct (which I understand, since you have spent so much time on them). I identify inconsistencies and you ignore them – you do NOT refute them. At least way back when, when you explained, I could move on. As it is, I have come back to my discussion points from 15 months ago, where you said I assumed equilibrium & you said that was wrong (but not WHY). I did assume then, BUT now I see that the SB forces should reestablish it – see #38 – so WHY is it wrong?
Quite frankly, the idea that the inequilibrium/energy imbalance could last for many or 20,000 years, or as long as the CO2 is increasing, just seems impossible. After all, we go thru equilibrium every day. The amount of energy transferred by GHGs is downright trivial compared to what flows thru the earth at all times. The equilibrium concept must be dominant – the documentation for the research (published papers) even identifies it as a requirement. Hank’s sources identify equilibrium as a requirement for the sun & earth energy flows and balances. So how can the GCM not allow equilibrium for the entire run length?
OR try this analogy (I know, analogies always seem to fail eventually – but at least this demonstrates why I think we return to equilibrium!)
Think of the energy transport as a frictionless pipe carrying water. The energy/water in from the sun is 100gpm; at equilibrium the energy/water at the outlet is 100gpm. If I add a GHG recycle/bypass line that takes 2gpm of water/energy from near the outlet & recycles it to near the inlet, then I have simulated the atmosphere with the potential to double or change CO2 (ignore the actual numbers – use the concept). If I suddenly open a valve to allow water to go into the recycle line, then it immediately creates the GCM conditions (ie more water at the inlet, less water at the outlet/TOA during the NON-equilibrium response), BUT as soon as the system returns to equilibrium (as soon as the GHG recycle line fills up) I am left with equilibrium: 100gpm in, 100gpm out, BUT a total of 102gpm going down the pipe (faster SB energy flow to compensate for the GHGs slowing down the outward flow) with 2gpm flowing back up the recycle line – ie net 100gpm within the pipes total. Yes, the flow rates in the pipes increase as more GHGs are added, but the equilibrium conditions are maintained because the increased water/energy flow returns the system to equilibrium immediately.
The question: where does the water/energy accumulate over time? My view: it doesn’t. The increased downflow just speeds up & returns the system to equilibrium, where the CO2 flow is returned to where it came from at the outlet/TOA.
Philosophically, IF the equilibrium conditions canNOT prevail and be reestablished in the earth, then ANY little change will upset the balance forever. A return to equilibrium (ie no CO2) would never be possible without reversing the actual change. Such a situation does not seem to model the earth. The earth has compensating mechanisms, and we are talking about energy, which can assume many different forms. The GCM does NOT even include the energy contained in the Earth’s magnetic field or in the solid earth or the core. How do you know that they do not come into play in the balance? The Earth mag field has reduced by 10% in the last century. What happened to that energy that was used to maintain the field?
Sorry, I’m a skeptic who has to find out why. The GCM model just doesn’t feel right.
Re 80: You say “…dawn temperatures should be largely unaffected by CO2 as there should be enough time at night for any extra heat to go away”, but that’s obviously not true.
What exactly is this “extra” heat? Seems to me anything much over the 3K microwave background must be considered extra, and would radiate away given time – and indeed, the surface of the moon drops to somewhere around 100 K during the night. Or think about why the polar regions get cold during their sunless winters.
It seems that you’ve missed the important point, which is that CO2 changes the rate at which heat radiates away. It’s not all that different in principle from adding more insulation to your attic.
Re “The SB forces & especially what I refer to as SB Feedback is automatic to return the system to equilibrium. The only reasons I can see to explain them is that somehow they are missing (misapplied?) in the GCMs, hence I offer that explanation. – I will stop asking if the GCMs missed them, as you requested.”
They aren’t missing. You have a wrong idea of what the Stefan-Boltzmann feedback means and how it works, and your misconception is what’s missing from the models. If you follow your idea out logically, you seem to be saying that nothing can ever change its temperature, because the SB feedback would immediately put it back in equilibrium. That’s not what equilibrium means and that’s not how it works.
If you want to understand how the Stefan-Boltzmann law works, work out an example QUANTITATIVELY. Do the math. And as the math teachers say, show your work.
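For anyone who wants to take that advice literally, here is a back-of-the-envelope sketch. All numbers are stock textbook values (solar constant, albedo 0.3, unit emissivity); nothing here is taken from any GCM:

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2 (textbook value)
ALBEDO = 0.30         # planetary albedo

def effective_temperature():
    """Equilibrium temperature where SB emission balances absorbed sunlight."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0   # ~238 W/m2 averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

def sb_feedback(T):
    """Extra W/m2 radiated per kelvin of warming: d(sigma*T^4)/dT = 4*sigma*T^3."""
    return 4.0 * SIGMA * T ** 3

T_eq = effective_temperature()     # ~255 K, the familiar no-greenhouse value
lam = sb_feedback(T_eq)            # ~3.7 W/m2 per K of warming
forcing = 3.7                      # canonical no-feedback 2xCO2 forcing, W/m2
dT = forcing / lam                 # the NEW equilibrium is warmer by ~1 K
print(T_eq, lam, dT)
```

The numbers show what the SB “enforcer” actually does: it restores balance, but at a new, warmer temperature (here roughly +1 K per 3.7 W/m2 of sustained forcing). It cannot radiate a forcing away at the old temperature, which is the misconception above.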
Look again at the sun. You grasped the word ‘balance’ but you’re taking it from the part of the sentence referring to physical pressure from gravity balancing pressure from the nuclear reactions – yes, it says that. You don’t think gravity holds the infrared photons inside the sun, of course. The sentence talks about the issue you were focusing on up til that word grabbed your focus – conduction and radiation transfer heat. That was your big issue.
How long does it take a photon to move from the surface of earth into space? You claim above it happens at the speed of light.
How long does it take a photon to move from the center of the sun into space?
What’s different about the two conditions? Gravity, temperature, pressure, density and composition of the medium.
To the physicists — I suggest this does need to be explained and the math set out explicitly, because looking around there are far more websites proclaiming that the physics isn’t right than I’d imagined, most of them very recent. It seems like there’s a great number of people lately who are denying this research has been done or that it can be possible there’s a problem.
I _think_ the terms needed may be ‘minimum free path’ – how long a photon travels before interacting – and then how long it’s held in a CO2 molecule before it’s re-emitted in a random direction.
Random direction means, likely to hit another CO2 molecule. I doubt anyone can arrange for the atmosphere’s CO2 molecules to emit their infrared photons in one consistent direction. If so it’d be one hell of a weapon.
Before everyone dismisses the Chinese as duplicitous with regard to the publication on potential global cooling, here is a post by Lubos Motl, Harvard physicist, that cites Russian and Ukrainian scientists saying some very similar things… except they are even more extreme in their predictions.
One cannot attribute completely to politics those scientific conclusions that you don’t want to accept. One must confront the positions with other than ad hominem dismissals… dare I say, one must use scientific review of that which is published and provide specific grounds for rebuttal? Since one of the physicists is acting as a contact and has provided an email address, it would seem reasonable to make the effort to contact him for more information about the study and the actual data used.
If one bothered to actually read the Chinese publication cited in #82, one might then be able to address the issues… and realize that the publication did say that CO2 accounted for approximately 40% of the change in the analysis… just less than current “global warming correct” adherents demand.
A closed mind is a terrible thing to waste.
[Response: A dismissal of a Russian or a Chinese publication or media report is not a dismissal of their entire scientific output or their government’s policy. There is much to be commended in both countries’ scientific processes. There has been a tendency for certain commenters to grab at any straw available that seems to cast doubt on the mainstream consensus – that’s fine of course, but those straws need to be examined critically. When it comes to comments in Moscow News that are not backed up by any published work or demonstrated expertise, that dismissal is easy. The Chinese publication is simply a statistical analysis of two temperature time series with no physics at all, and so its potential to give good predictions in the future is zero (there being an infinite number of statistical models of equal or better skill that would produce any prediction one would like). Attribution of temperature changes and prediction can only be achieved through physical modelling – absent that, you might as well cast runes. -gavin]
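The “infinite number of statistical models of equal or better skill” point can be illustrated with a toy example. The data here are synthetic (an invented trend plus noise, not any real temperature series): two fits match the record about equally well, yet extrapolate to very different futures.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
# Synthetic "temperature record": a small trend plus noise (illustrative only)
temps = 0.006 * (years - 1900) + 0.1 * rng.standard_normal(years.size)

x = years - years.mean()
linear = Polynomial.fit(x, temps, 1)   # physics-flavoured simple model
wiggly = Polynomial.fit(x, temps, 9)   # statistical model with more knobs

# Both models fit the historical record about equally well...
rms_lin = np.sqrt(np.mean((linear(x) - temps) ** 2))
rms_wig = np.sqrt(np.mean((wiggly(x) - temps) ** 2))

# ...but extrapolated 50 years past the record they can disagree wildly.
future = 2050 - years.mean()
print(rms_lin, rms_wig, linear(future), wiggly(future))
```

With no physics to constrain the choice, the in-sample skill gives no reason to prefer one extrapolation over the other, which is exactly why attribution needs physical modelling.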
I was asking if anyone knew of any empirical evidence to support Mr Dodd’s statements or otherwise.
By “Extra Heat” I meant the increase in heat in the lower atmosphere caused by anthropogenic CO2.
The overshoot may be of interest as it may mean that the models are projecting slightly too high a temperature. The difference is very likely to be well within model errors, anyway, so it’s an academic point but an interesting one for someone who has been involved in modelling.
“It’s not all that different in principle from adding more insulation to your attic.” That is mostly a convection effect, and Mr Dodds does not seem to argue with the current views of convection effects in the atmosphere.
To John Dodds (#83),
First you have to note, John, that the models aren’t “right”. No model is right. The question is whether or not they have all the important ingredients in proportions that will allow an adequate representation such that we can learn about the thing being modeled. Apologies in advance, from here this posting gets murkier….
I dunno how much interest you’ll get in your analogy. I tried to figure out what you were saying, though, because your inquiry seems honest enough. Unfortunately, I’m only a biologist, but let me see if I understand what you wrote. The water represents heat. Let’s say the GHG loop represents the extra heat in the system. So, the GHG’s don’t change the amount coming from the sun, but the total amount in the Earth’s system is increased before flowing out to space (or out the end of the hose which still equals the same amount coming in). Okay, to me there doesn’t seem to be a problem there. Now take into consideration some feedbacks and your analogy might begin to take shape.
Some people have done analogies with water pouring, but let’s try something different, an experiment you could try in your home. I have no idea if this will work:
Take two equal wires (unwound coathangers perhaps?) and shut one end of each into your fridge. Now you need two cans at room temperature with thermometers in them. The other end comes out and into a soda can that has been filled with water (each can has to be the same, side by side, with the same amount of water, etc). Now, if the wires are conducting heat away from the cans, the thermometers should be reading lower than they were when they were at room temperature. Now insulate one of the cans in bubble wrap or something. Because this can is now gaining heat from the room at a lower rate, it will cool faster than the other can. It will reach equilibrium eventually, but its equilibrium temperature will be lower than that of the other can. Maybe in reverse this would be a better analogy, with the wires touching a hotplate and the insulated can getting warmer. But hopefully you get the idea. Now, what has to be done to this system to take into consideration your concerns?
I suspect that insulation is more likely related to conduction than convection. Though I agree that on the surface there appears to be the possibility that model error and the mathematical calculations might carry an accumulating error, it is still very likely that there is atmospheric warming occurring, though it is likely not as great as was attributed in the IPCC 2001 report.
(In relation to my earlier posts regarding the direct measurement of low level radiant IR: if a 2 watt (White/Blue GaAs(P or Ni)) LED or solid state laser were suspended at 1000 feet in the air I might be able to see it. (Gallium Arsenide (GaAs) emits infrared light. Gallium Arsenide Phosphide (GaAsP) emits either red or yellow light, and Gallium Phosphide (GaP) emits red or green light.) But at 10,000 feet it would be less likely, and again at 70,000 feet it is more unlikely, even in a clear night sky, as the photon density is less by a factor of 4 for twice the distance. Based on this, I suspect that if there is heating due to GHG, it must be centered in the lower several thousand feet of the atmosphere (troposphere).)
The current theory is that CO2 would be acting as a heat trap, reducing the radiation of the terrestrial IR into the upper atmosphere and into space. The heat trap would be different from an insulator in that the insulator would be preventing convection or conduction. The heat trap or absorption and re-emission of the IR by GHG is supposed to be reducing the radiation path to the upper atmosphere. As to the other heat paths, some form of temperature inversion would have to explain the convection issue, and conduction would be blocked due to the low density of the medium. So I don’t know if Mr. Dodds is addressing the terrestrial heat transfer path correctly yet.
I recognize that climate modelling using supercomputers is a different effort than statistical analysis.
Often statistical models will show strong positive correlations between data that may or may not have a logical causal relationship. Whether or not the Chinese or Russian and Ukrainian studies were more than “quick and dirty” statistical analyses or legitimate analyses of major climate influences based on “real physics” certainly should be determined.
It is important not to disregard macro analyses in favor of models containing multiple thousands of lines of code and assumptions, especially if the macro analyses include the major climate influencing factors. If historical data show that CO2 concentrations are not correlated with absolute temperatures and lag major temperature increase trends by up to 800 years (and geologically that is the case), then it is important to be able to say why that is not the case this time. Just because a short-timeline model fits now doesn’t excuse the model if it doesn’t backfit geologically.
Macro models can be useful for a reality check… even if they don’t have 100,000s lines of code.
[Response: The reason why we know CO2 is not following temperature this time is because we know that we’re putting more into the atmosphere than is staying and that the excesses are going into the oceans (mainly). Statistics from the ice ages (which show a strong correlation, not none!) have nothing to say on the matter (but just as an exercise, calculate what the predicted increase in CO2 would be over a multi-century timescale assuming such an ice-age-based statistical model was valid – then come back and try and make your point again). More to the point, where are these sophisticated Russian statistical models of which you speak? Find a paper and I’ll analyse it. Until then this is all smoke and mirrors. – gavin]
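The exercise Gavin suggests takes one line of arithmetic. The numbers below are rounded textbook values I am assuming for illustration (roughly 90 ppm of CO2 change per ~5 K glacial-interglacial swing, ~0.7 K of 20th-century warming, and a rise from ~280 to ~380 ppm), not figures from any comment above:

```python
# Ice-age "statistical model": CO2 tracks temperature at roughly
# 90 ppm per ~5 K of glacial-interglacial change.
PPM_PER_K = 90.0 / 5.0        # ~18 ppm per kelvin (rounded textbook values)

warming_20th_century = 0.7    # K, approximate observed warming
predicted_rise = PPM_PER_K * warming_20th_century   # what the model predicts
observed_rise = 380.0 - 280.0                       # preindustrial to mid-2000s
print(predicted_rise, observed_rise)
```

The temperature-driven model predicts a rise of only about 13 ppm, nearly an order of magnitude short of the ~100 ppm observed, which is why the modern rise has to be attributed to emissions rather than to temperature.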
I don’t have the source documents for the Russian studies, which are only abstracted in springerlink.com (with which I’m sure you are familiar). You can probably obtain them with a little effort. This forum is fine for lighter-weight discussions and offering up related news, not complete examinations of models. You don’t have to pursue the leads; others will, no doubt.
Once again, the point of geological studies as a checkpoint is that absolute concentrations of CO2 have not been correlated with absolute temperature levels… something different from a correlation between temperature changes and CO2 changes that follow.
Dr. Patterson can be contacted at Carleton University (http://http-server.carleton.ca/%7Etpatters/) if you really want to examine the geological studies that make those points. I don’t dispute your knowledge of the dynamics of atmospheric gases. I’m suggesting that other disciplines have something to say about climate change, too… like geology and astrophysics. The earth is not a closed system or a static one geologically.
Or, you can just keep calling anyone who might offer up differing perspectives a hack and go on with closed-system thinking.
[Response: As an exercise in jumping to conclusions your posts are pretty good. Where did you ever get the idea that I advocate not talking to geologists? (Try looking at my publication record to see the interactions with paleo-climatologists for instance). The best records that have both temperature and CO2 changes are the ice cores, and I am flabbergasted that you think they show no correlation between CO2 and temperature over the last 650,000 years. On the contrary, the relationships are very obvious and causal in both directions (though you need more information than just the two records to determine that). Still doesn’t have much to do with the change in CO2 occurring now (as you would see from doing the calculation I suggested above).
Going back to our Russian colleague – the abstract linked discusses only potential changes to the solar forcing. This is a fine topic for discussion, but one should state straightaway that this is only one of many forecasts for the upcoming solar cycles (which differ enormously). Most importantly, however, you should note that no comparison of these changes with the other forcings in the system has been made. Since the climate will react to the net forcing, that is essential to working out whether a cooling should be expected. Since the measured solar forcing (solar max to min) is so small (0.1% ~ 0.24 W/m2), any change of that magnitude is not going to make any difference in the next few decades. -gavin]
Dave Cooke writes: “The heat trap or absorption and re-emission of the IR by GHG is supposed to be reducing the radiation path to the upper atmosphere.”
I believe that’s backwards — increasing CO2 increases the radiation path, each photon has a longer path/distance/time to travel before it escapes the planet, because there are more opportunities for each photon to be captured, particularly at low pressure/high altitude where the absorption bands are more effective. See Weart’s AIP article link above.
These discussions are too terse and too easy to have “tones” that are not part of normal discussion… which I think we both did here.
My “tone” was not regarding your collaboration with other disciplines which is obvious from your publications and statements elsewhere in this site. It was more directed to the initial “rubbish” comment.
You are correct; I had meant to say that it reduces the ability to transport the radiant energy to the upper atmosphere. Or, as you would put it, it increases the time that the terrestrial IR energy is trapped in the lower atmosphere.
Re #89: “By “Extra Heat” I meant the increase in heat in the lower atmosphere caused by anthropogenic CO2.” – Which is what I thought, so my argument still applies. The night is nowhere near long enough for the dark side of Earth to radiate down to thermal equilibrium. The bottom line is that, all else being equal, if some area starts out warmer at sunset, it will be warmer at dawn.
“”It’s not all that different in principle from adding more insulation to your attic.” That is mostly a convection effect…”
OK, if you want a better analogy, it’s like using low-E glass in your windows :-)
Dan, aren’t weather and climate models themselves “models of inherently complex physical phenomena and processes that occur in complex physical geometries.”?
In fact, if you assign a temperature value to a point (x,y,z) in space you have already created a 4-dimensional output F(x,y,z). Let’s say you want to calculate the local humidity as a function of pressure(P), temperature(T) and location(x,y,z) – and then allow the whole system to evolve in time(t), and what you have is humidity as a function of (x,y,z,P,T,t) – which is something like what weather models try to do when they come up with rainfall predictions, isn’t it? 6-dimensional geometries calculated over the entire planet – isn’t that what you are talking about – i.e. climate and weather models?
Thanks for the reply but I was asking for actual measurements.
I can only give my own measurements – the temperature inside my shed-cum-greenhouse (when I had one) with greenhouse glass reached the same as the outside temperature before dawn.
By my maths, the temperature of the lower atmosphere should decay exponentially towards an asymptote, so dawn temperatures should be affected less than dusk temperatures. I’m not sure if the asymptote is the 3K background or the temperature of the stratosphere.
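The exponential-decay picture above can be sketched with a Newton-style relaxation law. The asymptote and time constant here are invented for illustration; the point is what happens when the relaxation time is long compared to one night:

```python
import math

def temp_at(t_hours, T0, T_asym, tau_hours):
    """Exponential relaxation toward an asymptote (Newton-style cooling)."""
    return T_asym + (T0 - T_asym) * math.exp(-t_hours / tau_hours)

T_ASYM = -10.0   # effective radiative asymptote, deg C (invented)
TAU = 30.0       # relaxation time, hours (invented; long vs. one night)

dusk_plain, dusk_warm = 15.0, 16.0   # one case starts 1 K warmer at sunset
night = 12.0                          # hours of darkness

dawn_plain = temp_at(night, dusk_plain, T_ASYM, TAU)
dawn_warm = temp_at(night, dusk_warm, T_ASYM, TAU)
print(dawn_plain, dawn_warm, dawn_warm - dawn_plain)
```

A case that starts 1 K warmer at dusk is still about 0.7 K warmer at dawn: the dawn difference is indeed smaller than the dusk difference, but nowhere near zero, consistent with the reply in #332 above.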
“…the temperature inside my shed-cum-greenhouse… reached the same as the outside temperature before dawn.”
Maybe that has something to do with the fact that your greenhouse is a lot smaller than the atmosphere, and so has a lot less thermal mass to store heat. Look at any source on passive solar design, and you’ll find they’ll stress the importance of having enough thermal mass to store heat overnight.
Re BPL’s 85.
Yes , if you follow my idea out logically nothing can ever change its temperature by ITSELF. This is called ENTROPY. The 2nd law of thermodynamics, says (in one of its forms) that you can’t get an entity to raise its own temperature without adding work or energy from outside. Another version is: You can’t get something for nothing. Also there is the conservation of energy.
To raise or lower an object’s temperature you have to add or subtract outside energy – ie increase the solar energy-in – and we then come to a new equilibrium at a higher temperature. It happens every day when the sun rises, but it is a moving new equilibrium temp because the solar energy-in is increasing or decreasing, except at night when it is zero.
In our case the GCMs DO comply in the GHE: they take energy from the TOA & move it to the ground level but do not return to equilibrium. GHGs do NOT create energy to raise their temperature.
Yes I agree Hank we need a mean free path estimate or definition.
Remember the minimum transit time is the time to travel at the speed of light to the escape distance/TOA, since there are some small number of photons that bypass all absorptions & go directly to space. I think this transit time to space can be thought of as virtually zero.
Eli says that for a CO2 absorption the photon is resident in the CO2 for several to 10 microseconds at sea level, before being returned to the global energy bath, & the residence time varies with air density/height. So IF there are say 100,000 absorptions in a CO2-to-air-to-CO2-to-air-to…-to-CO2-to-space pathway, then we have a 1 second transit time. If you use a convection pathway it is longer. Since most energy transport is radiation, the transit time is on the order of zero to a second. Eli – do you have a better estimate? About how many absorptions are there really?
Now following through: when a single added CO2 absorbs a photon and creates the GHE warming effect (an unstable dis-equilibrium where the ground is warmer, the TOA is cooler, and more solar energy would be absorbed than emitted, per Hansen 2005), then it will take about 1 second or so for the warmer photon & GHE warming at ground level to transit back to the TOA and return to equilibrium. The earth is now back at equilibrium; the CO2 has been added and energy absorbed AND returned to equilibrium. The mechanism for transporting energy out (all radiation, conduction and convection, as driven by the Stefan-Boltzmann law, hotter objects radiate more, based on the temperature difference from equilibrium) must be equal to the radiation part that is reflected down from the added GHG absorption plus the extra warming from the GHE delays. Even when the added anthropogenic CO2 increases continuously, the atmosphere will return to equilibrium with only a slight delay. Yes, adding CO2 results in more energy moving back & forth (stronger hurricanes), but the end-result equilibrium conditions required for balancing the sun (your ref) or for balancing the earth still result in the stable equilibrium ground temperature, not net warming.
This is similar to the water pipe analogy above. The system moves from its unstable disequilibrium when the recycle pipe is opened (ie CO2 absorbs) and returns to its 100gpm in and out equilibrium condition as soon as the recycle pipe fills up or the recycled energy rejoins the main pipe and moves to the exit point/TOA.
It seems to me that the Earth will return to equilibrium at all points in the air (no ground warming, no TOA cooling) from a GHG addition almost immediately. If so, then the accumulation of GHG absorbed energy, and the ever increasing GHG forcing to warm up by a few degrees over a century, is not possible. The GHG forcing line is actually flat. Mother Nature’s forcing of equilibrium/balance to energy-in eliminates GHE warming.
In comment to “[Response: The reason why we know CO2 is not following temperature this time is because we know that we’re putting more into the atmosphere .. Statistics from the ice ages (which show a strong correlation, not none!)…More to the point, where are these sophisticated Russian statistical models of which you speak? Find a paper and I’ll analyze it. Until then this is all smoke and mirrors.”
Is there data, papers, and logic to support the Russian assertion? There seems to be.
1) First what goes up will come down.
The current solar activity is at its highest level in 8000 years (see the paper quoted below). If the proxy climatic record can be used as an analogue, this warm period will end. The sun will enter a Maunder-like stage and the earth will cool. (i.e. the proxy climate record shows periodic warming, followed by cooling.)
“According to our reconstruction, the level of solar activity during the last 70 years is exceptional, and the previous period of equally high activity occurred more than 8000 years ago. We find during the past 11,400 years the Sun spent only of the order of 10% of the time at a similar high level of magnetic activity and almost all of the earlier high-activity periods were shorter than the present episode.”
2) Could a change in the sun affect the earth’s temperature?
Also in support of the Russian statement is the fact that the increase in solar magnetic activity correlates with the observed increase in temperature in 20th century.
See figure 6 in the above linked paper that shows there is a close correlation between observed global temperature anomalies and the solar index “ak”.
From that paper: “It has been noted that in the last century the correlation between sunspot number and geomagnetic activity has been steadily decreasing from – 0.76 in the period 1868-1890 to 0.35 in the period 1960-1982, … According to Echer et al (2004), the probable cause seems to be related to the double peak structure of geomagnetic activity. The second peak, related to high speed solar wind from coronal holes seems to have increased relative to the first one, related to sunspots (CMEs) but, as already mentioned, this type of solar activity is not accounted for by sunspot number. In figure 6 long term variations in global temperature are compared to the long-term variations in geomagnetic activity as expressed by the ak-index (Nevanlinna and Kataga 2003). The correlation between the two quantities is 0.85 with p < 0.01.”
3) What is the possible mechanism that would enable the sun to affect the earth's temperature?
And for those who are interested in how solar changes could affect planetary temperature, the following is a link to Tinsley and Yu's paper, which provides an explanation of a mechanism that links solar wind changes (also changes to the earth's magnetic field and GCR changes) to cloud formation. (An increase in clouds cools the planet. A decrease in clouds warms the planet.)
Comment by William Astley — 25 Jan 2007 @ 11:56 PM
Dave, if you make up your own definitions, instead of using the ones the physicists use, you will create inordinate confusion among your readers.
Look up "mean free path" please.
And “random direction” and “drunkard’s walk”
You’re still imagining somehow that photons can all go in the same direction, and want to be free of the planet. For the same reason the Sun doesn’t collapse like a popped balloon, the Earth’s heat doesn’t all rush off into space. Please, read the standard definitions of the terms and use them in your logic.
Er, that should be addressed to "John" not "Dave" on the definition of mean free path.
And it’s not easy to find. Til a physicist comes along, I’ll try this once. After that, I really think you ought to start a website and put what you believe all in one place, so maybe someone can understand what would be helpful to you.
I understand this process, as an amateur, to mean the distance on average that a photon travels between being emitted (in any random direction) and interacting again.
A 'drunkard's walk' is a random progression; on average it eventually returns to where it started, wandering around. If the drunkard happens to reach a curb, he falls off it. But he doesn't make a straight-line run for the edge. Each step is in a random direction. Nor do infrared photons in an atmosphere containing greenhouse gases make a run for the top of the atmosphere. Recall that in the upper atmosphere the absorption is _more_ efficient. And re-emission is in random directions. The photons are repeatedly caught, and re-emitted in random directions. They don't rush off the planet from Earth any more than they rush out of the Sun.
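As an amateur illustration of the "drunkard's walk" (my own sketch, not a radiative-transfer calculation; step length, walker count, and seed are arbitrary choices):

```python
# A 3-D random walk: after N unit steps in uniformly random directions, the
# net displacement grows like sqrt(N), not N. A re-emitted photon makes no
# "run" for space.
import math
import random

def rms_displacement(n_steps, n_walkers=2000, seed=0):
    """Root-mean-square distance from the origin after n_steps unit steps
    taken in uniformly random 3-D directions."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_steps):
            # Pick a uniformly random direction on the unit sphere.
            u = rng.uniform(-1.0, 1.0)            # cos(theta)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - u * u)
            x += s * math.cos(phi)
            y += s * math.sin(phi)
            z += u
        total_sq += x * x + y * y + z * z
    return math.sqrt(total_sq / n_walkers)

print(rms_displacement(100))   # close to sqrt(100) = 10
```

A hundred random steps carry the walker only about ten step-lengths from where it started, which is why energy takes so long to escape a scattering medium.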
The only time energy rushes out of a sun the way you are imagining photons rushing out of Earth’s atmosphere is during a supernova at the point in the process when a vast quantity of matter becomes neutrinos, which don’t interact significantly. Everything else we know of interacts and takes a very long time to get anywhere.
You are ignoring, for one thing, the role of convection in the atmosphere entirely when it comes to calculating temperatures. You are also ignoring the role of water vapor and the sensible and latent heat transport (latent being the heat absorbed and released during phase changes; when water vapor condenses it releases heat). And you are ignoring the fact that the main effect of increased CO2 is in the upper atmosphere, where adding CO2 leads to more absorption than if CO2 is added at sea level, due to temperature and pressure effects. This means that the longwave radiation that reaches the Earth's surface would come from lower in the atmosphere as CO2 levels increase.
Add more CO2 to the atmosphere, in other words, and the 'mean free path' that an infrared photon from the surface will follow before being absorbed is shortened. Thus, where the radiation is absorbed in the atmosphere matters to the surface radiation as much as the total amount that's absorbed – all the ice sheets are on the surface, after all. Imagine the atmosphere as a stack of translucent horizontal slices: as more CO2 is added, each layer becomes more opaque to infrared radiation (not at all like a greenhouse!) and you then get more radiation from the lower layers.
This is obviously complicated, but perhaps you should look at http://www.aip.org/history/climate/Radmath.htm. You can also think of it this way: if you wrap a blanket around yourself on a cold night, you will warm up, and a faraway observer will initially see a cooler image through an infrared viewer – but over time, the net energy released will come into equilibrium (or you'd continue to heat up indefinitely) – but you'd still be warmer inside the blanket than if you didn't have it. Now add more blankets, and you'll get warmer – but which blanket is delivering the most warmth to your body, the inner layer or the outer layer? As you added more blankets, the far-off observer would again see an apparent cooling at the outer layer. So while there is no net warming (your body doesn't start generating more heat), you stay warmer.
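The "stack of blankets" picture corresponds to a textbook idealization: N layers, each fully absorbing in the infrared and transparent to sunlight. A sketch under those assumptions (the 255 K effective temperature is a round number; the real atmosphere is far from this simple):

```python
T_EFFECTIVE = 255.0   # Earth's effective emission temperature, K (round number)

def surface_temp(n_layers):
    """Surface temperature under n perfectly IR-absorbing layers.
    Layer-by-layer energy balance gives T_s = T_e * (n + 1)**0.25."""
    return T_EFFECTIVE * (n_layers + 1) ** 0.25

for n in range(4):
    print(n, "layers:", round(surface_temp(n), 1), "K")
```

In this idealization the outermost layer always radiates at 255 K no matter how many layers sit below it – the "apparent cooling" the far-off observer sees – while the surface underneath keeps getting warmer.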
As far as entropy goes, you misunderstand; you can get something to raise its own temperature without energy from outside – through conversion of internal stored potential energy – like when you strike a match; the friction is minuscule compared to the release of stored internal energy. This is why there is some concern about methane clathrates – warming ocean water might result in floods of methane to the atmosphere, though I have little idea of what the chances of that are.
In reply to “Since the measured solar forcing (solar max to min) is so small (0.1% ~ 0.24 W/m2), any change of that magnitude is not going to make any difference in the next few decades.”
The change in solar forcing, if the solar cycle moves to a Maunder minimum, will be more than an order of magnitude greater than what you note. (The change is due to changes in the solar wind, which it is hypothesized in turn affect the amount of planetary cloud cover.)
For example, the decrease in the earth's albedo from 1994/1995 to 1999/2001 (due to a reduction in cloud cover, based on the data from the Earthshine Project; a link to Palle's paper is attached below) is 4.97% +/- 1.66%, which converts to an increase in solar forcing of 7.5 +/- 2.4 W/m2 – three times the estimate for the CO2 forcing in the 20th century.
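As a rough cross-check of that conversion (using my own round numbers for the solar constant and mean albedo, not the values used in Palle's analysis):

```python
S0 = 1361.0      # solar constant, W/m^2 (round number)
ALBEDO = 0.30    # Earth's mean Bond albedo (round number)

def forcing_from_albedo_change(relative_change):
    """Change in globally averaged absorbed solar flux for a given relative
    change in albedo; absorbed flux is (S0/4) * (1 - albedo), and only the
    albedo term changes."""
    return -(relative_change * ALBEDO) * S0 / 4.0

# A 4.97% relative decrease in albedo:
print(round(forcing_from_albedo_change(-0.0497), 2), "W/m^2")
```

With these round numbers the answer comes out near 5 W/m2 – the same order as the quoted 7.5 +/- 2.4 W/m2; the published figure depends on the details of the earthshine analysis.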
The hypothesis that warming is followed by significant cooling is not a surprise, based on analogues with the past. The paleoclimatic proxy data provide a record of similar warmings in the past that were followed by significant coolings.
As to what is the possible mechanism by which changes in the solar wind, or changes in the geomagnetic field, or changes to Galactic Cosmic Rays (GCR) can change planetary cloud cover, the following is a link to Tinsley and Yu’s paper. Increases in planetary cloud cover result in planetary cooling. Decreases in planetary cloud cover result in planetary warming.
In reply to comment 107: “Any change in solar forcing is going to be in addition to CO2 forcing, right? So, if the solar forcing swamps the measured CO2 forcing, we’re going to fry. And soon.”
If solar activity forced the climate by 7.5 W/m2 (see comment 106), then the estimated warming due to increased CO2 has been overestimated.
The papers that provide data and analysis to support the hypothesis that past climate changes were caused (initiated) by changes in the geomagnetic field, the solar wind, or Galactic Cosmic Rays (GCR) – all of which are hypothesized to modulate the level of cloud cover – carry a reduced estimate for the magnitude of planetary warming or cooling due to changes in atmospheric CO2.
Interestingly and controversially, it appears there have been periods when the planet was warm when CO2 levels were low and cold when CO2 levels were high.
If the modulated cloud level hypothesis is correct and the sun follows an 8000-year cycle, the current warming event will be followed by a cooling event, based on what has occurred in the past. See the attached link for what happened 8200 years ago.
[Response: The 8.2kyr event has nothing to do with solar, and everything to do with huge lake discharges. The chances of it happening today are zero. Please keep it real. – gavin]
Comment by William Astley — 26 Jan 2007 @ 12:41 PM
Hank you are misunderstanding my position.
In summary, I contend that the earth (and the sun) are forced by the Stefan-Boltzmann equation, applied at all points from the center to space, to be in balance, with energy-in equal to energy-out. The SB equation identifies total energy flow, which includes convection, conduction, and radiation. In the atmosphere a majority is by radiation.
The calculations for the GCMs do not require a balance, per their own statements, claiming that the addition of CO2 results in a net imbalance that continues and accumulates as long as CO2 is continuously added. Hence warming at ground, cooling at TOA.
It is my contention that earth's natural feedback system (i.e. the Stefan-Boltzmann law) requires that when CO2 absorbs a photon and delays its transport, resulting in a longer residence time in the air (and hence warming), this warming causes the extra energy to be transported out by increased convection, conduction, and radiation per the SB law, such that the atmosphere will return to equilibrium almost as soon as the extra CO2 absorption warms it. The greenhouse warming effect will not accumulate, and you do not get an ever-increasing GHG forcing curve. The earth is always at or near equilibrium, which can only be changed by adding more energy, and adding CO2 doesn't add energy.
Re entropy: adding energy by changing the nature of a component (e.g. nuclear power, the sun, methane clathrates, etc.) is defined as adding external energy. Entropy says that CO2 in the air can NOT warm itself up by staying as CO2 in AIR unless it gets energy from somewhere. The GCMs agree: the ground-based air is warmed by taking energy from near the TOA (see Hansen et al 2005, figure 2e).
Gavin, I have a whole bunch of questions, but I'd like to ask them one at a time – until comments are closed.
In your article you say:
“The physics in climate models can be divided into three categories. The first includes fundamental principles such as the conservation of energy, momentum, and mass, and processes, such as those of orbital mechanics, that can be calculated from fundamental principles. The second includes physics that is well known in theory, but that in practice must be approximated due to discretization of continuous equations. Examples include the transfer of radiation through the atmosphere and the Navier-Stokes equations of fluid motion. The third category contains empirically known physics such as formulas for evaporation as a function of wind speed and humidity.”
I assume GCMs don't solve conservation of momentum from "fundamental principles", since, taken literally, that requires solving the Navier-Stokes equations from "fundamental principles". I also know that if you actually do model the NS equations, the reason you approximate them is not remotely "due to discretization of the continuous equations".
My question right now is:
How is conservation of momentum modeled in a GCM?
I don't mean how do you discretize the continuous equations; I mean what PDEs do you start from (assuming you do). Do you parameterize things like the boundary layer? (If so, how?) (A reference would be fine, but I'd like something more specific than what I found on this NASA page – http://www.giss.nasa.gov/tools/modelE/modelE.html – which said nothing more than "the solution of the momentum equations is done within the DYNAM", which is rather vague. The turbulence model tells me you use a turbulent kinetic energy (TKE) equation.)
[Response: What goes into the model at an algorithmic level is described in the published literature (i.e. Schmidt et al, 2006 and references therein). Your main question is not very well posed though. Conservation of momentum is ensured by simply making sure that any process that affects the velocities does so in a way that momentum doesn’t change – this is fundamental. A perfect solution of the NS equations would do that of course – but so can all imperfect solutions – it’s just something you can take care of in the formulation. Conservation of energy is the same – any energy change in one reservoir/grid box must be balanced by an equivalent energy change in another reservoir/grid box. This too is fundamental (and much more so than the NS equations). I also don’t understand your point about how we solve the NS equations. At the large (synoptic) scale, the equations can be discretised and stepped forward in time with only minor adjustments to deal with unresolved variability. Boundary layer processes are more parameterised, but in our model the Reynolds stresses are expanded out to third-order terms – but again you need to read the published literature (cited above) for the gory details. – gavin]
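The conservation-by-construction point can be illustrated with a toy flux-form (finite-volume) scheme: whatever leaves one grid box enters its neighbour, so the total is conserved exactly even though the scheme itself is a crude approximation of the continuous equation. This is an editor's sketch, not GCM code; all parameters are arbitrary.

```python
# 1-D upwind advection in flux form on a periodic domain: only first-order
# accurate, yet the total tracer amount is conserved to round-off because
# each box's loss is its neighbour's gain.
def advect_step(q, courant):
    """One upwind step of dq/dt + u*dq/dx = 0 with u > 0 and
    courant = u*dt/dx <= 1 (for stability)."""
    n = len(q)
    out = [courant * q[i] for i in range(n)]   # flux leaving box i
    return [q[i] - out[i] + out[(i - 1) % n] for i in range(n)]

q = [0.0] * 20
q[5] = 1.0                     # a blob of tracer
for _ in range(200):
    q = advect_step(q, 0.4)
print(abs(sum(q) - 1.0))       # conserved to machine round-off
```

The blob spreads out badly (large truncation error), but the budget closes exactly – which is the distinction Gavin is drawing between conservation and accuracy.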
I can see we have different ideas about the exact meaning of “solving conservation of momentum from fundamental principles”. If someone replaced the momentum equations in a GCM with Stokes Equations — which do conserve momentum– would you say they modeled conservation of momentum from fundamental principles?
I’d say “No”. But, if you’d say yes, then I’ll at least agree that by your definition of “fundamental principles” the GCM would be solving conservation of momentum from “fundamental principles”. Of course, the results would be pathologically wrong. . .
Anyway, thanks for the paper. So far, I’m reading under d. Dynamics “The runs here use a second-order scheme for the momentum equations”. (This is immediately followed by information describing what you do to track the motion of passive scalars like heat and humidity. )
I'm not actually seeing much specificity about how momentum transport is modeled overall. Presumably I can look up the papers in the references and eventually find more, since the text tells me much of the dynamics is described in an earlier paper. (Not repeating what was already said in an earlier paper is, as we all know, common practice.)
[Response: I’m not sure I follow you at all. Conservation of momentum is a fundamental principle, as is conservation of energy. Exact solutions of NS satisfy those principles but so must everything else including the discretised versions and the boundary layer scheme. -gavin]
[[It is my contention that earth’s natural feedback system (ie the Stefan Boltzmann law,) requires that when the CO2 absorbs a photon, delays its transport resulting in longer residence time in the air (and hence warming), then this warming causes the extra energy to be transported out by increased convecton conduction and radiation per the SB Law,]]
The SB law says nothing whatsoever about convection or conduction. It deals solely with radiation.
[[ such that the atmosphere will return to equilibrium almost as soon as the extra CO2 absorbtion warms it. The greenhouse warming effect will not accumulate, or you do not get an ever increasing GHG Forcing curve. The earth is always at or near equilibrium, which can only be changed by adding more energy. and adding CO2 doesn’t add energy.]]
If you're saying the atmosphere doesn't permanently store a given bit of energy, you're right, but that has no effect on the temperature of the atmosphere or the ground, which do rise when you have more CO2 in the air. Equilibrium can be at any temperature. And the specific equilibrium arrived at will be a function, at least partially, of the amount of greenhouse gases in the atmosphere.
When I read the AIP page linked above, it makes sense to me; I can read the papers linked, at an amateur level, and find that the basic radiation physics seems quite widely agreed on. It's basic not just for Earth but for the Sun as well. You should put your argument on your web page where people can look at your math, maybe. It's just scattered here; I can't follow it.
From the Radmath page linked above:
“The Earth must radiate back into space as much total energy as it receives, to stay in equilibrium. Adding gas to the atmosphere moves the site of this emission to higher levels, which are colder. Cold things radiate less than warm ones, so the system must warm up until it can radiate enough. For more, follow the link … to the “Simple Models” essay.
“… Callendar … assembled measurements, made in the 1930s, which showed that at the low pressures that prevailed in the upper atmosphere, the amount of absorption varied in complex patterns through the infrared spectrum. …
“Solid methods for dealing with radiative transfer through a gas were not worked out until the 1940s. The great astrophysicist Subrahmanyan Chandrasekhar and others, concerned with the way energy moved through the interiors and atmospheres of stars, forged a panoply of exquisitely sophisticated equations and techniques. The problem was so subtle that Chandrasekhar regarded his monumental work as a mere starting-point. It was too subtle and complex for meteorologists.”
————- end quote———-
Gavin: I agree conservation of momentum and energy are fundamental principles. I think we are differing on a matter of semantics.
I guess this is going to be long, but let me explain how I parse the term “solving conservation of momentum form fundamental principles”.
Let me start with an example: suppose we examine steady, fully developed flow of a viscous fluid in a horizontal pipe. We can write down the NS equations. We can then simplify by knocking out the convective terms, because the flow is steady and fully developed. Then we can solve the resulting equation in closed form. (We get the Poiseuille flow solution.)
In this case– by my definition– one has solved “conservation of momentum” from “fundamental principles”. This is because both of the following apply:
a) momentum is conserved in the solution and
b) we used fundamental principles to describe conservation of momentum.
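For concreteness, here is the closed-form Poiseuille solution from the example above, as a sketch (pipe size, pressure gradient, and viscosity are arbitrary illustrative numbers):

```python
def poiseuille_velocity(r, radius, dpdx, mu):
    """Axial velocity u(r) = -(dp/dx) * (R**2 - r**2) / (4*mu) for steady,
    fully developed laminar pipe flow; dpdx < 0 drives flow in +x."""
    return -dpdx * (radius ** 2 - r ** 2) / (4.0 * mu)

R, DPDX, MU = 0.01, -100.0, 1.0e-3   # 1 cm radius, water-like viscosity
u_max = poiseuille_velocity(0.0, R, DPDX, MU)    # centreline velocity
print(u_max, "m/s; mean velocity =", u_max / 2.0, "m/s")
```

For a parabolic profile the mean velocity is exactly half the centreline value – the kind of bulk prediction that a bad closure, applied to a turbulent flow, would get badly wrong.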
Now suppose the flow is turbulent, though fully developed and steady on average. We still know the Navier-Stokes equations, but because the flow is inherently unsteady, we can no longer knock out the convective terms to obtain a solution.
Of course, we can try schemes to develop models to estimate the effect of the convective terms. But we know any model we develop does not describe transport of momentum “from fundamentals”. The turbulence model is an approximation.
Depending on the model we concocted, we may be able to solve the new set of approximate equations in closed form. Or we may need to use computational methods.
But either way, once we introduce the turbulence model, while our system of equations does "conserve momentum" (which is a fundamental principle), it is no longer based on "fundamental principles governing transport of momentum".
Consequently, the way I see it, if you put these equations into a code, the code and the resulting solutions are not based on "fundamental principles governing transport of momentum". (Note: the "non-fundamental" issue has nothing to do with discretization error, differencing schemes, or any details involved in putting this into a code. I'm actually fine with that!)
So to say it a different way: to my way of thinking, conserving momentum alone is a necessary, but not sufficient, condition to permit us to claim a code “solves conservation of momentum from fundamental principles”.
Now, I know you may believe my definition is too strict. But if we permit the laxer definition, then "momentum from first principles" type models could give wildly inaccurate predictions for pressure drop vs. bulk velocity in pipe flow – and that's a bulk feature. Needless to say, if I extended this example to flow with heat transfer and used the lax definition for "energy transport based on fundamental principles", I could make these sorts of "fundamental" models give atrociously bad predictions for the temperature gradient as a function of heat addition at the pipe walls. (And of course, pipe flow is much easier than climate modeling.)
Of course, none of this implies that all models are bad or that climate models are bad. I’m fine with models– if they are used appropriately. It just means that I found your paragraph confusing.
Mostly, reading the paragraph I wanted to know what really happens in the code – so thanks for the reference.
[Response: This all appears to be due to a misreading of what I wrote – go back to the original paragraph: “The first includes fundamental principles such as the conservation of energy, momentum, and mass…” – so far so good, right? All of these are fundamental principles – “…and processes, such as those of orbital mechanics, that can be calculated from fundamental principles.” – it is the processes that can be calculated from fundamental principles. The statement that you appear confused about appears nowhere in anything I’ve written. – gavin]
Gavin: "it is the processes that can be calculated from fundamental principles."
The process of conservation of momentum is not calculated from fundamental principles describing transport of momentum in a GCM. It’s just not.
[Response: Conservation of momentum is not a process. Convection is a process, and must conserve momentum. Drag from unresolved gravity waves in the stratosphere is a process, and must conserve momentum. I just don’t get what point you are trying to make. -gavin]
In reply to “[Response: The 8.2kyr event has nothing to do with solar, and everything to do with huge lake discharges. The chances of it happening today are zero. Please keep it real.”
The above comment is meant to be taken literally. There is no possibility that a pulse of water from Glacial Lake Agassiz could today stop the THC. (There is no Lake Agassiz today.)
The above comment does not question or address the data or analysis that supports the hypothesis that there will be a sudden increase in cloud cover, when the current high solar activity ends. If there is an increase in cloud cover, all agree the planet will cool.
The assertion that there will be an increase in planetary cloud cover and cooling would be correct even if the 8200 BP cooling event was not caused by solar or geomagnetic field changes (there was a drop in the magnitude of the earth's magnetic field just before the 8200 BP cooling event).
Two comments concerning the hypothesis that a pulse of water from Lake Agassiz stopped the THC and that the THC stoppage caused the 8200 BP cooling event.
1) Concerning mechanism: I thought the THC currently was reduced by 30%. No cooling to date. Is the THC cooling or warming mechanism non-linear?
2) Timing of the melt water pulse in relationship to the 8200 BP cooling event: see figure 4 in the attached paper. The largest early Holocene melt water pulse, 1B, occurred two thousand years before the 8200 BP cooling event. Is there cooling after each melt water pulse? Why does the cooling occur after the smallest pulse?
There is an anomalous drop in the earth's magnetic field recorded in volcanic flows immediately prior to the 8200 BP temperature drop.
Comment by William Astley — 26 Jan 2007 @ 10:49 PM
Orbital mechanics is also not a process. Neither are fluid mechanics, solid mechanics, continuum mechanics nor just plain “mechanics”. Yet, you wrote:
“The physics in climate models can be divided into three categories. The first includes fundamental principles such as the conservation of energy, momentum, and mass, and processes, such as those of orbital mechanics, that can be calculated from fundamental principles.”
If you are saying that GCMs do not model conservation of energy, momentum or mass from fundamental principles, I agree. If you say conservation of mass, momentum and energy are not processes, I agree with that too. (I would go further and note that GCMs don't model many of the dominant processes involved in the transport of mass, momentum or energy.)
If you are saying your original two sentences convey the impression that GCMs do not calculate conservation of mass, momentum or energy from mathematical models derived from fundamental principles, because those three are “principles” and not “processes”, and that the only things you are claiming are calculated from fundamental principles are “the physical processes of orbital mechanics” …. well, I guess I’m not going to worry about that claim.
After all, I think we've resolved this:
GCMs do not describe all the physical processes involved in conservation of momentum in any way that could be characterized as "solving conservation of momentum using mathematical representations derived from fundamental principles". (Or are you still disagreeing on that?)
If you need to understand the process or processes that are not modeled using mathematical models based on "fundamental principles", they are: momentum diffusion by small-scale turbulent motions, and momentum diffusion by any sub-grid-scale structures.
Diffusion by these two processes is leading order in at least some portions of the flow field — and they are modeled in GCMs only approximately. The reasons why GCMs don't capture these from fundamentals have nothing to do with "discretization".
If you don’t understand this, I’m not going to worry about it further.
[Response: If your point is that not everything is included in GCMs, then just say so (and you will have no argument from me). And if you want to point out that sub-grid-scale flows are not well dealt with, then again, no argument. But semantic parsings of distortions of statements I didn’t make is pointless. Processes in the GCM conserve momentum, energy and mass. Are all processes included? No. -gavin]
William, what's your source for an 8000-year solar cycle? Your link only shows that abrupt cooling took place 8200 years ago. Even if solar cycles have more influence than is usually appreciated in this forum, it doesn't mean every cooling or warming is the result of a solar cycle influence, whether from GCR or some other mechanism.
I've seen a lot on the 100 ky cycle from Muller and more recently from this:
In reply to Jim Cross’ comment #119 to my comment 106.
1) In my comment 106, I included a link to a paper noting that cloud cover decreased by 5% from 1994/1995 to 2001/2002. The paper notes that a 5% reduction in cloud cover translates into an increase in solar forcing of 7.5 W/m2, or three times the estimated forcing (2.5 W/m2) for the CO2 increase in the 20th century.
2) In my comment 106, I included a link to a paper noting that the current solar activity is very unusual. The sun is at its highest activity level in 8000 years.
3) I included in my comment 106 a link to a paper that describes the mechanism by which the high solar winds associated with the current state of the sun can reduce cloud cover.
Jim Cross’ comment:
“Even if solar cycles have more influence than is usually appreciated in this forum, it doesn’t mean every cooling or warming is the result of a solar cycle influence whether it be from GCR or some other mechanism.”
Before discussing the current state of the sun specifically, the following is a link to a paper that provides data supporting the assertion that cyclic changes in the sun are responsible for cyclic warming followed by cooling in the Holocene.
The following is a link to Bond’s paper “Persistent Solar influence on the North Atlantic Climate during the Holocene”
"A solar influence on climate of the magnitude and consistency implied by our evidence could not have been confined to the North Atlantic. Indeed, previous studies have tied increases in the C14 in tree rings, and hence reduced solar irradiance, to Holocene glacial advances in Scandinavia, expansions of the Holocene Polar Atmosphere circulation in Greenland; and abrupt cooling in the Netherlands about 2700 years ago … Well dated, high resolution measurements of O18 in a stalagmite from Oman document five periods of reduced rainfall centered at times of strong solar minima at 6300, 7400, 8300, 9000, and 9500 years ago."
Comment by William Astley — 27 Jan 2007 @ 12:32 PM
Margo, you say: “If you are saying that GCMs do not model conservation of energy, momentum or mass from fundamental principles, I agree…”
Well… that's a nonsensical string of words. Conservation of energy is an experimentally observed phenomenon, not something 'calculated from first principles'. In fact, if you look at the history of physics, conservation of energy was temporarily called into question in the 1930s over the issue of neutrinos in particle physics; now that neutrinos have been detected, the principle of conservation of energy, experimentally demonstrated by Joule in the 19th century, stands unchallenged.
A common theme on this thread seems to be attempts to use poorly considered semi-scientific but wordy arguments to confuse the issue.
For a much longer introduction to General Circulation Models, see http://www.aip.org/history/climate/GCM.htm Read the whole article; it's far longer and more detailed than the shorter piece this thread is based on, and includes the history of the development of climate and weather models.
The bottom line is that the GCM outputs have to be compared to detailed data from the real world, and the scarcity of oceanic temperature and current profiles is a real problem, thus all the calls for a global network of ocean sensors. The oceans play a dominant role in the long-term climate, and the lack of extensive data on heat transport and other measures (alkalinity, say) in the oceans is as much a problem for climate models as the lack of weather stations and satellites would be for short-term weather forecasting models.
While the three classes of models (short-term weather, mid-scale regional (i.e. El Nino, NAO), and planetary GCMs) all differ from one another, they all benefit from having very accurate data on current conditions – both to compare past predictions to, and to extrapolate forward into the future.
“Statistics from the ice ages (which show a strong correlation, not none!)”
I’ve looked for these in table form and found nothing, as I’ve often wondered how good the correlation is. Has anyone got a link to a table of temperature, CO2 (and other gases, if possible) from ice cores?
Okay, the article deals with variations in solar irradiance but says nothing about an 8000-year cycle, and even the quote you cite has reduced irradiance occurring at intervals of between 500 and 1100 years. True, one of them comes close to the 8200-years-ago event that you cited originally.
Also, nothing in this article ties to the GCR theories you are promoting.
I’m fairly sympathetic to the idea that solar variation may be a much bigger part of longer term climatic changes than is generally accepted in this forum. And, it is an odd coincidence that the Sun has been unusually active in the last 100 years or so while we have been warming. Nevertheless, that doesn’t mean that the solar cycle accounts for much of our current warming (it might account for some of it) nor does it mean that drastic cooling is right around the corner.
Re 116, Hank: for someone who relies on Google so much, the failure to find this surprises me. It is widely quoted on RC (Gavin is a co-author), but alas the one RC link to it in the archives no longer works. Figure 2e is a variation of ones that have been quoted in previous Hansen papers.
Hansen et al 2005 is at: http://pubs.giss.nasa.gov/abstracts/2005/Hansen_etal_2.html
You can download the full pdf. I must give credit to NASA GISS (Hansen & Gavin et al): they do a great job of documenting all they do. They even give you the program they use if you want it.
However, it really bothers me that they pay lip service to the concept that the earth is at equilibrium, like the sun, and then have their GISS GCM tell us that any time you add CO2 it pushes the earth out of equilibrium: warmer ground, cooler TOA, an energy-flow imbalance at the TOA (less energy out at lower temperature) for decades. Yet quite clearly the energy-flow imbalance is reversed at night on a daily basis, AND there is a driving force that would return the earth to equilibrium simply by the SB law transferring more energy from the warmer ground to the cooler TOA.
Note to BPL 112 – The Stefan-Boltzmann law's units are joules/sec-m^2; it says nothing about what the transport mechanisms are. If a mechanism moves energy, then that is part of the total calculated by the SBL. In fact, it is my contention that when the GHE moves energy from the TOA to the ground (per Hansen), the return mechanism per the SBL actually increases the flow of all three – conduction, convection, and radiation – because hotter air rises (convection) and hotter objects radiate more (radiation). Hence I do NOT understand why the hotter air from the GHE at ground level does NOT naturally return to equilibrium by returning to the TOA, and thus just cancel out the greenhouse warming effect, even with a continually increasing GHG/CO2 GHE force trying to push us warmer. The driving force as calculated by the SBL says that the energy flow WILL return as long as the ground temperature is warmer than equilibrium (equilibrium with solar energy-in, unless we identify some mag-field source of energy-in). If the air is returned to equilibrium every day, then the conclusion about a steadily increasing GHG forcing curve can NOT be correct.
Eli re 121 you said:
“Well.. that’s a nonsensical string of words. Conservation of energy is an experimentally observed phenomenon, not something ‘calculated from first principles';”
For what it’s worth, I didn’t say that conservation of energy is calculated from first principles. Also, I am familiar with Joule’s contributions to thermodynamics and agree the first law is not disputed. (And in any case, the 1930s neutrino issue would not be particularly germane to the issues involved in climate modeling.)
For what it’s worth, I have both substantive and semantic issues with Gavin’s paragraph, but I don’t think it’s worth delving into them further in comments. (Also, having issues with a paragraph is not quite the same as having major issues with the model; I may or may not have issues with the model: right now I simply don’t know.)
You say, “Hence I do NOT understand why the hotter air from the GHE at ground level does NOT naturally return to equilibrium by returning to the TOA, and thus just cancel out the Greenhouse warming effect.”
Well, that’s a replay of Richard Lindzen’s theory of the dynamic atmosphere flipping itself over and dumping radiation to space. Lindzen has been thoroughly refuted, and is now claiming that Global-warming alarmists intimidate dissenting scientists into silence. Anyhow, Lindzen’s argument was that there is “excessive vertical diffusion of heat and moisture in GCMs”; i.e. it should just all go up to the top of the atmosphere…
Of course, if that was the case what kind of temperature change would one expect to see in the upper atmosphere? A warming or a cooling? How about if solar forcing was responsible? Would the stratosphere cool or warm, in that case?
The contention that because things “return to equilibrium,” greenhouse gases can’t elevate the surface temperature or the atmospheric temperature, is pure crackpot pseudoscience. Here’s how the process actually works:
Sunlight hits the ground.
The ground warms up.
It radiates photons (I = εσT^4) upwards.
The greenhouse gases absorb some of the photons.
The greenhouse gases heat up, because that’s what physical objects do when they absorb photons.
The greenhouse gases radiate photons (Stefan-Boltzmann law again).
Some of those photons go back to the ground.
The ground therefore has both sunshine and “atmosphere shine” heating it up, and is warmer than it would be in the absence of an atmosphere.
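The steps above are the textbook single-layer greenhouse model, and the resulting surface warming can be sketched in a few lines (my own illustration, using the standard idealization of one fully absorbing atmospheric layer; the flux value is the usual ~240 W/m^2 of absorbed sunlight, not a number from this thread):

```python
# Single-layer greenhouse sketch: an atmosphere that absorbs all surface
# longwave radiation and re-emits half upward, half downward, so the
# surface receives sunshine plus "atmosphere shine".
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 240.0               # globally averaged absorbed solar flux, W/m^2

# No atmosphere: the surface balances absorbed sunlight alone.
t_bare = (S / SIGMA) ** 0.25

# With the absorbing layer: the layer emits S upward (balancing the planet
# at the top of the atmosphere) and S downward, so the surface receives 2S.
t_greenhouse = (2 * S / SIGMA) ** 0.25

print(round(t_bare), round(t_greenhouse))  # the surface is warmer with the layer
```

The point of the sketch is the energy bookkeeping, not the exact numbers: the downwelling flux from the layer is absorbed sunlight that has been intercepted and re-emitted, so nothing is created or destroyed, yet the surface equilibrates warmer than it would without an atmosphere.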
This is as certain as modern science gets about a subject. The contention that air molecules collide with other molecules and therefore spread around some of the energy rather than reradiating it is quite true, but completely irrelevant. As a result of the collision, the molecules are going a little faster. Their kinetic energy has been increased.
Molecular motion is heat. The faster the molecular motion, the higher the temperature. Thus, whether by immediate absorption or by collisional excitation, the atmosphere heats up. It heats up exactly the same amount whatever the breakdown of the two processes. Any other result would violate conservation of energy.
It is the contention that greenhouse gases can absorb photons and not heat the atmosphere that violates conservation of energy. Arrhenius was right. The “greenhouse gases can’t heat the atmosphere” people have the energy absorbed by the greenhouse gases disappearing. That doesn’t happen, folks. In every physical process we’ve ever observed, energy is conserved.
Please excuse my naivety to science and climatology. I have, what seems to me, a very simple question. How current are climate models? I’ve heard it can take months, if not years to create climate models, so I wonder if they incorporate the dramatic changes we’ve observed in the last 3 to 5 years. If they haven’t yet incorporated the most recent data, to what degree is information obtained from them useful?
I hope you can appreciate my need for a layman’s explanation, however I will read (and attempt to understand) anything already written on this if you prefer to just provide a link.
Thanks. That’s exactly what I’m getting at. Don’t worry about systematic errors right now. You can’t make large numbers of simulations to (presumably) remove random errors. I suppose you could finesse this by combining the simulations of other models (something like meta-studies in medical research), but there aren’t that many models, are there? So how are the predictions for, say, a temperature rise of between (I’m making these up) 1.5 and 5 deg C, with a most likely value of 3.2, derived? And is there a formal derivation of the connection between this and “standard” statistics? Any references you can suggest would also be greatly appreciated.
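One simple way a range like that could be summarized (a hypothetical sketch of the multi-model "meta-study" idea raised above, not a description of how published ranges are actually derived) is to treat each model's sensitivity estimate as one sample and report the median and spread of the ensemble:

```python
import statistics

# Hypothetical per-model warming estimates (deg C); these numbers are
# made up for illustration, like the ones in the comment above.
ensemble = [2.1, 2.7, 3.0, 3.2, 3.4, 3.8, 4.1, 4.5]

best = statistics.median(ensemble)   # "most likely" value
lo, hi = min(ensemble), max(ensemble)  # crude range across models
print(f"likely {best:.1f} C, range {lo:.1f}-{hi:.1f} C")
```

With so few models, the formal connection to standard statistics is genuinely weak: the models are not independent random draws from any well-defined population, which is exactly why translating an ensemble spread into a probability statement requires extra assumptions.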
Just a repetition that the conceptualization of increasing cloud cover may actually have the opposite of the desired effect, although natural processes might equalize said concept, leading to a net-zero effect. It is common knowledge that any area with cloud cover has a heat blanket over night periods that actually holds in heat, so I must side with those above who doubt the potential of this concept. Of vital note is the potential for these clouds to move over power-generating plants and thereby hold in the excess heat generated by man.
Man-made thermal energy is very often overlooked in most global models, but from space satellites the hot spots at night are obvious over generating stations and cities. This generation of thermal radiation needs more consideration in the big thermodynamic balance picture.