RealClimate

Comments

  1. Is there an ultimate limit that arises from human intellectual capability, irrespective of model simulations and databases? After all, even with the aid of meetings, model simulations, and conversations, there is a limit. Unless one subscribes to the notion of “artificial” intelligence, this would come down to an analog of a “world” chess master, whose intellect is (?) superior to all others, and (?) for whom all of the parameters of the contest (in this case “prediction” of a winner) are known. A derivative of this kind of pondering leads to the “flap of a butterfly’s wing” analogy, i.e., how do climatologists reconcile their work with chaos theory?

    Comment by Charles Raguse — 30 Apr 2008 @ 9:01 AM

  2. Are there any proxies for clouds (e.g. average cloudiness or thickness, rainfall patterns, mountaintop rime ice, etc.) that can be picked up on an annual or finer basis? It seems like that would provide validation for, or inputs to, a model.

    Comment by Eric (skeptic) — 30 Apr 2008 @ 9:08 AM

  3. Gavin,

    Has anyone gone about compiling a “wishlist” of sorts in the modeling community of studies (other than syntheses) they’d like to see come out of the paleo community along the lines of what you mention in your ninth paragraph?

    Cheers.

    [Response: Wishlists compiled without appreciation of the reality of paleo studies tend to be a little pointless. For instance, if asked, modellers often request proxies for clouds, or the thickness of the ozone layer - both would be great - but this is just wishful thinking. Instead, modellers really need help with more forward modelling (including the proxy system in the models directly) and on downscaling larger patterns to specific proxy records (i.e. for a specific proxy record, how does the local environment influence what gets recorded). These issues require input from the data-gathering community (since they know their proxies and sites better than anyone) and would go some way to making the comparisons more direct. - gavin]

    Comment by Jon — 30 Apr 2008 @ 9:18 AM

  4. Assuming we keep beating Moore’s law, computation should soon be “too cheap to meter”. Given essentially limitless computational power, how will GCMs be affected? Will our abilities to model and predict weather and climate become limited only by the power to collect data? Will we be able to push our predictions to the very edge of the complexity horizon?

    Comment by Jim Galasyn — 30 Apr 2008 @ 10:14 AM

  5. Jim Galasyn,
    Technically, Moore’s law derived from the scaling of one particular semiconductor technology–Complementary Metal Oxide Semiconductor, or CMOS. The next generation could be designed from the former using fairly similar models–departing from the recipes only when technical roadblocks were encountered. That worked well down to about 0.5 microns minimum feature size. Every generation since then has been a struggle, but we are still more or less on track with Moore’s Law. The law has taken on a life of its own–driven more by competition and economic necessity. The result is that I have sitting on my desk DDR3 SDRAMs for testing, and we will soon be testing 45 nm feature-size parts.
    If you want to peer into the crystal ball a bit, the International Technology Roadmap for Semiconductors (ITRS) makes interesting reading. What you find is that there are lots of roadblocks, but also lots of interesting alternative technologies that could take their place.
    Bottom line, I don’t think we’re done as far as increasing computing power for GCMs and other applications for at least a couple of decades yet. However, if economic hard times hit, progress will be slower.

    Comment by Ray Ladbury — 30 Apr 2008 @ 10:53 AM

  6. Excellent post.
    However, the current changes are so rapid that prior changes in the Earth’s climate, all natural in origin, do not adequately capture the extent of the amplifying feedback mechanisms now under way, set off by humans. Take the pine beetle crisis in Canadian forests, for example. The trees are dying because the yearly low temperature no longer falls far enough to keep the beetles in check. As the trees die they release carbon. There is no precedent for the quickness and severity of the current rise in CO2: 280 ppm to 380 ppm in 150 years, and current estimates show 450 ppm in 50 years or so. Such a quick change has never occurred, even with the huge volcanic events of the past, or the meteor strike 65 million years ago. Also, from a moral standpoint, those events were caused by nature and were unavoidable; mankind has, through negligence and disrespect, caused the current changes. And if you look carefully at the current data, the shutdown of the North Atlantic current has already begun: upwelling and salinity levels are already negatively affected by the melting of the ice at the poles. I am no scientist, but I know what I read, and I read a lot.
    Mark J. Fiore

    Comment by Mark J. Fiore — 30 Apr 2008 @ 12:56 PM

  7. re: Charles Raguse
    It’s interesting to think about. I think that by definition climate is less dependent on initial conditions than weather is. The reason is that the term “climate” is not meaningful for any one point in time–it is by definition a more generalized state of the system over a period of time. So I think in a way your question about chaos is more for meteorologists than climatologists, i.e., will we ever have an understanding of enough variables, and enough computing power, to accurately predict weather many months or years into the future? Never say never, but that level of understanding of the system seems mighty distant right now.
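    The sensitivity to initial conditions that the "butterfly wing" question alludes to can be made concrete with a toy example. The sketch below (purely illustrative, not a climate model) integrates the classic Lorenz-63 system twice from initial states differing by one part in 10^8; the trajectories diverge to macroscopic separation, which is exactly why weather forecasts have a horizon while statistics of the attractor (the "climate") remain stable.

```python
# Hypothetical demo of sensitive dependence on initial conditions in the
# Lorenz-63 system, using simple forward-Euler integration.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-8, 1.0, 20.0)          # perturbed by one part in 10^8
max_sep = 0.0
for _ in range(2500):                # 25 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(max_sep)   # the microscopic perturbation grows to macroscopic size
```

    Both runs stay on the same bounded attractor even as they decorrelate, which is the usual shorthand for why chaos limits weather prediction without making climate statistics meaningless.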

    Comment by Kevin — 30 Apr 2008 @ 2:50 PM

  8. Ray and Jim: Another key point is that we’re still in the early stages of learning how to really leverage mesh or cloud computing, e.g. using a very large number of networked computers of various speeds to work together on a single problem. There have been some notable projects with people “donating” CPU cycles over the ’net for protein folding and SETI research, so I would expect that a similar thing could be done with climate modeling, assuming the code could be written in a reasonably compartmentalized form.

    [Response: It's already been done. - gavin]

    Comment by Lou Grinzo — 30 Apr 2008 @ 2:58 PM

  9. Excellent topic. Using paleoclimate data is exactly what made me a skeptic.

    The change in forcing by CO2 is calculated by the equation ΔF = 6.3 ln(C/C0) (perhaps you use a different coefficient, but the principle remains). Using a baseline of, say, 200 ppm for C0, we can see that at 280 ppm we get an increase of 2.119 W/m^2 from the periods of glaciation.
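    The arithmetic above is easy to check directly. This sketch uses the commenter's coefficient of 6.3 and baseline of 200 ppm; other values (such as the widely used 5.35 W/m^2 coefficient) change the numbers but not the principle:

```python
import math

# Radiative forcing from CO2 using the logarithmic form quoted above:
# dF = coeff * ln(C / C0), with the commenter's coeff = 6.3 W/m^2
# and baseline C0 = 200 ppm.
def co2_forcing(c_ppm, c0_ppm=200.0, coeff=6.3):
    """Forcing in W/m^2 relative to the baseline concentration."""
    return coeff * math.log(c_ppm / c0_ppm)

print(co2_forcing(280.0))   # ~2.12 W/m^2, matching the figure above
```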

    Now if we look at the paleoclimatic record graphed in the Nature article we see that sometimes at 280 ppm the temperature rises and other times it falls regardless that the atmospheric concentration is 280 ppm.

    http://www.nature.com/nature/journal/v407/n6806/images/407859aa.2.jpg

    This suggests there is a driver much more powerful than CO2 at work, able to overcome the forcing at 280 ppm. Most likely the sun. Assuming the sun’s effects are typically cyclical, we are faced with a problem: as temperature rises during an interglacial, the sun appears less powerful because it is being assisted by the increased forcing from rising CO2 concentrations. However, as temperatures fall approaching the next period of glaciation, the sun’s output must decrease much more rapidly than it increased, to offset the upward temperature pressure that would be created by the forcing of CO2 at 280 ppm.

    But such a conclusion seems at odds with known solar behavior. So it suggests either that 1) the sun is much more variable than we are aware, or 2) if the sun’s output is cyclical, the effect of CO2 is negligible as temperatures rise and fall. It has been argued that the sun starts the temperature rise and then the increased forcing from increased CO2 acts as a feedback to raise temperatures further. However, that same story cannot hold true when temperatures are falling: if CO2 were the driver, the feedback would raise the temperatures, but instead we fall back into a period of glaciation.

    [Response: Ahem. If this is all that convinced you to be a skeptic, you can breathe a sigh of relief and let the scales drop from your eyes. CO2 is an important part of what is going on during the glacial/interglacial cycles, but it is not the only part. Can you say "Milankovic?" Read up on that in Wikipedia, then come back. Needless to say, Milankovic forcing is too slowly varying to be playing any significant role in the warming we are seeing now. --raypierre]

    Comment by Jim Steele — 30 Apr 2008 @ 3:21 PM

  10. Mark J. Fiore (6) — Using the GISP2 Central Greenland temperature data averaged over 30-year intervals, the temperature there went up 0.72 K from 8170 to 8140 years before present. This was during the recovery from the 8.2 kya event:

    http://en.wikipedia.org/wiki/8.2_kiloyear_event

    From the data there, and also the HadCRUTv3 global temperatures since 1850 CE, this is close to the warming experienced during the last thirty years. The temperature recovery at the end of the Younger Dryas was even more dramatic, at least in Central Greenland.

    That said, this current rate of warming is certainly hard on already over-stressed organisms.

    Comment by David B. Benson — 30 Apr 2008 @ 3:49 PM

  11. #4 – limitless computer power? I doubt it. My models (for basin thermal evolution) have benefited enormously from increased computing power, but what you can practically do is still limited heavily by available power and your idea of a “reasonable” run time. An 8-day run time limits the number of model iterations you can do in a year somewhat! With access to 1000 CPUs, we can now do Monte Carlo runs to look at sensitivities, but can realistically look at only a few variables and a limited number of samples. Increasing computer power would allow more runs, more variables, and ultimately finer meshes, but it’s hard to imagine having enough computer power in my lifetime, even assuming it doubles every year, to feel the model is not compromised by computing limitations. I suspect this is true in spades for GCMs.

    Comment by Phil Scadden — 30 Apr 2008 @ 4:17 PM

  12. David (10), Mark Fiore’s comment (6) refers to the rate of change of CO2 in the atmosphere, not the rate of temperature change.

    Tamino blogged on the rate here:
    http://tamino.wordpress.com/2007/03/06/fast-co2/

    “… the fastest rise or fall in the entire Vostok ice core record amounts to a rate of increase of 0.06 ppmv/year.”
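    For scale, here is a back-of-envelope comparison of that Vostok maximum against the modern rise quoted earlier in the thread (280 to 380 ppm over roughly 150 years); both inputs are figures from this thread, not new data:

```python
# Rough rate comparison using numbers quoted in this comment thread.
modern_rate = (380.0 - 280.0) / 150.0   # ~0.67 ppmv/yr over ~150 years
vostok_max_rate = 0.06                  # ppmv/yr, from the Tamino quote
print(modern_rate / vostok_max_rate)    # modern rise is roughly 11x faster
```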

    Comment by Hank Roberts — 30 Apr 2008 @ 4:23 PM

  13. It’s a “mess” all right! What is unknown far, far exceeds what IS known, or even just probable, given the extreme lack of perspective inherent in our chaotic history as a species.

    As the sun burns its hydrogen, what is happening to its total mass and therefore its gravitational attraction? If the earth is gradually increasing its mass through dust and meteorite deposition, what does this do to our orbital radius from the sun? What do radar-distance studies tell us vis-à-vis Venus, Mars, and Mercury inside the “snow line”?

    Gavin, please tell us what can be measured with some accuracy and then tell us what you think those measurements mean.

    Comment by Vern Johnson — 30 Apr 2008 @ 4:42 PM

  14. http://www.telegraph.co.uk/earth/main.jhtml?xml=/earth/2008/04/30/eaclimate130.xml#form

    Appalling journalism, quite frankly. Typical right-wing media. A decade-long lull is not a consensus view, only a preliminary one.

    Arctic sea ice melt is looking slightly worrying: http://nsidc.org/arcticseaicenews/

    Comment by pete best — 30 Apr 2008 @ 4:42 PM

  15. Ray and Jim, I think you have to set Moore’s Law side by side with the computational demand increase for every stepwise increase in model resolution. The European Centre weather model uses a grid scale about 25 km per side, horizontally, and it divides the atmosphere into 91 layers:
    http://www.ecmwf.int/products/forecasts/guide/The_numerical_formulation.html

    What climate modelers do is try and make their code as efficient as possible while preserving the basic physical processes – I’m not sure what law that follows. I believe that the current view is that it is better to spend any extra computing power on running more ensembles of models, and possibly on improving simulation of specific physical processes, rather than on increasing the grid resolution of current models.
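    One way to see why extra resolution is so expensive: a common back-of-envelope assumption is that halving the horizontal grid spacing multiplies cost by roughly eight (a factor of two in each horizontal direction, plus a halved timestep forced by the CFL stability condition). The sketch below just encodes that assumption; real model budgets also depend on vertical levels and physics packages:

```python
# Hypothetical cost-scaling sketch: cost ~ refinement^3 under the
# assumptions stated above (2 horizontal dimensions + CFL-limited
# timestep); vertical resolution and constants are ignored.
def relative_cost(refinement):
    """Cost multiplier when grid spacing shrinks by `refinement`."""
    return refinement ** 3

for r in (2, 4, 10):
    print(f"{r}x finer grid -> ~{relative_cost(r)}x the compute")
```

    On this rough accounting, a tenfold refinement costs a thousandfold in compute, which is why spending the same cycles on ensembles can look like the better bargain.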

    Lorenz (1993): “If the models could ever include so many variables that individual thunderstorms and other smaller-scale structures would be properly represented, and parameterization would no longer be needed, it would be totally unreasonable to expect that errors in the details of these structures would require two days to double. Individual thunderstorms only last a few hours… since a thunderstorm can in reality double its severity in less than one hour, we should expect that the difference between two rather similar thunderstorms would double just as rapidly…

    The apparent drop in returns with continued increases in resolution has led some forecasters to propose that the anticipated additional computer power in the middle nineties can be more advantageously used to carry out some Monte Carlo procedure.

    With all these obstacles around, it may surprise us to learn that within our chaotic atmosphere there are certain weather elements at a few locations that can be rather accurately predicted not just two weeks in advance but two months or even two years ahead. The most spectacularly predictable of these are the high winds in equatorial regions…

    I was fortunate enough to be present at the meeting in 1960 when Reed announced his findings, and I could see members of the audience shaking their heads as he maintained that at these heights the equatorial winds would blow continually from the east for about a year, and then from the west for a year, and then from the east again for another year… the subsequent years have fully confirmed his claims.”

    Right now it is hard to say what will happen to the North Atlantic thermohaline circulation, which now appears to vary by large amounts on a monthly to yearly basis (not the Gulf Stream, but the other components). However, the multi-model ensemble approach is being applied here as well:

    Collins et. al 2006, Interannual to Decadal Climate Predictability in the North Atlantic: A Multimodel-Ensemble Study, JC

    Ensemble experiments are performed with five coupled atmosphere–ocean models to investigate the potential for initial-value climate forecasts on interannual to decadal time scales. Experiments are started from similar model-generated initial states, and common diagnostics of predictability are used…

    If one wanted to do that with paleoclimate, one of the well-studied time regions is the Younger Dryas, the sudden cold freeze about 12,900 – 11,500 years ago, which is thought to have happened rapidly, within a decade, and an associated cooling of the North Atlantic – though I don’t know if it really has been tied down to a halt in thermohaline circulation or not – and as usual, this is a topic that many people have tackled:

    Tarasov & Peltier (2005), Arctic freshwater forcing of the Younger Dryas cold reversal, Nature

    For those who are interested in doing a little research, you can look at all the papers that have cited that paper, courtesy of Google Scholar.

    Comment by Ike Solem — 30 Apr 2008 @ 5:10 PM

  16. @ #11. Back in the 70s computer scientists realised that the increasing speed of hardware made the design of algorithms, code tuning, etc, *more* important, not less so. A famous example of paying attention to these matters reduced the run time of a simulation from a year to a day.

    It would be great to get some of the best CS people helping out with climate modelling! (Perhaps some are already, but they’re keeping their heads down.)

    Comment by Greg van Paassen — 30 Apr 2008 @ 5:35 PM

  17. It’s sometimes hard to go back and get all of the previously recorded information sorted, cleaned, verified, put into the format you want, and then used in a synthesis. My limited experience with meta-analyses leads me to recommend asking the researchers who do the primary data collection to try to use the same measures, formats, etc. in their future reporting so that useful analyses can then be easily done. I’ll give one example with which I am familiar: in the Province of British Columbia, Canada, part of the requirements to obtain permits to sample fish was to put results into a Provincial database on the web. The standardization of methods (gear used, linear and area estimates of habitat surveyed, etc.) and details to be recorded (GPS coordinates, species classification) was a benefit in mapping distributions of species. Because negative results were also reported, it removed some biases that would show up if only published work was surveyed for the mapping.
    Perhaps you can request similar standardizations in paleoclimate?

    Comment by Steve L — 30 Apr 2008 @ 6:38 PM

  18. re: Moore’s Law and such
    OR: Amdahl’s Law is becoming much more relevant
    OR: don’t expect magic for every kind of simulation

    0) Moore said nothing (in 1965) about specific technology (Bipolar, NMOS, PMOS, CMOS, BiCMOS); in fact, it wasn’t until ~1982/1983 that Intel shipped its first CMOS DRAMs and micros (80286).

    1) Moore’s law *really* was the 2X # transistors/chip every 2 years, or equivalent, gotten by shrinking
    a) Shrinking transistors
    b) Shrinking wires
    c) Adding more layers [which also costs more mask steps, i.e., more cost]

    2) In “the good old days”, this tended to give you more performance as well [not necessarily 2X, as there were memory delays as well], but transistor-switching speed was the limitation, and transistors switch ~ in proportion to size, so we not only got 2X transistors, but got a speed boost “for free”.

    In the really old days, one could do an optical shrink from one generation to the next (or maybe at least to a half-generation) and the same design would work, just use smaller chips and run faster, and the same software would run, just faster.

    3) But for various reasons, wires don’t shrink as fast as the transistors, and a lot of the time is “in the wires”, and people have already used most of the clever well-known architectural tricks for making a single CPU go faster … which is why you may have noticed:

    a) the end of massive touting of big, easy GHz improvements … because those are over (at least for CMOS).
    b) there are lots of 2-core or 4-core chips around, because it is way easier just to replicate simpler CPU cores.

    Unfortunately, while 2 CPU cores might give you ~2X (or somewhat less) throughput of independent tasks, they don’t automagically make any given program run 2X faster.

    4) Some algorithms parallelize well, some don’t, but Amdahl’s Law says that the unparallelizable part matters. For example, if one CPU runs a program in X time, and:
    90% is parallelizable onto multiple CPUs
    10% is serial code

    Then, no matter how many CPUs you have, the best you’d ever do is approach a 10X speedup. In practice, most codes get faster with each additional CPU, but each CPU adds less performance, and after a while adding a CPU actually makes the program run slower. Fortunately, some important classes of simulations can use *many* CPUs, unlike, say, Microsoft Word.
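    The 90%/10% example above follows directly from the usual statement of Amdahl's law; this sketch just evaluates it:

```python
# Amdahl's law: with serial fraction s, the speedup on n CPUs is
# 1 / (s + (1 - s)/n), which approaches 1/s as n grows.
def amdahl_speedup(serial_fraction, n_cpus):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

for n in (2, 10, 100, 10000):
    print(n, amdahl_speedup(0.10, n))
# With 10% serial code the speedup creeps toward, but never reaches, 10x.
```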

    5) Bandwidth, latency, and programmability matter as well: some algorithms work OK on distributed PCs, others really want lots of CPUs connected with low-latency, high-bandwidth shared-memory. Even with multi-core CPUs, some algorithms bottleneck on access to external memory, in any of several different ways.

    Good parallel programming is not easy, and the free ride we got for many years on single-CPU performance is over until some post-CMOS technology happens. We will still get more transistors/die for a while (someone mentioned ITRS). However, at least multi-core chips (including nVidia chips being used for parallel programming) are so widespread that maybe more students will learn.

    Lately, a huge amount of effort has been going into power reduction, not just for mobile devices, but for server chips, albeit for different reasons.

    re: #16 there are some pretty good CS people out here already, i.e., I suspect the low-hanging fruit has been pretty well picked over.

    Comment by John Mashey — 30 Apr 2008 @ 6:53 PM

  19. Hank Roberts (12) — Thank you for the clarification.

    To a man with a hammer, every problem looks like …

    Greg van Paassen (16) — Some computer scientists are interested in designing machines capable of considerable parallelism. Others are then interested in the question of how one ought to program in order to best utilize such machines. This gives rise to better programming languages for parallel computation, and some of these features can even be stuffed into the Fortran straitjacket. :-)

    Anyway, the computer science assistance with climate modeling is largely under the surface, so to speak.

    Comment by David B. Benson — 30 Apr 2008 @ 7:10 PM

  20. #9 Jim Steele, CO2 is a result of solar changes that increase ocean temperatures, not the driver of changes. Raypierre mentions the Milankovitch cycles, but their periodicity is too long to explain the current warming, or things like Dansgaard-Oeschger events that Svensmark connects to cosmic rays. RC writers will allow for some solar warming but then attribute the rest to increased CO2 feedback. But as you suggest, that makes no sense, because how do temperatures decrease back to the low baseline with that CO2 now in the atmosphere, unless something else is the driver, reducing temperatures and allowing CO2 to dissolve into the oceans?

    Raypierre failed to mention short-term solar variability. Sunspot activity is way lower than the consensus predicted, very similar to a Maunder Minimum, and several researchers like Khabibullo Abdussamatov have been saying the sun will undergo decreased activity and create a cooling trend. I too have predicted lower solar output and a cooling ocean. Now there is a paper to be published in Nature telling us that, due to ocean currents, we can expect a ten-year cooling trend (hardly what the GCMs have predicted), and once again evidence that natural variations overwhelm CO2. Here is a snip and a link.

    “Those natural climate variations could be stronger than the global-warming trend over the next 10-year period,” Wood said in an interview.

    http://www.bloomberg.com/apps/news?pid=20601124&sid=aU.evtnk6DPo

    According to earlier discussions with Gavin on the Galactic Glitch, he says that the ocean will respond to the atmospheric forcing to achieve equilibrium with radiative input, so if this cooling trend is further supported, by his own logic he would have to admit that there is less input.

    [Response: Not really. The Nature study is talking about changes associated with ocean circulation even while CO2, and the global imbalance, and global temperature, is increasing. It is exactly what we've been trying to explain. - gavin]

    Comment by gusbobb — 30 Apr 2008 @ 7:39 PM

  21. The reason those other models are still useful is that the GCMs are not complete.

    I don’t think that’s really fair; it’s not the only reason. One of the main reasons for using simplified models is that they are cheaper, and so can be run for longer.

    If I’m correct, what counts as a really long control run for a typical GCM is a few thousand years, while the standard centennial-timescale experiments for the IPCC reports each run from 1860 to 2100, i.e. 240 years.

    I think it’s correct to say that the GCMs are still too expensive, because they’ve got so much in them, to use them for glacial cycle experiments. You just can’t run a GCM for a hundred thousand years of model time.

    This brings me to the discussion about computer power. My impression was that, with even desktop PCs now having to go the route of parallel processing to improve performance, the modellers now face the problem of re-writing their code to get it to work across not just dozens of processors, but hundreds, (or thousands?), if they want to see the increases in performance that they’ve achieved in the past. This will be hard because of the problem of exchanging data between the processors, and reducing that data flow to a minimum will be a key challenge.
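    The data-exchange problem can be illustrated with a minimal (purely hypothetical, serial) stand-in for domain decomposition: split a 1-D field among "ranks", and note that each explicit timestep needs only one-cell "halos" from the neighbouring ranks. It is exactly that halo traffic, growing relative to the useful work as subdomains shrink, that has to be kept small:

```python
# Serial sketch of halo exchange in a 1-D domain decomposition.
# Each "rank" owns a subdomain; before a smoothing/diffusion step it
# "receives" one boundary cell from each neighbour (periodic globally).
def step_with_halos(subdomains):
    """One explicit 0.25/0.5/0.25 smoothing step across all ranks."""
    n = len(subdomains)
    new = []
    for i, sub in enumerate(subdomains):
        left_halo = subdomains[(i - 1) % n][-1]    # from left neighbour
        right_halo = subdomains[(i + 1) % n][0]    # from right neighbour
        padded = [left_halo] + sub + [right_halo]
        new.append([0.25 * padded[j - 1] + 0.5 * padded[j] + 0.25 * padded[j + 1]
                    for j in range(1, len(padded) - 1)])
    return new

field = [[0.0, 0.0], [1.0, 1.0], [0.0, 0.0]]       # 6 cells on 3 "ranks"
field = step_with_halos(field)
print(field)   # the bump spreads one cell into each neighbouring rank
```

    In a real model the halo copies become network messages (e.g. MPI), and the ratio of halo cells (communication) to interior cells (computation) worsens as each processor's subdomain shrinks, which is why scaling to thousands of processors is hard.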

    [Response: Tim, you make a good point and I should have expanded that line. In relation to using massively parallel computers, the scaling is ok for relatively high resolution models and for intelligent domain decomposition. It does take work to get the models configured appropriately though. But it always pays to remember that multiple ensembles (initial conditions and perturbed physics) are likely to play an ever larger role in the modelling, and these scale perfectly on MPP machines! - gavin]

    Comment by Timothy — 30 Apr 2008 @ 7:51 PM

  22. Gavin said “The Nature study is talking about changes associated with ocean circulation even while CO2, and the global imbalance, and global temperature, is increasing. It is exactly what we’ve been trying to explain.”

    I guess I need to fix my glasses, because nowhere did I read that “global temperature, is increasing”. Could you provide a quote?

    This is what I read:
    “Average temperatures in areas such as California and France may drop over the next 10 years, influenced by colder flows in the North Atlantic, said a report today by the institution based in Kiel, Germany. Temperatures worldwide may stabilize in the period.”

    And when you say that is “exactly what we’ve been trying to explain”

    can you link me to the quote where you or other RC writers predicted the next 10 years temperatures will stabilize worldwide and California and France will get colder?

    And the article contradicts predictions made by the NASA folks in GISS Surface Temperature Analysis
    Global Temperature Trends: 2007 Summation

    “Based on these considerations, it is unlikely that 2008 will be a year with truly exceptional global mean temperature. These considerations also suggest that, barring the unlikely event of a large volcanic eruption, a record global temperature clearly exceeding that of 2005 can be expected within the next 2-3 years.”

    http://data.giss.nasa.gov/gistemp/2007/

    [Response: The global temperature in the models used in this study are increasing (see figure 4) - the changes are initially less in their SST-restored run, but they are still increasing and after a decade or two are at the same level that the standard run has. These are however 5 year trends, and so don't include variations related to ENSO and the like. My guess is that the next record breaking year will coincide with the next moderate-to-high El Nino, a position that is not contradicted by these new results - which focus on the predictability of the N. Atlantic, not the Pacific. Since I have never presented any initialised climate forecasts, you will find that I have never predicted any short term temperature trends at the regional scale. This paper (and a previous study by Smith et al earlier this year) are some of the first people to do this and so it's not yet clear how good it is as a method. I would recommend suspending judgement for a while. - gavin]

    Comment by gusbobb — 30 Apr 2008 @ 9:17 PM

  23. I have suspended judgment and only speculated that “if the trend is supported”. I figure in 5-10 years we will be able to decide who had better predictions.

    Gavin said “My guess is that the next record breaking year will coincide with the next moderate-to-high El Nino, a position that is not contradicted by these new results ”

    But their expectation of stabilized and decreasing temperatures doesn’t exactly support the prediction of new global highs in the near future, El Nino or not. Are the equations and data for calculating global temperatures available to the public? I understand they are not.

    In an earlier conversation you adamantly said heat does not leave the ocean. So where does the heat come from so that El Nino will cause a record-breaking year?

    Comment by gusbobb — 30 Apr 2008 @ 9:42 PM

  24. Gavin,

    Given all the media attention the new Nature article has been getting (see tomorrow’s NYT, for example, or National Geographic, the CSM, etc.), it may merit a full RC post. It’s also interesting to compare this prediction with that of the Hadley Centre last year, which also suggested a stagnation of temperatures (till 2009-2010 in their case, if I recall).

    [Response: We'll try and have something up at the weekend. The NYT story is ok, but the Telegraph is appalling. - gavin]

    Comment by Zeke Hausfather — 30 Apr 2008 @ 9:47 PM

  25. The number of enthusiastic posts about Moore’s law gives me some hints as to the professions of the other lay readers of the site.

    Back to paleoclimate: is it possible to pick out large volcanic events, of the Krakatoa or Laki scale, in the ice core CO2 data? I’m having a look at some data from Taylor Dome, and it looks like the temporal resolution is only about 100 years or so, which would make it tough.

    I realise that currently, anthropogenic sources far outweigh volcanic; just wondering how much CO2 is released by the absolute biggest events, and if that can then be seen in the record.

    [Response: Volcanos affect climate almost exclusively through the aerosols they pump into the atmosphere, which reflect sunlight. The amount of CO2 pumped out by even a big eruption is pretty trivial, and would hardly make a blip in the observed CO2 record. To put things in perspective, the anthropogenic source of CO2 at present is something like 20 times the estimate of the long term mean flux of CO2 out of the interior of the earth by all volcanos and all undersea vents. --raypierre]

    Comment by tharanga — 30 Apr 2008 @ 11:01 PM

  26. I put in about two decades’ worth of tuning mostly engineering code for modern computing applications. We have to recognize that modern computing cores will run at much higher throughput when low-level parallelism, what computer scientists would call Instruction Level Parallelism, is exposed to the hardware/software combination. Even with all data local, the rate of operation can be as much as an order of magnitude higher for well-designed single-thread code than for poorly designed code. During the past few years CPU clock rates have stagnated, mainly because of power consumption and heat dissipation issues, but peak per-core computation rates of well-designed code have continued to increase. At the same time the number of cores per chip has been rising, with six to eight coming within the next couple of years. Unfortunately, off-chip data bandwidth is not keeping up with on-chip computation rates. This means that unless your program makes very efficient use of the cache memory hierarchy, even without Amdahl’s law the parallel efficiency of the chip can be a lot less than anticipated. This trend towards an increasing ratio of chip capability to off-chip bandwidth is expected to continue.

    Personally, I’m not as sanguine about the prospects for as rapid an increase beyond the next couple of chip shrinks. We are already at 45 nm, and two more shrinks bring us to 22 nm, at which point both quantum effects and part-to-part variation (the number of atoms per feature gets small enough) start getting pretty severe. I wouldn’t be at all surprised if after another five years or so progress slows (but doesn’t stop).

    Comment by Thomas — 30 Apr 2008 @ 11:08 PM

  27. ITRS is practically a conspiracy to make sure Moore’s law keeps happening. The semiconductor companies have parceled out research tasks so that each company takes a particular technology to research, and they have signed cross-licensing agreements. If any technology makes a breakthrough, the whole world makes the breakthrough. Some semiconductors other than silicon are up to 1000 times faster than silicon. There were about 20 semiconductor materials under consideration last time I checked.

    Raymond Kurzweil says some time in the near future [say 2020 or 2025] computer IQ will exceed human IQ. See “The Age of Spiritual Machines.” Once that happens, computer IQ will fly way past human IQ and soon exceed the thinking power of all of the people. We use computers to design computers. A better computer can better design its own successor. A computer smarter than a human can design a computer 10 times smarter than a human or itself. Keep on iterating.
    So don’t count computers out as far as better climate predictions go. Future computers will be programmed by computers. If Kurzweil is right, computers will take on all of the tasks presently done by human climatologists. That includes the synthesis of fragmentary data task. The new kind of scientist who is at ease in dealing with the disparate sources of paleo-data and aware of the problems, and yet conscious of what is needed (and why) by modellers could be a machine. It would be a 2025 machine, not a 2008 machine. The question is, will the 2025 computer be smart enough to figure out how to undo the climate screw-up that will have happened by then? Will the humans be savable or will only computers survive? The computers will survive the extinction of humans if they become conscious/self aware because computers don’t need food or breathable air. Raymond Kurzweil is one of those super talented software people, not a science fiction writer, so I don’t count his ideas out.

    Comment by Edward Greisch — 30 Apr 2008 @ 11:17 PM

  28. Gusbob,

    The ocean contains far more stored heat than the atmosphere does, and is also continually exchanging heat with the atmosphere – unless covered by sea ice.

    It’s very hard to predict what the net ocean-atmosphere heat exchange will be over any given year. Much depends on the strength of the wind-driven upwelling, which brings cold deep water to the surface.

    At this point, model forecasts of ocean circulation at the decadal scale should probably be viewed cautiously, since we cannot even predict an El Nino more than 9 months in advance or so.

    Predictions of the Pacific Decadal Oscillation are even less reliable – claims that it has entered its “cool phase” are basically unsupported. No mechanism for the PDO is known, and whether it has regular phases that will continue into the future is also unknown. El Nino is far better understood.

    I would wait for the paper to come out before leaping to promote it.

    Comment by Ike Solem — 1 May 2008 @ 2:28 AM

  29. Holland’s old (1978) figure for CO2 from volcanism and metamorphism was 330 million tons per year. According to T.M. Gehrels at USGS, volcanoes contribute about 200 MT of that. Last year, human technology added about 30 billion tons of CO2 to the air. So human output dwarfs natural by a factor of 91 and volcanoes by a factor of 150.
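
    The arithmetic here is easy to check. A quick sketch using the figures as quoted above (tonnes of CO2 per year; the variable names are mine):

```python
# CO2 flux ratios, using the figures quoted in the comment above.
natural_total = 330e6   # Holland (1978): volcanism + metamorphism, t/yr
volcanoes = 200e6       # Gehrels/USGS volcanic share, t/yr
human = 30e9            # anthropogenic emissions, t/yr

print(round(human / natural_total))   # 91
print(round(human / volcanoes))       # 150
```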

    Comment by Barton Paul Levenson — 1 May 2008 @ 6:34 AM

  30. RE: #6 Mountain Pine Beetle Outbreak

    Global warming has little to do with the massive mountain pine beetle outbreak in BC. A cold snap of -35 to -40 deg C lasting at least 2-3 days is required to kill larvae and eggs under the thin bark of a lodgepole pine in winter, and -25 deg C in early fall. Cold snaps of this magnitude don’t happen all that often.

    The other natural control of pine beetle infestations is wildfire started by lightning strikes. Lodgepole pine is the only common pine that has serotinous, or heat-sensitive, cones, which can hang on the branches for long periods of time. The scales on these cones only open and drop their seeds when exposed to the intense heat of fire, which melts the resin. This occurs just before the cone is consumed by fire. After the forest burns down, the seeds germinate in soil enriched by the ash of the burned trees, and the cycle starts over again with a period of about 100-150 years. This is also why there is a monoculture of LP that goes on forever and ever.

    Pine forests in northwestern NA just don’t last that long compared to the coastal rainforest, where there are giant Sitka spruce, Douglas fir, and western red cedar. Old-growth WRC is more valuable than platinum, and the BC forest service is in a constant, ongoing, cat-and-mouse struggle with tree rustlers.

    Vigorous suppression of forest fires beginning with the Smokey the Bear era has led to large mature LP forests and lots of fuel on the forest floor. This is the reason the Yellowstone pine forest burned down in 1988. I recall the cover of Newsweek asking, “Is global warming the cause?”, or something like that.

    There is one other curious weather pattern that has contributed to the current outbreak. Starting in the late ’90s and early ’00s, late spring and early summer were unusually cool and rainy. This led to a low incidence of forest fires. Many fires which start in remote areas with beetle-infested trees are often left to burn out because there is no easy access for firefighting crews and equipment.

    Mountain pine beetles are very picky critters and only mass-attack mature LP pines that are about 40 cm or larger in diameter at breast height. Prof. John Borden (Bio. Sci., SFU) and I worked for about 25 years trying to find a hypothetical attractant host chemical emitted only by mature trees. We never found the “magic bullet”.

    For all the interesting info on NA’s No. 1 tree-killing beetle, GO: http://www.for.gov.bc.ca/hfp/mountain_pine_beetle

    And checkout this awesome video that shows the growth of the infestation in the BC LP forest, GO:
    http://cfs.nrcan.gc.ca/images/3252.

    Note how the cold snap in 1985 wiped out the beetles in the Chilcotin.

    Comment by Harold Pierce Jr — 1 May 2008 @ 7:09 AM

  31. ATTN: ALL

    When posting comments, please limit paragraphs to less than ca 10 lines. Otherwise the text is too hard to read especially for us old guys with vision problems.

    Comment by Harold Pierce Jr — 1 May 2008 @ 7:24 AM

  32. John Mashey–I should have been more specific. True, Moore’s original formulation was not for CMOS. However, most of the period over which the law has been valid relied on scaling of CMOS to achieve the increased density and speed. I agree that the age of CMOS scaling is coming to an end–and has been for the past 3 or 4 generations. I think we’ll get to 22 nm with great difficulty, but beyond that I think we’ll need to rely on new technologies–to wit:

    http://www.nature.com/nature/journal/v453/n7191/full/453042a.html

    The thing that is interesting here is not just that the technology could provide a large boost in memory density, but that it could provide an entirely new type of circuit that could “learn”. Coupled with genetic algorithms and other programming advances, this could very well revolutionize programming for complex models such as GCM.

    Now, if we can only fly it in a satellite, I’ve got job security until I retire at age 80.

    Comment by Ray Ladbury — 1 May 2008 @ 8:01 AM

  33. Re: #30

    “Lodgepole pine is the only common pine that has serrotinous or heat sensitive cones, which can hang on the branches for long periods of time.”

    Just to set things straight, jack pine (Pinus banksiana) is a very common pine that has serotinous cones. Jack pine is also under threat from the spread of the mountain pine beetle.

    Comment by Scott Reid — 1 May 2008 @ 8:30 AM

  34. Re #27: It is not at all obvious that the world can afford much further development of ITC. It has been clear for nearly a decade (e.g. see Paul Strassmann’s article here http://www.strassmann.com/pubs/ft/ft_guru.html ) that the corporate world would hit the critical point of being unable to afford the further expense of scrap & build cycles. That critical point is now here in terms of cycle confluence, and it is no coincidence that world markets are edging towards fundamental restructuring.

    The IT world, in addition to confronting an increasingly cash-strapped client base, also has to come to terms with its contributions to climate change. Further, it is worth doing some supply chain analyses for the semiconductor industry and watching out for that wild card: sea level rise. The conditions are right for a fundamental revolution to sweep through the ITC industries, driven by financial evaporation.

    Comment by mg — 1 May 2008 @ 9:10 AM

  35. Ike Solem Says:
    At this point, model forecasts of ocean circulation at the decadal scale should probably be viewed cautiously, since we cannot even predict an El Nino more than 9 months in advance or so.

    Ike, I agree with you. I think the ocean is a wild card as regards heat distribution and climate predictions. I have advocated that since my first post.

    I was not predicting an El Nino or its effects. It was Gavin who predicted the next record high will come in a few years, with an El Nino. My question is: where does the heat come from to generate the spike in temperatures during an El Nino?

    I also find it amusing that cool spells due to ocean circulation and oscillations are not seen as “climate change” as the reports on the new cooling predictions suggest, but as road bumps to inevitable warming. The distinctions seem arbitrary and biased. If models of past climates were done correctly then these oscillations were accounted for. Attribution for increased temperatures from warm phases of various oscillations was supposedly already accounted for. The IPCC predicted a consensus 0.3 degree rise in temperature in the next decade. Now the predicted decade of cooling is reported as an anomaly that was not part of the models?

    And what drives these oscillations? Before it was suggested that AGW was increasing the probabilities of warm phases. Then by prior logic a cold phase must be affected by less radiative input. I eagerly await new papers.

    Comment by gusbobb — 1 May 2008 @ 9:21 AM

  36. Vern (13)
    The atomic mass of hydrogen is 1.00794 and helium is 4.002602 – so in fusing 4H -> 1He you only convert 0.029 atomic mass units into energy, which is only 0.7%. During its 10-billion-year lifetime the sun will fuse of order 10% of its hydrogen, so over any timescale of interest to climate science the mass change is negligible. The sun is slowly increasing in luminosity, but this is easy to take into account.
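
    The mass-defect numbers can be verified directly from the quoted atomic masses; a minimal sketch:

```python
# Mass lost when four hydrogen atoms fuse into one helium atom,
# using the atomic masses quoted in the comment above.
m_H = 1.00794     # atomic mass units
m_He = 4.002602

defect = 4 * m_H - m_He          # mass converted to energy
fraction = defect / (4 * m_H)    # fraction of the fuel mass

print(round(defect, 3))             # 0.029 amu
print(round(100 * fraction, 1))     # 0.7 (percent)
```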

    BBP

    Comment by BBP — 1 May 2008 @ 9:51 AM

  37. gusbobb:

    My question is where
    does the heat come from to generate the
    spike in temperatures during the El Nino.

    The global mean temperatures are computed by areal averaging of the weather station data on land, and sea surface temperature data observed by satellite. The temperature variation of El Nino doesn’t require heat as much as it requires a very large area in the Pacific to be a little bit warmer at the surface.
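
    To illustrate the point (this is only a toy cosine-latitude weighting, not GIStemp’s actual gridding scheme), a warm anomaly over a broad low-latitude region moves an area-weighted mean much more than the same anomaly at high latitude:

```python
import math

def global_mean(anomalies_by_lat):
    """Area-weighted mean of per-latitude-band temperature anomalies.

    Each (latitude_deg, anomaly) band of fixed angular width is
    weighted by cos(latitude), proportional to its surface area.
    Toy scheme for illustration only.
    """
    w_sum = a_sum = 0.0
    for lat, anom in anomalies_by_lat:
        w = math.cos(math.radians(lat))
        w_sum += w
        a_sum += w * anom
    return a_sum / w_sum

# The same 0.5 K anomaly counts for twice as much at the equator
# as at 60 degrees latitude, because the band covers more area:
print(global_mean([(0, 0.5), (60, 0.0)]))
print(global_mean([(0, 0.0), (60, 0.5)]))
```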

    The software for computing global temperature (one version of it) is freely downloadable from the GIStemp website.

    About GCMs including El Nino etc., no they don’t. But they do produce their own El-Nino -like natural variations (IOW, physical realism!) which however are not synchronized with the real world. IIUC, the computations described in the article aim at precisely achieving this sync by setting realistic initial conditions.

    Comment by Martin Vermeer — 1 May 2008 @ 10:38 AM

  38. Re: #31 (Harold Pierce)

    I too would urge all commenters to help us old guys whose vision isn’t what it used to be. Very long paragraphs can make your comment harder to read.

    Comment by tamino — 1 May 2008 @ 10:53 AM

  39. Gusbob, the classification of oceanic variability as weather rather than climate is not at all arbitrary. Oceanic variability typically persists on timescales of less than a decade. Since the solar cycle is 11 years, anything less than this is weather. And you seem to be missing the point entirely. OK, so let’s say there is a flux of cold water to the surface to suck up heat for a decade. All that means is that you won’t see the type of warming we’ve had for the last 20 years for a decade. At the end of that time, the CO2 is still there and warming kicks off again with a vengeance. What is so hard to understand about this? Climate consists of long term trends, neglecting short-term variability (less than about a decade).

    Comment by Ray Ladbury — 1 May 2008 @ 11:05 AM

  40. RE #20 & Gavin’s “The Nature study is talking about changes associated with ocean circulation even while CO2, and the global imbalance, and global temperature, is increasing.”

    I just read a news article about this, and the impression I got was that the team was using a new approach (something like between the climate & weather timescales???), and that the ocean circulation (a slow down ??) and some natural variability would work in sync to make it a bit cooler in the N. Atlantic region, while the natural (down turn) variability working against the GW would keep it about constant in the tropical zones.

    Then after this next decade GW might kick in with even more ferocity than ever.

    I’m just paraphrasing what might be a lot of mistakes via a news article. However, I remember in high school physics being in charge of the ripple tank, and how upswing waves can create bigger waves when they converge, while crossing upswing and downswing waves cancel each other out.

    And what worries me is that the denialists will spring forth from the woodwork to denounce GW, and this may impact public perception, and our GHG emissions will continue to soar, and we’ll really be in hot water after that decade of slight cooling (or stable average temps) and beyond into ??? hysteresis.

    From what I understand our GHG emissions are already higher than the worst-case IPCC scenario.

    And there are other considerations, aside from the warming – like the oceans becoming acidic from CO2, and the ocean conveyor slowdown reducing the upwelling of nourishment for sea life… etc.

    Comment by Lynn Vincentnathan — 1 May 2008 @ 11:09 AM

  41. Re #35 gusbobb

    You ask with respect to El Nino’s “where does the heat come from to generate the spike in temperatures during the El Nino.”

    It comes from the sun. Within a world completely in energy balance with a stable equilibrium temperature, there are El Ninos and La Ninas. These correspond to periods in which heat energy is redistributed. In an El Nino event warm surface water (warmed by the sun!) spreads across the central tropical Pacific, and cold water upwelling off the west coast of S. America is suppressed. So the Earth’s surface temperature warms for a short spell until the circulation patterns return to their “normal” state.

    Presumably all of these ocean circulations act to take solar energy into the oceans and if the Earth is in positive heat imbalance (as it seems to be under the influence of an enhanced greenhouse effect), will contribute to ocean warming, even if occasionally (due to fluctuations such as La Nina events), the ocean surface temperature is overall rather cooler for a short while.

    So these truly are fluctuations or oscillations that act to modulate either the equilibrium surface temperature (if the Earth is in “heat balance”) or the transition to a new equilibrium temperature if the Earth is in imbalance with respect to temperature forcings (as in the case of an enhanced greenhouse effect).

    That’s all very straightforward isn’t it? It seems rather likely that in a world warming under the influence of enhanced greenhouse forcing, that many of the “record years” will coincide with positive modulations from the various internal “oscillations” (e.g. El Nino’s in this case) that transiently redistribute heat to or from the Earth’s surface.

    So I don’t see why cool spells due to ocean circulation and oscillations should be seen as “climate change” (after all they’re small and short lived), and if these are a consequence of generally “heat-neutral” and short lived redistributions of ocean surface warmth, it’s not surprising if these are considered to be “bumps” on the year on year variation of the surface temperature anomaly as it responds to enhanced greenhouse forcing.

    Why should we expect otherwise? That’s how things have been during the warming of the last 30-odd years. Why should they be any different now? And many of the “bumps” can be accounted for in the past record (El Nino’s, La Nina’s, volcanic contributions, solar cycle; i.e. all of the factors that result in transient modulations of the long term trend). What’s “arbitrary” or “biased” about that??

    Comment by Chris — 1 May 2008 @ 11:23 AM

  42. I’m probably over my head here, but I’d like to point out that all of these advances in computers and writing faster running code won’t solve a basic problem with modeling: the assumption that non-included variables will stay linear and can be safely ignored. Anything that can be measured might be one of these suddenly exponential variables. Among the thousands, millions, or more ignored variables there are probably a few that can unexpectedly go exponential and ruin the model when compared to the real world. The effect on our climate of being struck by an asteroid is an obvious example. But so too might be some undiscovered chemical reaction that only takes place at the extreme pressures found at the bottom of the deepest oceans. Such a variable could quantify anything imaginable (or unimaginable!) that scientists are not currently looking at.

    Comment by catman306 — 1 May 2008 @ 12:33 PM

  43. RE #27 Edward Greisch “Raymond Kurzweil is one of those super talented software people, not a science fiction writer, so I don’t count his ideas out.”

    [edit] Compare actual technical progress over the past half century with what techno-optimists were predicting in the 1950s, in areas such as space travel, medicine, nuclear power, transportation, and above all, artificial intelligence.

    [Response: This is getting way off topic. Bring it back to climate models or drop it. - gavin]

    Comment by Nick Gotts — 1 May 2008 @ 1:00 PM

  44. > 31, 38, “old guys with vision problems”

    I’m another, same plea for line breaks, same reason.

    aTdHvAaNnKcSe (THANKS in advance)

    Contributors writing inline replies too, please?
    Return — twice for a blank line — between thoughts.

    Comment by Hank Roberts — 1 May 2008 @ 1:35 PM

  45. Re 35: “I also find it amusing that cool spells due to ocean circulation and oscillations are not seen as “climate change” as the reports on the new cooling predictions suggest but road bumps to inevitable warming. The distinctions seem arbitrary and biased.”

    This is just a standard strawman claim. It is the deniers who consistently choose the “bump” year of 1998 as their reference in all kinds of calculations. I have not seen one climate scientist claim that year as proof of global warming.

    Any record of 20th century global temperature measurements shows the warming trend as well as the short term noise, both up and down.

    Comment by Pekka Kostamo — 1 May 2008 @ 1:42 PM

  46. From the first post:

    > the tools and techniques required for doing good
    > synthesis work are not the same as those for making
    > measurements or for developing models. It could in
    > fact be described as a new kind of science (though
    > in essence it is not new at all) requiring, perhaps,
    > a new kind of scientist. One who is at ease in
    > dealing with the disparate sources of paleo-data
    > and aware of the problems, and yet conscious of
    > what is needed (and why) by modellers. Or
    > additionally modellers who understand what the
    > proxy data depends on and who can build that into
    > the models themselves

    This seems like a chance to create a new academic course, to start with. Would you all consider starting a Wiki or something in which possible study guides, coursework, references and such could be collected?

    Possibly there’s a department chair or academic dean watching or you could attract the attention of one.

    This sort of thing is being done for flu pandemics:
    http://www.fluwikie2.com/index.php?n=Forum.WhoIsQualifiedToAssessTheConsequencesOfAPandemic

    Comment by Hank Roberts — 1 May 2008 @ 2:15 PM

  47. Gavin,

    This was one of your very best posts. The only thing missing from your post was a paragraph or two that we could copy and send to our congressman to get you (plural) some money for a paleo-climate information data warehouse.

    (Aside to US readers; Gavin has other things to do. Do not wait. Re-read the post and go to http://www.visi.com/juan/congress/ and send your congressional representative a missive. Then, have all of your family members, friends, and coworkers send similar missives.

    Aside to other readers; Tell your representatives that the people that emit (or plan to emit) greenhouse gases should take full responsibility for their actions; including the research necessary to determine the extent of the problem.)

    Comment by Aaron Lewis — 1 May 2008 @ 4:16 PM

  48. Re46
    Hank,
    In the old days we called this “Interdisciplinary Environmental Studies”.

    While climate modeling with Paleo data is important, climate modeling with an understanding of infrastructure engineering and economics is just as important. I would say that the fact that the climate models fail to account for land ice sheet dynamics speaks volumes that our climate modeling effort needs to be vastly broader and deeper.

    When the IPCC was formed, there were 2 questions that needed to be answered. The IPCC has danced around these questions 4 times, and never offered an answer to either question. The questions are: “How fast can heat be transferred to the major ice sheets?” and, “How will the ice sheets respond?” Paleo data can help answer these questions. Then, we will need infrastructure engineering and economics to decide what, if anything, we should or can do to adapt or mitigate.

    The ice sheets represent a global threat at a risk level above VP Chevy’s 2% rule. Therefore, asset allocation to climate threat analysis (climate modeling) should be similar to asset allocation to assessing and fighting global terrorism – call it US $100 billion per year. Gavin should be able to run a paleo data center on some slice of that.

    Comment by Aaron Lewis — 1 May 2008 @ 5:00 PM

  49. If, as some of the above references to the article in the May 1 issue of “Nature” point out, there is a temporary cooling in the near term due to natural variability, this will reinforce the need to take immediate action to reduce GHGs. It’s an opportunity to take advantage of the fact that less glacial, sea-surface, and (at least) northern hemisphere continental ice will melt. This will mitigate the positive feedback effects of the loss of albedo (I almost said libido – that ship has almost sailed – I’m another one of those old guys, who prefers shorter paragraphs).
    A cool phase coupled with accelerated steps, starting very soon, to reduce CO2, CH4, N2O and other heat-trapping gases can benefit the planet in this regard.

    Comment by Lawrence Brown — 1 May 2008 @ 6:51 PM

  50. Aaron, I have big hopes for the International Polar Year science — once we start seeing it.

    I’ve seen only one journal article from the ANDRILL cores. The first of these were only shipped to Florida late last year, and sliced up and bits of them passed out to scientists all around the world. They must be writing papers by the dozen.

    I think those two questions really have been waiting on a whole lot of research that’s well under way now.

    Comment by Hank Roberts — 1 May 2008 @ 7:50 PM

  51. I wonder if the record-setting sea ice melt last year, perhaps allowing a large transfer of heat from the atmosphere to the Arctic ocean, is a factor (along with La Nina) in the cold winter we’ve had in the northern hemisphere this year. Has there been any research done on this?

    It seems to me a measurement of the “total energy” in the earth system would be useful to show trend of climate change. It should include not only the heat stored in atmosphere, ocean and land but the energy stored in the biosphere and even the energy in currents and winds if it’s significant. Just a thought.

    Comment by Dave Werth — 1 May 2008 @ 10:39 PM

  52. • Martin Vermeer Says: “The temperature variation of El Nino doesn’t require heat as much as it requires a very large area in the Pacific to be a little bit warmer at the surface.”

    In other words, Martin, the temperature change is not a function of increased heat content but a function of its distribution, and therefore an artifact of the measuring methodology? That certainly raises questions with regard to the value of temperature as a measure of the earth’s energy balance.

    • Ray Ladbury Says: ”And you seem to be missing the point entirely. OK, so let’s say there is a flux of cold water to the surface to suck up heat for a decade. All that means is that you won’t see the type of warming we’ve had for the last 20 years for a decade. At the end of that time, the CO2 is still there and warming kicks off again with a vengeance. What is so hard to understand about this? Climate consists of long term trends, neglecting short-term variability “
    Well put Ray, and exactly my point. Climate consists of long term trends neglecting short-term variability. Researchers like Akasofu argue that we have had a linear increase in temperature and sea levels starting from the early 1700’s, before CO2 became a factor. See Figures 3, 4, 5, 6 and 10 from his paper.

    http://www.iarc.uaf.edu/highlights/2007/akasofu_3_07/Earth_recovering_from_LIA.pdf

    And such a trend would be in keeping with the historical sunspot record. And Ray, to say the solar cycle is just 11 years is your personal arbitrary distinction. If you include solar magnetic field reversals you must say the cycle is at least 22 years. And obviously, from changes since the Maunder minimum and evidence for other minima, there is a change in the number of sunspots at each ~11 year maximum. Your 11 year cycle may be considered solar weather – not solar climate. I am suggesting, Ray, that anything less than 300 years is weather. And in that case solar trumps CO2.

    http://earthobservatory.nasa.gov/Library/SORCE/Images/sunspot_number_1611_rt.gif

    Most people have no problem linking solar change to increased temperatures but then argue that since 1985 it has decoupled. But how much of that recent increase is short term variation? We have had 5 El Ninos since 1985, with a big 1998 peak in an El Nino year. As Martin points out, you don’t need more OHC, just a redistribution that allows us to measure that heat differently. Combine that with the warm bias of the XBTs that proliferated during this period and we have to wonder if the sun really decoupled from OHC. If we subtract these short term variations from the long term trend that Akasofu presents, there is not much warming left to attribute to AGW.

    Ray, you are suggesting that the oceanic variability is due to cold water sucking up the heat. But since 2003 there doesn’t appear to have been an increase in OHC, so maybe there are other interpretations to this story. If Akasofu’s trend is correct, and if the driver of that trend is solar, which correlates well for most of the past 300 years, and if this dearth of sunspots continues, we may be entering a minimum, and then you will not see warming like the past 20 years for another 100 years. You must have bumped into David Hathaway hanging around NASA. I bet he would admit that the current low solar activity is unlike any he has observed.

    Comment by gusbobb — 2 May 2008 @ 12:47 AM

  53. Having more paragraphs is an improvement.

    I wonder if it might also be possible to provide us readers with an extra feature, i.e. a TOPIC button which would enable us to concentrate on the subset of comments which the monitor has decided are closely relevant to the lead article? Less relevant comments would not be displayed. Then those who prefer to read the unsorted mixture would simply avoid the new button.

    [Response: nice idea - anyone know any relevant plugins? - gavin]

    Comment by Geoff Wexler — 2 May 2008 @ 4:49 AM

  54. This all sounds like an extremely exciting area of research to me. Being a freshly graduated ICT student (Leiden University, Holland), I wonder if I would have the right kind of background to consider this as subject for a PhD thesis? I do have a somewhat above-average background in math and physics for an ICT graduate (got a Bachelor degree in astrophysics).

    Comment by Vincent van der Goes — 2 May 2008 @ 5:19 AM

  55. Gavin, you are on the record with a prediction that global temps will rise above the 1998 record in five years, by 2013. And if my memory doesn’t fail completely, you also said that that would constitute some kind of falsification of some AGW hypotheses (à la Pielke Jr.). Now, in light of the Keenleyside et al. paper in Nature, it would be interesting to hear your thoughts on what to expect in 5, 10 or 20 years, and if you would want to refine the falsification criteria somewhat.

    [Response: No. They don't show annual data or predict ENSO, and it is almost certainly the case that the next big El Nino will put us over 1998 (again). Plus this is only one paper and which did not demonstrate that their method was any good at matching global mean temperatures - their standard run does much better (see fig 4). - gavin]

    Comment by Dodo — 2 May 2008 @ 8:03 AM

  56. gusbobb writes:

    I also find it amusing that cool spells due to ocean circulation and oscillations are not seen as “climate change” as the reports on the new cooling predictions suggest but road bumps to inevitable warming.

    You never had a course in statistics, did you?

    Comment by Barton Paul Levenson — 2 May 2008 @ 8:28 AM

  57. re: 52 gusbobb
    Dr. Akasofu: Can you recognize when a highly-respected aurora researcher, after retiring, starts opining about climate science without apparently knowing even the minimal basics? Apparently not, if you quote that paper.

    See analysis of that paper, my post as of May 17, 2007 5:03 am.

    My summary says:
    “This paper is not remotely credible, and it’s sad to see a respected scientist fall into this.”

    Comment by John Mashey — 2 May 2008 @ 8:35 AM

  58. In 39 Ray Ladbury says: “OK, so let’s say there is a flux of cold water to the surface to suck up heat for a decade. All that means is that you won’t see the type of warming we’ve had for the last 20 years for a decade. At the end of that time, the CO2 is still there and warming kicks off again with a vengeance. What is so hard to understand about this? Climate consists of long term trends, neglecting short-term variability.”

    Ray, let me suggest an idea that might help, but may be too oversimplified.

    Even if ocean currents bring cold water to the surface, thus causing a plateau or drop in mean global surface temperature, that does not mean global warming has stopped – only surface warming has. In fact, one could argue that global warming would increase, since the cooler surface, especially of the ocean, would radiate less IR to space, thus increasing the energy imbalance and accelerating the rate of heat accumulation. Where does this heat go? It warms these cooler waters, increasing the heat energy content of the oceans. One consequence would be to increase the rate of sea level rise, though probably by a minuscule amount. Also, that heat does not go away, but is there in the ocean and will make itself felt, sooner or later. It might, for example, increase the rate of melting of the WAIS.
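
    The point about reduced IR emission can be put in rough numbers with a blackbody sketch (illustrative only; real outgoing longwave radiation depends on the atmosphere, not just the surface):

```python
# How sensitive is emitted flux to a small surface temperature change?
# Blackbody approximation: F = sigma * T^4, so dF/dT = 4 * sigma * T^3.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0         # nominal mean surface temperature, K

dF_per_K = 4 * SIGMA * T**3
print(round(dF_per_K, 2))   # roughly 5.4 W/m^2 less emitted per K of cooling
```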

    I did a brief Google Scholar search, but could not find anything conclusive about a possible negative feedback if the cooler water generated more fog of a type that had a higher albedo than the open water.

    Comment by Ron Taylor — 2 May 2008 @ 8:48 AM

  59. Gusbob says: “I am suggesting Ray that the point is anything less than 300 years is weather. And in that case solar trumps CO2.”

    Wrong! Solar changes happen on a timescale of decades. See, for example:
    http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.0385v1.pdf

    CO2 persists on a timescale of centuries.

    Comment by Ray Ladbury — 2 May 2008 @ 9:40 AM

  60. Hank,
    “Research well underway” suggests appropriate levels of concern by policy makers resulting in appropriate funding. I do not see appropriate levels of concern. I do not see appropriate levels of funding.

    The only thing that I can hope is that this summer’s sea ice melt (and resulting distress to polar bear populations) arouses great concern, resulting in great actions.

    Plot Arctic sea ice area in terms of standard deviations, and it is clear that the Arctic Sea Ice System is “out of control.” It is a system seeking a new equilibrium. Volumes of GIS affected by melt conditions are increasing exponentially, but are harder to categorize statistically. However, exponential change is not typical of systems in equilibrium. Thus, I think we can assume that the GIS is out of equilibrium. That means that Hansen is correct that we are above GHG concentrations that trigger ice sheet melt, and he understates rates of heat transfer to the ice.

    If policy makers had a proper sense of urgency we would have 150,000 graduate students deployed to Greenland pointing their thermometers and strain gauges at everything; and in their spare time they would be measuring every kind of phase transition that might be occurring.

    Am I paranoid? Yes, because we really do not have a very good understanding of how ice transitions to water in the pressure ranges found at the bottom of an ice sheet, and there is a big difference between the structural strength of ice, and the structural strength of water.

    Tell me again: how much potential energy is there in a nuclear bomb? And how much potential energy is there in an ice sheet? Yes, the 2% rule applies.

    Comment by Aaron Lewis — 2 May 2008 @ 12:18 PM

  61. BBP (36)

    The sun loses quite a bit more mass through the solar wind outflow than to mass-energy conversion, though it still seems inconsequential in terms of climate change.

    Comment by Rich Creager — 2 May 2008 @ 12:27 PM

  62. Here’s comments and details on the Keenlyside paper in Nature. Accurate?

    http://climateprogress.org/2008/05/02/nature-article-on-cooling-confuses-revkin-media-deniers-next-decade-may-see-rapid-warming/

    Comment by gmb — 2 May 2008 @ 12:59 PM

  63. #52 gusbobb:

    “In other words Martin the temperature change is not a function of increased heat content but a function of its distribution and therefore an artifact of the measuring methodology? That certainly raises questions regards to the value of temperature as a measure of the earth’s energy balance.”

    You got that last one somewhat right, gusbobb. Global mean temperature is a noisy measure of global heat content, the noise being precisely natural variations in circulation patterns like El Nino/La Nina. But what you got wrong is that it is a measure, warts and all. As long as the Earth’s heat energy budget is in imbalance, as it is, due to anomalous greenhouse forcing, the total heat content of the Earth will grow, and much of that heat will end up in the reservoir with the greatest heat capacity, the deep ocean.

    “As Martin points out you don’t need more OHC, just a redistribution that allows us to measure that heat differently.”

    Based on the above misunderstanding.

    I see John Mashey took care of Dr Akasofu. Gusbobb, please allow more curiosity to inform your scientific reading appetite. Google Scholar is your friend, and while Dr Akasofu put his name to many space plasma physics papers two decades ago — and those may well be worthwhile reading — he has nothing to contribute to the current discussion.

    Comment by Martin Vermeer — 2 May 2008 @ 1:18 PM

  64. One of the clearest statements of the problem that I have read.

    Comment by Stormy — 2 May 2008 @ 1:26 PM

  65. If the PDO and AMO are both moving into a cool phase, with cooler water being brought to the surface, doesn’t that mean the warmer water which had been at the surface has been taken deeper by the currents? Continuity of mass would seem to require that.

    I am not sure what depth scale the currents cover, but this could lead to fairly abrupt warming of the ocean down to depths of perhaps a few hundred meters. That is exactly the kind of thing that could accelerate the loss of the WAIS, assuming that this applied to the waters around Antarctica. I understand there is already a problem with warmer waters reaching the underside of the ice shelves.

    It seems to me that cool phases for the PDO and AMO may not be the denialists’ friend many think they are.

    Comment by Ron Taylor — 2 May 2008 @ 1:28 PM

  66. Ray Ladbury Says: “Wrong! Solar changes happen on a timescale of decades.”

    I don’t understand your quibbling. Ray, the article you refer to clearly shows solar changes happening at timescales greater than your 11-year constraint, and it is more supportive of my arguments than yours. Here are a few snips:

    “The distribution of the duration of grand minima is bimodal, with a dominance of short (30-90 yr) Maunderlike minima and a smaller number of long (longer than 110 yr) Sporer-like minima.”

    “The Sun spends around 3/4 of the time at moderate magnetic activity levels (averaged over 10 years). The remainder of the time is spent in the state of a grand minimum (about 17%) or a grand maximum (9% or 22% for the SN-L or SN-S series, respectively). The solar activity during modern times corresponds to the grand maximum state.”

    “This suggests that a grand minimum is a special state of the dynamo. Once falling into the grand minimum as a result of a stochastic/chaotic but non-Poisson process, the dynamo is ‘trapped’ in this state and its behavior is driven by deterministic intrinsic features.”

    Ray, if you are limiting solar change to cyclic behavior that is easier to model, then fine. If we are talking about solar changes that affect climate, and that allow us to interpret paleoclimates, then limiting a model to 11 years will cause misleading interpretations regarding solar effects on climate. And it is also why the consensus missed the meagerness of this new sunspot cycle.

    I totally agree with the Usoskin et al. (2007) paper. Our record high temperatures correlate well with the fact that we have been experiencing a less common grand maximum, clearly visible in the paper. Solar effects deserve greater attribution.

    They mention that these chaotic solar changes are a challenge for solar dynamo theory, and I totally agree. I suspect the stochastic processes affecting the sun are related to our travel through a varying interstellar medium. But that is another discussion.

    Comment by gusbobb — 2 May 2008 @ 1:44 PM

  67. Gusbob, My point is that the timescales of solar variability are short compared to CO2. What is more, the changes in insolation are not typically large. And what is more, the solar output is independent of the forcing by CO2. They are determined independently. A GCM is not a Chinese takeout menu. You can’t just take one from column A, one from column B… If you want a world where sensitivity is less than 3 degrees per doubling, you’ll have to go back to all of the evidence that supports that value and show why it is wrong.

    “I suspect the stochastic processes affecting the sun are related to our travel through varying interstellar medium. But that is another discussion.”

    Horse Puckey!

    Comment by Ray Ladbury — 2 May 2008 @ 2:11 PM

    # Ray Ladbury Says: “And what is more, the solar output is independent of the forcing by CO2. They are determined independently.”

    Well duhh they are independent. However attribution of recent temperatures is not. Underestimates of attribution by solar could lead to mistaken attribution by CO2.

    Comment by gusbobb — 2 May 2008 @ 3:02 PM

  69. For what it is worth in the current phase of the discussion, a determination of the power spectrum of the temperature variations in the GISP2 ice core data from Central Greenland over the Holocene show a significant peak in the 162 year band plus side-bands. There is also a small peak in the 65 year band. I suppose the latter is from the PDO.

    I’m not finding any other even potentially significant peaks for further exploration.
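
    For readers curious how such peaks are picked out, here is a minimal, purely illustrative Python sketch of a periodogram applied to a synthetic annual series with a 162-year cycle buried in noise. The series length, amplitude, and seed are made up for illustration; this is not the actual GISP2 analysis, which would also need proper significance testing against red noise.

```python
import numpy as np

# Synthetic ~Holocene-length annual series: a 162-year cycle plus white noise.
rng = np.random.default_rng(0)
years = np.arange(10000)
signal = 0.5 * np.sin(2 * np.pi * years / 162) + rng.normal(0, 1, years.size)

# Simple FFT periodogram of the demeaned series.
power = np.abs(np.fft.rfft(signal - signal.mean()))**2
freqs = np.fft.rfftfreq(years.size, d=1.0)  # cycles per year

# Skip the zero frequency; report the period of the strongest peak.
peak_period = 1.0 / freqs[1:][np.argmax(power[1:])]
print(peak_period)  # close to 162, within the finite frequency resolution
```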

    Comment by David B. Benson — 2 May 2008 @ 4:00 PM

  70. Re 55. (Gavin) Thanks for the reply, and sorry for the clumsy formulation in my question. I’ll try again: Just in case, IF no new global temperature records are NOT set by 2013 or 2018, what would you conclude from that?

    [Response: The double negative makes your point a little hard to decipher. But if you mean what happens if there isn't a new record, then I'll be embarrassed and my predictive skills will be publicly scorned. In the meantime of course model development will continue - this isn't something that varies because of the annual temperature anomaly, and it remains to be seen whether more up-to-date models would have predicted something different. My guess is that they won't (since projections with similar forcings have been stable for a couple of decades). However, we will also likely have better estimates of the forcing fields - aerosols, land use etc. - that will cover the current period and so that might make a difference. - gavin]

    Comment by Dodo — 2 May 2008 @ 4:54 PM

  71. Gusbob, NO! If CO2 forcing is fixed, then even if solar forcing is rising there would have to be another forcer of opposite sign so that the net forcing remained the same. And CO2 forcing is for all practical purposes fixed. It’s not a Chinese menu. You can’t make substitutions. So, you would have to provide not one but two mechanisms. And since we can control CO2, but not solar, which one would you attribute the warming to?

    Look, what you are saying is that if I have x+2=5, that 2 must be smaller than its known value because you contend that x is larger than three. You can’t conduct algebra in the limit of small 2, just as you can’t diminish CO2 forcing below the levels where it is currently constrained without explaining why the constraints are wrong.

    Comment by Ray Ladbury — 2 May 2008 @ 6:28 PM

  72. Gavin, you asked about ways to hide off-topic comments.

    There’s no threaded newsreader for blogs (sigh) yet.

    Killfile works but only for readers using Firefox. Invaluable!
    (Scienceblogs uses it)
    – killfile http://userscripts.org/scripts/show/4107
    “… for … livejournal, haloscan comments, most typepad blogs, most blogspot blogs, scienceblogs.com, and more as I add them.”

    Or, hand-editing a ‘dungeon’ thread (moving offtopic posts to a thread instead of just deleting them)

    Comment by Hank Roberts — 2 May 2008 @ 7:36 PM

    # Ray Ladbury Says: “Look, what you are saying is that if I have x+2=5, that 2 must be smaller than its known value because you contend that x is larger than three. You can’t conduct algebra in the limit of small 2, just as you can’t diminish CO2 forcing below the levels where it is currently constrained without explaining why the constraints are wrong.”

    That’s not at all what I’m saying. Why do you persist in telling me what I am saying? Please keep your straw dogs on a leash.

    The real algebra is a tad more complex and helps explain why CO2 often lags temperature changes. If we assume for a moment that CO2 is a result of temperature change instead of a driver of temperature change, then we can easily see that solar-induced decreases in ocean temperatures 1) increase CO2 solubility and 2) decrease the potential energy stability of the ocean by decreasing surface temperatures, allowing greater vertical mixing.

    I believe the estimates of change in SST since the early 1800s approach 1 degree C, and a back-of-the-envelope calculation would suggest that increased solubility may change atmospheric concentrations by 40-60 ppm. Add in vertical mixing and uncertainties due to biological responses and other natural variations, and by the time the sun starts increasing its activity following a minimum, atmospheric concentrations may have dropped low enough that warming will definitely not return with a vengeance, but follow the sun. Such a scenario is certainly supported by ice core data showing CO2 follows changes in temperature. Seldom does the evidence show CO2 leading temperature change.

    Comment by gusbobb — 3 May 2008 @ 12:39 AM

  74. Re #71 Ray,

    Depends what you take for efficiency of the different forcings + feedbacks.

    As already said in several discussions, current GCMs assume that a change in solar forcing has more or less the same effect (efficacy) as an equal change in forcing from GHGs. But there are several indications that changes in solar forcing have more impact than changes in GHG forcing… Thus what we have is a different equation:

    dT = f1(dFsolar) + f2(dFghg)

    where the factor f1 and f2 may be quite different, because of different feedbacks. Solar changes have their maximum influence in the tropical stratosphere in the UV range, which changes the ozone layer thickness, the position of the jet streams, and cloud and rain amounts and patterns (empirically proven). GHGs have their main influence in the lower troposphere more towards the poles as IR downwelling, with unclear influence on clouds (still troubling for GCMs) and temperature/rain patterns…

    Any past trends can be explained by any combination of weight factors for solar and GHGs once you stop using similar efficacies for different forcing sources…

    Comment by Ferdinand Engelbeen — 3 May 2008 @ 2:03 AM

  75. re: 72 Hank
    I’d guess that just as troll frequency tends to diminish if trolls are ignored or their posts are deleted, the existence of a dungeon thread and a willingness to use it would tend to lessen the need for its use.

    Comment by John Mashey — 3 May 2008 @ 3:42 AM

    In #65, Ron Taylor writes “If the PDO and AMO are both moving into a cool phase, with cooler water being brought to the surface, doesn’t that mean the warmer water which had been at the surface has been taken deeper by the currents? Continuity of mass would seem to require that.” I am not sure that this is correct. If you assume that the amount of energy absorbed by the earth from the sun is constant, then I suspect that you are correct. But if, for some reason, the earth is absorbing less energy from the sun, then the PDO and AMO could be caused by this loss of energy. There is then no requirement for the warmer water to go anywhere. But I am not sure that my logic is correct.

    Comment by Jim Cripwell — 3 May 2008 @ 6:30 AM

  77. gusbobb writes:

    I suspect the stochastic processes affecting the sun are related to our travel through varying interstellar medium.

    I doubt it. The interstellar medium is many orders of magnitude less dense than even the solar atmosphere.

    Comment by Barton Paul Levenson — 3 May 2008 @ 7:26 AM

  78. gusbobb writes:

    # Ray Ladbury Says: “And what is more, the solar output is independent of the forcing by CO2. They are determined independently.”

    Well duhh they are independent. However attribution of recent temperatures is not. Underestimates of attribution by solar could lead to mistaken attribution by CO2.

    You appear to have quoted Ray without reading the quote, or at least without understanding it. CO2 attribution, CO2 forcing, is NOT “what’s left over after solar is taken into account.” It is determined INDEPENDENTLY of other factors. Independently means not dependent on, not related to, unaffected by.

    Comment by Barton Paul Levenson — 3 May 2008 @ 7:28 AM

  79. Gusbob says: “I think the ocean is a wild card regards heat distribution and climate predictions. I have advocated that since my first post.” “I was not predicting an El Nino or its effects. It was Gavin who predicted the next record high will be in a few years with an El Nino. My question is where does the heat come from to generate the spike in temperatures during the El Nino.”

    Well, the ocean is more predictable than the atmosphere, which is expected since the atmosphere mixing time is on the order of a few hundred days, while the ocean mixing time is on the order of hundreds of years.

    Thus, there are weather forecasts that are accurate on the order of days to weeks and there are ocean-atmosphere forecasts of El Nino and similar phenomena that are accurate on the order of months to years. These could be called weather and climate predictions of the first kind – initial-value problems.

    Then, there are climate/weather forecasts of the second kind, which involve “external variables” such as the chemical composition of the atmosphere, the amount of water locked up in the ice sheets, the shape of the ocean basins and the distribution of the land masses, the orbital variations (i.e. the Milankovitch forcings), and so on. (The solar forcing has been flat, and since the climate system responds quickly to changes in solar forcing, we can rule out solar forcing as having any role in global warming over the past 60 years or so.)

    These climate models might be accurate on the scale of decades to centuries, but they do not attempt to model biosphere responses or carbon cycle changes – those are the “external variables” for the climate models. By comparison, the “external variables” for weather models are things like sea surface temperatures.

    We can imagine this as a three stage process –

    1) Set up a biosphere/carbon cycle model which estimates the future composition of the atmosphere and the amount of biomass as wetlands, forest, etc. on the planet.

    2) Use that output to set that “external variable” in the climate models, which will then produce estimates of future sea surface temperatures.

    3) Plug those SSTs into weather models with 2-week limits and see what kind of weather might be expected in a warmer world.

    There are some big uncertainties involved in model #1 – the largest being future human behavior.

    Comment by Ike Solem — 3 May 2008 @ 12:55 PM

  80. # Barton Paul Levenson Says:”You appear to have quoted Ray without reading the quote, or at least without understanding it. CO2 attribution, CO2 forcing, is NOT “what’s left over after solar is taken into account.” It is determined INDEPENDENTLY of other factors. Independently means not dependent on, not related to, unaffected by.”

    I can calculate how much IR is absorbed per concentration of CO2 at a given temperature independently of other factors. However, the climate sensitivity is another story. When sensitivity is being calculated, Hansen and others will include water vapor feedback. Clouds and water vapor effects are certainly not well modeled. Whatever effects you wish to attribute to cosmic rays, Svensmark has shown there is an observable impact on cloud cover. And cloud cover will affect our sensitivity equations. CO2 doesn’t trap heat like Al Gore [edit] portrayed in An Inconvenient Truth. CO2 delays re-radiation back into space. And that delay is affected by winds and clouds and convection currents.

    [Response: Svensmark did not "show" that there is an observable impact on cloud cover, and even if he did that's not going to affect the trend because there ain't no long term trend in cosmic rays. And your statement about how CO2 works bears no resemblance to the way radiative transfer really works, which in fact is quite close to the cartoon in AIT. And we have thousands of observational confirmations that the way radiative transfer is done is right. So don't make me get impatient. To paraphrase The Hulk, "...and I don't think you'll LIKE me when I'm impatient." --raypierre]

    Comment by gusbobb — 3 May 2008 @ 1:41 PM

    “Such a scenario is certainly supported by ice core data showing CO2 follows changes in temperature. Seldom does the evidence show CO2 leading temperature change.”

    …and in related news, chickens cannot lay eggs because they have been observed to hatch from them :-)

    You’re talking about the glacial/interglacial cycle, gusbobb. Well documented thanks to the ice cores. I propose that you read up on our understanding of this cycle, which is easy thanks to the useful link on this site. Ever heard of a guy named Milankovitch? You see, we know what caused those ice ages. And we also know what the forcings and the feedbacks were, even quantitatively. Go read up and stop embarrassing yourself. (Note that I am rhetorically assuming your honesty, in good debating tradition and for the benefit of other readers.)

    BTW the Milankovitch forcing is currently going down (verrry slowly), and still it’s getting warmer. Different mechanism.

    …and CO2 is not leaving the ocean, but entering it. We know that for a fact too, by a simple bookkeeping exercise (I won’t mention the isotopic fingerprint evidence, keep it simple eh?): we know how much fossil fuel is being burnt, yet see only roughly half of that appear in the atmosphere, in the Keeling curve. The other half enters the ocean, where it is also observable as a lowered pH.

    You really need to read up, gusbobb, to stop making elementary errors that undermine everything you write. Science isn’t just stringing together fancy looking terms into syntactically correct sentences — that’s scienciness, not science. Science also has to make sense. You don’t. Until you get this, please do us all a favour and refrain from adding further noise to this blog.

    Comment by Martin Vermeer — 3 May 2008 @ 4:00 PM

  82. Gusbob, if you were to actually do the math, you’d find that a couple of degrees cooling will not raise CO2 solubility sufficiently to make a difference–and we’d give all that back when the temperature rose again.
    Doing the math–that’s what scientists do. They don’t just focus on plausibility arguments and spin.

    Comment by Ray Ladbury — 3 May 2008 @ 4:11 PM

    Sorry to try your patience, Raypierre, but could you make the criticism more substantive? The AIT cartoon implied that CO2 traps the energy forevermore, and you seem to imply that such a notion is correct. So are you saying that once a CO2 molecule absorbs a photon of IR it remains in an excited state forevermore? That it doesn’t re-radiate that energy, roughly 50% back to earth and 50% back to space? I am very confused as to what you are suggesting. Perhaps if you could give the details of the observational confirmations that you refer to, it would clear up my muddled thinking.

    Comment by gusbobb — 3 May 2008 @ 4:42 PM

  84. As a continuing student of blog behavior, in this RC thread, I observed that Tom Watson achieved a 27% blog-consumption rating.

    As of 82 posts:
    10 gusbobb
    14 replies (that I noticed)
    24 total of 82, or 29%

    and as I suggested in TW’s case, it is occasionally worth going back and reviewing a sequence of posts.

    Comment by John Mashey — 3 May 2008 @ 4:59 PM

    # Martin Vermeer Says: “Ever heard of a guy named Milankovitch? You see, we know what caused those ice ages. And we also know what were the forcings and the feedbacks, even quantitatively. Go read up and stop embarrassing yourself. (Note that I am rhetorically assuming your honesty, in good debating tradition and for the benefit of other readers.)”

    Martin, thanks for graciously reaffirming my honesty, although I am unsure why my honesty would be an issue. And I am very well aware of the Milankovitch cycles. I did not understand, however, that you “knew” the exact quantities of forcing during those cycles. Seems like a lot of assumptions on sketchy data. Yes, we can calculate changes in insolation due to orbital positions, wobble and so forth. But I understood RC believes the rest of the glacial cycles are due to CO2 forcing. The lag time however troubles me there. And rapid changes during Dansgaard-Oeschger events seem to be problematic. Do you understand all the forcings for those events as well? It would be helpful, if you are honestly trying to educate me, if you provided more detailed criticism. Otherwise all I am hearing is that you all know and I don’t, therefore I should not talk. Not good educational tactics.

    # Martin Vermeer Says: “in the Keeling curve. The other half enters the ocean, where it is also observable as a lowered pH.” I am also very aware of the Keeling curve. The last time I looked, the 2008 CO2 had not surpassed the previous annual peak. I realize it is only a very short period of time, but in a time of increasing CO2 use, I find that peculiar. My first assumption was that the cooling temperatures we have recently experienced were connected, but I would honestly appreciate counter-arguments to such speculation.

    Comment by gusbobb — 3 May 2008 @ 5:18 PM

  86. Re #73: gussbobb

    just in case you’re being serious, it’s worth pointing out that:

    (a) since the entire glacial-present interglacial transition was associated with a global temperature increase of 5-6 °C (about right?) and atmospheric CO2 concentrations rose from 180-280 ppm, it’s rather more likely that the temperature-CO2 relationship that relates to temperature-dependent changes in ocean solubility is more like 17-20 ppm per °C (100 ppm / 5-6 °C) rather than your guess of 40-60 ppm per °C.

    (b) that CO2 is certainly not coming out of the oceans in response to our current warming. In fact CO2 from our emissions is going into the oceans in prodigious amounts:

    e.g. Sabine et al (2004) The oceanic sink for anthropogenic CO2; Science 305, 367-371.

    or: Feeley et al (2004) Impact of anthropogenic CO2 on the CaCO3 system in the oceans; Science 305, 362-366.

    (c) that CO2 is a greenhouse gas and thus its enhanced atmospheric concentrations always lead to an enhanced warming forcing. Just because the initial phases of the glacial-interglacial cycles led the increased atmospheric CO2 levels (which is rather obvious and well-understood) doesn’t negate that straightforward fact. Perhaps a better means of assessing the temporal relationship between Earth’s surface temperature and raised greenhouse gas concentrations would be to look at the extinction events in the deep past, seemingly associated with massive release of greenhouse gases (e.g. the end-Cretaceous; the Paleocene-Eocene Thermal Maximum; the Permian-Triassic extinctions; the Triassic-Jurassic event and so on….)

    or why not consider the rather large late 20th century warming in relation to the preceding massive enhancement of atmospheric CO2 concentrations, starting especially from the mid-60s…
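
    The arithmetic behind point (a) is quick to check. Below is a trivial Python sketch using only the numbers quoted above (the 5-6 °C warming and the 180-280 ppm CO2 rise); it is a sanity check of the division, not a model of ocean solubility:

```python
# Glacial-to-interglacial CO2 rise divided by the accompanying warming,
# using the figures quoted in the comment above.
co2_rise_ppm = 280 - 180               # ppm
warming_low, warming_high = 5.0, 6.0   # degrees C

per_degree_low = co2_rise_ppm / warming_high   # smallest ratio
per_degree_high = co2_rise_ppm / warming_low   # largest ratio
print(round(per_degree_low, 1), per_degree_high)  # 16.7 20.0
```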

    Comment by Chris — 3 May 2008 @ 5:49 PM

  87. Gusbob,

    Assuming you’re not intentionally misinterpreting what you read,
    Click here: http://www.realclimate.org/index.php/archives/2007/05/start-here/

    Click here:
    http://www.aip.org/history/climate/index.html

    While people have been willing to retype that material for you, you’re blowing it off. Stop, please. Read, ask questions about the reading material after you’ve read it and failed to understand it.
    Or if you understand it, ask smarter questions based on understanding.

    Comment by Hank Roberts — 3 May 2008 @ 7:25 PM

  88. I know I’m wasting my breath but I’ve had a half dozen drinks and I still have some time to kill before my friends get here.

    “If we assume for a moment that CO2 is a result of temperature change instead of a driver of temperature change, then we can easily see that solar induced decreases in ocean temperatures 1) increases CO2 solubility and 2) decreases the potential energy stability of the ocean by decreasing surface temperatures and allows greater vertical mixing.”

    It’s not either/or. CO2 is a feedback element; it is both output and input.

    Before you repeat yourself, no this does not imply hysteresis. While hysteresis can occur in non-linear feedback systems it is not guaranteed by any means and hysteresis doesn’t occur at all in linear feedback systems. What that means is that for a system to exhibit hysteresis it almost always has to cross a non-linear “tipping point” first.

    So, if we had an ice age that was caused by a decrease in solar output, it’s conceivable but not inevitable that the solar output would have to rise above its original level to get back to the original temperature. Since the ice age cycles of the last million years are not connected to the amount of energy the earth receives from the sun, there won’t be any well documented examples of this.

    “The AIT cartoon implied that CO2 traps the energy forevermore, and you seem to imply that such a notion is correct. So are you saying that once a CO2 molecule absorbs a photon of IR it remains in an excited state forevermore? That it doesn’t re-radiate that energy, roughly 50% back to earth and 50% back to space? I am very confused as to what you are suggesting. Perhaps if you could give the details of the observational confirmations that you refer to it will clear up my muddled thinking.”

    The AIT cartoon did not imply anything of the sort. Adding CO2 reduces the chance that an IR photon will make it out of the atmosphere. Since you need to have the same amount of energy exiting the top of the atmosphere as entering for stable temperature, the earth will warm until enough extra IR photons are created to make up for the ones blocked by CO2.
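
    That energy-balance argument can be put in rough numbers. The sketch below assumes the standard ~3.7 W/m² forcing for doubled CO2 and linearizes the Stefan-Boltzmann law around an effective emission temperature of 255 K; both are textbook values, not figures taken from this thread, and the result is the no-feedback response only:

```python
# No-feedback warming estimate: a forcing dF reduces outgoing IR, so the
# planet warms until emission F = sigma*T^4 catches up. Linearizing gives
# dF = 4*sigma*T^3 * dT, hence dT = dF / (4*sigma*T^3).
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0     # effective emission temperature, K (textbook value)
dF = 3.7          # forcing for doubled CO2, W m^-2 (standard estimate)

dT = dF / (4 * sigma * T_eff**3)
print(round(dT, 2))  # about 1 K, before water vapor and other feedbacks
```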

    Comment by L Miller — 3 May 2008 @ 8:01 PM

  89. John Mashey, I am afraid that some of us were raised in the “How many times must I forgive my brother,” and “Turn the other cheek” traditions, so if a troll asks what appears to be a sincere question, I am liable to answer it–as long as it does not stray too far off topic.

    Comment by Ray Ladbury — 3 May 2008 @ 8:18 PM

    Gusbob, you can find a good account of what happens to the energy in “A saturated gassy argument.” Basically, the energy remains trapped until the planet comes to a new equilibrium. The key thing to remember is that the photon gas tries to come into equilibrium (that is, look like a black/grey body spectrum) with the gas where it can interact with the gas (e.g. absorption bands). Since there is an over-abundance of photons (i.e., they are coming from warmer areas down below), some of the photon energy has to go into kinetic energy of the gas to reach equilibrium. This happens quite easily with CO2 because the vibrational state has a lifetime on the order of microseconds, so collisional relaxation is more likely than radiative relaxation. I don’t know if this helps you, but try reading the saturated gassy argument piece.

    Comment by Ray Ladbury — 3 May 2008 @ 8:25 PM

  91. …I am unsure why my honesty would be an issue.

    Well, in my mind it hasn’t been an issue any more for some time… but like in Parliament, you don’t say that “The Hon. Rep. for X is making this up.”

    I post for the benefit of other readers who are honest and may be deceived by your talking points, believing that the science is in as messy a state as your misrepresentation of it.

    About teaching you, I am a modest guy… the other readers are referred to the link “What does the delay…” top right. Summary: there is one forcing, Milankovitch, small; and two amplifying feedbacks, CO2 and albedo, both needed to make the books close, CO2 at a short term equilibrium doubling sensitivity of (not much less than) 3C. (The delay was originally estimated at 800 yrs, but now thought to be much smaller.)

    About your other talking points and change-of-subject/goalpost ping-pong, sigh, life is short. Undermining your credibility will have to do.

    Comment by Martin Vermeer — 4 May 2008 @ 3:54 AM

  92. Re 70. Gavin, thanks for the answer, and for ignoring the stupid double negative. Anyway, we have a very interesting decade to look forward to, with the new solar cycle, PDO switch and very likely business-as-usual GHG emissions. Whatever happens, let’s hope not too many people have to feel embarrassed or have their skills scorned. It is science, after all, not a betting game.

    Comment by Dodo — 4 May 2008 @ 6:02 AM

  93. In reconstructing paleo-climates, are the shortened day (faster rotational speed) and larger radius at the equator factored in? Thanks for all of the info freely given.

    Comment by asaguy — 4 May 2008 @ 9:07 AM

  94. Totally off-topic… would someone please take a cluestick to The Register?

    Comment by A. Simmons — 4 May 2008 @ 9:30 AM

  95. Meanwhile, back in the present, we seem to have awakened the big monster (23 times more potent than CO2), who is poised to bear down on us. Methane is now gassing out off Siberia.

    See: http://www.climateark.org/shared/reader/welcome.aspx?linkid=97317

    And sooner than we think we may have quite definitively solved the denialists’ problem: do GHGs cause warming, as well as simply result from (lag behind) warming? Assuming there are any denialists left after this all plays out.

    Comment by Lynn Vincentnathan — 4 May 2008 @ 10:14 AM

  96. # Ray Ladbury Says: Gusbob, You can find a good account of what happens to the energy in “A saturated gassy argument.”

    I searched “A saturated gassy argument” but got a link to a Portuguese translation and comments, but no article.

    [Response: click on the english/US flag icon on the bottom of the sidebar to reset your cookies to English versions. - gavin]

    Comment by gusbobb — 4 May 2008 @ 11:10 AM

  97. I agree with John Mashey (84). Note the revealing pattern of these comments in response to gusbob:

    “And your statement about how CO2 works bears no resemblance to the way radiative transfer really works,…” Raypierre

    “just in case you’re being serious, it’s worth pointing out that…” Chris

    “Assuming you’re not intentionally misinterpreting what you read…” Hank Roberts

    “I know I’m wasting my breath but I’ve had a half dozen drinks and I still have some time to kill before my friends get here.” L. Miller

    “I post for the benefit of other readers who are honest and may be deceived by your talking points, believing that the science is in as messy a state as your misrepresentation of it.” Martin Vermeer

    “About your other talking points and change-of-subject/goalpost ping-pong, sigh, life is short. Undermining your credibility will have to do.” Martin Vermeer

    The frustration is evident and entirely justified.

    Comment by Ron Taylor — 4 May 2008 @ 11:40 AM

  98. Re #85 gusbobb, and your comment:

    ["# Martin Vermeer Says: ”in the Keeling curve. The other half enters the ocean, where it is also observable as a lowered pH.” I am also very aware of the Keeling curve. The last time I looked the 2008 CO2 did not surpass the previous annual peak. I realize it is only for a very short period of time, but in a time of increasing CO2 use, I find that peculiar. My first assumption was that the cooling temperatures we have recently experienced were connected, but I would honestly appreciate counter-arguments to such speculation."]

    It might be mildly peculiar if it were true, but it isn’t: Here’s the updated Keeling curve for the local Mauna Loa observatory or globally averaged from the marine surface sites (scroll down a bit):

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    You can see that the 2008 atmospheric [CO2] is well above the previous annual peak.

    So counter-arguments to your speculation are unnecessary since your speculation is based on a false assumption.

    Comment by Chris — 4 May 2008 @ 12:23 PM
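Chris’s point is easiest to see by splitting the Keeling curve into a trend plus a seasonal cycle. A toy sketch with illustrative round numbers (not the actual NOAA data): each spring peak tops the previous year’s, yet every autumn the curve still dips below last year’s peak, so a single sub-peak reading in isolation means little.

```python
# Toy Keeling-like curve: linear trend plus seasonal sine. The base level,
# trend, and amplitude are round numbers of the right order, not NOAA values.
import math

def co2(month):
    """Monthly CO2 (ppmv) as a rising trend plus a seasonal cycle."""
    trend = 380.0 + 0.17 * month                      # ~2 ppmv per year
    seasonal = 3.0 * math.sin(2 * math.pi * month / 12)
    return trend + seasonal

year1 = [co2(m) for m in range(12)]
year2 = [co2(m) for m in range(12, 24)]

print(f"year-1 peak: {max(year1):.1f}  year-2 peak: {max(year2):.1f}  "
      f"year-2 minimum: {min(year2):.1f}")
```

The year-2 peak exceeds the year-1 peak even though part of year 2 sits below it, which is why comparing one early reading against the previous annual peak is uninformative.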

  99. I also agree with #31 Harold Pierce’s suggestion regarding improved readability by limiting paragraph size.

    Another great improvement in readability would be to stop using undefined acronyms in posts. I am used to this in my own narrow area of scientific expertise, but such letter babble is inappropriate on a site intended for educating the public. It takes a lot of effort to understand a simple post if one doesn’t know what the words mean.

    Finally, although I have learned some information from the multiple responses to gusbob, they are getting repetitive. I suggest that the limited time of those who generously teach here would be more effective if some attention were shifted to the more straightforward and less provocative questions.

    Steve

    Comment by Steve Fish — 4 May 2008 @ 12:45 PM

  100. Gusbob says:
    “Sorry to try your patience Raypierre”

    I rather doubt that gusbob or tom watson or similar posters are sorry about that. As John Mashey points out, such posters are here mainly to disrupt the overall discussion and are deliberately intending to try people’s patience by bringing up discredited claims over and over again. Such posters on realclimate often succeed in steering the thread into some oft-refuted corner, some leading examples being the radiative physics of CO2 in the atmosphere, and the role of any solar variation in recent global warming. Those were the two topics that tom watson and gusbob latched onto, respectively (as well as Martin Lewitt).

    However, most readers of this blog should be able to recognize that gusbob statements like “Most people have no problem linking solar change to increased temperatures but then argue that since 1985 it has decoupled” are simply dishonest.

    It is always worth remembering that between 1998 and 2005, ExxonMobil granted some $16 Million to “Global Warming Skeptic” organizations, according to UCS. Further millions are coming from the coal industry. (1)

    A study by the Union of Concerned Scientists found that “virtually all of them publish and publicize the work of a nearly identical group of spokespeople, including scientists who misrepresent peer-reviewed climate findings and confuse the public’s understanding of global warming. Most of these organizations also include these same individuals as board members or scientific advisers.”

    “Firefighters’ budgets go up when fires go up,” explains Fred L. Smith, head of the Competitive Enterprise Institute.

    The posters here follow a typical pattern – they’ll ask questions that seem reasonable, and once they get a response, they bring up all kinds of ridiculous notions (solar forcing, bad physics, etc.). The primary goal is just “creation of the appearance of scientific uncertainty.”

    The best response to this is just to go back to the original topic of the post, which is a lot more interesting than PR efforts (those are handled over at desmogblog).

    The reason why paleo-climate information may be key in these cases is because all of these climate components have changed in the past. If we can understand why and how those changes occurred then, that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed and this meeting was devoted to examining them.

    So, we are changing the atmospheric CO2 levels (and CH4 and N2O) at a rate some 30X greater than anything ever seen in the Antarctic core record. (2, 3) What kind of uncertainty does that inject into the predictions of global climate models that are largely tested based on their ability to predict past and present climate changes?

    When you take a system and force it harder and faster than it has been forced in millions of years, one should probably expect to see some unusual nonlinear behavior. This kind of uncertainty really affects the high end of possible climate responses – the incidence of weather extremes, the spread of ocean anoxia – radical changes not seen in millions of years, in other words.

    Slowing the rate of global warming by halting the use of fossil fuels really is the only way to reduce the probability of truly catastrophic climate changes.

    1) DesmogBlog: Clearing the PR Pollution that Clouds Climate Science
    http://www.desmogblog.com/

    2) J. Jouzel, et al. (2007) Orbital and Millennial Antarctic Climate Variability over the Past 800,000 Years, Science 317, 793.
    http://www.climate.unibe.ch/~stocker/papers/jouzel07sci.pdf

    3) The Keeling CO2 curve, updated to 2007 (notice the slight exponential shape, so far?):
    http://www.esrl.noaa.gov/media/2007/img/co2_data_mlo.2007.m.gif

    Comment by Ike Solem — 4 May 2008 @ 12:55 PM
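For readers wondering how a multiple like Ike’s “30X” is constructed, here is the arithmetic in sketch form. The ice-core numbers are assumed illustrative values chosen for the example, not figures quoted from Jouzel et al.:

```python
# Rate comparison: modern CO2 rise vs an assumed fast natural rise
# resolved in an ice core. All numbers are illustrative.

modern_rate = 2.0           # ppmv per year, roughly the recent Mauna Loa trend
natural_jump_ppmv = 10.0    # assumed size of a fast natural CO2 excursion
natural_jump_years = 150.0  # assumed duration of that excursion

natural_rate = natural_jump_ppmv / natural_jump_years
print(f"natural rate ~ {natural_rate:.3f} ppmv/yr, "
      f"modern/natural ~ {modern_rate / natural_rate:.0f}x")
```

The exact multiple depends entirely on which natural excursion and time window one assumes; the qualitative point is that the modern rate is far outside the ice-core range.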

  101. Ike Solem #100:

    The primary goal is just “creation of the appearance of scientific uncertainty.”

    The best response to this is just to go back to the original topic of the post, which is a lot more interesting than PR efforts (those are handled over at desmogblog).

    I agree Ike. But that isn’t always easy. Consider also new visitors to this site without much background in the science, the most vulnerable ones.

    I admire the way the scientists running this blog sometimes jump in. I admire how they keep their calm — apparently that comes with confidently, both deeply and broadly, knowing your stuff, which distinguishes them from us amateurs. But they cannot be everywhere.

    One technique I have found useful is to feed the troll until he burps up a real blooper, something that even an interested amateur can be made to appreciate the absurdity of, and then jump for the kill. But, unfortunately you have to do that again and again on every new thread where it happens.

    And then there was this wretched female concern troll that professed taking pity on poor gusbobb… fancy that :-(

    Comment by Martin Vermeer — 4 May 2008 @ 4:56 PM

  102. Let’s not assume the little colored pixels convey gender accurately.
    We might be visited by a boy named Sue any time and not know it.

    Comment by Hank Roberts — 4 May 2008 @ 5:37 PM

  103. Regarding models and ENSO, about a year ago I read an article dating from 2001 that intrigued me.

    An Orbitally Driven Tropical Source for Abrupt Climate Change by
    AMY C. CLEMENT, MARK A. CANE, AND RICHARD SEAGER

    “Results from a tropical coupled ocean–atmosphere model show that, under certain orbital configurations of the past, variability associated with El Nino–Southern Oscillation (ENSO) physics can abruptly lock to the seasonal cycle for several centuries, producing a mean sea surface temperature (SST) change in the tropical Pacific that resembles a La Nina.”

    Oddly enough, we appear to very near just such an orbital configuration at this time.

    Gavin stated that current GCMs don’t attempt to factor in variations to ENSO; however, if the climate locked into either a semi-permanent La Nina (or El Nino), I’m guessing the current models would be drastically off.

    Comment by Jim Cross — 4 May 2008 @ 5:53 PM

  104. Guys, can’t you visualize three or four college students gathered around a computer, laughing uproariously each time they provoke a serious response to their childish provocations? It looks that way to me. I do think there comes a time when the moderators can legitimately put a stop to it under the comments policy. I get so much from RC that it really bothers me when anyone so obviously plays games with it and the good will of the scientists here, and others who are seriously seeking the truth.

    Comment by Ron Taylor — 4 May 2008 @ 7:15 PM

  105. > certain orbital configurations of the past …
    > Oddly enough, we appear to very near just such
    > an orbital configuration at this time.

    Citation, please, for this claim? Yes, I did read the abstract and those of the citing papers available to me. I don’t see a basis to say “we appear to very near” (presumably ‘very near to’?).

    Comment by Hank Roberts — 4 May 2008 @ 8:19 PM

  106. I am amazed that people have turned my personal scientific uncertainty into some conspiracy or prank or threat to be dealt with in what looks like a call for me to be deleted. Unless I agree with the mainstream opinion or pledge myself to adopt it, I get demonized. Maybe I just always believed in challenging authority, but that’s the only way I come to my personal understanding. And from the get-go I have fully witnessed, but not understood, why what Martin refers to as the tactic of “undermining my credibility” is the best way to educate me and others.

    I am now reading Part 2 of the saturated gassy article and will have challenges to some assumptions. Think what you may of me, but that is the only way I gain understanding and test my beliefs.

    Chris says :You can see that the 2008 atmospheric [CO2] is well above the previous annual peak.

    So counter-arguments to your speculation are unnecessary since your speculation is based on a false assumption.

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    Well Chris, that last red data point must have just been added in the last few days, because it was not there a week ago. I see it now says May 08, so I suspect that is the case. True, the new data point undermines the need for my question.

    Comment by gusbobb — 5 May 2008 @ 1:11 AM

  107. gusbobb writes:

    If we assume for a moment that CO2 is a result of temperature change instead of a driver of temperature change,

    In a natural deglaciation, the temperature rises first, then CO2, and the increase in CO2 amplifies the increase in temperature since the greenhouse effect increases. It’s a feedback, not one-way causality.

    That is not what is happening now. CO2 has led temperature for the past 150 years. That’s because we’re producing it artificially and pumping it into the air in vast quantities.

    Comment by Barton Paul Levenson — 5 May 2008 @ 5:52 AM

  108. Re: #105

    Sorry, pasting the link didn’t work very well. The comment about “orbital configuration” relates to the article cited, which you readily google.

    “The paleoclimate record shows that the pacing of abrupt climate change could be linked to the solar forcing. Heinrich (1988) found that episodes of major ice rafting in the North Atlantic recur on an 11-kyr timescale over the last glacial at times of boreal winter and summer insolation maxima, as in the results shown here. Bond et al. (1999) show that the timing between the ice rafting events decreases as the earth moves into full glacial conditions, but that earlier in the glacial, these detrital events occur on a roughly 11-kyr timescale (G. Bond 2000, personal communication). Our results raise the possibility that the YD is the return to these orbitally paced events once the influence of the ice sheets is diminished.”

    And finally, this is what I was referring to:

    “We note that the modern ENSO (zero forcing) is close to the transition period during which abrupt ENSO shutdowns can occur, suggesting that currently ENSO may be fairly sensitive to external forcing.”

    Comment by Jim Cross — 5 May 2008 @ 6:07 AM

  109. Gusbobb wrote: “Well Chris, that last red data point must have just been added in the last few days, because it was not there a week ago. I see it now says May 08, so I suspect that is the case. True, the new data point undermines the need for my question.”

    It never should have even come up in the first place since the 2008 data is incomplete.

    First it was the graphs of 2008 temperature anomalies, then it was the extent of this winter’s Arctic ice cover, now it’s the CO2 data.

    One has a hard time assuming those who don’t know how to read a graph know much about what they base their arguments on.

    Comment by Jim Eager — 5 May 2008 @ 7:44 AM

  110. Off topic, but apropos of keeping on topic:
    In the absence of a technological mechanism for viewing only on-topic responses, perhaps one solution would be to direct questions of an elementary/general/tutorial nature to a single thread. The Start Here thread could serve the purpose, although it would become a rather long thread. One could pretty quickly sort out the sincerely confused from the trolls/nutjobs by seeing which ones were willing to conform to the policy after a single suggestion. Indeed, on some forums, an inappropriate entry is cross-referenced and redirected to the appropriate thread.
    Personally, I think that Realclimate fulfills many valuable roles. One of these is as a place where lay people can come for an accurate reflection of the science. I’d sure rather they come here than a whole lot of other places. Still, I recognize the need to avoid hijacking a thread, and I am aware that trolls thrive on the anonymity of the Internet.

    Comment by Ray Ladbury — 5 May 2008 @ 7:51 AM

  111. Ah – no conspiracy, gusbob, just an awareness of standard public relations behavior in the modern age, as well as the knowledge that the American Petroleum Institute gave $100 million to Edelman PR services, an avid fan of using online bloggers to get their clients’ messages out to the public. People who are genuinely interested in the science don’t repeat standard fossil fuel lobby talking points ad nauseam, and college students would probably not be trying to cause dissension on global warming science blogs for kicks – just a guess, there.

    In any case, is there any disagreement over the basic fact that CO2 levels are currently rising some 30 times faster than ever seen in the Antarctic ice cores?

    As far as the lead time, that’s been explained over and over – a little warming leads to a little CO2 increase leads to a little more warming leads to a little more CO2 increase – and we still have quite a bit of carbon locked up in the frozen permafrost.

    It’s like a match. Friction causes combustion of the chemicals in the matchhead, and so we say that friction is the cause, just as we say that Milankovitch cycles are the cause of the glacial cycles of the past few million years.

    Regarding the article that Jim Cross cites, most of the paragraph he quoted from was omitted:

    The model used in this study is highly idealized (see Clement et al. 1999 for further discussion). More complete models are needed to test the influence of additional processes on the mechanism proposed here. We note that the modern ENSO (zero forcing) is close to the transition period during which abrupt ENSO shutdowns can occur, suggesting that currently ENSO may be fairly sensitive to external forcing. Further investigation into the link between abrupt climate changes and orbital forcing of the past, both from a modeling and observational perspective, is clearly important for understanding the nature of abrupt climate change, and also for evaluating the possible future behavior of ENSO.

    Global warming models have a notoriously hard time handling El Nino, particularly the timing, so just because a model run created a “seasonal lock” – well, that means very little, but is interesting. However, Jim Cross’s comments are really a very large distortion of the paper, which has to do with this:

    These studies suggest ways in which the climate in the North Atlantic region can change in response to a La Nina-like SST change at the time of the YD. (the Younger Dryas cold period, some 12,000 years ago)


    Comment by Ike Solem — 5 May 2008 @ 8:26 AM

  112. Re #85 Gusbob,

    As a moderate skeptic myself (I am pretty sure that more CO2 increases temperature somewhat, but at/below the low side of current GCMs’ projections), I have compiled a web page here which explains that humans indeed are the cause of the recent increase in CO2. Temperature plays a role in the month-by-month variability of the rate of increase, but hardly in the total increase.

    Based on ice cores (Vostok, Law Dome), the CO2/temperature ratio was about 8 ppmv/°C for long-term (hundreds to thousands of years) changes. Based on the 1992 Pinatubo cooling and 1998 El Niño warming, the short-term variation is about 3 ppmv/°C. Thus temperature can’t have caused more than about 10 ppmv of the total 100 ppmv increase since the start of the industrial revolution…

    All:
    This is about the one-way influence of temperature on CO2 levels, that doesn’t say anything about the influence of CO2 on temperature. If we may believe climate models, the influence is somewhere between 1.5-4.5 °C/2xCO2, or a factor 3 between minimum and maximum effect. But there is little empirical evidence from the past for any of these figures, as in most cases there was an overlap between (leading) temperature and (lagging) CO2 level. With one exception: the end of the Eemian, the previous interglacial. A drop of 40 ppmv CO2 had no influence on temperature or ice sheet growth within the measurement error margins… That doesn’t prove that there was no influence at all, but it points to an influence towards the low side of (or below) the range.

    Comment by Ferdinand Engelbeen — 5 May 2008 @ 8:37 AM
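Engelbeen’s bookkeeping in the second paragraph can be written out explicitly. A sketch assuming a round 1 °C of industrial-era warming (an assumption for illustration, not his figure); the ratios and total rise are taken from the comment above:

```python
# If temperature drives CO2 at the long-term ice-core ratio, how much of
# the observed rise can it explain? The warming value is an assumed round
# number; the other figures come from the comment above.

long_term_ratio = 8.0   # ppmv per deg C (ice cores, slow changes)
warming = 1.0           # assumed industrial-era warming, deg C
total_rise = 100.0      # ppmv rise since pre-industrial

temp_driven = long_term_ratio * warming
remainder = total_rise - temp_driven

print(f"temperature-driven: ~{temp_driven:.0f} ppmv; "
      f"left for emissions to explain: ~{remainder:.0f} ppmv")
```

That is the arithmetic behind his “no more than about 10 ppmv of the total 100 ppmv” conclusion.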

  113. Re #106 gusbobb:

    … my personal scientific uncertainty …

    Didn’t sound much like uncertainty the first time I heard it… RC is an invaluable, and important, learning resource for those willing to learn. A stubborn unwillingness to look things up combined with a loud mouth degrades the learning experience for everyone else. Yes, that is evil. No, I don’t believe you’re getting a penny from Exxon Mobil. There’s enough private crankiness to go round.

    I am now reading Part 2 of saturated gassy article…

    Good for you! There’s more out there.

    Well Chris, that last red data point must have just been added in the last few days, because it was not there a week ago. I see it now says May 08, so I suspect that is the case. True, the new data point undermines the need for my question.

    No it doesn’t. Your suspicion was invalid to start with. The new data point just rubs it in.

    Question (no trick question): do you understand why? Once you do, there’s a lot you have learned about the dangers of looking at short pieces of noisy time series. And about the need for statistical significance before jumping to conclusions.

    Comment by Martin Vermeer — 5 May 2008 @ 9:07 AM
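Martin’s warning about short pieces of noisy time series can be demonstrated numerically. A sketch using an artificial series with an illustrative trend and noise level (fixed seed for reproducibility):

```python
# A series with a genuine upward trend, buried in noise: many short
# windows still end lower than they start, purely by chance.
import random

random.seed(42)
trend_per_step = 0.02
series = [trend_per_step * t + random.gauss(0, 1.0) for t in range(300)]

window = 10
n_windows = len(series) - window
declines = sum(1 for t in range(n_windows) if series[t + window] < series[t])

print(f"{declines} of {n_windows} ten-step windows show a 'decline'")
```

Despite the real trend, a sizeable fraction of short windows “show cooling”, which is exactly the trap in arguments built on a few months of data.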

  114. (Elementary,educational and off topic of lead article)

    As everyone here knows, there are different kinds of skepticism, ranging from honest questioning to the propagandist and misinforming, and from the naive to the sophisticated. It is sometimes hard to distinguish between a source and a sink of misinformation. Only the latter is interested in learning.

    One quite effective ploy used by the sources of ‘middle level’ misinformation is to offer only a binary choice. According to this view either all or none of the recent warming must have been anthropogenic. You must choose between absolute certainty and complete rejection of the anthropogenic theory.

    Only slightly more subtle is the discussion, which has confused some people, based on the ice core evidence. The discussions on Realclimate have not always been sufficient to remove the doubts which some people have. They are unfamiliar with feedback loops and suspect a fiddle. I think the difficulty here is that it looks like an example of a real binary choice: either CO2 leads or it lags the temperature changes, and each has a definite causal interpretation. How do you encourage people to count beyond 2?

    Here is my explanation to these doubters, for what it’s worth; it involves nothing new, just more words and perhaps an extra step and graph. There was more than one cause of the warming during the thaw, so the simple choice breaks down. Since the temperature rise is the sum of at least two terms, T(greenhouse) and e.g. T(Milankovitch), which behave differently, the simple before-and-after dichotomy is over-simplified.

    It can no longer be said that ‘the temperature’ leads the CO2, only part of the temperature. Whereas the second of these, the direct term, leads the CO2 change, the first (feedback) term lags the CO2, but only very slightly on this time scale. The feedback term is zero or weak at first, but then grows with time, as it warms up, until it dominates, when the lag becomes invisible on the graph. Temperature and CO2 changes become almost coincident. This version may still be over-simplified, but it partially fills in a gap in Al Gore’s film which has been highlighted everywhere, especially in the popular blogs in the newspapers.

    Comment by Geoff Wexler — 5 May 2008 @ 10:34 AM
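Geoff’s two-term picture can be animated with a toy model: a slowly ramping direct (Milankovitch-like) term plus a feedback term that relaxes toward a multiple of it with a lag. All constants are illustrative, not calibrated to any record:

```python
# Toy lead/lag demo: early on the direct term dominates (temperature
# "leads" CO2); later the lagged feedback term dominates and the two
# curves become nearly coincident. Illustrative constants throughout.

dt = 10.0      # time step, years
tau = 500.0    # assumed response lag of the CO2 feedback, years
gain = 2.0     # assumed feedback amplification
steps = 1000   # 10,000 model years

t_direct = 0.0    # direct orbital warming term, deg C
t_feedback = 0.0  # feedback warming term, deg C
totals = []
for _ in range(steps):
    t_direct = min(1.0, t_direct + 0.001)            # slow ramp, saturating at 1 C
    t_feedback += (gain * t_direct - t_feedback) * dt / tau
    totals.append(t_direct + t_feedback)

print(f"after 10 kyr: total {totals[-1]:.2f} C, of which feedback {t_feedback:.2f} C")
```

At the start the total is essentially all direct forcing; by the end the feedback term is the larger of the two, which is the before-and-after asymmetry Geoff describes.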

  115. Martin’s right in #113, Gus, and it’s not a trick question. It may be a chore to understand, but it’s needed so the charts will be understandable. And you’ll start recognizing a lot of the PR talking points, which often use exactly that lack of understanding. “Warming stopped …” is one of those.

    Comment by Hank Roberts — 5 May 2008 @ 10:54 AM

  116. re #95

    The question about CO2 having peaked after peaks in global warming may be explained by having had multiple bursts in methane that were less frequent as CO2 accumulation slowed.

    Also note: … Last year alone global levels of atmospheric carbon dioxide, the primary driver of global climate change, increased by 0.6 percent, or 19 billion tons. Additionally methane rose by 27 million tons after nearly a decade with little or no increase. …

    http://www.sciencedaily.com/releases/2008/04/080423181652.htm

    Also, see latest CO2 measured at Mauna Loa:

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    Comment by pat n — 5 May 2008 @ 10:58 AM

  117. From A Saturated Gassy Argument “As it moves up layer by layer through the atmosphere, some is stopped in each layer. To be specific: a molecule of carbon dioxide, water vapor or some other greenhouse gas absorbs a bit of energy from the radiation. The molecule may radiate the energy back out again in a random direction. Or it may transfer the energy into velocity in collisions with other air molecules, so that the layer of air where it sits gets warmer. The layer of air radiates some of the energy it has absorbed back toward the ground, and some upwards to higher layers. As you go higher, the atmosphere gets thinner and colder. Eventually the energy reaches a layer so thin that radiation can escape into space.”

    I don’t see that what I said is any different: “#80 CO2 doesn’t trap heat like Al Gore [edit] portrayed in Inconvenient Truth. CO2 delays re-radiation back into space.”

    And I understand that when CO2 absorbs a photon that energy will be transformed into an increased kinetic energy. But for how long? (and why the edit of my 2-word description of the cartoon characters???)

    Camping in Death Valley one feels how quickly the day’s heat escapes. One way to quantify the time delay for heat escaping would be a comparison of changes in average temperatures at 4 AM local time vs other average daytime or 24-hour temperatures. The changes in daytime temperatures are confounded by the sun’s input. If the release of heat is just delayed by CO2 we should see very little change in the average annual temperatures as time intervals approach 4 AM local time. If the heat is truly trapped we would see the 4 AM average temperature mirror the average daily temperature. Are there studies of the temperature change at various times, after sunset, etc?

    From A Saturated Gassy Argument “Moreover, researchers had become acutely aware of how very dry the air gets at upper altitudes — indeed the stratosphere has scarcely any water vapor at all. By contrast, CO2 is well mixed all through the atmosphere, so as you look higher it becomes relatively more significant.”

    I am curious about this generalization. On top of Mt. Whitney (14,000 ft) you can see cirrus clouds overhead, and heavy breathing reminds you that the atmospheric density has dropped by close to 50%. Also CO2 is more dense than other major atmospheric components. I thought that despite being a well mixed gas, that CO2 concentrations get lower the higher the altitude in addition to the lower density. The condensation of water vapor into clouds suggests it increases the density of the water. So within the cloud layer I would expect water vapor to have increased in importance vs CO2. The cloud layers may be as high as 12 km. Once above the cloud layer, the density of CO2 would be so small, I would not expect these upper CO2 layers to detain much radiation.

    My other questions are once a molecule of CO2 absorbs a particular wavelength it will radiate the energy out randomly but at lower frequencies. What percentage of these re-emitted frequencies can then be absorbed by CO2? That would lead me to suspect that absorbance at ground level by CO2 would be most efficient and then the layers of CO2 above would miss most of the re-emitted photons from below.

    If some one could clear this up, it will help my continued reading and understanding.

    Comment by gusbobb — 5 May 2008 @ 11:33 AM

  118. # Ferdinand Engelbeen Says: As a moderate skeptic myself (I am pretty sure that more CO2 increases temperature somewhat, but at/below the low side of current GCMs’ projections), I have compiled a web page here which explains that humans indeed are the cause of the recent increase in CO2. Temperature plays a role in the month-by-month variability of the rate of increase, but hardly in the total increase.

    That is my position as well. I see that CO2 has added to the average temperature, but by smaller amounts than argued. And I have no doubts that humans are responsible for the recent increase in CO2 concentrations. I just believe the role of the sun is not properly valued.

    Comment by gusbobb — 5 May 2008 @ 1:18 PM

  119. Re #116 Patn (and #95 Lynn):

    Methane was following temperature with far less lag than CO2 in the past (as seen in ice cores), and current levels have been near flat since about 1998. The 2007 increase (probably caused by a warmer Arctic) is hardly significant in the graph:

    http://www.esrl.noaa.gov/gmd/ccgg/iadv/

    Comment by Ferdinand Engelbeen — 5 May 2008 @ 1:58 PM

  120. Off topic:

    Conservative talk radio host Neil Boortz asks this question:

    Even if the climate is warming, is that a bad thing? What IS the ideal temperature for the earth?

    Don’t worry, I know the answer. But I fear that many of his listeners do not. Perhaps someone could help with their education.

    Comment by catman306 — 5 May 2008 @ 2:56 PM

  121. Re Gusbob @ 117: “Camping in Death Valley one feels how quickly the day’s heat escapes.”

    And why? Because there is less water vapour in the air column above a desert.

    “If the heat is truly trapped we would see the 4 AM average temperature mirror the average daily temperature. Are there studies of the temperature change at various times, after sunset, etc?”

    Why do you think regular posters here answer questions and provide links if you don’t bother to read them?

    Yes, night time temperatures have increased in correlation with the rise in daytime temperatures.

    “I thought that despite being a well mixed gas, that CO2 concentrations get lower the higher the altitude in addition to the lower density. The condensation of water vapor into clouds suggests it increases the density of the water.”

    No, it suggests a very real drop in both temperature and pressure with altitude, both of which reduce the amount of water vapour that can exist, so it condenses into water droplets and freezes into ice crystals. CO2 does not condense in Earth’s atmosphere.

    “Once above the cloud layer, the density of CO2 would be so small, I would not expect these upper CO2 layers to detain much radiation.”

    You would be wrong.

    “My other questions are once a molecule of CO2 absorbs a particular wavelength it will radiate the energy out randomly but at lower frequencies.”

    No, the same wavelength.

    “What percentage of these re-emitted frequencies can then be absorbed by CO2?”

    Therefore potentially all of it.

    Comment by Jim Eager — 5 May 2008 @ 2:57 PM

122. Re #117 gusbobb: here are some answers that are within my knowledge.

If the release of heat is just delayed by CO2 we should see very little change in the average annual temperatures as time intervals approach 4 AM local time. If the heat is truly trapped we would see the 4 AM average temperature mirror the average daily temperature. Are there studies of the temperature change at various times, after sunset, etc?

    Actually the IPCC report contains language on the DTR, or Diurnal Temperature Range. It says

“• A decrease in diurnal temperature range (DTR) was reported in the TAR, but the data available then extended only from 1950 to 1993. Updated observations reveal that DTR has not changed from 1979 to 2004 as both day- and night-time temperature have risen at about the same rate. The trends are highly variable from one region to another. {3.2}”

    So the answer appears “yes”. We do see trapping.

I am curious about this generalization. On top of Mt. Whitney (14,000 ft) you can see cirrus clouds overhead, and heavy breathing reminds you that the atmospheric density has dropped by close to 50%.

    Yes… but those are clouds made up of ice crystals. There is very little water vapour up there, and what little there is, can be made to freeze out very easily: the low temperature means that the saturation water vapour pressure is also very low.

Also CO2 is more dense than other major atmospheric components. I thought that despite being a well mixed gas, that CO2 concentrations get lower the higher the altitude in addition to the lower density.

    No, that would only happen if the atmosphere were stagnant — and even then, very very slowly. The real atmosphere is well mixed by currents, both horizontal and vertical. I found it surprisingly difficult to find a Web reference on this, apparently because it is self-evident. But look at this for how well mixed the atmosphere is in the horizontal direction. There are graphs for stations at different latitudes. The annual “ripple” is due to vegetation on the Northern hemisphere.

    Closer to the ground, to vegetation and urban areas, you will find greater variations in CO2 concentration.

    The condensation of water vapor into clouds suggests it increases the density of the water. So within the cloud layer I would expect water vapor to have increased in importance vs CO2.

    Within the cloud, relative humidity will be close to 100%. Never more. The cloud itself is liquid water in the form of droplets. Note that the higher up you go, the less the concentration of water vapour is that corresponds to 100% relative humidity.

    The cloud layers may be as high as 12 km.

    Yes, but at those heights you will get those wispy cirrus clouds made of ice crystals. And at those low temperatures they may even form from air containing only minute traces of water vapour.

    My other questions are once a molecule of CO2 absorbs a particular wavelength it will radiate the energy out randomly but at lower frequencies.

    No… don’t look at individual molecules, that’s not fruitful. They collide and exchange energy with other molecules all the time. According to Kirchhoff-Bunsen, an air parcel will radiate at the same frequencies that it absorbs on. So, in frequency ranges where CO2 is opaque to radiation, the heat will slowly migrate upward by emission-absorption-emission as in the Saturated Gassy Argument description.

    Also, it is not fruitful to look at the effect of greenhouse gases as a temporal (delay) effect. Rather it is a spatial (barrier) effect. Consider that the heat has to be transported from the Earth surface to the “edge of space” where it may be radiated out. The latter always occurs at the Stefan-Boltzmann equilibrium temperature of 255K. In order to transport the heat, whether this happens radiatively (as described above) or convectively (along the so-called adiabatic lapse rate, some -5C/km of height), you need a temperature gradient to drive the heat transport. For the adiabatic lapse rate, the “edge of space” being at 6 km, you find that you need a temperature difference of 30C, making the Earth surface temperature 255+30K=285K. This is the natural greenhouse effect.
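    As a toy illustration of the arithmetic above (a sketch only, using the round numbers quoted in this comment: a 255 K emission temperature, a ~5 C/km lapse rate, and a ~6 km emission height; this is not a radiative model):

```python
# Toy sketch of the "emission height + lapse rate" greenhouse argument.
# Round numbers from the comment text, not a real radiative calculation.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission_temperature(absorbed_flux_wm2):
    """Equilibrium temperature from Stefan-Boltzmann: flux = sigma * T^4."""
    return (absorbed_flux_wm2 / SIGMA) ** 0.25

def surface_temperature(emission_height_km, lapse_rate_c_per_km, t_emit_k=255.0):
    """Surface temp = emission temp + lapse rate * emission height."""
    return t_emit_k + lapse_rate_c_per_km * emission_height_km

# Earth absorbs ~240 W/m^2 on average -> emission temperature ~255 K
t_emit = emission_temperature(240.0)

# With the emission level at 6 km and a 5 C/km gradient: 255 + 30 = 285 K
t_surf = surface_temperature(6.0, 5.0)

# Raising the emission height (more CO2 -> more opaque air) warms the surface:
t_surf_higher = surface_temperature(6.5, 5.0)

print(round(t_emit), t_surf, t_surf_higher)  # 255 285.0 287.5
```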

    Now add CO2. The “edge of space” will push upward as air gets more opaque, and the surface will get warmer.

It’s like adding blankets to the bed you’re sleeping in. A heat camera on the outside would observe the same temperature, the one at which the top blanket manages to get rid of your body heat; but underneath you’re sweating more and more, the more blankets are added. I would say “heat trapping” or “thermal blanketing” is an apt term.

    Hope this helps (a little).

    Comment by Martin Vermeer — 5 May 2008 @ 3:58 PM

  123. From the Dept. of stupid questions:

    1. What is the lag between the release of CO2 and when it starts trapping heat?
    If I exhale today, does that molecule start blocking radiation of solar heat immediately?

    [Response: There is no lag. However, there is a lag for a change in emissions to affect the atmospheric concentration, and there is a lag for the change in concentration to affect temperatures. Note that the CO2 you exhale doesn't count since that carbon came from the atmosphere itself fairly recently (the last year or so). - gavin]

    2. Is this lag time the same for CO2 and CH4?

    [Response: The 'no lag' is the same. But perturbations of the total CH4 concentration in the atmosphere react faster to changes in emissions: a decade or so, rather than decades to centuries. - gavin]

3. It is my understanding that the arctic has been 5-6 degrees warmer than present, and some of the GIS remained intact. Do we know what the effect of this higher temp was on the marine methane deposits? Do we know how old these methane deposits are, and what temperature ranges they have survived?

    [Response: Likely small (look at CH4 concs during the Eemian). But no, and no. - gavin]

4. Could melting of the GIS help keep the North Atlantic waters cold enough to prevent the methane release – as a negative feedback?

    [Response: Not likely. It could slow the rate of warming in the N. Atlantic though. However, I don't know where you are really coming from - has someone suggested that marine methane hydrate release is likely or expected? Because although there are uncertainties, it doesn't appear to be an imminent problem - though it is one that people are concerned about. - gavin]

5. Can someone give me an easy-to-understand answer on my main question about Hansen’s new paper? I still can’t figure out how, if glaciation happened with CO2 levels at 425 ppm, we are seeing so much warming at 385 ppm.

    [Response: First off there is a difference between equivalent CO2 and actual CO2. Plus you have to factor in 'unfavorable' orbital configurations (not what we have now)... - gavin]

    6. Does methane evenly distribute in the atmosphere like CO2?

    Let’s be honest, I have many more stupid questions, but I’ll stop there for now.

    Comment by Consumer — 5 May 2008 @ 4:00 PM

  124. Gus, when you quote, attribute the quotation — right click on the timestamp, copy/paste — and say who you’re quoting.

    Any thread here will have copious droppings from at least one hobbyhorse, usually more than one, as well as much speculation from ordinary readers like me, as well as cites to real science. Help keep track.

    Anything quoted comes from a context and should be read that way.

    Comment by Hank Roberts — 5 May 2008 @ 4:42 PM

  125. Martin, thanks, this helps me — we went ’round and ’round on the earlier threads in which those of us with insufficient math were trying to understand radiation physics using prose.

Your posting above helps; clear and concise:

    > it is not fruitful to look at the effect of
    > greenhouse gases as a temporal (delay) effect.
    > Rather it is a spatial (barrier) effect. Consider
    > that the heat has to be transported from the Earth
    > surface to the “edge of space” where it may be
    > radiated out.

    Comment by Hank Roberts — 5 May 2008 @ 5:19 PM

126. Gusbobb says in #117: “I don’t see that what I said is any different” from his #80: “CO2 doesn’t trap heat like Al Gore [edit] portrayed in An Inconvenient Truth. CO2 delays re-radiation back into space.”

    “And I understand that when CO2 absorbs a photon that energy will be transformed into an increased kinetic energy. But for how long? (and why the edit of my 2-word description of the cartoon characters???)”

There are different levels of understanding of the effects of GHGs on Earth’s surface temperature, depending on how detailed the explanation is. “A Saturated Gassy Argument” gives a high level of scientific explanation. Others, who address mainly an audience of non-scientists, may use words such as a “blanket” or “trapping” effect, rather than go into scientific detail about back radiation and energy balance. Why do these more simplified explanations bother you?

As to your next question about why the edit, it’s because the word you used is inflammatory, an insult; it’s an ad hominem attack. Ad homs add nothing to the discussion, regardless of the source.

    Comment by Lawrence Brown — 5 May 2008 @ 5:36 PM

  127. Thank you for the quick response.

1. If there is no lag, why is there a lag between concentrations and temperature? Is this a lag between concentration and net heat, or between net heat and temperature?

    4. I was thinking of the article linked to by #95 Lynn:

“Meanwhile, back in the present, we seem to have awakened the big monster (23 times more potent than CO2), who is poised to bear down on us. Methane is now gassing out off Siberia.”

    See: http://www.climateark.org/shared/reader/welcome.aspx?linkid=97317

    Comment by Consumer — 5 May 2008 @ 5:39 PM

  128. Gusbob, per my earlier suggestion, I will try to compose a coherent reply that addresses some of your queries and misconceptions over in the “Start Here” Thread to avoid hijacking the main thread. Hopefully, this will serve as an incentive to you and others to avail yourselves of the other offerings vectored by that thread.

    Comment by Ray Ladbury — 5 May 2008 @ 5:58 PM

129. Gusbob, I spoke too soon. The Start Here thread is closed. Let’s go back to the “A Day when Hell Was Frozen” thread:
    http://www.realclimate.org/index.php/archives/2008/02/a-day-when-hell-was-frozen/langswitch_lang/sw

    A tutorial post would be quasi-on-topic there.

    Comment by Ray Ladbury — 5 May 2008 @ 6:15 PM

  130. Consumer,

the lag is there for the same reason that a swimming pool doesn’t heat up immediately on a hot day: it takes time to heat the planet, especially the oceans, which have a high heat capacity and cover 70% of the surface. If the planet had no atmosphere and no oceans, then it would change temperature much more quickly, and be more sensitive to things like changes in the sun (hotter days, colder nights).
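    A crude feel for the size of that lag comes from a one-box mixed-layer estimate (a back-of-the-envelope sketch; the 70 m depth and the 1 W/m²/K feedback parameter are assumed round numbers, not measured values):

```python
# Rough e-folding time of an ocean mixed layer responding to a forcing:
# tau = (rho * c_p * depth) / lambda, where lambda is the climate
# feedback parameter (W m^-2 K^-1). All numbers are ballpark assumptions.

RHO_SEAWATER = 1025.0   # kg/m^3
CP_SEAWATER = 3990.0    # J/(kg K)
SECONDS_PER_YEAR = 3.156e7

def mixed_layer_lag_years(depth_m, feedback_wm2k):
    heat_capacity = RHO_SEAWATER * CP_SEAWATER * depth_m  # J m^-2 K^-1
    return heat_capacity / feedback_wm2k / SECONDS_PER_YEAR

# A ~70 m mixed layer with lambda ~ 1 W m^-2 K^-1 gives a lag of roughly
# a decade; mixing heat into the deeper ocean stretches this out further.
print(round(mixed_layer_lag_years(70.0, 1.0)))  # 9
```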

    As gavin mentions, the methane feedback might be harmful over centuries, but for now I’d be more worried about the increases in greenhouse gases (mainly CO2).

    Comment by Chris Colose — 5 May 2008 @ 6:18 PM

  131. Re #119, #116, #95,

I think the 2007 increase is significant, in light of:

… “Of particular concern are potential positive feedbacks that could amplify increases in the concentrations of greenhouse gases — water, CO2, methane and nitrous oxide (N2O) — effectively escalating climate sensitivity to initial anthropogenic carbon input”.

Jan. 2008, Zachos, Dickens & Zeebe, “An early Cenozoic perspective on greenhouse warming and carbon-cycle dynamics”

    http://www.es.ucsc.edu/%7Ejzachos/pubs/Zachos_Dickens_Zeebe_08.pdf

It still seems logical to me that bursts of methane from thawing permafrost and heating of wetlands can explain an apparent warming-CO2 lag in some past episodes of warming.

    Comment by pat n — 5 May 2008 @ 6:56 PM

  132. Gusbob, I think I was a little too critical of you before. You seem to be getting the message. Just pose your questions in the form of “questions from a humble learner,” instead of as assertions challenging the competence of the scientists here. It will really work for you.

    Comment by Ron Taylor — 5 May 2008 @ 8:17 PM

  133. I have two comments to make.

1. For a Paleoclimate discussion thread, there is a remarkable lack of discussion regarding the Paleoclimate (which does not seem to be correlated with CO2 levels much at all throughout climate history).

    2. Methane concentrations in the atmosphere have stabilized and are probably falling now so all the doom and gloom discussion about Methane releases are not supported by the facts.

    Comment by Lowell — 5 May 2008 @ 8:23 PM

  134. As a scientist (geologist) who is neither a sceptic nor a warmer I visit sites such as this to get an appreciation of where the science regarding AGW is heading. After reading many of the posts here I can get an appreciation of how Copernicus or Darwin must have felt. You should collectively hang your heads in shame at the close mindedness expressed here. It is arrogant in the extreme to assume that you can effectively model something as complex as the atmosphere and its interaction with the earth and the solar system using the current state of computers and knowledge of how the ocean or any other process that you haven’t yet become aware of works and yet be so certain and dismissive of anyone such as gusbob who dares to put his hand up with different ideas.
    Geology teaches me that nothing is “ideal” or constant in the scheme of things and that the earth will go on regardless. Don’t forget that we are just passengers on this large billiard ball with a thin plastic coating that is the atmosphere.

    Comment by greg smith — 5 May 2008 @ 8:51 PM

  135. Lowell, check out some of Dana Royer’s work for the connection between CO2 and climate over million-year timescales. The supergreenhouses of the Cretaceous, snowball earth, glacial-interglacial cycles, the PETM, the hot earth during the faint sun, the evaporated oceans of Venus, the history of Mars, and many other examples all have CO2 in them in some way or another. You could say the relation of carbon dioxide to planetary climate is like evolution is to biology.

Methane does appear to be levelling off, but it’s hard to say with high confidence what the future trends will be. In any case, the methane from human activities is a bit of a separate issue from the deep ocean methane feedback, which is supposedly a response to warmer temperatures and occurs over centennial timescales.

    Comment by Chris Colose — 5 May 2008 @ 8:55 PM

136. I do not always agree with Gavin, but he has more patience than any saint. However, like a saint, I fear he is naïve when dealing with the professional agents provocateurs who come in and steal his column inches.

    Column inches with readership have value, and I fear that professionals are stealing them. That is theft and wire fraud.

    Wire fraud is a law with teeth. In this case, it might not be prosecuted under the Bush Administration, but Bush’s tenure is shorter than the wire fraud statute of limitations.

    Comment by Aaron Lewis — 5 May 2008 @ 9:39 PM

  137. Lowell,

    Your first para. states a belief but is very vague. What’s your source for it? Why do you rely on that source?

    Compare what you believe to what you read here for example
    http://www.ncdc.noaa.gov/paleo/globalwarming/temperature-change.html

    or at least the picture from the beginning of that long page:
    http://www.ncdc.noaa.gov/paleo/globalwarming/images/temperature-change.jpg

    Your 2nd para. is a belief, but doesn’t follow logically from the stated fact. You appear to assume the only thing that can cause an increase in methane is the prior release of methane. Not true.

    Methane clathrate stability is related to the slow warming of the seabed with increases in ocean temperature.

“It would take around 2300 years for an increased temperature to diffuse down and warm the sea bed to a depth sufficient to cause clathrates’ release ….” en.wikipedia.org/wiki/Paleocene-Eocene_Thermal_Maximum

    That change lags air temperature and CO2 is the forcing most directly involved. By the time that amount of warming is ‘in the pipeline’ (look up “committed warming”) it’ll be a different world already.
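    That 2300-year figure is roughly what a simple diffusion estimate gives (a back-of-the-envelope sketch; the sediment diffusivity and the 100 m depth are assumed round numbers):

```python
# Back-of-the-envelope: time for a surface temperature change to diffuse
# a distance d into sediment, t ~ d^2 / kappa. Assumed round numbers only.

KAPPA_SEDIMENT = 1.4e-7   # thermal diffusivity of wet sediment, m^2/s (assumed)
SECONDS_PER_YEAR = 3.156e7

def diffusion_time_years(depth_m, kappa=KAPPA_SEDIMENT):
    return depth_m ** 2 / kappa / SECONDS_PER_YEAR

# Warming the top ~100 m of seabed sediment takes on the order of
# a couple of thousand years:
print(round(diffusion_time_years(100.0)))
```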

    Permafrost above the seabed warms faster, making it more imminent as a source of methane — you can look it up. There’s no ‘wisdom’ option, you still have to choose what you read, but try the first few dozen hits:

    http://www.google.com/search?q=permafrost+methane

    What source are you relying on for what you believe?

    Comment by Hank Roberts — 5 May 2008 @ 9:55 PM

  138. RE #133, what I’ve read (and it’s also highly stressed by AGW denialists) is that warming episodes of the past, whatever their initial causes, led to an increase in GHGs.

    I also read some years back that CH4 levels have been steady over the past many years. However, this does not mean they will always stay steady. Recent studies in Siberia indicate that methane is now outgassing in Siberia, presumably from melting ocean hydrates and permafrost, so it may be that CH4 levels will start increasing.

    See: http://www.climateark.org/shared/reader/welcome.aspx?linkid=97317

My meager understanding has it that CH4 only lasts in the atmosphere about 10 years (unlike CO2, a small portion of which can last up to 100,000 years), and then the CH4 degrades into CO2 and other things? The problem then is the rate at which CH4 goes into the atmosphere. If this is very slow, no problem, it could maintain a steady level, but if it should get into the atmosphere a lot faster, compounding the amount of CH4 before it can degrade, then the warming effect could really jump up, since CH4 is about 23 times more potent a GHG than CO2.
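    The point about rate can be made concrete with a toy one-box model (a sketch only; the 10-year lifetime is the round figure from the comment, and the emission numbers are purely illustrative):

```python
# Toy one-box model of atmospheric methane: dC/dt = E - C / tau.
# At steady state C = E * tau, so a faster source means a proportionally
# higher standing concentration. Numbers are illustrative only.

TAU_YEARS = 10.0  # rough CH4 lifetime from the comment above

def steady_state(emission_per_year):
    return emission_per_year * TAU_YEARS

def step(concentration, emission_per_year, dt_years=1.0):
    """One Euler step of the box model."""
    return concentration + dt_years * (emission_per_year - concentration / TAU_YEARS)

# Doubling the emission rate doubles the steady-state level:
print(steady_state(1.0), steady_state(2.0))  # 10.0 20.0

# Integrating from zero, the concentration approaches the steady state:
c = 0.0
for _ in range(100):
    c = step(c, 1.0)
print(round(c, 2))  # 10.0
```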

    One of the new things about this current global warming episode is the speed with which we are pumping GHG into the atmosphere. I think in the past it took hundreds (maybe thousands) of years for the warming to happen.

What’s slow is our ability to size up a problem and respond in a timely fashion. I’m thinking of the evacuation efforts during Katrina (they did know the hurricane was coming and that many lived below sea level), or 9/11, where it seems the FBI or CIA agents did know something fishy was going on in flight schools. Bureaucratic and societal inertia: that’s the name of our game. No matter how fast our computers get, no matter how much we know from paleoclimate and current climate science and physics, we’re just dumb slow in our response. Except when it comes to pumping GHGs into the atmosphere — we’re fast at that.

    Comment by Lynn Vincentnathan — 5 May 2008 @ 10:09 PM

  139. “After reading many of the posts here I can get an appreciation of how Copernicus or Darwin must have felt. …” – greg smith@134

    Just me, but I think what you understand, greg, is how the doubters of those two men must have felt.

    Comment by JCH — 5 May 2008 @ 10:12 PM

  140. > Copernicus or Darwin

    “Alas, to wear the mantle of Galileo it is not enough that you be persecuted by an unkind establishment, you must also be right.”

    – Robert Park. (Stolen from the header of Conspiracy Factory blog)

    Comment by Hank Roberts — 6 May 2008 @ 12:34 AM

  141. Re: 133 #2 Methane releases… if not a scientific citation, at least accept a recent Chicago Tribune article:

    “Chersky, Russia – Sergei Zimov waded through knee-deep snow to reach a frozen lake where so much methane belches out of the melting permafrost that it spews from the ice like small geysers.”

    http://www.chicagotribune.com/news/nationworld/chi-siberia-loner_rodriguezmay05,0,7326792.story

    Freezing to show warming trend
    Though dismissed in Russia, scientist’s climate research in remote Siberia is heating up discussions in the West

    Comment by Richard Pauli — 6 May 2008 @ 1:24 AM

142. #138, Fast and furious at dumping, Lynn; the masses extraordinarily lethargic in seeing a freight train going to crash at 1 MPH, and are paying extra gas $$$ to make it go faster. The big story Up Here in far away Arctic lands is the big blue sky ever so lasting till tonight, AH such souvenirs…… From the cosmic ray theorists, proclaiming more clouds during a solar minimum? Where are these guys? GCR theorists, care to comment?

    Going back to CO2 and greenhouse gases, I was very impressed during winter just past, by the lack of total cooling as autumn big blue turned dark during the long night. The surface might have been cold for a brief moment during mid winter, to the delight of vengeful contrarians, but alas they don’t really understand climate science, and now, nothing but blue sky, ever so steadily warming as the sun rises till solstice day. AGW is a rock solid theory, all those so far trying to discredit have failed, too bad for all of us, many press outlets don’t see it the same way.

    Comment by Wayne Davidson — 6 May 2008 @ 1:56 AM

  143. Re #134 greg smith: Both Copernicus and Darwin were well acquainted both with the ideas their theories were to replace, and with the observational record. About gusbobb, we’re working hard on getting him there too ;-)

    BTW I find rather unhistorical the implication that modelling the revolutions of the heavenly orbs, or the origin of species, were any less arrogant undertakings in their days than general circulation modelling today.

    Comment by Martin Vermeer — 6 May 2008 @ 4:13 AM

  144. Re #111

    Ike, there wasn’t any attempt to distort by leaving out the context of a sentence. My reply was to Hank’s request for a citation regarding the current orbital configuration and its possible effect on ENSO.

The article is not just about the YD. It is rather about an 11-thousand-year orbital influence on ENSO.

In regard to “idealized” models, all models are idealized. The model in the article I cited covers time periods of thousands of years, whereas the GCMs in the IPCC report operate on decades and centuries. This seems to me to create a scaling problem in trying to apply the observations and models of the paleo-record to the models we are trying to use to predict future climate.

What’s more, we know dramatic changes can and do occur in the paleo-record; however, the current GCMs are heavily biased towards the conservative. That’s understandable. It’s the safe play, like betting on red and black on a roulette wheel and gambling that the green zero doesn’t come up. Odds are it won’t during the next few decades. Eventually, however, the zero will come up. It could be a sudden melting of the Antarctic ice sheets, a mysterious shutdown of the THC, a super volcano, or perhaps even a global energy and economic crisis leading to a dramatic reduction of greenhouse gases.

    Comment by Jim Cross — 6 May 2008 @ 5:57 AM

  145. gusbobb writes:

    I thought that despite being a well mixed gas, that CO2 concentrations get lower the higher the altitude in addition to the lower density.

No. It’s well-mixed at least through the troposphere and stratosphere, which together are pretty much 99% of the mass of the atmosphere.

    Comment by Barton Paul Levenson — 6 May 2008 @ 6:38 AM

  146. greg smith posts:

    As a scientist (geologist) who is neither a sceptic nor a warmer I visit sites such as this to get an appreciation of where the science regarding AGW is heading. After reading many of the posts here I can get an appreciation of how Copernicus or Darwin must have felt. You should collectively hang your heads in shame at the close mindedness expressed here.

    Somehow this doesn’t sound like a geologist talking to his colleagues. Especially at the mention of Galileo so beloved of pseudoscientists. Where did you get your degree and when?

    It is arrogant in the extreme to assume that you can effectively model something as complex as the atmosphere and its interaction with the earth and the solar system using the current state of computers and knowledge of how the ocean or any other process that you haven’t yet become aware of works and yet be so certain and dismissive of anyone such as gusbob who dares to put his hand up with different ideas.

    What is arrogant about it? Atmospheric models work. They’re not perfect, but they don’t have to be. You’re assuming that we have to have perfect knowledge of the atmosphere to say anything about it at all, and that’s just not true. And it’s hard to believe a geologist wouldn’t know that. Even a geologist has to know something about the atmosphere, if only to understand weathering and geochemical cycles.

    Geology teaches me that nothing is “ideal” or constant in the scheme of things and that the earth will go on regardless.

    What part of “geology” says that?

    Don’t forget that we are just passengers on this large billiard ball with a thin plastic coating that is the atmosphere.

    We know.

    Comment by Barton Paul Levenson — 6 May 2008 @ 6:47 AM

  147. Greg Smith, Your attitude represents such a profound level of ignorance, it belies your claims to be a scientist. Not only are you seemingly unaware that we model complex systems all the time, you don’t even seem to be aware of the purpose of modeling. We model the relativistic collisions of Uranium atoms. We model the explosions of supernovae. We model percolation of oil, water and other fluids through rock and soil. We model the dynamics of DNA molecules consisting of hundreds of thousands of individual atoms. I personally have modeled cratering on icy satellites. We model these systems not to reproduce every detail of their dynamics on the computer, but rather to gain insight into their dynamics. The fact that you are so unnerved that you throw up your hands when confronted with a complicated problem merely shows that you are unwilling to invest the effort to understand anything. Yours is an attitude that is profoundly anti-scientific.

    Comment by Ray Ladbury — 6 May 2008 @ 6:48 AM

  148. Re #133 Lowell

Of all of the potential variable influences on climate during the long history of the Earth, atmospheric greenhouse gases do seem to have the dominant influence: the Earth’s cool and warm periods match rather well with the periods in which proxy measures of atmospheric CO2 were, respectively, low and high. These data have recently been comprehensively compiled by Royer:

    e.g.:
    D. L. Royer (2006) CO2-forced climate thresholds during the Phanerozoic Geochim. Cosmochim Acta 70 5665-5675

    see also:
    Came RE, Eiler JM, Veizer J et al (2007) Coupling of surface temperatures and atmospheric CO2 concentrations during the Palaeozoic era Nature 449, 198-201.

    and very recent analyses of paleoproxy CO2 data (fossil leaf stomatal index) indicates a climate:CO2 coupling throughout the Miocene:

    Kurschner WM et al (2008) The impact of Miocene atmospheric carbon dioxide fluctuations on climate and the evolution of terrestrial ecosystems Proc. Natl. Acad. Sci. USA 105, 449-453

    And if one considers catastrophic events in Earth’s history associated with extinctions, for example, the most dominant correlate is massive tectonic events, at least one consequence of which is large increases in the atmospheric greenhouse gas concentrations:

So, a very recent reanalysis of the argon-argon dating “clock” has established, for example, that the massive Permian-Triassic extinction correlates with the massive volcanic events associated with the formation of the Siberian Traps (see Science, 25th April 2008). The end-Cretaceous extinction correlates with the tectonic events associated with the Deccan Traps formation, although an asteroid impact was involved too! The Paleocene-Eocene Thermal Maximum extinctions correlate with the tectonic events associated with the opening up of the North Atlantic at a plate boundary. The Triassic-Jurassic extinction (201.6 mya) is associated with the massive volcanic outpourings of the central Atlantic magmatic province… and so on…

…and the first great glaciation in Earth’s history seems likely to have resulted from the evolution of the earliest photosynthetic organisms that produced oxygen. Having oxidised enormous amounts of iron salts to iron oxide, the oxygen was released into the atmosphere, where it oxidised the methane that was “helping” to keep the early Earth warm, and thus caused a marked cooling….

    e.g.:

    Beerling DJ et al. (2002) An atmospheric pCO(2) reconstruction across the Cretaceous-Tertiary boundary from leaf megafossils; Proc. Natl. Acad. Sci. USA 99 (12): 7836-7840

    Keller G (2005) Impacts, volcanism and mass extinction: random coincidence or cause and effect?; Austral. J. Earth Sci 52 725-757.

    Kelley S. (2007) The geochronology of large igneous provinces, terrestrial impact craters, and their relationship to mass extinctions on Earth ; Journal of the Geological Society 164, 923-936

    Comment by Chris — 6 May 2008 @ 7:07 AM

  149. Lynn Vincentnathan (138) wrote:

    My meager understanding has it that CH4 only lasts in the atmosphere about 10 years (unlike CO2, a small portion of which can last up to 100,000 years), and then the CH4 degrades into CO2 and other things?? The problem then is the rate at which CH4 goes into the atmosphere. If this is very slow, no problem, it could maintain a steady level, but if it should get into the atmosphere a lot faster, compounding the amount of CH4 before it can degrade, then the warming effect could really jump up, since CH4 is about a 23 times more potent GHG than CO2.

    A ten year residence time?

    Usually, but not always. It depends upon the size of the pulse. A larger pulse could use up the OH radicals – and then it may stick around a little longer.

    Please see:

    When methane (CH4) enters the atmosphere, it reacts with molecules of oxygen (O) and hydrogen (H), called OH radicals. The OH radicals combine with methane and break it up, creating carbon dioxide (CO2) and water vapor (H2O), both of which are greenhouse gases. Scientists previously assumed that all of the released methane would be converted to CO2 and water after about a decade. If that happened, the rise in CO2 would have been the biggest player in warming the planet. But when scientists tried to find evidence of increased CO2 levels to explain the rapid warming during the LPTM, none could be found.

The models used in the new study show that when you greatly increase methane amounts, the OH quickly gets used up, and the extra methane lingers for hundreds of years, producing enough global warming to explain the LPTM climate.

    “Ten years of methane is a blip, but hundreds of years of atmospheric methane is enough to warm up the atmosphere, melt the ice in the oceans, and change the whole climate system,” Schmidt said. “So we may have solved a conundrum.”

    Methane Explosion Warmed the Prehistoric Earth, Possible Again
    December 10, 2001
    http://www.giss.nasa.gov/research/news/20011210/

    Schmidt, G.A., and D.T. Shindell 2003. Atmospheric composition, radiative forcing, and climate change as a consequence of a massive methane release from gas hydrates. Paleoceanography 18, no. 1, 1004, doi:10.1029/2002PA000757.
    http://pubs.giss.nasa.gov/abstracts/2003/Schmidt_Shindell.html

    [Response: Thanks for the plug (but really, there is no need!). Note however, that the 'hundreds of years' involves two effects - one is the increased lifetime (which would occur, but is limited to a few decades at most), and the second which is an assumption of continuous emissions. You can't do it by increased lifetime alone. - gavin]

    Comment by Timothy Chase — 6 May 2008 @ 7:45 AM

  150. Jim, you apparently have the actual article, not just the abstract. Can you quote exactly which orbital parameters it says are currently the same as at some past time when an abrupt change happened? There are _lots_ of variables, like the planet’s inclination and how oval or circular the orbit is. Which do they specify, and which are similar?

    Comment by Hank Roberts — 6 May 2008 @ 7:58 AM

  151. Gusbobb (117), as a fellow skeptic whose skepticism lies, in part, in the same area, I’ll try to clarify some of your simpler questions. A CO2 molecule absorbs IR radiation only in discrete frequencies, though there are large numbers of lines within the primary CO2 absorption band of 15microns, +/- 2. These frequencies are based on the quantum energy levels of bond vibration, and are somewhat problematic ala quantum mechanics. This absorption does not increase the real temperature of CO2 molecules.

    The re-emission of energy from vibrational energy will also be within the same frequency lines, though because of the Doppler effect may appear to be a different frequency to a subsequent CO2 molecule. [A similar effect can apply to the initial CO2 molecule absorbing primary radiation.] Re-emission stems from the molecule’s tendency to return to its internal equilibrium ala equipartition. More likely, especially at lower denser altitudes, the CO2 molecule will relax (lose) its absorbed vibration energy to another molecule’s translation energy (1/2mv^2) via collision. The collision is most likely with the predominant O2 or N2 molecules and will evidence a temperature rise in the collidee – produce atmospheric heating. Other stuff can also increase the atmosphere’s temperature.

    O2 or N2 will do its common transferring energy back and forth through collision (including an occasional translation transfer to, say, CO2), spreading the wealth, as it were, per Maxwell. It might also relax its translation energy through emission – up or down, but this emission is of Planck’s blackbody nature, not the discrete IR absorption/emission limited to vibration and/or rotation energy levels/modes unique to CO2, methane, H2O, etc. (but which can also emit blackbody-like.) Escaping radiation comes from GHG re-emissions at high sparse altitudes, atmospheric blackbody radiation, or some that sneaks its way directly from the surface.

    If I’ve erred here we’ll know soon enough.

    Comment by Rod B — 6 May 2008 @ 8:26 AM

  152. Hank (124), I always wondered how you did an RC cite. Thanks! Now if you’ll explain how to include smiley faces, I’ll be happy…. though maybe not Gavin

    Comment by Rod B — 6 May 2008 @ 8:38 AM

  153. Rod B #151:

    If I’ve erred here we’ll know soon enough.

    Yep ;-) (made by just typing a text smiley inside blanks, and verifying with preview)

    Only your last paragraph requires a further comment. You write

    It might also relax its translation energy through emission – up or down, but this emission is of Planck’s blackbody nature, not the discrete IR absorption/emission limited to vibration and/or rotation energy levels/modes unique to CO2, methane, H2O, etc. (but which can also emit blackbody-like.).

    True, but. At Earth atmospheric densities N2, O2 etc. are practically transparent in the thermal IR, which means according to Kirchhoff-Bunsen that they emit very little. It’s really the greenhouse gases doing the job of both absorbing and emitting, but in local thermodynamic equilibrium with the non-greenhouse gases. Kirchhoff-Bunsen says that the emission intensity for a frequency is equal to the product of Planck’s curve (for that frequency and local temperature) and absorptivity of the air parcel in %.
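    As a numerical illustration of the Kirchhoff relation quoted above (emission = absorptivity × Planck curve), here is a minimal Python sketch; the 255 K parcel temperature and the two absorptivity values are made-up numbers, not a radiative transfer calculation:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(nu_hz, temp_k):
    """Planck spectral radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
    x = H * nu_hz / (KB * temp_k)
    return 2.0 * H * nu_hz**3 / (C**2 * (math.exp(x) - 1.0))

def parcel_emission(nu_hz, temp_k, absorptivity):
    """Kirchhoff: what a parcel emits = absorptivity times the Planck curve."""
    return absorptivity * planck(nu_hz, temp_k)

nu15 = C / 15e-6                               # frequency of the 15 micron CO2 band
co2_band = parcel_emission(nu15, 255.0, 0.9)   # strong absorber -> strong emitter
n2_o2 = parcel_emission(nu15, 255.0, 1e-6)     # nearly transparent -> barely emits
```

    The N2/O2 case makes the point: a gas that absorbs almost nothing in the thermal IR emits almost nothing there either.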

    When you look down at the atmosphere from above, you see what is piecewise a Planck black-body radiation curve. Not as a whole. The reason is that for some frequencies, e.g., within the 15µ absorption band, the radiation seen by the satellite comes from a level 10 km or more above ground, where it is cold; whereas at other frequencies (outside absorption bands) we are looking straight down to the ground which is much warmer. So the net result is a “chimera” of Planck curves emitted at different temperatures.

    It is worth playing with Dave Archer’s software. It shows (as a 1-D model simulation) what gets radiated to space. If you remove everything but CO2, you’ll see how around wave number 700 cm-1 (i.e., 15µ) radiation comes from a level high up where temps are 220K (yellow Planck curve), i.e., the tropopause and above. Outside the band, radiation comes from the level of 297K (bit below the green Planck curve) which is the tropical ground temperature. Etcetera. So you look down to different levels depending on frequency. This is only a model, but real measurements from real satellites looking like this exist too.
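    The “looking down to different levels” picture can be checked numerically by inverting the Planck function to a brightness temperature. This is only an illustrative two-level toy, not Archer’s model: the 220 K and 297 K emission levels come from the comment above, and the wavenumbers are round values:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI: J s, m/s, J/K

def planck_wn(wn_cm, temp_k):
    """Planck radiance per wavenumber interval (W m^-2 sr^-1 per cm^-1)."""
    nu = wn_cm * 100.0 * C                     # cm^-1 -> Hz
    b_hz = 2.0 * H * nu**3 / (C**2 * (math.exp(H * nu / (KB * temp_k)) - 1.0))
    return b_hz * 100.0 * C                    # per Hz -> per cm^-1

def brightness_temp(wn_cm, radiance):
    """Invert the Planck function at one wavenumber to a brightness temperature."""
    nu = wn_cm * 100.0 * C
    b_hz = radiance / (100.0 * C)
    return (H * nu / KB) / math.log(1.0 + 2.0 * H * nu**3 / (C**2 * b_hz))

# In the 15 micron band (~700 cm^-1) the satellite "sees" the cold upper
# atmosphere; in the window (~900 cm^-1) it sees the warm tropical surface.
in_band = planck_wn(700.0, 220.0)
window = planck_wn(900.0, 297.0)
t_band = brightness_temp(700.0, in_band)      # recovers 220 K
t_window = brightness_temp(900.0, window)     # recovers 297 K
```

    Plotting many such wavenumbers side by side gives exactly the “chimera” of Planck curves at different temperatures described above.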

    Comment by Martin Vermeer — 6 May 2008 @ 9:34 AM

  154. Rod,
    One nit. You continue to distinguish between “blackbody” and quantum radiation. There is no distinction. Quantum interactions with surrounding matter are simply the mechanism by which the radiation field comes into equilibrium with itself and with the surrounding matter. Of course the radiation field can only interact with the materials that have absorption bands that overlap its wavelengths. For CO2, that band is around 15 microns. No matter what you do, you won’t get CO2 to radiate outside this band (you can distort the band, but only so far), and diatomic gasses such as N2 and O2 will only radiate as a result of a collisional interaction that alters their magnetic symmetry.

    Comment by Ray Ladbury — 6 May 2008 @ 10:38 AM

  155. Re #151

    Hi RodB,

    You do smiley faces by typing : – ) without spaces :-) and sad faces by typing : – ( :-(

    You wrote “This absorption does not increase the real temperature of CO2 molecules”, which is not strictly correct. I am assuming that by “real temperature” you mean the kinetic energy of the molecules. All gas molecules have this type of energy, and that is what is measured with mercury thermometers. The absorption of infrared radiation by greenhouse gases causes these molecules to be vibrationally excited, and much of this energy is lost to other air molecules through collisions, raising the kinetic energy (real temperature) of the air. The warmer air molecules then share their kinetic energy with the CO2 molecules, so raising the real temperature of the CO2.

    CO2 does not emit Planck black-body radiation. Only solids and liquids do that. But the CO2 vibration and vibration-rotation lines tend to merge and produce a band emission to space which can look like a segment of the Planck function.

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 6 May 2008 @ 11:15 AM

  156. Re #151

    Rod B writes

    “These frequencies are based on the quantum energy levels of bond vibration, and are somewhat problematic ala quantum mechanics.”

    What is problematic about them?

    Comment by Paul Middents — 6 May 2008 @ 11:20 AM

  157. Re #153

    Martin,

    You wrote:

    “Kirchhoff-Bunsen says that the emission intensity for a frequency is equal to the product of Planck’s curve (for that frequency and local temperature) and absorptivity of the air parcel in %.”

    Can you give me a reference for that fact? Kirchhoff died in 1887, and Bunsen in 1899 but Planck did not publish his function until 1900, although one was already thought to exist. As I understand it, it was not until 1915 that Einstein showed that gases obey Kirchhoff’s law.

    Cheers, Alastair.

    Comment by Alastair McDonald — 6 May 2008 @ 11:44 AM

  158. RE #134 & your claim that scientists are arrogant, acting as if they know it all.

    Let me go over (again) my gripes with both science and denialism: science requires 95% confidence that there’s a link before making a claim (I also demand that in the social science research I and my students do); denialists require either 99% or 101% confidence.

    Scientists have to avoid the false positive (making untrue claims) to preserve their reputation. But those concerned with living in the world (people, policy makers, environmentalists) should be concerned with avoiding false negatives (failing to address true problems).

    I call this “THE MEDICAL MODEL” – we’d be disconcerted if our doc told us he/she was only 94% confident our lump was cancerous, and to come back in a year to see if it could get up to 95% so they could operate.

    We buy insurance without much certainty a tornado or other harm will damage our house. The virtue is called prudence.

    So too we should have rigorously started mitigating AGW at least by 1990 (5 years before the first scientific studies started reaching 95% confidence, or the .05 alpha significance level, in 1995) – especially since doing so actually saves us money and helps the economy (I found out to my amazement). Any other strategy is foolhardy, profligate, and immoral.

    What we need is not so much more and more science (tho keep up the good work, all you scientists!), but more and more persons with strength of character and morality, persons who are not ashamed to stand up, do the right thing, and mitigate AGW.

    Comment by Lynn Vincentnathan — 6 May 2008 @ 11:47 AM

  159. Re:#151

    Black (grey) air?

    Martin Vermeer has answered you already and I’ll try to express it another way. We are now down to the level of quantum theory. Matter can only radiate by jumping between quantum levels and in this case, the transition which involves the emission of photons has to satisfy another condition. If you start with something symmetrical like an oxygen molecule and make it vibrate it will still be symmetrical. That means that the negative charge in the molecule cannot have moved relative to the positive. The symmetrical vibrating molecule will not experience any interaction with a vibrating electric field. An incoming photon cannot excite such a vibration and conversely such a vibration cannot decay by emission of a photon. But things are quite different for triatomic molecules like CO2 and H2O which have electrically asymmetrical vibrational states.

    So can air never radiate infra-red at all? I am now speculating and opening myself to correction. Might it be possible (at very low level) by breaking the symmetry in a collective way? The first possibility might be during an oxygen-nitrogen (or possibly an O2-O2) collision, when we have a different set of quantum states existing for a short time. Has this ever been considered? Tyndall never observed it, as far as I know, but that was a long time ago. It is thus likely to be far too small to affect the greenhouse.

    A quite different question: a low-level contrarian argument is based on the assertion that CO2 is only a tiny proportion of the air. The rough answer, of course, is that CO2 and H2O are 100% of what matters, and that this argument diverts attention to the less significant non-absorbing parts of the atmosphere. But what about a more careful thought experiment: what would happen to our greenhouse if the oxygen and nitrogen were removed? This would remove a heat sink and alter the convection and lapse rate. Is the answer obvious?

    Comment by Geoff Wexler — 6 May 2008 @ 12:37 PM

  160. Re 158

    Well said, Lynn Vincentnathan. But strength of character and morality are not sufficient, when there are such powerful forces which negate efforts to mitigate AGW. Many decades of indoctrination and propaganda have produced massive inertia. Maybe it takes a sort of social Kuhnian paradigm shift, requiring that the old thinking in the old (and powerful) heads has to die before anything really changes….e.g.

    http://www.orionmagazine.org/index.php/articles/article/2962

    Comment by CL — 6 May 2008 @ 1:20 PM

  161. Alastair #157:

    Good question! According to Wikipedia, Kirchhoff described black-body radiation as early as 1862.

    Kirchhoff’s law can be proven from the principle of thermodynamic equilibrium. I’m sorry to use Wikipedia again, but I don’t own any statistical physics textbook; any good one should do.

    (I didn’t manage to find anything on Einstein 1915 regarding Kirchhoff’s law. Did Einstein do experimental work? I know he worked theoretically on stimulated emission.)

    What Max Planck did was find a closed analytical expression for this empirical curve (a surprisingly fractious problem!). And reluctantly conclude that a reasonable physical interpretation would be that the radiation comes in “packets” of energy proportional to the frequency… and the rest is history :-)

    Comment by Martin Vermeer — 6 May 2008 @ 2:29 PM

  162. Back to the present – it looks like a blob of unusually warm air from eastern Siberia is about (May 6th) to flow over the thinnest part of the Arctic ice. Will the reported “unusually thin ice” melt through a month or two early, or is the ice actually not quite so fragile?
    I suspect somewhere between the two. See: NH Jetstream and Northern Sea ice

    Comment by Ken Rushton — 6 May 2008 @ 3:39 PM

  163. > http://www.orionmagazine.org/index.php/articles/article/2962

    Excellent article.

    Those with old fading eyes like mine, note, this link is “orion” like the constellation, not “onion” like the humor magazine.

    Comment by Hank Roberts — 6 May 2008 @ 3:55 PM

  164. Re #159 Geoff Wexler:

    So can air never radiate infra-red at all? I am now speculating and
    opening myself to correction. Might it be possible (at very low level)
    by breaking the symmetry in a collective way? The first possibility
    might be during an oxygen-nitrogen (or possibly an O2-O2) collision,
    when we have a different set of quantum states existing for a short
    time. Has this ever been considered?

    This was mentioned briefly in the comments on raypierre’s Venus article:

    http://www.realclimate.org/index.php/archives/2008/03/venus-unveiled/#comment-82917

    Comment by Martin Vermeer — 6 May 2008 @ 4:10 PM

  165. #125 Hank, didn’t think I used any math, or did I?

    Comment by Martin Vermeer — 6 May 2008 @ 4:20 PM

  166. Lowell wrote (#133): “Methane concentrations in the atmosphere have stabilized and are probably falling now so all the doom and gloom discussion about Methane releases are not supported by the facts.”

    That is incorrect. According to the latest data from NOAA, methane levels rose sharply in 2007 after a decade of stability:

    Greenhouse Gases, Carbon Dioxide and Methane, Rise Sharply in 2007
    ScienceDaily
    Thursday 24 April 2008

    Excerpt:

    Last year alone global levels of atmospheric carbon dioxide, the primary driver of global climate change, increased by 0.6 percent, or 19 billion tons. Additionally methane rose by 27 million tons after nearly a decade with little or no increase. NOAA scientists released these and other preliminary findings today as part of an annual update to the agency’s greenhouse gas index, which tracks data from 60 sites around the world.

    [...]

    Methane levels rose last year for the first time since 1998. Methane is 25 times more potent as a greenhouse gas than carbon dioxide, but there’s far less of it in the atmosphere – about 1,800 parts per billion. When related climate effects are taken into account, methane’s overall climate impact is nearly half that of carbon dioxide.

    Rapidly growing industrialization in Asia and rising wetland emissions in the Arctic and tropics are the most likely causes of the recent methane increase, said scientist Ed Dlugokencky from NOAA’s Earth System Research Laboratory.

    “We’re on the lookout for the first sign of a methane release from thawing Arctic permafrost,” said Dlugokencky. “It’s too soon to tell whether last year’s spike in emissions includes the start of such a trend.”

    Unfortunately, “gloom and doom” discussion about methane is supported by the facts.

    Comment by SecularAnimist — 6 May 2008 @ 5:18 PM

  167. Martin and Alastair, I recommend the derivation of the Planck distribution and related material in Landau and Lifshitz, “Statistical Mechanics”–elegant and straightforward. Well, as straightforward as anything ever is in stat mech.

    Comment by Ray Ladbury — 6 May 2008 @ 6:16 PM

  168. A brief social science note on the end of this article:

    Should the paleo-community therefore increase the emphasis on synthesis and allocate more funds and positions accordingly? This is often a contentious issue since whenever people discuss the need for work to be done to integrate existing information, some will question whether the primacy of new data gathering is being threatened. This meeting was no exception. However, I am convinced that this debate isn’t the zero sum game implied by the argument. On the contrary, synthesising the information from a highly technical field and making it useful for others outside is a fundamental part of increasing respect for the field as a whole and actually increases the size of the pot available in the long term. Yet the lack of appropriately skilled people who can gain the respect of the data gatherers and deliver the ‘value added’ products to the modellers remains a serious obstacle.

    All sciences benefit from skilled synthesizers, especially those who can make the core ideas of a field accessible to those outside the field. However, the current organization of higher education, and the way resources are allocated within it, provides the greatest reward to those who publish small pieces of new data in refereed journals over those who write the book-length pieces necessary for synthesis. To accomplish a change of emphasis in paleo-climatology would require reforming the entire higher education enterprise – something that would be a good idea, but not likely to happen quickly enough to make a difference for this particular project.

    Comment by Sue — 6 May 2008 @ 7:37 PM

  169. It’s fun watching all those people picking over computer architecture vs. performance details I worked through for my PhD c. 1996 (John Mashey may even remember it: he was an examiner).

    No one has asked me to help with performance problems since then, yet a lot of the “big iron” problems of the early 90s have come back as commodity performance problems with multicore.

    But anyway, back to the main question: I find it really encouraging that the paleo climate is starting to get more attention. Of course there are huge difficulties but anything that sheds more light on where we are going has to be good (except for the doubters, whose standard of proof will increase to 200% — Lynn Vincentnathan #158).

    Comment by Philip Machanick — 6 May 2008 @ 10:25 PM

  170. Re #134 Greg Smith:

    Presumably you are similarly dismissive of the modelling done by earth scientists? For example:

    Determining Chondritic Impactor Size from the Marine Osmium Isotope Record
    François S. Paquay, Gregory E.
    Science 11 April 2008: 214-218.
    The difference in osmium concentrations and isotopes between seawater and asteroids allows reconstruction of impact occurrence and size, including for the Cretaceous.

    Comment by Chuck Booth — 6 May 2008 @ 11:01 PM

  171. Re #166 and others about methane levels…

    One needs to look at the real figures… The “sharp” increase in methane levels in 2007 pushed them from about 1790 to 1795 ppbv, not much beyond the year-by-year variability of the past decade, over which levels were nearly stable.
    See: http://www.esrl.noaa.gov/gmd/ccgg/iadv/ and look for CH4.

    One interesting point is that the d13C content of methane is slightly increasing. This may point to a change in source (either more human-induced, like rice paddies, or more fossil methane), but I have not yet seen figures for the d13C content of the different methane sources. Unfortunately, d13C measurements in methane only started around the time that methane levels stabilised.

    About historical events: one can’t compare the current situation with the PETM event, where enormous amounts of methane were released, far more than natural oxidation could transform into CO2 and water. Better to compare with the previous interglacial, the Eemian.

    The Eemian was on average up to 3°C warmer than today for at least 5,000 years. This caused melting of a (large?) part of the Greenland ice sheet and temperatures in the Alaskan tundra at least 5°C warmer than today’s. If we may suppose that the higher temperatures were also present in the rest of the Arctic, then we have a clue to how much extra methane permafrost thawing can yield.

    Methane levels in ice cores followed temperature trends during the whole Eemian (see here) and reached a maximum of about 700 ppbv at maximum temperature, from a level of 650 ppbv at a temperature equal to today’s. Today we have reached 1795 ppbv, largely due to human emissions, of which land-use change may be the largest contribution. Thus even if temperatures increase to Eemian levels, that would add no more than 50 ppbv to current methane levels…

    Comment by Ferdinand Engelbeen — 7 May 2008 @ 3:09 AM

  172. Re posts 147 and 170

    I don’t regularly post on any forums but feel compelled to answer both of the above. Geologists and geophysicists use models constantly to try to evaluate what we can’t see or directly measure, that is, what is under the earth. We also have a pretty good appreciation of palaeoclimate, both on a long-term (i.e. geological) time frame and on a human scale. We need to understand the latter to make predictions of rock facies away from known data points such as a well bore or mine. Making predictions under uncertainty is our business, and the more successful of us are better at putting together the disparate pieces of often unreliable or fragmentary information required to make those predictions. We also understand that no model, no matter how sophisticated its visualisation etc., is accurate. This is why we use Monte Carlo simulation so much. Even the most sophisticated reservoir modelling software used to predict how, say, a petroleum reservoir will perform will only give an approximation of the real thing, subject to the accuracy of the algorithms used and the quality of the input data. Such simulators, which until the last ten years required supercomputers to run, may be modelling 30 to 40 variables and are nothing like trying to model the atmosphere.

    In addition, I have not read anywhere how climate modellers account for long-term (for humans, that is) variations in climate such as the PDO, which is making the news at present. Nor have I been able to find out how you account for random, but frequently occurring, events such as volcanic eruptions. Both of these obviously change the climate. In addition, how do you estimate changes in the level of forestation? Don’t forget that North Africa kept much of the Roman Empire happily fed for over 400 years. I could look it up, but I doubt that there is a lot of wheat grown in Libya these days.

    I guess I could also say that when people start saying things such as there is a “consensus” or the “science is settled”, I am unnerved, which explains my reference to Copernicus. We can never fully understand a complex system such as the earth, with millions of variables, and to say otherwise is arrogant. For example, electromagnetic theory in the 1890s seemed to physicists at the time to give a full explanation of the interaction between particles and electricity/magnetism, but the physicists were constrained by their inability to adequately measure very small-scale things and had their theories knocked into a cocked hat when Einstein came along. (As an aside, I wonder who peer-reviewed him? “Peer review” is also a term I read frequently on blogs such as this.)

    The point of all of this is that we never know enough about the physical world, something we have only been able to measure or image half adequately in the last twenty years or so.

    So when a poor sod like Gus-bob asks a few, to you, dumb questions just remember to treat him with respect.

    Comment by Greg Smith — 7 May 2008 @ 3:46 AM

  173. Re #134: Damn, does this mean I should chuck my book on the Physics of the Earth’s Interior? The blurb on the back says it’s for geophysicists and earth scientists, but it only has physics and maths (models) in it :-(

    More seriously, in the modern sciences we can’t get very far before running smack into a model or the need for one: whether it is statistical only, mathematical, physical, or a more qualitative construction, a model is about the only thing that connects data to ideas about how that data came to be. Mathematical models based on the physical aspects of a system IMO push research towards understanding far more effectively than waving hands about and saying it is all too complicated.

    Comment by Donald Oats — 7 May 2008 @ 4:08 AM

  174. #166 Secular Animist,

    Further to what you post.

    1)
    NSIDC now states that the current state of the Arctic ice pack points to the likelihood of a new record minimum this year. http://nsidc.org/arcticseaicenews/index.html That’s despite the past behaviour of the ice pack indicating that a record minimum is not followed by another new minimum the next year. Maslowski’s projection of an ice-free Arctic summer by 2013 may indeed be optimistic, as it didn’t include last year’s events. We could see the transition to a seasonally ice-free Arctic happen as fast as figure 3 of Nghiem et al., “Rapid reduction of Arctic perennial sea ice” (2007), implies:
    http://seaice.apl.washington.edu/Papers/NghiemEtal2007_MYreduction.pdf 303kb pdf

    2)
    2007’s melt caused surface Arctic Ocean waters in the exposed region to warm by as much as 5°C.
    http://www.sciencedaily.com/releases/2007/12/071212201236.htm

    3)
    And at least part of the 540 billion tonnes of methane (clathrate) on the Arctic ocean shelf floor may already be beginning to destabilise.
    http://www.spiegel.de/international/world/0,1518,547976,00.html
    QUOTE
    Data from offshore drilling in the region, studied by experts at the Alfred Wegener Institute for Polar and Marine Research (AWI), also suggest that the situation has grown critical. AWI’s results show that permafrost in the flat shelf is perilously close to thawing. Three to 12 kilometers from the coast, the temperature of sea sediment was -1 to -1.5 degrees Celsius, just below freezing. Permafrost on land, though, was as cold as -12.4 degrees Celsius. “That’s a drastic difference and the best proof of a critical thermal status of the submarine permafrost,” said Shakhova.

    Paul Overduin, a geophysicist at AWI, agreed. “She’s right,” he said. “Changes are far more likely to occur on the sea shelf than on land.”
    ENDQUOTE

    Would an increasing amount of open water in the Arctic summer cause an increasing amount of vertical mixing through wind/sea-surface induced transport/turbulence?
    I suspect so, but I’m not qualified enough to be sure either way.

    Comment by CobblyWorlds — 7 May 2008 @ 4:22 AM

  175. In 162 Ken Rushton writes: “Back to the present – it looks like a blob of unusually warm air from eastern Siberia is about (May 6th) to flow over the thinnest part of the Arctic ice. Will the reported ‘unusually thin ice’ melt through a month or two early, or is the ice actually not quite so fragile?” I wonder if someone can clear up a mystery for me. I understand that “old” ice is thicker and melts more slowly than new ice, but I don’t think I understand annual ice. This is the approximately 9 million sq km of ice that melts and freezes every year, and so is, by definition, always less than one year old. It is my understanding that the thickness of annual ice is purely a function of how soon winter arrives and how cold it gets. In many parts of the Arctic this season, winter arrived early and was colder than average, so the annual ice in those parts is reported to be thicker than average. I have seen such reports from Hudson Bay and the Bering Strait. What is the situation with respect to annual ice in the Arctic at present?

    Comment by Jim Cripwell — 7 May 2008 @ 7:02 AM

  176. Greg Smith, I will give you the benefit of the doubt that perhaps you are unfamiliar with Gusbob’s history. Suffice to say he came on here and hijacked more than one thread with off-topic diatribes on pure pseudoscience. Given that history, I would contend that he is treated with exceptional tolerance here.

    As to your other contention that one must throw up one’s hands when confronted with a complicated system, I can only conclude that you have not done much modeling. Global climate models are dynamical models, and the parameters in the models are not adjustable parameters, but rather constrained by independent data. Such models are inherently more manageable than statistical models, where fit to the data determines parameter values and their error bars.
    To say that we never know enough about the physical world is meaningless. Enough to do what? We do not understand everything about the solar system, and yet this does not stop us from landing space probes on planets and asteroids. Likewise, there is much we do not understand about climate, but what we do not understand does not invalidate our well constrained understanding of CO2 and other greenhouse gasses.
    What is more, climate scientists understand the difference between climate (a purely long-term entity) and weather (under which we can file things like the variability of the PDO, etc., which average out over time). They also understand that paleoclimate deals with ancient climates and is inherently on timescales much longer than a human life. Since you do not understand these things (and much else as well, it seems), I would contend that you are not in a position to comment competently on issues of climate science.
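    The climate-versus-weather point can be sketched numerically. In this toy example (all numbers invented), a small persistent trend is buried under large year-to-year noise and a PDO-like oscillation, yet multi-decade averaging recovers it:

```python
import math
import random

random.seed(0)

# Toy series: a 0.02 deg C/yr "climate" trend plus a 25-yr PDO-like cycle
# and Gaussian "weather" noise. All parameters are invented for illustration.
TREND = 0.02
series = [TREND * y
          + 0.3 * math.sin(2.0 * math.pi * y / 25.0)  # PDO-ish oscillation
          + random.gauss(0.0, 0.2)                    # weather noise
          for y in range(100)]

first30 = sum(series[:30]) / 30.0      # mean of years 0-29
last30 = sum(series[70:]) / 30.0       # mean of years 70-99
recovered = (last30 - first30) / 70.0  # trend estimate from the two means
```

    The oscillation and the noise largely cancel in the 30-year means, so the recovered trend lands close to the 0.02 that was put in.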

    Comment by Ray Ladbury — 7 May 2008 @ 7:39 AM

  177. When I look at the plot (The Eemian in the Vostok Ice Core, referenced by Ferdinand Engelbeen in his comment #171), I do not see how he can claim that “Methane levels in ice cores followed temperature trends during the whole Eemian”.

    Comment by pat n — 7 May 2008 @ 7:56 AM

  178. I have a question about the sensitivity of our planet’s climate system. Is the vegetation on Earth a buffer against climate change, and can I therefore say that vegetation is important for the sensitivity?

    As I understand it, you use examples of past climate change to explain the future. In the past there was a functioning ecosystem, with a lot more biomass on the continents (at least in times with the same global temperature), so the ecosystem responded to the warming process. For example, rainforests: through evaporation they create cloud cover, like a shield against heat. In hotter times they grew towards the poles, perhaps giving more cloud cover for the continents and a slower heating of the whole. (I think most kinds of forests do this, more or less.) Also, if I remember correctly, in much hotter times most of our continents were covered by rainforests and other kinds of forests, which I guess acted as a buffer for continental temperatures.

    In our own time we may have finished off the rainforests by 2040. More and more area will be needed for food production, while exhausted ground is left behind. With or without global warming, we will have exhausted our ecosphere by 2100, with growing populations and so on. (Only war or climate change can stop this.)

    If this is not stupid and makes sense, I come to my main question: is this principle in the IPCC report or not? A sinking sensitivity of the system towards 2100?
    Sorry for interrupting

    Comment by Jan Umsonst — 7 May 2008 @ 8:32 AM

  179. re: 172 and volcanic effects. The effects from volcanic eruptions on climate are short-term compared with the long-term warming trend. For example, look at the effects after Pinatubo. There was short-term cooling but the warming trend rapidly resumed. Furthermore, volcanic eruptions do not produce more CO2 than mankind despite the skeptic’s urban myth to the contrary.

    Comment by Dan — 7 May 2008 @ 9:05 AM

  180. Re 172:
    Disrespect? With monumental patience, all of gusbobb’s points and questions have received detailed, relevant, lengthy and continuing responses. Disrespect would be to ignore him. The peevish and frustrated tone of some of the responses is no more than I would expect if I were to enter a geophysics forum and blithely overturn tectonic theory with the latest Hollow Earth ideas.

    Comment by spilgard — 7 May 2008 @ 9:15 AM

  181. Re Greg Smith @ 172: “So when a poor sod like Gus-bob asks a few, to you, dumb questions just remember to treat him with respect.”

    Greg, if you peruse gusbob’s earlier comments in the Galactic Glitch thread at
    http://www.realclimate.org/index.php/archives/2008/03/a-galactic-glitch/langswitch_lang/sw
    you’ll see that he is hardly the ‘poor sod’ that you assume, though to his credit he is now making an attempt to understand what people have been trying to tell him for quite some time now.

    Comment by Jim Eager — 7 May 2008 @ 9:25 AM

  182. Looks like the Ice has taken a recent nose dive below 2007 and it’s only May.

    http://nsidc.org/data/seaice_index/images/daily_images/N_timeseries.png

    Comment by pete best — 7 May 2008 @ 10:03 AM

  183. Martin and Ray,

    I am not looking for a derivation of Planck’s function. What I would like to know is the justification for using Planck’s function for black-body radiation to calculate the emissions of line radiation, a very different beast. The Fraunhofer lines are where solar radiation does NOT fit Planck’s curve.

    Cheers, Alastair.

    Comment by Alastair McDonald — 7 May 2008 @ 10:14 AM

  184. We can never fully understand a complex system such as the earth with millions of variables and to say otherwise is arrogant .

    This is simply a restatement of the old denialist canard “if we can’t understand everything, we understand nothing”, used by anti-evolutionists, anti-climate scientists, anti-everythingists.

    If you’re really a scientist, Greg, surely you can do better than this.

    Comment by dhogaza — 7 May 2008 @ 10:38 AM

  185. I posted this on another thread this morning, but it seems to fit better here:

    If you want to so something “chilling,” watch this Japanese video of polar satellite images, showing the disintegration of multi-year ice during last winter.

    http://www.homerdixon.com/download/arctic_flushing.html

    From the video:

    The image is a low-resolution reproduction of a sequence of satellite images of Arctic ice this past fall and winter. The sequence runs in a continuous loop from October 01, 2007, to March 15, 2008. A link to the high-resolution video file is also provided.

    Note the stream of multi-year ice flowing out of the Arctic basin down the east coast of Greenland at one o’clock in the image. As of the middle of March, most of the basin, including the pole itself, appears to be covered only by seasonal ice.

    Comment by Ron Taylor — 7 May 2008 @ 10:48 AM

  186. Alastair, Blackbody radiation is not a distinct type of radiation from line radiation. Atoms, molecules, and solids can only radiate between their energy levels, so “blackbody” radiation is composed of all such modes that are thermally excited. That is why you get a much better approximation of a blackbody spectrum from a solid than a gas (more modes to add together and more thermal motion to smear out those modes).
    The Planck distribution is simply the equilibrium energy density of a photon gas at temperature T. However, since photons are noninteracting, equilibrium can only be achieved by the photon gas interacting with surrounding matter. In effect, once the radiation field is in equilibrium with the matter, there will be as many excited modes decaying radiatively as there are photons exciting new modes, and the energy of the mode multiplied by the number of photons/phonons/collisions… capable of exciting the mode will give the energy density.

    Comment by Ray Ladbury — 7 May 2008 @ 1:21 PM
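    Ray’s description of the photon gas coming into equilibrium with matter can be made concrete with a short numerical sketch of the Planck distribution. This is an illustrative snippet only; the constants are standard SI values and the function name is invented for the example.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T), in W m^-2 sr^-1 per meter."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (math.exp(b) - 1.0)

# A gas in equilibrium only "fills in" this curve inside its
# absorption bands; outside them it emits essentially nothing.
print(planck_radiance(15e-6, 255.0))  # near the 15-micron CO2 band
```

    The radiance at any wavelength rises with temperature, which is why emission from warmer, lower layers of an atmosphere exceeds that from colder, higher ones.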

  187. Re #177 Pat N,

    The plot is a little confusing, but if you look at the start and end points of the warming and of the CH4 increase, there may be a (in geological terms) slight lag, of less than 1,000 years, of the CH4 increase behind the temperature increase. After the maximum temperature/CH4 levels ended, there is no visible lag of CH4 if you take the (older) temperature proxy of Petit et al. as base; or CH4 leads the temperature drop by several thousand years during a long period, but lags again at the moment temperature reaches its minimum, if you take the (more recent) temperature proxy by Jouzel as base.

    It is physically possible that ice-sheet growth which had already started led to stronger reductions of methane release than can be deduced from the temperature curve…

    Anyway, it doesn’t look as though methane release from permafrost and/or clathrates will be a problem for this century. A temperature increase of 3°C will not give more than 100 ppbv more methane (I made an error in #171: the CH4 scale in the graph needs to be doubled…). This leads to an increase of retained warmth (without feedbacks) of about 0.06 W/m2, not really impressive…

    Comment by Ferdinand Engelbeen — 7 May 2008 @ 1:21 PM
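    The order of magnitude Ferdinand quotes can be checked against the widely used simplified forcing expression of Myhre et al. (1998). The sketch below neglects the CH4/N2O band-overlap term and assumes a hypothetical 1700 ppbv baseline; it is an illustration, not his exact calculation.

```python
import math

def ch4_forcing_simple(m_ppb, m0_ppb):
    """Simplified CH4 radiative forcing (W/m^2): square-root form of
    Myhre et al. (1998), neglecting the CH4/N2O band-overlap term."""
    return 0.036 * (math.sqrt(m_ppb) - math.sqrt(m0_ppb))

# A 100 ppbv rise from an assumed 1700 ppbv baseline:
print(round(ch4_forcing_simple(1800.0, 1700.0), 3))  # ~0.043 W/m^2
```

    That is the same order of magnitude as the roughly 0.06 W/m2 figure quoted above.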

  188. #171, Ferdinand Engelbeen,

    From the graphs of yours:

    “This points to a low influence of CO2 on temperature.”

    Does it really?

    What I see is a process being initiated by insolation maxima for “Spring” (April May June), as suggested by Hansen et al in “Climate change and trace gases”, fig 2b, table 1, and accompanying text. http://pubs.giss.nasa.gov/abstracts/2007/Hansen_etal_2.html

    CO2 and Methane track closely until greater areas of the surface are exposed by the receding ice-sheets and made available to biological activity at 13k yr BP. That’s when the methane races ahead of CO2, it rises in tandem with ice sheet recession.

    Likewise your statement about 113k yr BP seems flawed to me because again you’re seeing the methane level drop as more physical area is covered by ice, thus reducing its unit-area biological output.

    CH4’s atmospheric residence time is much less than that of CO2 (for which Ocean/Atmosphere must be considered), so I’d expect it to react more rapidly to drops in temperature.

    As for the Eemian/PETM as an analogue, I’m not so sure that the past offers much of a guide given the multiplicity of human impacts. We’ll find out soon enough whether the reduction of Southern Ocean CO2 uptake, the expansion of the Hadley Cells, the ongoing transition of the Arctic to a seasonally ice-free state, are part of a wider pattern of changes ahead of time, or if they’re outliers/”noise”.

    From where I’m sitting it really looks like we’re much deeper into the briar patch than we can yet assert to a scientifically robust degree.

    Comment by CobblyWorlds — 7 May 2008 @ 1:42 PM

  189. Greg Smith wrote: “Nor have [I] been able to find out how you account for random, but frequently happening, events such as volcanic eruptions.”

    I’m very surprised by this statement, since James Hansen’s earliest circulation model from the 1980s included a random volcanic eruption, and the model prediction matched the short-term aerosol cooling effect of the 1991 Pinatubo eruption fairly closely. This has been pretty widely discussed as an example of the model’s ability. One would have to not look very thoroughly to miss it.

    Comment by Jim Eager — 7 May 2008 @ 1:47 PM

  190. Hi,
    I was wondering if there is anywhere on this site where I can find a response to the following report:
    http://epw.senate.gov/public/index.cfm?FuseAction=Minority.SenateReport#report

    I would like to make it clear that this report does not in any way undermine my view that we must strive to reverse the effects of climate change that most experts agree are occurring. I would also like to add that I think it is amazing that the media require a very high burden of proof that the climate change fears are justified. Let’s face it, if most experts in nuclear physics were to warn us of a 10% chance of meltdown in a nuclear plant, everyone would clamour to have it shut down. Yet the dangers of climate change are almost infinitely greater. Nonetheless, if anyone can point me to a well-informed response to the document I’ve just mentioned, I would be most grateful.

    Thanks and best wishes,
    Brian.

    Comment by Brian Patterson — 7 May 2008 @ 1:54 PM

  191. #183 Alastair,

    The Fraunhofer lines are where solar radiation
    does NOT fit Planck’s curve.

    Oh, but it does fit! What you see in the lines is the Planck curve of a level higher up in the solar atmosphere, where temperatures are lower. Higher opacity (stronger absorption) means that the photosphere, the fuzzy visible surface from where light escapes to space, is at a higher level within these lines than in the continuum.

    Also in the solar atmosphere, as in the Earth’s, there is a negative temperature gradient with height. Only, we’re talking about visible light here, not infrared. The negative gradient can also be observed in the form of “limb darkening”.

    On a conceptual level, there is no difference between black-body radiation and line radiation. It’s all Planck; just the absorptivity == emissivity varies.

    Comment by Martin Vermeer — 7 May 2008 @ 2:51 PM

  192. Re 176 Ray Ladbury

    Ray, you have not answered my questions. Perhaps you or others in this forum could point me towards a site or papers that explain exactly and concisely what variables are in your “dynamic models”, what constraints are placed on the data entered into these models, and who decides which variables are relevant, which ones are not, and why. In addition, could you also please explain the algorithms, with their inherent assumptions and error bars, which have been used to drive the models whose outcomes are being used to drive fundamental changes to the world’s economy. I, for one, would like to know how you understand them to work before being dismissed as incompetent to talk about climate science. As I said in my initial post, I am neither a sceptic nor a warmer, but I am a scientist and want to make scientific assessments based on actual data and not rhetoric. I too am a fellow passenger on this space ship GIGO.

    Comment by Greg Smith — 7 May 2008 @ 4:27 PM

  193. Pete wrote:
    “Looks like the Ice has taken a recent nose dive below 2007 and it’s only May.”

    Pete, I expect you saw a temporary hiccup in the charting software output, which the site warns can happen. Click “Data Note” on the home page:
    http://www.nsidc.org/arcticseaicenews/disclaimer1.html

    Comment by Hank Roberts — 7 May 2008 @ 4:39 PM

  194. re #187, ’tis true, it’s halfway between record and nominal

    Comment by pete best — 7 May 2008 @ 5:05 PM

  195. Ray Ladbury writes:

    For CO2, that band is around 15 microns. No matter what you do, you won’t get CO2 to radiate outside this band (you can distort the band, but only so far), and diatomic gasses such as N2 and O2 will only radiate as a result of a collisional interaction that alters their magnetic symmetry.

    I think CO2 actually does have other absorption lines, e.g. at 4.7 microns; it’s just that 14.99 microns is the real big one.

    Comment by Barton Paul Levenson — 7 May 2008 @ 5:29 PM
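    A quick way to see why the 15-micron band matters so much more than the 4.7-micron one is Wien’s displacement law: terrestrial emission peaks near 11 microns, so 15 microns sits close to the peak while 4.7 microns lies far out in the tail. A minimal sketch (the function name is invented for this illustration):

```python
WIEN_B_UM_K = 2897.8  # Wien displacement constant, micron kelvins

def peak_wavelength_um(temp_k):
    """Wavelength (microns) of peak blackbody emission at temp_k."""
    return WIEN_B_UM_K / temp_k

# Earth's effective emission temperature is roughly 255 K:
print(round(peak_wavelength_um(255.0), 1))  # ~11.4 microns
```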

  196. Geoff Wexler writes:

    But what about a more careful thought experiment, what would happen to our greenhouse if the oxygen and nitrogen were removed?

    The greenhouse effect would be a bit less due to the lower total pressure.

    Comment by Barton Paul Levenson — 7 May 2008 @ 5:33 PM

  197. Greg Smith writes:

    Don’t forget that North Africa kept much of the Roman Empire happily fed for over 400 years. I could look it up but I doubt that there is a lot of wheat grown in Libya these days.

    Agriculture is Libya’s second largest economic sector after oil.

    I guess I could also say that when people start saying things such as there is a “consensus” or the “science is settled” that I am unnerved which explains my reference to Copernicus.

    ANY scientist in ANY field would be familiar with “the scientific consensus” and would realize that that and peer review are how modern science works.

    We can never fully understand a complex system such as the earth with millions of variables and to say otherwise is arrogant .

    Nobody is saying we can fully understand it. We can understand it well enough to draw a number of conclusions. That has been true ever since Torricelli and others established that higher altitudes were generally colder and that pressure fell with altitude back in the 1600s.

    For example electromagnetic theory in the 1890s seemed to physicists at the time to give a full explanation of interaction between particles and electricity/magnetism but the physicists were constrained by their inability to adequately measure very small scale things and had their hats knocked into a snook when Einstein came along. (As an aside, I wonder who peer reviewed him? “Peer review” is also a term I read frequently on blogs such as this)

    Einstein’s peer reviewers were the editors of the peer-reviewed journal, Annalen der Physik, which he published in.

    Comment by Barton Paul Levenson — 7 May 2008 @ 5:48 PM

  198. Greg, I found this website about climate models to be helpful:

    http://tinyurl.com/5mdu9y

    Comment by JCH — 7 May 2008 @ 5:52 PM

  199. Does anybody have a link to a debunking of the “400 scientists” list? I think I saw this covered on Deltoid and a few other science blogs, but I don’t remember the details. I think the main point was that most of the people on the list are not climatologists. Ross McKittrick, if I’m not mistaken, is a mining engineer.

    Comment by Barton Paul Levenson — 7 May 2008 @ 5:58 PM

  200. Eric (skeptic) – if you’re still on this page, I’ll post some information on clouds for you. It shows that reduction in albedo from 1985 to 1998 contributed many times more to global warming than greenhouse gases. Albedo has been increasing again in the last few years.

    Comment by Mike — 7 May 2008 @ 6:10 PM

  201. Barton, I don’t know which one of the many to suggest!
    http://www.google.com/search?q=debunking+%22400+scientists%22+climate

    Comment by Hank Roberts — 7 May 2008 @ 7:02 PM

  202. Greg Smith, presuming you are actually serious about wanting to learn about climate modeling, you have a steep learning curve. Your criticisms to date don’t exactly demonstrate that you’ve mastered the basics. For this reason, I suggest you start with some systematic study of the basic physics. Ray Pierrehumbert has written a pretty darn good textbook with problems. Start here:
    http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html

    Shorter accounts can be found on this website, e.g. here
    http://www.realclimate.org/index.php/archives/2007/08/the-co2-problem-in-6-easy-steps/

    and here

    http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/

    If what you are looking at is why climate models have adopted a climate sensitivity of 3 degrees Celsius per doubling of CO2, that is a bit more involved. There are more than 20 GCMs actively being used and they all use somewhat different methods for constraint. The fact that they agree as well as they do ought to tell you something. The fact that where they do not agree, all the uncertainty is on the high side ought to tell you more. If you are serious, I am afraid that you will have to learn the science.

    As to dynamical models in general, do you understand the difference between them and statistical models?

    Comment by Ray Ladbury — 7 May 2008 @ 7:27 PM
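    For readers following the discussion of forcing per doubling: the standard simplified expression from Myhre et al. (1998) gives about 3.7 W/m2 for any doubling of CO2, regardless of the starting concentration. A minimal sketch, for illustration only:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing (W/m^2), Myhre et al. (1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# The logarithm makes every doubling worth the same forcing:
print(round(co2_forcing(560.0, 280.0), 2))  # ~3.71 W/m^2
```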

  203. Re 187, Ferdinand,

    As shown on the Eemian Vostok Ice Core plot … it’s clear that methane lagged temperature until about 129,000 years ago but that temperature, methane and CO2 peaked about the same time (128,000 years ago). The peak CO2 concentration 128,000 years ago was near 290 ppm, much lower than current.

    It’s also clear that things other than temperature were at work during the Eemian – thus I believe we cannot assume as you said that … “A temperature increase of 3°C will not give more than 100 ppbv more methane” …

    Comment by pat n — 7 May 2008 @ 7:48 PM

  204. Barton, if you meant the newer “500 Scientists” bogus list from Heartland, that’s debunked here:

    http://scienceblogs.com/deltoid/2008/05/i_must_be_psychic.php

    Comment by Hank Roberts — 7 May 2008 @ 8:38 PM

  205. Martin (153), I agree. My supposition that N2 (might) convert some of its translation energy to blackbody infrared was just a blue-sky thought. I had no reason to believe it.

    Also, if a CO2 emits (radiates) energy that it just absorbed into a precise quantized energy level wouldn’t it radiate out the exact same energy regardless of its environmental temperature? Collectively the sum emission from a large number of molecules would be less than the energy absorbed (and look “cooler”) but that’s because a bunch do not re-emit at all.

    Ray (154), I still disagree with your distinctions, though not the concept. Blackbody broadband radiation characteristics are different from quantized line spectrum radiation. It is true that a molecule absorbs precise quantized energy packets (and re-emitting similar) from a pool of broadband blackbody radiation and doesn’t know the difference. But the energies and genesis are not directly the same. On the other hand, I guess radiation is radiation

    You say, “…diatomic gasses such as N2 and O2 will only radiate as a result of a collisional interaction that alters their magnetic symmetry.” This is new to me and interesting. Does this happen often?

    Paul Middents (155) says, “…What is problematic…?” Whether a molecule actually fills one of its energy pots from the available environmental radiation is a quantum mechanical process, and it might or might not.

    Geoff Wexler (159) says, “…Matter can only radiate by jumping between quantum levels…”
    Not true. Radio, TV, and cellular radiation, for example, do not happen that way. I’m with you on the rest of your post.

    I have been tempted as a skeptic to make something out of the extremely minor amount of CO2 causing such major problems. But it is not valid. There are all kinds of things that rely on relatively minute amounts to accomplish big things: most catalyst-aided reactions, for instance; a teensy amount of poison can fell a 300 lb guy in seconds. CO2 seems to act mainly as a shovel: scoop up the radiation, shovel it out to N2 or O2; a little shovel can handle tons of coal.

    Comment by Rod B — 8 May 2008 @ 1:00 AM

  206. With my training in chemical engineering, I’m wondering if the climate is behaving like a distillation column. Like adding more steam (i.e., heat) to a reboiler, greenhouse gases (GGs) absorb more of the heat that would have been lost to space. In a distillation column, when more steam is sent to the reboiler (i.e., a heat exchanger at the bottom of the tower), more liquid is vaporized and sent up the tower. If unchecked, more of the higher-boiling component (think water and ethanol as the two main components inside the tower) is sent up the tower. Eventually, the purity of the ethanol going overhead as the distillate decreases as the concentration of water increases (not good!). So, how does one compensate if steam to the reboiler (i.e., the source of heat to the system) cannot be cut back? Easy, you send some of the condensed overhead stream (called reflux) back down the tower as a liquid to cool it down. So, if more heat is being redirected back to the earth’s surface due to GGs, then it appears to me that the atmosphere over the oceans (such as in the SH) causes more water to be vaporized. But instead of producing a positive feedback, it condenses at some altitude as it loses heat. Eventually, this heat finds its way out to space. If all this happens very fast (i.e., it’s in equilibrium), I don’t see how “climate sensitivity” occurs.

    So, to continue the analogy, the incoming irradiance plus the additional heat from GG absorption is like the steam to the reboiler, and the coldness of outer space is the overhead condenser. When more heat is applied (via GGs), all that means is more vaporization and condensation going on between the earth’s surface and outer space. This is exactly what happens in a tower when more steam and reflux is added. There is more vapor traffic up the tower due to additional heat added to the reboiler, and when distillate is sent down the tower to compensate, there is more liquid traffic going down the tower. Overall, this does tend to increase the temperature at the bottom of the tower (or the earth’s surface, if you will) and lower the temperature at the top of the tower (or the stratosphere, if you will). However, both temperatures reach a plateau. The temperature does not run away. In the example above, the overhead and bottoms temperatures of the distillation tower reach their respective boiling points at the pressure of the tower. So, if the tower was run at atmospheric pressure, the bottoms stream can never exceed 212 F (the bp of pure water) and the top stream can never exceed 176 F (the bp of the ethanol-water azeotrope, if I recall).

    Maybe this is why we are seeing no temperature increases across the SH (mostly covered by oceans), and the highest temperature increases across the NH, particularly Asia (the largest land mass of all). In essence, there is not enough moisture to carry the heat into the atmosphere where it can “lose” it (via condensation) at higher altitudes. It also emphasizes the importance of convection, since it is convection that transfers heat up the tower.

    Comment by Chris N — 8 May 2008 @ 1:04 AM

  207. All of the above, et al: I greatly appreciate the responses re atmospheres (gases) emitting blackbody-type radiation. They do, however, illuminate a significant dilemma. Part of Kirchhoff’s assertions (law) was the universality of blackbody radiation (which, incidentally, was known of before Planck solved the UV problem with his formula and quantizing), which said that all bodies radiate per their temperature regardless of the body’s form, shape, or substance. Planck continued this belief, and Einstein and others confirmed it, Einstein explicitly discussing gases, for example, and going to the extreme of claiming that a single atom/molecule (the extreme of a gas) has a characteristic radiation. However, the latter was never fully recognized by physicists (and is really hard to visualize, especially if the single atom’s radiation is continuous broadband!), and this was the initial chink in the armor.

    Most (at least all of the few that I’ve seen, Prof. Kimberly Strong of University of Toronto, PHY 315 Course Notes, e.g.) atmospheric science textbooks develop energy flux equations and results by dividing an atmosphere into multiple layers and analyzing flux, absorption (with absorption constants and optical depth stuff) and energy transfer using Planck’s formulation of blackbody radiation. On the other hand it can be inferred that this is just a construct and might not reflect physical realities – it’s based on monochromatic formulations (Kirchhoff, Beer-Lambert, Schwarzchild, et al), e.g., and has to be integrated somehow to cover the full spectrum. But, back to the first hand, they do calculate Earth’s actual atmospheric absorption on a broadband Planck distribution, though the actual absorption is a very small portion of the incoming SW and LW radiation. The SW absorption process is materially different than LW IR absorption, however, and might not be a conflict.

    One recent paper (Robitaille of Ohio SU, 2006 – citation for Hank’s appetite :-) ) explicitly refutes the universality stuff. It goes a step too far though, IMO, by stating only solids can generate blackbody radiation. The paper nimbly ignores stars for example, which until maybe white dwarf or neutron stage are anything but solid. But then of course a plethora of respected scientists and others, many posting here, refute a gas’ ability to radiate ala Planck, except in the case of very dense gases.

    Continuing the back and forth, if gases (or equivalent) do not radiate ala blackbody, how is the Cosmic Background Radiation explained? (I’ve asked this here before to no avail.)

    Another part of Kirchhoff’s law(s) does pose a problem, as some here have asserted: emissivity equals absorptivity. (NB! This is not the same as emission = absorption, which is not true.) If an atmosphere is emitting blackbody radiation it also must have the capability to equally absorb blackbody-type radiation, though this again is a monochromatic property strangely within a broadband concept that I’ll just ignore for now – it gives me a headache. This means that the selected atmospheric surface– tropopause for discussion here (a strange two-sided surface at that… but never mind) – in addition to radiating outward must absorb from below. Since the temp of the tropopause is less than the surface it must be a net blackbody absorber of the radiation emanating from the surface. This is a problem since N2 and O2 seem to have no atomic/molecular mechanism to absorb such radiation. Unless it absorbs it directly into translation energy modes, which to me is not intuitively obvious, or some other obscure process unique to blackbody broad spectrum radiation – really weird. I don’t really know. Anybody??

    This is all relevant to my line of inquiry and skepticism from two angles. One, this physical process, critical to GW/AGW, ought to be pretty well-known and clear, but it doesn’t seem to be. Two, I’m still trying to account for the “massive” downward LW radiation of 324 watts, just 66 watts less than the surface’s IR emission and 129 watts more than is emitted from atmosphere and clouds and escaping into space. Clouds I understand can radiate a bunch – blackbody type too (?); 67 watts absorbed from SW insolation is a noticeable source, though I don’t know how that gets converted to down welling infrared. CO2 and H2O re-emissions seem woefully short, the vast majority of their absorbed IR going to heat N2 and O2 via collision. Maybe then, my thought, it is enhanced by atmospheric blackbody down welling. Though even if the atmosphere did radiate ala Planck, it doesn’t seem it would radiate much.

    I do not recall ever seeing a spectral analysis of the “back draft” infrared LW radiation reaching the surface. Is there such a thing?

    See what I’m curious about and getting at?

    Comment by Rod B — 8 May 2008 @ 1:21 AM

  208. Get off it, dhogaza (184). Greg S. did not say “if we can’t understand everything, we understand nothing”. He said it is arrogant to claim 100% certain knowledge of a system that is less than certain. And who is an anti-climate scientist?

    Comment by Rod B — 8 May 2008 @ 1:42 AM

  209. Ray (186) the fundamental generation of radiation is an accelerating charge. You can easily get that from ionized molecules or substances that otherwise have “free” electrons. The energy “states” of free/ionized electrons are continuous and have no quantized modes or levels. In some cases vibrating whole molecules can also display charge acceleration.

    Comment by Rod B — 8 May 2008 @ 1:55 AM

  210. Rod B writes:

    He said it is arrogant to claim 100% certain knowledge of a system that is less than certain.

    Yes, but who’s claiming that? It’s a straw-man argument.

    Comment by Barton Paul Levenson — 8 May 2008 @ 5:53 AM

  211. Ray Ladbury post #202, says “If what you are looking at is why climate models have adopted a value for CO2 forcing of 3 degrees celsius per doubling, that is a bit more involved.”

    You could start with the IPCC report, section 8.6.3.2. There, they say that the basic Equilibrium Climate Sensitivity (ECS) is 1.2 (this compares reasonably well with the 1.1 in Stephen E Schwartz’s paper http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf). They then bump it up to 1.9 for “water vapor feedback”, and from there to 3.2 for “cloud feedback”. They also make it abundantly clear that they have no idea how clouds work (that permeates the whole IPCC report), and that the cloud “feedback” they use for ECS is a major uncertainty.

    So it appears that the figure they use for ECS, 3.2 deg C per doubling of CO2, is thoroughly unreliable.

    Comment by Mike — 8 May 2008 @ 6:23 AM
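    The “basic” no-feedback figure cited above can be reproduced from the Stefan-Boltzmann law alone: linearizing the sigma*T^4 emission law about the effective emission temperature gives roughly 1 K per doubling, in the same ballpark as the 1.1 to 1.2 figures quoted. A rough sketch, assuming a 255 K emission temperature and a 3.7 W/m2 doubling forcing:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def no_feedback_sensitivity(forcing_wm2, t_emit_k=255.0):
    """No-feedback warming: dT = dF / (4 * sigma * T^3),
    from linearizing the sigma*T^4 emission law."""
    return forcing_wm2 / (4.0 * SIGMA * t_emit_k**3)

print(round(no_feedback_sensitivity(3.7), 2))  # ~0.98 K per doubling
```

    The feedback debate in the comments above is entirely about how much this bare number gets amplified.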

  212. Mike, Actually, to say that the cloud feedback is unconstrained is inaccurate. Yes there are uncertainties, but it is extremely difficult to get a model that comes even close to paleoclimate data without such a feedback. What is more, if you look at the uncertainties, they are almost all on the high side rather than the low side. If you are hoping that somehow clouds or aerosols will bail us out and make the climate crisis go away, you are delusional.

    Comment by Ray Ladbury — 8 May 2008 @ 7:08 AM

  213. Chris N., The problem with your argument is that there is still a significant amount of CO2 well above the cloudtops where almost all the water has condensed out. CO2 remains well mixed even into the stratosphere, so it adds to the greenhouse effect until adiabatic cooling peters out. The reason the Southern hemisphere has warmed less than the North does have to do with the preponderance of water in the South. However it has more to do with the oceans as a heat reservoir than it does with the radiative properties of cloud condensation.

    Comment by Ray Ladbury — 8 May 2008 @ 7:18 AM

  214. Rod B #207:

    This means that the selected atmospheric surface– tropopause for discussion here (a strange two-sided surface at that… but never mind) – in addition to radiating outward must absorb from below. Since the temp of the tropopause is less than the surface it must be a net blackbody absorber of the radiation emanating from the surface. This is a problem since N2 and O2 seem to have no atomic/molecular mechanism to absorb such radiation. Unless it absorbs it directly into translation energy modes, which to me is not intuitively obvious, or some other obscure process unique to blackbody broad spectrum radiation – really weird. I don’t really know. Anybody??

    What is the problem? The CO2 in the air does indeed absorb radiation coming from below (and from above, and from the side), and at the same time emits both upward and downward (and sideward), but only for frequencies within the CO2 absorption/emission band, where the air is sufficiently opaque thanks to the presence of CO2.

    For frequencies outside the CO2 absorption/emission band (ignoring water vapour and other GHG), nothing happens — the radiation flies off from the surface into space unimpeded, N2 and O2 molecules do nothing, just bounce with each other and the CO2 molecules to maintain local thermodynamic equilibrium. The air is transparent at these frequencies — this is how the desert at night cools so quickly.

    …and note that this (the absorption-emission-absorption thing) happens not only at the tropopause, but throughout the troposphere [and indeed into the stratosphere for the core of the absorption band]. Every layer of air is a little cooler than the one below it, and a little warmer than the one above it. They are all emitting and absorbing to/from their neighbours, and the net effect is that heat slowly migrates upward. As I pointed out earlier, heat transport happens along (and requires) a temperature gradient.

    I have some difficulty visualizing what your problem is, as these things have been obvious to me since my student days (actually I learned them for the Sun, not the Earth). I now realize that they are indeed difficult.

    Comment by Martin Vermeer — 8 May 2008 @ 7:21 AM

  215. #189 “James Hansen’s earliest circulation model from the 1980s included a random volcanic eruption and the model prediction matched the short-term aerosol cooling effect of the 1991 Pinatubo eruption fairly closely”

    did/does that model agree or disagree with the Keenlyside/Leibniz projections for a short term cooling?

    Comment by Alan K — 8 May 2008 @ 7:34 AM

  216. Rod, you could do a lot worse than read up on Ray Pierrehumbert’s upcoming book. It’s not an easy read, but the fundamentals are very well explained. You would be interested in Chap. 3 (and also 2 and 4).

    http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html

    Comment by Martin Vermeer — 8 May 2008 @ 7:48 AM

  217. Rod,
    First, it is true that free electrons can absorb and radiate at any frequency. However, except in metals and plasmas, free electrons are quite rare. Ionization is a high-energy process – with activation energies on the order of a few eV. If you do the math, at room temperature only a tiny fraction of molecules is ionized – a few thousand per mole of gas. This compares with a few percent of CO2 molecules in the vibrational state corresponding to 15 micron absorption. The fact of the matter is that CO2 simply won’t radiate outside of its quantized energy transitions – there is no continuum of radiation corresponding to the blackbody curve. Instead, the photons in the absorption band come into thermal equilibrium with the gas and are present in the amount expected if the Planck curve for that temperature were realized. For the rest of the spectrum, the gas is transparent – it’s as if it weren’t there. That’s what is meant by grey-body. Now, multibody interactions can distort the absorption (and emission) somewhat, so you get some absorption in the tails of the absorption lines, but this is feeble.

    In a solid, of course, there are many more vibrational states, so you have transitions corresponding to more energies, and more of the blackbody curve gets filled in. And for a metal, the free electrons fill the curve in pretty well. Even so, there is no such thing as a perfect blackbody, because there is no such thing as a perfect absorber/emitter. Since photons are noninteracting, the only way the photon gas comes into thermal equilibrium is by being absorbed and emitted by surrounding matter.

    Note: this is why you have different portions of the emission curve for Earth seeming to correspond to blackbodies at different temperatures. The temperature corresponds to the altitude at which an emitted photon has a snowball’s chance in hell of escaping without being reabsorbed by another gas molecule.

    As to absorption of IR by N2 and O2, see:
    http://www.iop.org/EJ/article/0022-3700/10/3/018/jbv10i3p495.pdf?request-id=094497b6-2e62-4983-9806-5396039081a5

    Basically, two like molecules have no dipole, so they don’t radiate when they rotate, vibrate, etc. However, a collision can induce a dipole, and then you can get absorption. Again, there’s an energy threshold, so it’s not a terribly common process.

    Comment by Ray Ladbury — 8 May 2008 @ 7:54 AM
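Ray's population estimates above can be sanity-checked with simple Boltzmann factors. A rough sketch, assuming T = 288 K, the 15 μm CO2 bending transition, and a nominal 3 eV electronic activation energy (the exact fractions depend strongly on the energies assumed, and state degeneracies are ignored):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt
T = 288.0             # assumed near-surface temperature, K

# Fraction of CO2 molecules in the 15-micron vibrational state:
# Boltzmann factor exp(-E/kT) with E = h*c/lambda
E_vib = H * C / 15e-6
f_vib = math.exp(-E_vib / (K_B * T))   # ~0.04, i.e. a few percent

# Fraction in a ~3 eV ionized/electronically excited state: negligible
f_ion = math.exp(-3.0 * EV / (K_B * T))

print(f"vibrational fraction ~ {f_vib:.3f}")
print(f"ionized fraction ~ {f_ion:.1e}")
```

The vibrational fraction comes out at a few percent, matching Ray's figure, while thermal ionization is tens of orders of magnitude rarer still.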

  218. Mike #211, I am a bit surprised, especially in the light of the well-known uncertainties in the cloud feedback acknowledged by the IPCC, at your confidence that ECS would be as little as 1.2°C. The water vapour amplification is pretty robust; there isn’t a whole lot of wiggle room there. To get from an ECS before clouds of 1.9 to one after clouds of 1.2, you need a negative cloud feedback of -60%. That’s a lot. And for all we know, if it really were that big it could just as well be positive. A positive feedback of +60% would produce an ECS of no less than 4.75°C (!).

    The IPCC value of 3.2°C, which contrary to the above speculative ones has some science to back it up, would then correspond to +40%.

    Comment by Martin Vermeer — 8 May 2008 @ 8:06 AM
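The percentages Martin quotes follow from the standard feedback-gain relation ECS = S0 / (1 − f). A minimal sketch, assuming his no-cloud baseline S0 = 1.9 °C (the function name `ecs` is just illustrative):

```python
S0 = 1.9  # sensitivity before cloud feedback, deg C (from the comment)

def ecs(f):
    """Equilibrium climate sensitivity for a fractional feedback f."""
    return S0 / (1.0 - f)

print(round(ecs(-0.60), 2))  # -60% cloud feedback -> ~1.2 deg C
print(round(ecs(+0.60), 2))  # +60% feedback -> 4.75 deg C
print(round(ecs(+0.40), 2))  # +40% feedback -> ~3.2 deg C (IPCC-like)
```

The same gain relation shows why large negative and large positive feedbacks of equal size are not symmetric in their effect on ECS.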

  219. ps — and to say, as Martin does, that the only difference between line and blackbody radiation is simple emissivity, is torturous contortion. It’s blackbody if it has an emissivity of 1.0 at 548.35 nm wavelength and 0.0 everywhere else??!!??

    Comment by Rod B — 8 May 2008 @ 9:05 AM

  220. Mike (211)

    The climate sensitivity for 2x CO2 is 1.2 C without feedbacks (except for the T^4 feedback from Planck). We know with high confidence that the WV feedback is strongly positive. The Schwartz paper is interesting, but has been discussed here at RC, and probably is not a very good estimate because it uses a “1-box” model of the climate system, whereby the oceans/atmosphere/land all seem to heat up in sync with each other. In fact the oceans do not heat up as quickly as land, and a lot of the heat is “in the pipeline”; so if changing conditions level off, you have to allow the system to return to equilibrium to get a true climate sensitivity.

    As for clouds, while there is considerable uncertainty in the magnitude (and sign), we have strong evidence against a substantial negative feedback, and so against a low climate sensitivity.

    Comment by Chris Colose — 8 May 2008 @ 9:12 AM

  221. Barton Paul Levenson (196) says, “Geoff Wexler writes: ‘But what about a more careful thought experiment, what would happen to our greenhouse if the oxygen and nitrogen were removed?’
    The greenhouse effect would be a bit less due to the lower total pressure.”
    Interesting thought question. Wouldn’t the greenhouse effect be considerably less, since CO2 would easily approach saturation, having no convenient way to relax its vibrational energy, which it mostly does through collisions with N2 and O2?

    Comment by Rod B — 8 May 2008 @ 9:17 AM

  222. Re #206, Chris N

    I am a Chemical Engineer as well; frankly, you should have more confidence in the climate scientists represented by the contributors to this site and the IPCC.

    I am an Australian, and I can tell you that your line: “Maybe this is why we are seeing no temperature increases across the SH…”, is way off the mark. Check out:
    http://www.bom.gov.au/cgi-bin/silo/reg/cli_chg/timeseries.cgi?variable=tmean&region=aus&season=0112

    Comment by Lawrence McLean — 8 May 2008 @ 9:20 AM

  223. Barton (210): “Yes, but who’s claiming that?” — I dunno; if the shoe fits it can be worn.

    “It’s a straw-man argument.” — probably so. I don’t know why dhogaza hammered it, but I didn’t want to let it pass…

    Comment by Rod B — 8 May 2008 @ 9:46 AM

  224. Re #188 Cobblyworld (and in part #203 pat n),

    It is difficult to know the real world influence of CO2 on temperature if there is an overlap in (initial) temperature increase/decrease and CO2 increase/decrease, as is the case in most periods during the past near million years. But there is one period where CO2 didn’t follow temperature, the end of the Eemian (113-105 kyr BP). That period points to a low influence of CO2 on temperature.

    Methane did follow temperature more closely than CO2 for all periods, including the end of the Eemian. Thus no solid evidence for the real influence of methane can be deduced from ice cores. There is physical evidence from spectral analyses that the roughly 400 ppbv increase of methane (at the start of the Eemian) would increase heat retention by about 0.4 W/m2, which gives an offset of about 0.12°C (without feedbacks). The total increase was about 9 to 11°C (depending on which temperature proxy is taken). Even with a lot of feedbacks, methane seems to be of little help in the whole process.

    Hansen summed up the forcing caused by the two main GHGs (CO2 and methane), which reached a change of ~3 W/m2. With an increase of 0.6°C at the surface, the increase in heat retention is completely offset.
    One can have a lot of discussion about the real effect of GHGs, but it seems to me that a small change in albedo over the ice age transitions (a few % more or fewer clouds, e.g.) has more effect than the whole increase/decrease of GHGs.

    Anyway, one cannot compare the effect of a small increase/decrease in energy over the ice age transitions (when a lot of albedo changes occur) with the effect of the same increase/decrease in one of the more or less bistable situations: either an ice age (at the colder end) or an interglacial (at the warmer end).

    The Eemian can be used as an analogue of the effect of temperature on methane and CO2 levels in the pre-industrial past. For CO2, there was quite a good linear relationship with temperature of about 8 ppmv/°C. I haven’t calculated it for methane in the full Vostok ice core, but based on the Eemian graph, the relationship is about 40 ppbv/°C, with a maximum of around 700 ppbv at maximum temperature.

    That was without human intervention. Nowadays we are at much higher levels of about 1800 ppbv, due to human influences, but these haven’t had much direct effect in the Arctic tundra or oceans. Only a future temperature effect would have an influence, comparable to what happened during the Eemian, which is minor compared to what humans now emit…

    Comment by Ferdinand Engelbeen — 8 May 2008 @ 9:52 AM
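Ferdinand's 0.4 W/m² → ~0.12 °C conversion corresponds to the no-feedback Planck response, λ0 = 1/(4σT³). A sketch of that arithmetic, assuming the conventional 255 K effective emission temperature (his slightly higher 0.12 °C figure implies a marginally larger λ0):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0           # assumed effective emission temperature, K

# No-feedback (Planck) sensitivity: dT/dF for OLR = sigma * T^4
lambda0 = 1.0 / (4.0 * SIGMA * T_EFF**3)   # ~0.27 K per W/m^2

dT = 0.4 * lambda0   # response to the ~0.4 W/m^2 Eemian CH4 forcing
print(f"lambda0 ~ {lambda0:.2f} K/(W/m^2), dT ~ {dT:.2f} K")
```

Either way, the no-feedback response to the methane forcing is on the order of a tenth of a degree, as the comment states.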

  225. Rod B:

    Get off it, dhogaza (184). Greg S. did not say “if we can’t understand everything, we understand nothing”.

    He didn’t say so explicitly, but his strawman argument strongly implies it.

    And, if you read Greg Smith’s posts carefully, I’m sure that even you can deduce that despite his initial claim, he is no scientist.

    Comment by dhogaza — 8 May 2008 @ 10:50 AM

  226. #211 Mike,

    “thoroughly unreliable”?

    What, like your assessment of how a 3degC peak of the PDF is calculated?
    e.g.
    Annan & Hargreaves: “Using multiple observationally-based constraints to estimate climate sensitivity.” Suggests a pdf centred on ~3degC.
    Pre-print: http://www.jamstec.go.jp/frcgc/research/d5/jdannan/GRL_sensitivity.pdf
    RC’s take on it: http://www.realclimate.org/index.php/archives/2006/03/climate-sensitivity-plus-a-change/
    That study combines estimates constrained with observations. So if the “Iris Effect” were working in past events (Into the LGM/volcanic eruptions/20th Century), it’s implicitly included.

    If you’re going to need to constrain unknowns in the models, using observations of the real world seems wise, as that’s what’s being modelled.

    So it’s not “thoroughly unreliable”, it’s a best estimate with what is available. That is all.

    By the way, for those trying to foster complacency from uncertainty, how do YOU demonstrate that a high (>4.5degC) sensitivity is unfeasible?

    Comment by CobblyWorlds — 8 May 2008 @ 10:59 AM

  227. > it appears
    Only if you misread the report. Try Annan or Connolley on this.

    Comment by Hank Roberts — 8 May 2008 @ 11:01 AM

  228. Re the “400 scientists” list in 199:

    EcoBlogger Exposes Fake List Global Warming Skeptic Scientists
    Written by Hank Green
    Wednesday, 07 May 2008

    Kevin Grandia, who we are proud to be well acquainted with through working together in the ecoblogosphere, has just been through a bit of a saga.

    Curious about the Heartland Institute’s list of “500 Prominent Scientists” who deny global warming, Kevin decided to contact some of the folks on the list. He put together a list of 150 email addresses…simply the addresses he found easiest to acquire. After only 24 hours, he’d received 45 emails from angry scientists saying that they in no way denied anthropogenic global warming.

    It turns out that the Heartland Institute had never told the scientists they were going on the list, nor did they check whether these people actually had any doubts about the causes of climate change. Just a sampling of quotes from emails Kevin received:

    I am horrified to find my name on such a list. I have spent the last 20 years arguing the opposite.

    I have NO doubts ..the recent changes in global climate ARE man-induced. I insist that you immediately remove my name from this list since I did not give you permission to put it there.

    Please remove my name. What [they] have done is totally unethical!!

    The Heartland Institute has been publicizing their list for years, and not a single journalist took the time to check the names on the list. The Heartland Institute has now distanced itself from the list, and withdrawn its claim of being supported by 500 prominent global warming skeptic scientists. But they have yet to apologize. Kevin deserves a great big “thank you” from the world. Check out DeSmogBlog and, if you think he’s as awesome as I do, you might even consider donating to help him keep DeSmogBlog alive.

    Comment by Jim Galasyn — 8 May 2008 @ 11:43 AM

  229. Rod B #207:

    I do not recall ever seeing a spectral analysis of the “back draft” infrared LW radiation reaching the surface. Is there such a thing?

    Curiosity is a great thing… googling for “downwelling infrared spectrum” brought up http://lidar.ssec.wisc.edu/papers/dhd_thes/node3.htm with a nice picture.

    I suggest you compare with Dave Archer’s simulator http://geosci.uchicago.edu/~archer/cgimodels/radiation.html

    Enter: Sensor Altitude 0, Looking up, Midlatitude Summer for nearly the same spectrum :-)

    Comment by Martin Vermeer — 8 May 2008 @ 11:46 AM

  230. More fun fallout from Listgate:

    Fury over ‘unethical’ warming website
    5:00AM Thursday May 08, 2008
    By Angela Gregory

    New Zealand climate scientists are upset their names have been used by an American organisation wanting to challenge the increasingly accepted view that climate change is human induced.

    Among the five scientists is Niwa principal scientist Dr Jim Salinger, who said he was annoyed the Heartland Institute was trying to use his research to prove a theory he did not personally support.

    The institute describes itself as a non-profit research and education organisation not affiliated with any political party, business or foundation.

    Dr Salinger said he was never contacted about his work, which was being misused to undermine support for the idea that greenhouse gas emissions from human activities, largely fossil fuel burning, are warming the globe.

    “I object to the implication that my research supports their position … they didn’t check with me.”

    He said that he and the other New Zealand scientists all felt their work had been misinterpreted.

    “We say global warming is real.”

    Comment by Jim Galasyn — 8 May 2008 @ 11:52 AM

  231. Interesting thought question. Wouldn’t the greenhouse effect be
    considerably less because CO2 would easily approach saturation because
    it has no convenient way to relax its vibrational energy, which it
    mostly does through collisions with N2 and O2?

    Yes, very good one. And one that cannot be answered by merely thinking it over. The concept of local thermodynamic equilibrium comes in: the population of the vibration levels, i.e., the percentage of CO2 molecules in a given excited state, depends only on the energy level of that state relative to the ground state [and possibly its degeneracy], and the temperature (which is a sensible concept only for LTE). It doesn’t make any difference how many friendly neighbourhood N2, O2 molecules are hanging around :-)

    So the answer to your question is no. Thinking conceptually like this isn’t easy, and Ray L is much better at explaining these things.

    Comment by Martin Vermeer — 8 May 2008 @ 1:34 PM

  232. Rod B., Nothing says that the collisional relaxation MUST occur with a collision with N2 or O2. It could be Ar, CO2, CH4… It’s just that with lower pressure, collisions are less frequent–as Barton indicated.

    Comment by Ray Ladbury — 8 May 2008 @ 1:44 PM

    Martin (214), I was probably too obscure. The problem is mine, with my contention that the atmosphere radiates à la blackbody, continuous spectrum and all.

    I couldn’t tell if your 2nd-from-last paragraph was referring to all atmospheric gases, or just CO2 and other GHGs per earlier in your post. If the latter, I don’t think the absorption-re-emission chain transfers temperature, at least directly. If the former, that would imply atmospheric blackbody radiation, as I contend. Though it’s going the wrong way to support the massive downwelling; maybe the right way à la thermodynamics.

    Comment by Rod B — 8 May 2008 @ 2:24 PM

  234. Re #192 where Greg Smith Says:

    “Perhaps you or others in this forum could point me towards a site/papers that explain exactly and concisely what variables are in your “dynamic models” … ”

    How about “Climate modelling uncertainty” from the BBC?

    For more detail you could try this: “A Climate Modelling Primer” by McGuffie & Henderson-Sellers http://www.amazon.com/Climate-Modelling-Research-Developments-Climatology/dp/0471955582

    Cheers, Alastair.

    Comment by Alastair McDonald — 8 May 2008 @ 3:54 PM

    Martin (229): Thanks for the reference; it looks useful. I can’t figure out how Google knows to hide this stuff when I search ;-) . I need to pore over it, but at first glance, eyeballing the integration, it looks like the received IR surface power falls significantly short of what the budget graphics call for — 324 watts. Any insight into this?

    Comment by Rod B — 8 May 2008 @ 4:55 PM

  236. Re #230 where # Martin Vermeer Says:
    8 May 2008 at 11:46 AM

    Curiosity is a great thing… googling for “downwelling infrared spectrum” brought up http://lidar.ssec.wisc.edu/papers/dhd_thes/node3.htm with a nice picture.

    I suggest you compare with Dave Archer’s simulator http://geosci.uchicago.edu/~archer/cgimodels/radiation.html

    Enter: Sensor Altitude 0, Looking up, Midlatitude Summer for nearly the same spectrum :-)

    It is a good match, but that is not surprising since both figures are calculated using similar computer codes: FASCODE and MODTRAN. :-)

    Cheers, Alastair.

    Comment by Alastair McDonald — 8 May 2008 @ 5:21 PM

  237. Moderator,

    Please remove my prior post. It was done in haste with 3 small kids on my legs. Also, it was a little harsh towards the end. Please substitute the reply below.

    Lawrence,

    I’m afraid you are off the mark. The source of my data is the RSS satellite data, summarized below in C per period for 1979 till present:

    NH minus Tropics (20 / 82.5): 0.0024 C/month, 0.288 C/decade

    Tropics (-20 / 20): 0.0014 C/month, 0.168 C/decade

    SH minus Tropics (-20 / -70): 0.0005 C/month, 0.060 C/decade

    World (-70 / 82.5): 0.0015 C/month, 0.180 C/decade

    Trends since Jan 1998 are shown below:

    World (or global) trend is slightly negative at -0.036 C/decade

    NH minus Tropics (20 / 82.5): 0.0003 C/month, 0.036 C/decade

    Tropics (-20 / 20): -0.0004 C/month, -0.048 C/decade

    SH minus Tropics (-20 / -70): -0.0008 C/month, -0.096 C/decade

    Also, world (or global) trend between 1979 and Dec 1993 was 0.05 C/decade. So, what were the trends between Jan 1994 and Dec 1998 (inclusive of the 1998 peak for dramatic effect)?

    World (-70 / 82.5): 0.008 C/month, 0.96 C/decade

    NH minus Tropics (20 / 82.5): 0.0066 C/month, 0.792 C/decade

    Tropics (-20 / 20): 0.0105 C/month, 1.260 C/decade

    SH minus Tropics (-20 / -70): 0.0067 C/month, 0.804 C/decade

    So basically ALL of the accrued warming since 1979 occurred in a 5 year span (1994 through 1998). That’s it! Let me point out that it was during this period that the divergence between NH and SH anomalies began, and, since 1998, has essentially held steady at approx. 0.4 delta C (or 0.5 C NH vs. 0.1 C SH).

    I recognize that climate modelers have done a lot of work, but the models will not satisfy me until they can explain the above data. In fact, you have confirmed my observation that warming is occurring over land, not water!

    I have said before that there could be many reasons for this besides CO2-driven climate change: land use changes, fewer aerosols, solar effects, etc. Please note that in my distillation example above, more heat and reflux added to the tower increases the bottom temperature and decreases the overhead temperature (as predicted by greenhouse gas theory, vis-à-vis surface and stratospheric temps). However, it all depends on starting conditions (or assumptions, with regard to climate models). If the bottom purity of the tower is already 99%, then adding more heat to the reboiler to raise the purity to 99.9% increases the bottoms temperature only from 99 C to 100 C. The climate analogy is whether the climate is close to zero or negative climate sensitivity. So, I’m not denying the effects of CO2, but questioning the validity of the models, which in my opinion haven’t convinced me at all that more CO2 equates to more than a small amount of temperature increase.

    Comment by Chris N — 8 May 2008 @ 9:07 PM
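A note on the arithmetic in the comment above: the per-month and per-decade figures differ by a factor of 120, and each trend is an ordinary least-squares slope through monthly anomalies. A self-contained sketch with synthetic data (not the actual RSS series):

```python
import random

random.seed(0)

# Synthetic monthly anomaly series with a built-in trend of 0.0015 deg C/month
months = list(range(360))  # 30 years of monthly data
anoms = [0.0015 * m + random.gauss(0.0, 0.1) for m in months]

# Ordinary least-squares slope: cov(x, y) / var(x)
n = len(months)
mx = sum(months) / n
my = sum(anoms) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(months, anoms))
         / sum((x - mx) ** 2 for x in months))

print(f"{slope:.4f} deg C/month = {slope * 120:.3f} deg C/decade")
```

Multiplying a monthly slope by 120 months per decade reproduces conversions like 0.0015 C/month → 0.180 C/decade in the tables above.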

  238. Just saw the thread regarding climate bets. Now, that’s interesting!

    Comment by Chris N — 8 May 2008 @ 9:10 PM

  239. CobblyWorlds #226
    I read both the links you supplied. I confess to not having enough time to read them slowly and look for confirming info elsewhere.
    However – correct me if I am wrong – my quick read of the papers seemed to say that the ECS was based on observed temperature changes vs observed CO2 concentrations, once other factors had been removed.
    If I have understood this correctly, then it seems to be pretty bad. When you apply the ECS so reached, and add in the other known factors, then the result necessarily matches the observed temperature change, and it looks like you have an accurate working model.
    But, if your initial assumptions are wrong – for example if some of the temperature change was caused by a natural factor that you haven’t taken into account – then your ECS factor is wrong.

    Comment by Mike — 9 May 2008 @ 1:02 AM

  240. Alastair #236: Oops, yes. Perhaps I should read the articles I link to…

    Anyway, Figure 3 in http://lidar.ssec.wisc.edu/papers/dhd_thes/node5.htm seems to be a comparison with measured data :-)

    Comment by Martin Vermeer — 9 May 2008 @ 1:53 AM

  241. Rod #233:

    I couldn’t tell if your 2nd from last paragraph was referring to all atmosphere gases, or just CO2 and other GHGs per earlier in your post. If the latter I don’t think the absorption-re-emission chain transfers temperature, at least directly. If the former, that would imply atmospheric blackbody radiation as I contend. Though it’s going the wrong way to support the massive down welling; maybe the right way ala thermodynamics.

    The radiative heat transport takes place only between GHG molecules, and almost exclusively (for Earth) within the absorption bands, not the continuum. However, due to LTE (local thermodynamic equilibrium) there is only one temperature at each level, which all molecular species share.

    (Note that convective heat transport is also important)

    The net migration of heat is much smaller than the total amounts of heat sent up and down between levels. The “down” part of that, for those lower atmospheric layers that can radiate directly to the ground, represents the downwelling.

    Comment by Martin Vermeer — 9 May 2008 @ 2:04 AM

  242. Rod B #235

    …but at first glance, eyeballing the integration, it looks like the received IR surface power falls significantly short of what the budget graphics call for — 324 watts. Any insight into this?

    No… I haven’t studied this particular point at all. Don’t be tricked though by the units on the vertical axis, the graph has “…per steradian”, which seems to differ from Archer’s graphs.

    Comment by Martin Vermeer — 9 May 2008 @ 2:23 AM
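The steradian caveat matters for Rod's comparison: for isotropic radiation, hemispheric flux = π × radiance (the cosine-weighted integral over a hemisphere gives π, not 2π), so a spectrum in W m⁻² sr⁻¹ must be multiplied by π before being compared with the 324 W/m² in the budget diagrams. A blackbody sketch, assuming a 288 K surface:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0               # assumed near-surface temperature, K

flux = SIGMA * T**4          # hemispheric flux, ~390 W/m^2
radiance = flux / math.pi    # wavelength-integrated radiance, W m^-2 sr^-1

print(f"flux ~ {flux:.0f} W/m^2, radiance ~ {radiance:.0f} W/(m^2 sr)")
```

The factor of roughly three between the two numbers is easy to mistake for a real shortfall when eyeballing a per-steradian plot.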

  243. #224 Ferdinand Engelbeen,

    1)
    You say:
    “But there is one period where CO2 didn’t follow temperature, the end of the Eemian (113-105 kyr BP). That period points to a low influence of CO2 on temperature.”

    As CO2 isn’t the only factor in global average temperature, it’s hardly surprising that you’ll get mismatches when focusing on one variable on short timescales in a multi-variable situation. You could say the same about ice sheets/CO2/CH4 at ~10.4 kyr BP, where all three show little connection to deltaTs (corrected).

    Because there are unknowns (known as well as unknown ones), Hansen’s approach as shown in CC & Trace Gasses (post 188 above) to model Temperature as a function of ice sheet albedo and CH4/CO2/N2O is much more likely to pay dividends and avoid incorrect conclusions. And it does so in the good general agreement between the simple model and observations as shown in figure 2c.

    Given that the door of uncertainty swings both ways, clouds may be either a +ve or -ve feedback; neither of us knows for sure. But as I pointed out in post 226 above, a ballpark 3degC sensitivity looks reasonable, and that implicitly factors in cloud feedbacks.

    2)
    You also say:
    “Methane did follow temperature more closely than CO2 for all periods, including the end of the Eemian.”

    Which, as I previously stated, appears to tell us more about its sources and fate in the atmosphere than about methane’s impact per se.

    3)
    Finally you say:
    “Nowadays we are at much higher levels of about 1800 ppbv, due to human influences, but these haven’t had much direct effect in the Arctic tundra or oceans. Only a future temperature effect would have an influence, comparable to what happened during the Eemian, which is minor compared to what humans now emit…”

    a)
    See my post #174 above, points 1 to 3 and the implicit 4th point presented as a question. Can you tell me where I’m wrong?

    As I said above, we’re forcing things so fast that I really am not sure past analogues can reasonably be used.

    b)
    Are you sure we’re not seeing a direct effect on the Arctic?

    e.g.1 Shindell 1999 “Northern Hemisphere winter climate response to greenhouse gas, ozone, solar and volcanic forcing” found GHG forcing caused a tendency towards +ve mode AO – Fram Strait outflushing.

    e.g.2 Bitz & Roe 2004 “A Mechanism for the High Rate of Sea Ice Thinning in the Arctic Ocean.” Just one of the mechanisms by which small forcings can be amplified by the Arctic ice cap.
    http://www.atmos.washington.edu/~bitz/Bitz_and_Roe_2004.pdf

    We see such amplification of an initial cause (Milankovitch cycles) in the ice ages. But the evidence clearly shows such cycles to be the most likely prime cause (e.g. Roe’s “In defence of Milankovitch”). With only one instance of the current “experiment” (not several cycles), there remains more uncertainty as to the root driver of the Arctic ice cap’s ongoing demise than for the ice ages. But it is not reasonable to claim human activity hasn’t “had much direct effect” when there is evidence to show human activity may indeed be the prime mover of the changes we see in the Arctic.

    That aside we have the coincidence of what is going on there now in light of the recent human caused global warming e.g:

    “although the Arctic is still not as warm as it was during the Eemian interglacial 125,000 years ago [e.g., Andersen et al., 2004], the present rate of sea ice loss will likely push the system out of this natural envelope within a century….

    There is no paleoclimatic evidence for a seasonally ice free Arctic during the last 800 millennia.”
    — Overpeck et al 2005, “Arctic System on Trajectory to New, Seasonally Ice-Free State” http://atoc.colorado.edu/~dcn/reprints/Overpeck_etal_EOS2005.pdf

    It currently looks like “within a century” could be changed to “by 2020″. Especially when you consider that the seasonally ice-free state is intrinsically linked to the fate of the perennial ice, whose area is dropping off a cliff:
    Nghiem et al 2007 “Rapid reduction of Arctic perennial sea ice.” figure 3. http://meteo.lcd.lu/globalwarming/Nghiem/rapid_reduction_of_Arctic_perennial_sea_ice.pdf

    Comment by CobblyWorlds — 9 May 2008 @ 3:45 AM

  244. Re 236.

    Alastair…you did not read carefully enough. Read the caption and the text at the bottom of the page again (Fig. 1 at lidar.ssec.wisc.edu.)

    Figure 1: Illustration of atmospheric downwelling radiance relative to values derived from the Planck function for various temperatures. Also noted are the absorption regions for various atmospheric constituents. The spikes in the measured radiance, between 1400 and 1800 cm⁻¹, are a result of water vapor absorption lines which become opaque within the instrument.

    Note the reference to the “spikes in the measured radiance”….

    Cheers,
    Dave

    Comment by David donovan — 9 May 2008 @ 3:53 AM

  245. Re” #205

    Rod B Says

    “Matter can only radiate by jumping between quantum level Not true. All of your radio, TV, cellular, e.g. radiation does not happen that way”

    I don’t completely agree with your correction, and certainly not with your choice of example. This is off topic so I shall be brief. While it is possible to partially de-quantise the explanation of the special role of CO2 and H2O, it is not possible to do that with the difference between a copper aerial (analogous to CO2) and a failed one made from glass (analogous to molecular oxygen). No amount of simplifying can get away from the essential role of quantum mechanics in the latter. The reason is that there is no classical counterpart to the Pauli exclusion principle. Suppose you apply an alternating voltage to a piece of copper wire; it works by exciting the electrons into nearby vacant quantum levels, thus upsetting the previous balance between electrons moving parallel to the direction of the electric field and those moving the opposite way. When you apply the same voltage to an insulator, the electrons discover that all the nearby quantum levels are full, so nothing much happens unless the electric field is large enough to cause a breakdown involving a big quantum jump to some more empty quantum levels.

    Comment by Geoff Wexler — 9 May 2008 @ 4:07 AM

  247. Re My #236.

    Please ignore that post. I was confused because the graph was in the theory section of the paper, whereas I would have thought it fitted more appropriately in the results.

    Rod,

    How did you calculate the shortfall of 324 W?

    Cheers, Alastair.

    Comment by Alastair McDonald — 9 May 2008 @ 4:27 AM

  248. Re: #231

    Back to Earth. I want to make sure that I have this summary right. In those regions which matter to the greenhouse effect (even high up), is it always the case that LTE is a good approximation and that the temperatures of CO2, H2O and air (O2 and N2) at a particular point are equal?

    Thanks for responses to my previous question and to the link in #217 which I shall look out for later.

    Comment by Geoff Wexler — 9 May 2008 @ 5:04 AM

  249. Geoff Wexler #248:

    Back to Earth. I want to make sure that I have this summary right. In those regions which matter to the greenhouse effect (even high up), is it always the case that LTE is a good approximation and that the temperatures of CO2, H2O and air (O2 and N2) at a particular point are equal?

    At some point high up it will break down… I read that in the mesosphere even laser action may occur. But it applies at least in the troposphere and lower stratosphere, where most of the greenhouse effect lives.

    Found this: http://rabett.blogspot.com/2007/03/what-is-local-thermodynamic-equilibrium.html

    Comment by Martin Vermeer — 9 May 2008 @ 8:15 AM

  250. Geoff, I would say that LTE is a pretty good approximation, and yes, this would imply approximately equal temperatures for all constituents of air. I also don’t know if you saw my link to this story:
    http://www.aip.org/pnu/2008/split/862-1.html

    Also seems to be germane to some of your questions.

    Comment by Ray Ladbury — 9 May 2008 @ 8:39 AM

  251. Alastair, the 324 watts of “re-emitted” LW radiation absorbed by the surface is right off the ubiquitous “tree and branch” graph/image of the earth’s energy balance, used by IPCC, e.g.

    Comment by Rod B — 9 May 2008 @ 9:36 AM

  252. Re 221 and

    “The greenhouse effect would be a bit less due to the lower total pressure.”
    Interesting thought question. Wouldn’t the greenhouse effect be considerably less because CO2 would easily approach saturation because it has no convenient way to relax its vibrational energy, which it mostly does through collisions with N2 and O2?

    Well, I believe Hart (1978), following Elsasser (1960), related optical thickness to the square root of pressure, if that helps. (And the inverse fourth power of temperature.)

    Comment by Barton Paul Levenson — 9 May 2008 @ 10:44 AM

  253. Re #251
    Rod,
    I guess that you mean the Back Radiation shown in the IPCC AR4 FAQ 1.1 Figure 1.

    If you run David Archer’s model and change the following settings:

    Ground T. Offset C to 4.8
    Locality to Midlatitude Winter
    Sensor Altitude to 0 and Looking up

    they should give you conditions similar to that from the thesis Fig. 1. (Setting the Ground T. Offset to 4.8 makes the surface temperature 277K, the same as for the thesis.)

    I_out will be 218.8 W/m2, a bit short of 324 W/m2, but then you would not expect midlatitude winter to give the global average value. 277K is 4C, or 40F, so it was not a typical day!

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 9 May 2008 @ 4:42 PM

  254. Martin (241), This is a quickie but salient response while I digest the rest and other responses.

    You said, “…The radiative heat transport takes place only between GHG molecules, and almost exclusively (for Earth) within the absorbtion bands, not the continuum. …”

    I do not think this is true. Simply, it has been stated here a number of times that the most likely molecular energy transfer process here is, after a CO2 (say) molecule absorbs a photon usually into a quantum vibration state (which must be the start of the chain), the most likely, by far, next step is a collision with another molecule (by the numbers far most likely N2 or O2) that transfers the energy from the CO2 vibration to N2 (say) translation, possibly with a couple of non-interesting steps. It has been said that at low altitudes a collision is 1000+ times more likely to happen before the CO2 regurgitates its vibration bond energy with another radiated photon. This is the basic way the surface radiated energy gets transferred to atmospheric heat – in the common temperature meaning of the term.

    Secondly, while we call the energy added to vibrational bonds “heat” and even refer to its “characteristic” temperature, it does not increase the [common] temperature of CO2, which is evidenced only through translation energy. Similarly, if it re-emits the “heat” (energy, really) as a photon, it does not cool. So the absorb-emit-absorb-emit chain through CO2 only will not add temperature to the atmosphere.

    This has been the general (but not broad) consensus. If disagreers want to refute (again), I would welcome it with relish – I’m still trying to conclusively learn this stuff.

    Comment by Rod B — 9 May 2008 @ 5:07 PM

  255. CobblyWorlds #226

    You say “Using multiple observationally-based constraints to estimate climate sensitivity.” and “That study combines estimates constrained with observations.”

    I said in #239 this was pretty bad.

    On reflection, it is worse than bad, it potentially destroys the IPCC case.

    Let me explain.

    If you theorise that CO2 warms the planet, and then in your model for CO2’s warming effect you constrain the model so that it matches actual temperature changes, then you cannot use that model (to the extent that it is so constrained) to demonstrate how much warming the CO2 gives. The reason is that you have used the required outcome as a basic input to the model.

    In the case of clouds and ECS, from what you are saying, the modellers have done exactly this, and from the figures I have seen, there is a large impact not minor.

    So, to the extent that they have used “constraint by observation”, the IPCC are NOT entitled to claim that the models show that the 20th century warming was caused by CO2.

    They are only entitled to claim that IF the 20th century warming was caused by CO2 THEN more warming can be expected in future if CO2 levels continue to increase.

    I have had a quick scan of the IPCC report, and there are about a dozen references to “constrained by observation” which look like they could be making the same error. I haven’t had time to go through them carefully.

    PS. How do you get an indented quote into a post?

    [Response: Use <blockquote> </blockquote> . However you would only have a point if the data to which the model was tuned was also the same as the data against which it was tested. It isn't. Models are predominantly trained against climatology (i.e. the statistics of 'modern' climate - means, variability etc.) and not the forced changes (such as the 20th C trend or the response to Pinatubo or the LGM). Thus all of those tests are properly 'out of sample' and are valid for demonstrating utility. - gavin]

    Comment by Mike — 9 May 2008 @ 5:08 PM

  256. Rod, note that Martin said “radiative heat transport”. Other molecules for the most part do not radiate. Second, you are sort of mixing the concepts of temperature and energy. Things are a bit complicated. Consider a mole of CO2 near absolute zero. We have 3 degrees of freedom, so E~1.5kT. Now heat things up near 0 degrees C. Most of the molecules still have 3 degrees of freedom, but a few now are sufficiently energetic that they can excite vibrational modes collisionally, so the proportionality between E and kT will now be just a wee bit more than 1.5, and will increase until almost all of the molecules can be excited, when the proportionality will be 2.5, and so on. Now, it is true that exciting a vibrational mode of a molecule will not by itself increase temperature, but neither will accelerating a single molecule. Temperature is an intensive property.
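    The 1.5-to-2.5 transition Ray describes can be sketched numerically with the Einstein model for a single vibrational mode. This is a toy illustration, not anyone’s published code: rotation is ignored (as in the account above), and the vibrational temperature of ~960 K is an assumed value, roughly the CO2 bending mode.

```python
import math

def cv_per_molecule(T, theta_vib=960.0):
    """Heat capacity per molecule, in units of Boltzmann's constant k:
    three translational half-DoFs (contributing 1.5) plus one Einstein
    vibrational mode that 'switches on' as T approaches theta_vib (K).
    Rotation is deliberately left out, mirroring the simplified account."""
    x = theta_vib / T
    # Einstein heat capacity of one harmonic mode, in units of k
    c_vib = x**2 * math.exp(x) / (math.exp(x) - 1.0)**2
    return 1.5 + c_vib

for T in (30.0, 150.0, 300.0, 1000.0, 5000.0):
    print(f"T = {T:6.0f} K   Cv/k = {cv_per_molecule(T):.2f}")
```

    At low T the mode is frozen out and Cv/k stays near 1.5; near and above theta_vib it approaches 2.5, matching the transition described above.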

    Comment by Ray Ladbury — 9 May 2008 @ 8:26 PM

  257. #239 Mike,

    You state:
    “But, if your initial assumptions are wrong – for example if some of the temperature change was caused by a natural factor that you haven’t taken into account – then your ECS factor is wrong.”

    Firstly, that’s one of the reasons why there’ll be a probability distribution function as seen in figure 1 of Annan/Hargreaves, as opposed to a precise figure. Some factors, such as aerosols, generally cause the uncertainty to tend to the higher side, as aerosols block incoming sunlight and cause relative “cooling” at the ground. Others like cloud response will cause a +/- uncertainty. Clouds could warm, as well as cool, although I understand a slight warming effect is expected based on current research (which I don’t follow in detail).

    However, as the Earth is suspended in a (near) vacuum, it cannot lose the bulk of its heat by conduction or convection; it can only lose heat by radiation. So the bulk of balancing against incoming solar radiation has to happen due to albedo (how much sunlight is reflected back into space) and infra-red emission. To see this it’s best to start with a basic climatology course. For example see: www-paoc.mit.edu/labweb/notes/chap2.pdf There are 12 course modules in pdfs, whose address is of the form “www-paoc.mit.edu/labweb/notes/chapX.pdf” where X = 1 to 12.

    I can also strongly recommend the resources under the “Start Here” and “Index” button at the top of each RealClimate page.

    #255 Gavin, thanks for that reply.

    Comment by CobblyWorlds — 10 May 2008 @ 3:36 AM

  258. Re 171 quote Some interesting point is that the d13C content of methane is slightly increasing. This may point to a change in source (either more human induced – like from rice paddies) or more fossil methane, but I have no figures seen (yet) to know the d13C content of different methane sources. Unfortunately, d13C measurements in methane only started at the moment that methane levels stabilised. unquote

    I have seen (a construction which hides the fact that I have forgotten where) a prediction that soil methane emissions were suppressed by acid rain, and that pollution controls would lead to an increase: it might be worth looking at the correctness of that prediction.

    JF

    Comment by Julian Flood — 10 May 2008 @ 5:03 AM

  259. Ray (250), looks like a fascinating link. Thanks.

    Comment by Rod B — 10 May 2008 @ 8:42 AM

  260. Rod B #254: Ray L has already shortly answered, but let me try to be more concrete in addressing your misconceptions (which are getting less and less, and tougher and tougher to address).

    I do not think this is true. Simply, it has been stated here a number of
    times that the most likely molecular energy transfer process here is,
    after a CO2 (say) molecule absorbs a photon usually into a quantum
    vibration state (which must be the start of the chain), the most likely,
    by far, next step is a collision with another molecule (by the numbers
    far most likely N2 or O2) that transfers the energy from the CO2
    vibration to N2 (say) translation, possibly with a couple of
    non-interesting steps. It has been said that at low altitudes a
    collision is 1000+ times more likely to happen before the CO2
    regurgitates its vibration bond energy with another radiated photon.

    This is true. Here we need a lecture on local thermodynamic equilibrium and degrees of freedom :-) Again, as I said earlier, it is not very fruitful to look at what’s happening to individual molecules and photons; the statistics are much simpler than these individual narratives, precisely because of LTE.

    Every molecule can move in a number of different ways. These are called degrees of freedom. For a single atom (like argon), there are three such degrees of freedom, linear movement in the x,y and z directions. Now, as molecules collide, energy will shift back and forth between these three directions of motion. Write the total kinetic energy as

    1/2 m v^2 = 1/2 m (v_x^2 + v_y^2 + v_z^2).

    You see the separate contributions of each velocity component v_x, v_y and v_z.

    Now, the total amount of kinetic energy in the x direction, on average per molecule under LTE, will be

    1/2 m avg(v_x^2) = 1/2 kT,

    where T is the temperature. The same holds in the y and z directions, actually 1/2 kT is the energy per molecule per degree of freedom.

    When you look at a whole mole instead of a single molecule, this makes macroscopic sense, but you have to multiply by Avogadro’s number and write R, the gas constant, instead of k, Boltzmann’s constant.

    Now we have three co-ordinates x, y and z, i.e., three DoF. This means for a one-atomic gas that the heat content per molecule will be 3/2 kT, again for temperature T.

    This is pretty much the classical definition of temperature: the amount of Joules that have been squeezed into every DoF available for the molecules to undergo random motions in. It is then easy to show theoretically that two bodies in contact at the same temperature will not undergo any net exchange of heat. (Those two “bodies” could be, e.g., the nitrogen and the CO2 in a given parcel of air.)

    Now if you look at two- or three-atomic molecules, there are more possibilities for them to move randomly: they may rotate, or vibrate, or bonds in the molecule may bend. This adds DoFs. Each of them can again contain an amount 1/2 kT of energy, on average per molecule. (This relates directly to the specific heat of a gas, the way the no. of DoFs has been classically measured — there is a long and fascinating story there in the lead-up to quantum theory.)

    The important thing to note here is now, that also these DoFs interchange energy all the time with the linear-motion ones and with each other. And the energy they contain is temperature too, in the classical sense. The equipartition of energy over the different DoF is achieved on a very short timescale.

    (Imagine a two-atomic molecule being hit off-center. This would add — or remove — rotational energy to/from the molecule, the balance being the change in linear kinetic energy of the molecules. Same with vibrations. It’s really just like in classical mechanics. Determining the interaction cross-sections though requires quantum theory — irrelevant here. The beauty of LTE is that you don’t have to know such gory details)
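    The equipartition result above (1/2 kT per degree of freedom, on average per molecule) is easy to check numerically. A minimal sketch, assuming Maxwell-Boltzmann velocity components as LTE implies; the CO2 molecular mass and the temperature are illustrative values:

```python
import math
import random

k = 1.380649e-23   # Boltzmann's constant, J/K
m = 7.31e-26       # mass of one CO2 molecule, kg (44 g/mol / Avogadro)
T = 288.0          # an illustrative near-surface temperature, K

random.seed(0)
# Under LTE each Cartesian velocity component is an independent
# Gaussian with variance kT/m (the Maxwell-Boltzmann distribution).
sigma = math.sqrt(k * T / m)

N = 200_000
E = {"x": 0.0, "y": 0.0, "z": 0.0}  # accumulated kinetic energy per axis
for _ in range(N):
    for ax in E:
        v = random.gauss(0.0, sigma)
        E[ax] += 0.5 * m * v * v

for ax in E:
    print(f"mean KE in {ax}: {E[ax] / N:.3e} J   (1/2 kT = {0.5 * k * T:.3e} J)")
```

    Each component comes out at 1/2 kT to within sampling noise, which is exactly the statement that temperature is the energy per degree of freedom.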

    This is the basic way the surface radiated energy gets transferred to
    atmospheric heat – in the common temperature meaning of the term.
    Secondly, while we call the energy added to vibrational bonds “heat”
    and even refer to its “characteristic” temperature, it does not increase the
    [common] temperature of CO2, which is evidenced only through translation
    energy.

    A core misconception. The energy added to CO2 vibration becomes part of the total heat content of the parcel of air considered, as it will be immediately equipartitioned with the other molecules and their various DoF in the parcel. By definition then, it will drive up the parcel’s temperature T. Your distinction between “characteristic” and “common” temperature does not exist for LTE.

    Similarly, if it re-emits the “heat” (energy, really) as a
    photon, it does not cool.

    Same misconception in reverse.

    So the absorb-emit-absorb-emit chain through
    CO2 only will not add temperature to the atmosphere.

    It will do so if the amount emitted by an air parcel is less than that absorbed by the same parcel. Only in a stationary state that difference would be zero overall.

    Honestly hope this helps.

    Comment by Martin Vermeer — 10 May 2008 @ 3:07 PM

  261. CobblyWorlds and Gavin: It makes no difference what modelling techniques the IPCC uses, and it doesn’t matter how much it thinks that it has used different data for tuning and testing. GIGO still applies, but can be indirect and therefore difficult to detect.

    The basic problem is that there is always the underlying assumption that climate is driven by CO2 – not totally, but to the point where anything not attributable to something else will get attributed, directly or indirectly, to CO2 (man-made GGs).

    This can obviously happen directly, but let me give you just two examples of where it has, or might have, come in indirectly.

    1. The full effect of solar variance is acknowledged by the IPCC to be greater than would be expected by measuring the change in direct radiation. But they openly acknowledge that they don’t understand the mechanism so they can’t model it, and haven’t modelled it. Consequently, every instance of “constrained by observation” has the capacity to wrongly ascribe climate changes to GGs when they are actually caused by solar variation.

    2. TS.6.4.2 in the IPCC report says : “Large uncertainties remain about how clouds might respond to global climate change”. This shows that the IPCC believes that cloud behaviour is driven by climate. Consequently, every instance of “constrained by observation” has the capacity to wrongly ascribe changes in clouds to some component of climate, and never realise that the driving was actually the other way round – that the component of climate in fact reacted to clouds.

    I have only given two examples, but there could well be more. Every one of the references to “constrained by observation” would have to be checked out for possible invalid assumptions like these.

    The end result is a set of models, all of which give a good match to recent climate measurements at the time of modelling (because the model has been made to fit observations), but give a poor match to climate from the date of modelling onwards. And this is what is happening now.

    [Response: Just because you say so, doesn't make it true. The response of the models to CO2 is a function of basic radiation physics combined with numerous feedbacks - it doesn't change as a function of what you think happened to solar or aerosols or mysterious substance X. - gavin]

    Comment by Mike — 10 May 2008 @ 4:01 PM

  262. Mike, where are you getting what you’re writing? What are you basing it on? I’ve been trying searches using phrases, and the only pages I come up with consistently are from scienceandpublicpolicy. You may be drawing from some other source, but would you care to tell us what you’re reading?

    Comment by Hank Roberts — 10 May 2008 @ 5:02 PM

  263. #237 Chris N Says

    I think your table of trends might be confusing to some readers. Trends have been discussed as a lead article in Realclimate and by Tamino briefly as a comment in Realclimate and again on his web site and by Stoat. So wouldn’t it be better if your version was made to be consistent with theirs?

    You wrote that “trends since Jan 1998 are shown below:

    World (or global) trend is slightly negative at -0.036 C/decade (RSS satellite)”

    Is this from a least squares fit? If so, what was the error estimate using simple statistics?

    Tamino’s comment in Realclimate came with +/- error estimates; thus

    0.21 +/- .19 C/decade (from GISS GLB_TSST)

    or 0.07 +/- 0.19 (from HADCRUT3)
    for the same decade. The second was obviously useless because of the large error estimate. But Tamino (on his web site) discusses how nearly all estimates based on one decade are useless. He explains how interannual correlation increases the errors. That is hard to reconcile with your table of trends.
    You then go on to discuss “the trends between Jan 1994 and Dec 1998” to show a “dramatic effect”, i.e. 0.792 C/decade.
    It would have been clearer if you had not referred to these short term fluctuations as dramatic trends.
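    The size of those error bars is easy to demonstrate on synthetic data. A hedged sketch, not Tamino’s actual method: the 0.18 C/decade trend and the 0.1 C monthly noise level are invented, and the noise is white, so real autocorrelated data would give even wider error bars, as noted above.

```python
import math
import random

random.seed(42)

def trend_with_stderr(y):
    """Ordinary least-squares slope and its naive standard error
    (white-noise assumption; autocorrelation would widen it further)."""
    n = len(y)
    mx, my = (n - 1) / 2, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in range(n))
    slope = sum((xi - mx) * (yi - my) for xi, yi in enumerate(y)) / sxx
    resid = [yi - my - slope * (xi - mx) for xi, yi in enumerate(y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return slope, math.sqrt(s2 / sxx)

TRUE = 0.0015  # C per month, i.e. 0.18 C/decade built into the fake data
for months in (60, 360):
    y = [TRUE * i + random.gauss(0.0, 0.1) for i in range(months)]
    b, se = trend_with_stderr(y)
    print(f"{months:3d} months: {b * 120:+.3f} +/- {2 * se * 120:.3f} C/decade")
```

    On a 5-year window the 2-sigma error bar is comparable to the trend itself; on a 30-year window it is an order of magnitude smaller, which is why short-period "trends" are so easily misleading.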

    Comment by Geoff Wexler — 10 May 2008 @ 6:57 PM

  264. Mike, You need to understand the difference between statistical modeling (e.g. fitting parametric models to data) and dynamical modeling. In the former, you are looking to estimate parametric values from some goodness of fit criterion–in effect saying how well can this model work under the best of circumstances. (Note: You can also ask how far off you can be for some confidence level.) The test of the model is the goodness of fit criterion, and you will often see several models compared or even averaged on that basis.
    In dynamical modeling, you are defining the model based on the physics (or dynamics) of the phenomenon–in effect saying what physical processes are important. Now you may use data to estimate some of these processes, but the test for the model is to validate it against independent data. The two types of modeling are very different, and it sounds as if you are thinking more in terms of statistical modeling.
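    That distinction can be illustrated with a toy example (entirely invented numbers: Newton’s-law cooling stands in for the governing physics, and a parabola through the training points stands in for a purely statistical fit):

```python
import math

# "Truth": Newton's law of cooling, T(t) = T_env + (T0 - T_env) * exp(-k t)
T_env, T0, k_true = 20.0, 90.0, 0.30
def truth(t):
    return T_env + (T0 - T_env) * math.exp(-k_true * t)

# Noise-free training data at t = 0, 2, 4; t = 10 is held out for validation
ts = [0.0, 2.0, 4.0]
ys = [truth(t) for t in ts]

# Statistical model: parabola interpolating the training points exactly
# (perfect goodness of fit on the data it was tuned to: zero residuals)
h = 2.0
c = (ys[0] - 2 * ys[1] + ys[2]) / (2 * h * h)
b = (ys[1] - ys[0]) / h - c * h
def parabola(t):
    return ys[0] + b * t + c * t * t

# Dynamical model: the physics, with the rate constant estimated from data
k_est = -math.log((ys[1] - T_env) / (ys[0] - T_env)) / h
def dynamical(t):
    return T_env + (ys[0] - T_env) * math.exp(-k_est * t)

t_val = 10.0
print(f"truth     at t=10: {truth(t_val):6.2f}")
print(f"parabola  at t=10: {parabola(t_val):6.2f}")
print(f"dynamical at t=10: {dynamical(t_val):6.2f}")
```

    The parabola has perfect goodness of fit on its training data yet fails badly out of sample; the dynamical model, with one parameter estimated from the same data, validates against the independent point.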

    Comment by Ray Ladbury — 10 May 2008 @ 7:48 PM

  265. Geoff,

    I agree with your points. As a matter of fact, I used least squares analysis with RSS data. I suspect the error bars are greater than the trend itself (like your HADCRUT example above).

    But the bigger issue is this: can the climate models (or current agw theory) explain the microtrends (either by decade, or by hemisphere, or by land/water surface)? If you want to argue that these trends are not statistically significant, then fine. I won’t argue with you, but I don’t agree with the concept. I think you guys are missing stuff right under your noses with regard to aerosols, climate sensitivity by region, etc. The short-term trend data was only to show these microtrends.

    Thanks for the clarification. If I post similar trend data in the future, I’ll qualify it with caveats.

    Comment by Chris N — 10 May 2008 @ 8:26 PM

  266. Gavin: “Just because you say so, doesn’t make it true.” True. Nullius in verba. But equally, just because you dispute it, doesn’t make it false.

    What I am saying is pretty basic stuff.

    Hank Roberts #262 : I am not getting it from anywhere. Really! I haven’t seen scienceandpublicpolicy, but maybe I should look at it.

    My education was in maths and science (a long time ago), followed by a career in computing. I didn’t doubt the “global warming” message until I read the IPCC report in order to learn more about it. I got a gut feeling, as I read it, that the body of the document didn’t support the findings, but it took me quite a long time to work out where. Obviously I have read papers and articles in other places, which have helped fill in some of the blanks, but I try to accept nothing if I can’t verify it at least in my own mind. I have tended to avoid the overtly partisan material on both sides.

    The position I have reached – and it is always open to change as new information becomes available – is that the IPCC report badly overstates the impact of CO2 (chiefly because of the modelling reasons that I have explained) and understates the impact of natural factors (ditto). Exactly what the ratio should be, I know not, but the real world certainly would appear to be heavily tilted towards the natural factors on, probably, all time scales. Certainly the behaviour of the world’s climate over the last few years suggests that the natural factors are easily able to overpower the CO2 effect on a decadal sort of time-scale.

    Just coming back to the modelling briefly : the output of the models gave a good match to the temperature increases of the last 15 or so years of the 20th century. Given what we know about El Nino, the non-modelling of solar variation, the major uncertainties about clouds and various other uncertainties, if the models reflected the known science faithfully, THEY WOULD BE VERY UNLIKELY TO GIVE A GOOD MATCH. The reason they did give a good match is almost certainly because of the “constraint by observation” modelling. Let me put it the other way round, and this may seem extremely paradoxical but please think about it carefully, it is a far more sensible statement than it might first appear to be : The fact that the models gave a good match to climate over that period tells you that it is extremely likely that there is something seriously wrong with the models!!

    [Response: Brilliant logic. We should therefore pay attention to models in inverse correlation to how well they do. We should apply that to other walks of life as well - fly planes that have the worst crash records, abandon quantum mechanics and general relativity, buy stocks based on the size of a company's losses etc. - gavin]

    Comment by Mike — 11 May 2008 @ 1:20 AM

  267. #261 Mike,

    I think Ray has hit the nail on the head, this is not a case of fitting.

    Just to illustrate Gavin and Ray’s points:

    I posted this paper in another (strangely similar) discussion above: http://pubs.giss.nasa.gov/abstracts/2007/Hansen_etal_2.html
    It’s a pdf file, download and check out figure 2c. What’s happening there is that the calculated and observed temperatures are plotted against each other, and agree well. But crucially for you is how that agreement was reached. Starting from section 2 “Climate Sensitivity”:

    2a) They discuss the derivation of an equation giving the effective forcing for 3 GHGs (CO2/CH4/N2O).

    2b) They discuss using sea level changes to calculate changes in global albedo due to ice sheet advance/recession.

    2c) Then they show that when you combine the calculated temperature deviations using changes in GHGs and ice sheet albedo, the match with actual* global average temperature is good. *note: they also discuss what they use as a proxy for global average temperature (see note 2 page 6).

    Note here that they have not “fitted” the greenhouse gas (GHG) impact; that has been derived previously and independently. There are points where the calculated temperature and proxy for global average temperature do not agree in terms of time and amplitude. Some of the reasons for such deviations are apparent from the discussions in parts 2a and 2b and elsewhere in the paper. The time domain mismatches are argued to be due to data resolution issues; were they reflecting real time-mismatches the result would be physically unsustainable (e.g. 160degC temperature increases).

    Both GHG and albedo changes must impact global average temperature because they affect the energy balance of the planet. GHGs affect IR “heat loss” to space, and albedo due to ice sheets affect how much incoming sunlight is reflected back to space. So it is physically reasonable that the evidence should point to these 2 primary factors as accounting for most of the global average temperature change in the ice-ages.

    As to the degree those factors may affect future global temperature: Now we don’t have those massive ice-sheets, so ice-albedo is a far less significant factor. But we’re still left with GHG sensitivity, and the implication that the equilibrium climate sensitivity is about 3degC for a doubling of CO2. The Hansen paper I refer to here is not the only source suggesting ~3degC for 2xCO2.

    Comment by CobblyWorlds — 11 May 2008 @ 3:11 AM

  268. Re: My previous comment #263.

    Perhaps I should have described these short term estimates of rates of warming as being misleading rather than useless. Isn’t this quote from #237 another example of the dangers of using a telephoto lens when a wide angle one would have been appropriate?

    “So basically ALL of the accrued warming since 1979 occurred in a 5 year span (1994 through 1998). That’s it!”

    Comment by Geoff Wexler — 11 May 2008 @ 4:49 AM

  269. Chris N., First, global climate models are not intended to reflect short-term variability or regional effects. However, you can put in perturbations corresponding to short-term events and see how they evolve. When you do, you find that a lot depends on initial conditions, which are not certain. I agree that regional and short-term behavior can be interesting, but it is inherently more uncertain than true global climate, which depends mainly on energy balance. Realclimate had a piece on regional climate projections:
    http://www.realclimate.org/index.php/archives/2007/08/regional-climate-projections/

    [Response: Ray, you need to be a little careful here. GCMs do calculate regional and local changes and have plenty of short term variability. The issue is whether there is any predictability at those scales. The problem is that there is a lot of variability at these scales and so the forced signal (from GHGs) is hard to detect over short time periods. - gavin]

    Comment by Ray Ladbury — 11 May 2008 @ 5:49 AM

  270. #266 Mike,

    “Given what we know about El Nino,…. if the models reflected the known science faithfully, THEY WOULD BE VERY UNLIKELY TO GIVE A GOOD MATCH. The reason they did give a good match is almost certainly because of the ‘constraint by observation’ modelling.

    Let me put it the other way round, and this may seem extremely paradoxical but please think about it carefully, it is a far more sensible statement than it might first appear to be : The fact that the models gave a good match to climate over that period tells you that it is extremely likely that there is something seriously wrong with the models!!”

    Wrong.

    On the most basic of levels this should be apparent because nobody has “fitted by observation” to predict weather accurately on week/month timescales (at least not in the British climate they haven’t). If this alleged fitting works at long timescales, it should be better at short.

    But you’re mainly wrong because climate is emergent order on long timescales.

    I cannot predict the next toss of a coin at all. Not in the sense that I can say with any more skill than a random guess what the next toss will bring, heads or tails.

    But that does not mean I cannot say with some certainty that after 100 tosses I will have near equal occurrences of heads and tails. Furthermore, I can assert with confidence that as my set of results grows larger (1,000, 10,000, 100,000, etc.), the sets of occurrences tend closer to being equal, i.e. the statistics are predictable. If that fails then I strongly suspect that the coin is biased, because the theory is general and sound. Likewise, with ever rising CO2, when there is a “global cooling” blip, it tells me that some other factor is involved, as that theory is general and sound. (To be clear, CO2 is far from the only anthropogenic forcing, but it is the one that we can really play havoc with; we’re only about 1/10 through estimated available fossil fuels (mainly coal).)
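    The coin-toss convergence is trivial to demonstrate (a minimal sketch; the seed and sample sizes are arbitrary):

```python
import random

random.seed(1)

# Single tosses are unpredictable, but the statistics converge:
results = {}
for n in (100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    results[n] = heads / n
    print(f"{n:6d} tosses: fraction of heads = {results[n]:.3f}")
```

    The spread around 0.5 shrinks roughly like 1/sqrt(n), which is why long-run statistics are predictable even though individual tosses are not.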

    Similarly weather models explore weather (much less than 30 years), climate models explore climate (greater than 30 years).

    Remember I recommended RC’s index? Well try this: http://www.realclimate.org/index.php/archives/2005/11/chaos-and-climate/

    PS I’m also not a climate scientist, just a former ‘sceptic’ with an Electronics degree who got into climate science whilst examining his scepticism. When I say sceptic, actually I was misinformed with my critical faculties in neutral.

    Comment by CobblyWorlds — 11 May 2008 @ 11:55 AM

  271. > scienceandpublicpolicy, but maybe I should
    Check the references. Safer to stick with science sites, lest you be easily led into believing PR that agrees with your preconceptions. http://www.sourcewatch.org/index.php?title=Science_and_Public_Policy_Institute

    Comment by Hank Roberts — 11 May 2008 @ 3:43 PM

  272. I have a question about climate models.

    Has anybody modeled the climate for Earth assuming that the Earth’s axis is tilted 0 degrees? What would it look like? And how would the climate change as the Earth’s axis becomes more and more tilted?

    Is there any model capable of doing that, that is, starting with a 0 degree tilt and then correctly getting our climate with its 23 degree tilt?

    As a lark, this website was my first hit on google for:
    Earth axis tilt

    http://www.divulgence.net/

    How did this site get to the top of the google hit list?
    Joel

    [Response: All the models use the tilt, precession and eccentricity of the orbit as input parameters, and they tend to get changed when doing paleo-climate experiments. I'm not sure that there have been many experiments with tilt=0 since I have no idea when that was likely to have been the case (if ever). You would expect reduced seasonality, reductions in polar ice for instance, and I'd anticipate significant differences in the ITCZ and monsoonal rains. But you should try it and see (EdGCM is probably your friend). - gavin]

    Comment by joel — 11 May 2008 @ 4:57 PM

  273. Gavin, I expected better of you. “fly planes that have the worst crash records” – that was, frankly, pathetic. I’m sorry, I really don’t want to use language like that here, but how else could I put it? You have maintained a high standard here, up to now.

    What I asked you to do was “please think about it carefully, it is a far more sensible statement than it might first appear to be”. No matter how much you disagree with me, what I was saying could not in any way whatsoever lead to flying planes with bad crash records.

    The point was that, with all the uncertainties acknowledged by the IPCC, with El Nino not represented in the models, with solar variation acknowledged as not being represented in the models, etc, etc, etc, it was actually UNLIKELY that the models would give a good fit to climate.

    About the “30 years” bit, and weather versus climate (CobblyWorlds) : The “good fit” given by the models, as claimed by its proponents, is either for a period of about the last 15 years of the 20th century, or for the average slope over the whole 20th century. There are significant deviations in the middle, but these don’t go past 30 years or so. The most vocal case for global warming is based around the last 15 years of the 20th century, and it would seem that we can now agree that this is nonsense – a longer period is required.

    If you look at the whole 20th century (the man-made CO2 thing really can’t apply over a much longer period than that), the graphs I have seen supporting the models give a 2-point fit (average slope is the same). But I would have found the models more credible if they had shown less slope – there was a net increase in solar activity over the 20th century, and it ended with a big El Nino (factors not represented in the models).

    Now if you look at the 20th century vs solar variation, you see quite a good fit – certainly a much better fit. And then if you look at solar variation against long term climate, you still see a good fit – and this is over periods of really major climate changes that have given sea-level rises of 120m+, etc. Purely on an intuitive level, this tells you that there is something wrong if the models attribute all of the 20th century warming to AGGs. But you have to get past intuition, and work out where the error comes from. I have done that to my own satisfaction.

    This next sentence is a level of argument that I have tried to avoid in the past, wanting to stick to essentials, but given the reference to “30 years” it seems I have to point it out : The proponents of the conventional view are claiming that the recent drop in temperature is caused by La Nina so we should ignore it as it has no long term implication. But I didn’t hear them say that about the previous El Nino.

    The point is that we have to try to be even-handed.

    There has been a fair amount of information given in response to my last 2 posts, and I haven’t gone through it yet. I will, but it will take me some time to do so.

    [Response: Well I hate to further disappoint you, but your point is both ill-informed and illogical. First off, all the models have ENSO-like behaviour - what isn't included is the exact sequencing of these events. Secondly, most include reasonable estimates - and some would say overestimates - of the solar changes since the 19th Century. The first issue implies that the models' short term trends cannot be compared to the observations, but has little impact on longer time scales. The solar change is frankly in the noise of the mean forcing over the 20th Century (most of the uncertainty is related to aerosol trends, not solar). The net impact of these and other issues is obviously to degrade the match to the one realisation that we have of the real climate. However where is there any evidence that the fit is 'too good'? Your whole point is predicated on nothing. As to whether anyone has blamed 1998's high on El Nino - just look around - those statements are everywhere. - gavin]

    Comment by Mike — 11 May 2008 @ 5:23 PM

  274. Gavin, Thanks for the clarification. I realize there are regional predictions, my point was meant to say that is not the purpose of the GCMs–so it’s not surprising they aren’t tuned for that. I should have been clearer.

    Comment by Ray Ladbury — 11 May 2008 @ 6:57 PM

  275. Mike, you have some serious misconceptions about how climate modeling is done–again dynamic vs statistical modeling.
    As an example:
    http://iri.columbia.edu/climate/ENSO/background/prediction.html

    I have to agree with Hank. Where are you getting these crazy ideas from? Did you see the paper cited by CobblyWorlds in #226? If you remove any of those constraints–especially those from volcanic eruptions–the probability distribution becomes very asymmetric toward high forcing.

    Comment by Ray Ladbury — 11 May 2008 @ 7:21 PM

  276. What I am saying is so simple.

    1. Stuff is missing.
    2. The IPCC say the missing stuff is significant.
    3. Because of the missing stuff, there should be a difference between model output and actual temperature.
    4. There isn’t.
    5. Therefore the model may be in error.

    How much is missing?

    Estimates of the net change in solar irradiance over the 20th century, that I can find, vary from about +0.12 W m-2 to +2 W m-2. The latter was from J Lean 2000 and has been challenged, so I’ll stick with +0.12. This tallies with the IPCC’s 0.12 from 1750 (TS.2.4), since 1900 and 1750 seem to have been about equal.

    IPCC report, para 1.4.3
    “The solar cycle variation in irradiance corresponds to an 11-year cycle in radiative forcing which varies by about 0.2 W m–2. There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere [refs]. Calculations with three-dimensional models [refs] suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree celsius.”

    11 peer-reviewed papers are referenced where I put [refs].

    So we should be looking for the models to understate 20th century warming by approximately 60% (0.12 / 0.2) of “a few tenths of a degree Celsius”, plus an additional factor to upgrade the period of time from a part of an 11-year cycle up to something nearer equilibrium, less a very small amount that the IPCC actually allowed. Only approximately, because we don’t know if it’s linear, we know there are a lot of uncertainties, and we don’t know what other factors there are. But given that the 20th century warming was only about 0.7 deg C, this is obviously significant.

    The models didn’t understate. They hit the 20th century warming spot on. Where did the discrepancy come from? I couldn’t find any other factors that offset it (that doesn’t mean they don’t exist), but I did find some “constrained by observation” places that did explain it.

    Gavin : The statements about El Nino didn’t start appearing until after the ones about the (later) La Nina. But who said what doesn’t affect the science, so this is a diversion.

    I still haven’t read the other posted links, but will try to find the time.

    Comment by Mike — 12 May 2008 @ 12:29 AM

  277. I have read all the papers linked here in reply to my last two posts. Let me know if I have missed any. I’ll keep my comments as short as I can, so ask if you want me to expand.

    “Using multiple observationally-based constraints to estimate climate sensitivity”, J. D. Annan and J. C. Hargreaves

    from 3.1
    “If the net forcing [AGGs vs aerosols] is small, then climate sensitivity would have to be very high to explain the observed warming.”
    This shows that there is an assumption that all the temperature increase is caused by AGGs. That is the point I was making.

    “Climate change and trace gases”, James Hansen et al.

    This whole paper is a stretch. They go to extraordinary lengths to try to convince themselves that they can explain the paleo cycles. All the time, they assume that the CO2 is what is causing the warming. Consequently, the warming phases are quite easy for them – they just say there are strong feedbacks, some fast, some slow, to match the time lags. But the long cooling phases are a big problem.

    This is an early paragraph, from page 1928 :
    Figure 1a reveals remarkable correspondence of Vostok temperature and global GHG climate forcing. [I agree, it does] The temperature change appears to usually lead the gas changes by typically several hundred years, as discussed below and indicated in figure 1b. This suggests that warming climate causes a net release of these GHGs by the ocean, soils and biosphere. GHGs are thus a powerful amplifier of climate change, comparable to the surface albedo feedback, as quantified below. The GHGs, because they change almost simultaneously with the climate, are a major ‘cause’ of glacial-to-interglacial climate change, as shown below, even if, as seems likely, they slightly lag the climate change and thus are not the initial instigator of change.

    Try reading that paragraph like this : The major cycles over geological time are caused by natural factors as yet not understood. Throughout these cycles, the atmospheric CO2 behaves exactly as expected. During warming, CO2 is released from the oceans etc, so atmospheric CO2 increases. Similarly, it decreases during cooling. There is a time-lag.

    RealClimate “Climate sensitivity: Plus ça change…” 24 March 2006

    This paper mercifully puts James Hansen et al out of their misery : “…generally speaking radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales – the atmosphere and upper ocean on multi-decadal periods.”. And “we can think about the forcings for the ice ages themselves. These are thought to be driven by the large regional changes in insolation”.

    So the paleo temperature cycles are caused by changes in insolation, not CO2 – do you recognise in that anything that I have been saying? But think more – IF the AGW theory is right then it HAS to apply to natural CO2 during paleo cycles. If the paleo cycles don’t behave as per AGW theory then either the theory is wrong, or there are more powerful natural forces in operation. If there are more powerful natural forces in operation, then those forces – which are still unknown – must still exist today. I.e., they must have at least some influence. Therefore, it is plain wrong to assign all temperature change – other than from other known factors – to AGGs by using “constrained by observation” modelling techniques.

    NB. In talking about sensitivity, the paper still assumes that CO2 is responsible.

    RealClimate “Chaos and Climate” 4 November 2005.

    Not relevant to this argument.

    The paper on “Overview of the ENSO System” – statistical and dynamical models. Yes I understand that. It doesn’t affect my argument.

    Comment by Mike — 12 May 2008 @ 5:53 AM

  278. Mike, I’ll say it again. What you are missing is even the vaguest understanding of dynamical modeling. In dynamical models, you are restricted to the known physics. You keep harping on about “natural factors”. What natural factors? CO2 sensitivity is constrained by multiple lines of evidence to its current value. It is true that each by itself is not inconsistent with a broad range of values. The thing is that the sensitivity has to explain all of them, and that is a fairly tight constraint. Also, if you look at the individual constraints in Annan and Hargreaves, only the LGM constraint is not inconsistent with low values–and it is the most poorly known. Volcanic constraints are arguably the best known, as they are based on real-time data, and they are crucial for narrowing the range of possible values.
    You are falling into a classic crackpot mode of assuming that because you don’t understand something (something to which you’ve devoted perhaps a few months’ effort while climate scientists spend a career), it must be wrong.
    Look, take the same tack you are taking right now and apply it to evolution or the Big Bang or relativity or quantum mechanics, and you’ll find you don’t understand those either with a few months’ effort. Does that make them wrong, too?

    Comment by Ray Ladbury — 12 May 2008 @ 7:31 AM

  279. Re #276:

    Mike, you are conflating energy flux densities (watts per square meter) with temperature changes (Kelvins). A change of 0.2 watts per square meter in radiative forcing corresponds to about 0.15 K of temperature change, not most of 0.7 K.

    Comment by Barton Paul Levenson — 12 May 2008 @ 7:38 AM

  280. Interesting. OK I’ll take you guys on as long as it’s a level playing field. I have been mightily offended by the attack on scientific rationalism exemplified by your AGW extremists. I am prepared to bet my own money on a cooling in the next decade. You have to be betting your own personal funds, not research money, OK?

    [Response: This bet is specific to the Keenlyside et al authors. If you want to bet generally on global cooling (which we would not advise), please talk to Brian Schmidt or James Annan. They have appropriately constructed bets already worked out. - gavin]

    Comment by Lazlo — 12 May 2008 @ 8:16 AM

  281. Ray Ladbury #278: “In dynamical models, you are restricted to the known physics.”
    The “IPCC” models are basically dynamical models – they model the known physics. But there are uncertainties, and they have used some “closed loop” techniques to eliminate or reduce some of the uncertainty. This is a perfectly valid modelling technique in most circumstances. Its danger is that it can lead to invalid conclusions if an invalid assumption has been used. I contend that that is what has happened here. Please, read on.

    Barton Paul Levenson #279. “A change of 0.2 watts per square meter in radiative forcing corresponds to about 0.15 K of temperature change, not most of 0.7 K”. Many thanks. That is absolutely my point. 0.2 watts per sq m CANNOT POSSIBLY deliver “a few tenths of a degree Celsius”. Now read the IPCC paragraph again. It says there is “increasingly reliable evidence” that it does!!

    From IPCC report, para 1.4.3
    “The solar cycle variation in irradiance corresponds to an 11-year cycle in radiative forcing which varies by about 0.2 W m–2. There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere [refs]. Calculations with three-dimensional models [refs] suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree celsius.”

    And it does it in just a few years, not the extended period needed to approach equilibrium, so the effect is actually stronger than first appears.

    Surely now you can all see that something significant is going on with solar variation, that is NOT built into the models. The observed effect of solar variation is much greater than its direct RF. There HAS TO BE a “feedback” mechanism that has not yet been identified.

    This unidentified “feedback” mechanism invalidates the use of some of the “closed loop” techniques in the models, because its existence invalidates the underlying assumptions.

    [Response: The IPCC reports quotes a result from the models and from that you claim that there is something missing in the models? I'm confused. - gavin]

    Comment by Mike — 12 May 2008 @ 10:42 AM

  282. Mike, you are positing a new, unknown mechanism on the basis of YOUR interpretation of an account in a summary report (not even the peer-reviewed studies themselves). And again, I’m not sure what you mean by a “closed loop” technique. That’s a very vague term, and whether there would be any spurious results from such a procedure would depend a great deal on the details.
    However, the most serious argument against your proposal is that it is at odds with the data–especially that provided by volcanic eruptions, which do not correlate with the solar cycle. Almost by themselves, these provide a constraint against values below 1.5 degrees C per doubling.

    Comment by Ray Ladbury — 12 May 2008 @ 12:15 PM

  283. Mike, you missed the key point in what you quoted:
    > surface temperature changes

    That’s the changes in the temperature on the surface, that’s all they’re talking about there.

    Comment by Hank Roberts — 12 May 2008 @ 12:36 PM

  284. Mike, see also the extended discussion that ends

    “… The five lines of evidence discussed above suggest that the lack of such secular variation undermines the circumstantial evidence for a ‘hidden’ source of irradiance variability and that there therefore also might be a floor in TSI, such that TSI during Grand Minima would simply be that observed at current solar minima.”

    http://solarcycle24.forumco.com/topic~TOPIC_ID~4~whichpage~18.asp#top
    There, search for
    Leif Svalgaard’s post 05/11/2008 : 11:06:54

    Comment by Hank Roberts — 12 May 2008 @ 3:48 PM

  285. “The IPCC reports quotes a result from the models and from that you claim that there is something missing in the models? I’m confused. – gavin”

    The models used to estimate the actual effect of changes in solar radiation use a different approach from the models referenced in the IPCC report. The output from the former shows that there is something missing in the latter.

    If you want to follow it up, this paragraph shows which papers to find in the references at the back of section 1 of the IPCC report :
    “There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere (Reid, 1991; Brasseur, 1993; Balachandran and Rind, 1995; Haigh, 1996; Labitzke and van Loon, 1997; van Loon and Labitzke, 2000). Calculations with three-dimensional models (Wetherald and Manabe, 1975; Cubasch et al., 1997; Lean and Rind, 1998; Tett et al., 1999; Cubasch and Voss, 2000) suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree celsius.”

    Hank Roberts #283 : Surface temperature changes are exactly what the IPCC report deals with. From the Summary for Policymakers:
    “Eleven of the last twelve years (1995–2006) rank among the 12 warmest years in the instrumental record of global surface temperature (since 1850). The updated 100-year linear trend (1906 to 2005) of 0.74°C [0.56°C to 0.92°C] is therefore larger than the corresponding trend for 1901 to 2000 given in the TAR of 0.6°C [0.4°C to 0.8°C]. The linear warming trend over the last 50 years (0.13°C [0.10°C to 0.16°C] per decade) is nearly twice that for the last 100 years. The total temperature increase from 1850–1899 to 2001–2005 is 0.76°C [0.57°C to 0.95°C].”

    Hank Roberts #284 : I haven’t got access to that link. I’ll have to register etc. I’ll get back to you.

    Comment by Mike — 12 May 2008 @ 4:50 PM

  286. Ray Ladbury #282 – I put “closed loop” in quotes because it is a term used in control models, and the “IPCC” models, although they share many of the same features, are not actually used for control.
    All it means is that discrepancies between the outputs and observations are used to modify the inputs. In the case of the IPCC models, the inputs in question are some of the many parameters that are used to calculate results from the scientific formulae.
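    Whether or not this characterisation fits actual GCM practice (Gavin and Ray dispute it elsewhere in this thread), the generic procedure Mike describes is easy to sketch. The toy below tunes a single free parameter (planetary albedo) in a zero-dimensional energy-balance model until the output matches an “observed” effective temperature; every number here is illustrative, and real model development is far more constrained than this:

```python
def model(albedo):
    # Toy zero-dimensional energy balance:
    # sigma*T^4 = (S/4)*(1 - albedo)  =>  T = ((S/4)*(1 - albedo)/sigma)**0.25
    S, sigma = 1361.0, 5.670e-8
    return ((S / 4.0) * (1.0 - albedo) / sigma) ** 0.25

target = 255.0  # "observed" effective temperature, K (illustrative)

# Closed-loop tuning by bisection: compare output to observation,
# adjust the input parameter, repeat until they agree.
lo, hi = 0.0, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if model(mid) > target:
        lo = mid   # output too warm: raise albedo
    else:
        hi = mid   # output too cold: lower albedo
tuned = 0.5 * (lo + hi)
print(tuned)       # ~0.295, near Earth's actual planetary albedo
```

The danger Mike alludes to is visible even here: the loop will happily absorb any structural error of the model into the tuned parameter, because the only thing it checks is agreement with the single target observation.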

    Comment by Mike — 12 May 2008 @ 5:02 PM

  287. Mike, put the names (e.g. Cubasch) in the Search box at the top of the page to see previous discussion of those papers.

    It sounds like you’re trying to say that since the planet has warmed, but the sun hasn’t been shown to be responsible for the observed warming, there must be some hidden solar connection to make up the difference.

    But the difference is explainable by CO2 increases.

    If you want to argue against that, you also need to postulate a second hidden forcing, a negative one, to zero out the warming explainable from increasing CO2 in the models.

    Or did I miss your point?

    Comment by Hank Roberts — 12 May 2008 @ 6:10 PM

  288. Hank Roberts #284 : I did a search for “Leif Svalgaard” and it came up with a number of interesting-looking articles and papers. The first one has something in it for both of us:
    http://environment.newscientist.com/article/mg19125691.100
    It confirms what I have been saying about the influence of the Sun. It identifies two possible mechanisms, ultraviolet and cosmic rays. But it also has this:
    “”The temperature of the Earth in the past few decades does not correlate with solar activity at all,” Solanki says. He estimates that solar activity is responsible for only 30 per cent, at most, of the warming since 1970. The rest must be the result of man-made greenhouse gases, and a crash in solar activity won’t do anything to get rid of them.”

    Still, 30% is not trivial. And he has forgotten the El Nino. And a crash in solar activity resulting in a crash in temperature would cool the oceans which would then take up more CO2 … :) … no, I won’t go there!! (it probably wouldn’t be fast enough). Interesting that the bumping-up of ECS that I referred to many posts ago, from 1.9 to 3.2, is not a lot greater than this 30% (30% of 3.2 is about 1, 3.2-1.9 is 1.3).

    I think this opens up a whole new range of thought.

    Comment by Mike — 12 May 2008 @ 6:34 PM

  289. Well, new to you. Dig into it, I think you’ll find Leif’s posts at solarcycle24 suggest there’s nothing hidden away about this and it’s a formerly significant influence now getting lost in the CO2 change.

    Comment by Hank Roberts — 12 May 2008 @ 7:15 PM

  290. Mike, you are trying to get a more or less monotonic trend (modulo noise) from an oscillatory source term–show me a differential equation (with real coefficients) that does that. You really aren’t approaching this thing very systematically, and that is a sure way to go down a blind alley. People study this stuff for years. To think that you can come in and show that the professionals who have been working at this for decades are wrong is downright arrogant and foolhardy. I am not saying that you can’t understand what is going on–only that it is clear that you do not yet, and what is more you won’t without systematic study.

    Comment by Ray Ladbury — 12 May 2008 @ 7:34 PM

  291. I’ve been away so this is belated. Sorry.

    A quickie: Geoff (245), you say, “….Suppose you apply an alternating voltage to a piece of copper wire; it works by exciting the electrons into nearby vacant quantum levels…. ”

    I may have totally misread what you meant, but that’s not at all how electric current flows. The electrons’ quantized energy levels have nothing to do with it — at least in most cases and all copper cases.

    Comment by Rod B — 12 May 2008 @ 10:09 PM

  292. Gavin, your idea to use paleorecords to validate GCMs is hopeless. The reason is that paleorecords are integrals: convolutions of vastly multi-dimensional climate variables onto a few one-dimensional functions, with uncertain time resolution, contaminated with noise, and morphed over ages. This process cannot be inverted mathematically; thousands of different climate models can produce identical (within their error margins) paleorecords.

    Second, you continue to operate under a narrow assumption that global climate is a system with only one attractor – a fixed point – and that all changes to global climate must have an external cause. Given our current knowledge about the dynamics of turbulence, dynamics of oceans, and dynamics of mantle convection, this assumption seems to be quite myopic.

    Comment by Al Tekhasski — 12 May 2008 @ 11:02 PM

  293. Martin (260) and Ray, et al: Martin says, “…it is not very fruitful to look at what’s happening to individual molecules and photons; the statistics are much simpler than these individual narratives, precisely because of LTE…”

    I disagree philosophically. Granted, it is more fruitful to assess the macro stuff with statistics, LTE, etc. But the foundation for all of this absorption and transport is at the atomic/molecular level. The precise function (within QM limits) of individual molecules and their pieces is critical to all of this happening, and, what bothers me, it’s not at all clear that there is knowledgeable agreement on how the fundamental atomic-level process works. Logically, this of course does not necessarily refute the macro theory. But it sure as hell doesn’t provide a resounding vote of confidence, either.

    Martin, I think I agree with your 260 post with probably a semantics difference (which none-the-less is important). I basically said that excited vibration and rotation states do not affect the “classic” temperature of a molecule (or, a bunch of them for Ray’s sake). You said/implied they do — “…vibration becomes part of the total heat content of the parcel of air considered, as it will be immediately equipartitioned…” Equipartitioning moves the vibration energy (no classic temp) to translation (classic temp) within the same molecule or to another molecule via collision. I would further agree that there is a strong inclination to equipartition out vibration energy as average molecules, as I’ve read, are highly unlikely to pick up “LTE” energy in vibrations unless the temperature approaches ~1000K. (NOT true for rotation, however.) It would also seem to me that CO2 picking up vibration energy from anything other than absorbed infrared photons, like collisions and the LTE push, is highly unlikely.

    So, again (to somewhat disagree), a CO2 molecule either absorbing a quantized photon into vibration or emitting a similar photon, per se (NOTHING ELSE happening) has no effect on the temperature of the gaseous environment where the CO2 lives.
    CO2: absorb-emit-absorb-emit-absorb-emit to space: no temperature change.
    CO2: absorb-emit-absorb-crash: atmospheric temperature change.

    I think I understand the degrees of freedom (well, as best as a layman can, I guess). I think this aligns with and is analogous to my thought. As the local temperature environment increases, E = (3/2)kT goes to E ≈ (5/2)kT goes to E ≈ (7/2)kT (in not a very precise fashion, a la QM) as the additional degrees of freedom come into play. This forms the basis for specific heat; specifically, specific heat increases as DoFs come into play, which adds (some) energy to a substance without increasing its temperature.
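    The equipartition bookkeeping above can be checked directly: each active quadratic degree of freedom contributes R/2 to the molar heat capacity at constant volume. This is the textbook idealisation (real vibrational modes switch on gradually with temperature, as the QM caveat in the comment notes):

```python
R = 8.314  # gas constant, J/(mol K)

def cv_molar(active_dof):
    # Equipartition: each active quadratic degree of freedom
    # contributes R/2 to the molar heat capacity at constant volume.
    return active_dof / 2.0 * R

# 3 translational DoF; +2 rotational for a linear molecule;
# +2 more (kinetic + potential) per fully active vibrational mode.
for dof in (3, 5, 7):
    print(dof, cv_molar(dof))
```

The jump in heat capacity at each step is the “energy added without raising the temperature” that the comment describes.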

    Comment by Rod B — 12 May 2008 @ 11:08 PM

  294. Re #293

    “I would further agree that there is a strong inclination to equipartition out vibration energy as average molecules, as I’ve read, are highly unlikely to pick up “LTE” energy in vibrations unless the temperature approaches ~1000K. (NOT true for rotation, however.) It would also seem to me that CO2 picking up vibration energy from anything other than absorbed infrared photons, like collisions and the LTE push, is highly unlikely.”

    Somewhat unlikely, not ‘highly unlikely’: according to Boltzmann, CO2 at room temperature has about 5% of its molecules at or above the first excited vibrational level.

    Comment by Phil. Felton — 13 May 2008 @ 12:40 AM

  295. Mike,

    It really pays to learn first and then draw a conclusion, rather than establish your conclusion and thrash about looking for evidence to support it.

    PS, your post 277 is as wrong as just about everything else you’ve posted here. But you seem to have moved on so far…

    Comment by CobblyWorlds — 13 May 2008 @ 1:58 AM

  296. Rod B #293:

    So, again (to somewhat disagree), a CO2 molecule either absorbing a quantized photon into vibration or emitting a similar photon, per se (NOTHING ELSE happening) has no effect on the temperature of the gaseous environment where the CO2 lives.

    For what it’s worth, you are right (provided you exclude that particular molecule from the body of gas you’re studying the temperature of)… but it’s worth very little :-)

    CO2: absorb-emit-absorb-emit-absorb-emit to space: no temperature change.
    CO2: absorb-emit-absorb-crash: atmospheric temperature change.

    CO2: crash-emit-absorb-emit to space: atmospheric temperature change (opposite direction, cooling)

    The important thing, both classical and quantum, is time reversal symmetry: a reaction happens just as easily in the reverse as in the forward direction. This is why LTE doesn’t need to be informed about interaction cross-sections. This is why absorptivity == emissivity. And absorb-crash == crash-emit.

    (I see you left out emit-to-earth-surface… )

    Comment by Martin Vermeer — 13 May 2008 @ 2:15 AM

  297. Rod B., I do not know what “classic temperature” is. The definition of temperature is the partial derivative of energy with respect to entropy. Under some circumstances (e.g. for some gases at some temperatures), this is 1.5k. However, you can’t say that this is generally true and define a “classic temperature” in terms of it–your definition will not be physically meaningful.
    As to likelihood of collisions exciting vibrational modes–do the math. At room temperature, about 4% of molecules have an energy (mostly kinetic) equal to or greater than the vibrationally excited state of CO2 corresponding to 15 microns. Even at 208 K, roughly 1% of molecules have this energy. That is why the blackbody curve for these energies peaks in the IR.
    You basically have the idea down about how DOF relates to specific heat–it’s very precise for a given molecule. Now you have to look how the Maxwell distribution for the gas as a whole comes into play.
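    Ray’s 4% and 1% figures are easy to reproduce from the Boltzmann factor alone, taking the 15 μm band as the ~667 cm⁻¹ CO2 bending quantum. Mode degeneracy is ignored in this minimal estimate, which is one reason figures like Phil’s ~5% can come out somewhat higher:

```python
import math

# Fraction of molecules with at least the energy of the 15-micron
# (~667 cm^-1) CO2 bending quantum: exp(-h*c*nu/(k*T)).
h, c, k = 6.626e-34, 2.998e10, 1.381e-23  # c in cm/s so nu stays in cm^-1
nu = 667.0                                # bending-mode wavenumber, cm^-1

def excited_fraction(T):
    return math.exp(-h * c * nu / (k * T))

print(excited_fraction(300.0))  # ~0.04 at room temperature
print(excited_fraction(208.0))  # ~0.01 at 208 K
```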

    Comment by Ray Ladbury — 13 May 2008 @ 7:40 AM

  298. Al Tekhasski, that is the difference between a scientist and a nonscientist. When confronted with a problem of daunting complexity, the layman will throw up his hands, while the scientist will try to find as many constraints and independent data sources as possible and come up with a way to solve the problem. And, perhaps surprisingly, it works most of the time. And when it does not work, it certainly informs you of the fact, because you get contradictions among your data.

    Comment by Ray Ladbury — 13 May 2008 @ 7:45 AM

  299. Hank Roberts #287. Yes, you did miss my point. You said “It sounds like you’re trying to say that since the planet has warmed, but the sun hasn’t been shown to be responsible for the observed warming, there must be some hidden solar connection to make up the difference. .. But the difference is explainable by CO2 increases. .. If you want to argue against that, you also need to postulate a second hidden forcing, a negative one, to zero out the warming explainable from increasing CO2 in the models.”

    My point is that the models have taken away most of the Sun’s warming, by simply ignoring the evidence, and given it to the CO2 warming, by overestimating ECS. I have identified the places in the IPCC report where both occur.

    Hank Roberts #289. According to the agreement required on registration, solarcycle24 won’t protect my personal information, so I’m not prepared to register. You’ll have to extract the relevant info and post it here.

    Ray Ladbury #290. I have no idea what you are referring to by “a more or less monotonic trend (modulo noise) from an oscillatory source term”. I can’t see how it relates to the IPCC having ignored the effect of solar variation. Re the rest of your post: let’s stick to the subject.

    I can provide no evidence (links, documents, etc.) for the following: I have spoken recently to four highly respected scientists in the climate field. Those conversations have strengthened my view that I am on the right track. One was a review author for one of the working groups for the latest IPCC report, and did not dispute the idea that the sun has a greater effect on climate than the models allow, but simply said that there was no known mechanism, so they couldn’t model it. Another, after a long and interesting discussion about the topic, said that the next year or two would be interesting if sunspot activity stayed low for another 6 months.

    Comment by Mike — 13 May 2008 @ 7:47 AM

  300. Mike, think about this in a logical, linear fashion:
    CO2 forcing is constrained by multiple (>5), independent lines of evidence. For what you are asserting to be true, all of these constraints would have to be wrong and all in the same way and by about the same amount. How likely do you think that is?

    You would also have to come up with a new mechanism for heating the planet that gave rise to a 30-year warming trend. And it needs to explain why the stratosphere is cooling, etc.

    Comment by Ray Ladbury — 13 May 2008 @ 10:00 AM

  301. Martin (296), now we’re cookin’ with gas (pun intended!) You say, “…but it’s worth very little..”

    We just disagree. I say it explains the actual physical process of surface radiation heating the atmosphere; I think that important in climatology.

    You add, “CO2: crash-emit-absorb-emit to space: atmospheric temperature change (opposite direction, cooling)”

    Good point, and I agree as long as the collision imparts energy to vibration and not translation — which is unlikely but not as unlikely as I initially thought (per Phil and Ray — thanks)

    You say, “(I see you left out emit-to-earth-surface… )”

    Only because I was trying not to boil the ocean (no pun intended) and simply figure this out piece by piece. Though it is significant as it explains how the surface heats up through back-radiation, again a seemingly important (actually damn important) physical process in this GW stuff.

    Comment by Rod B — 13 May 2008 @ 10:44 AM

  302. Ray Ladbury (#298), that is the difference between good science and bad science. When confronted with a mathematically ill-posed problem, a good scientist will evaluate the necessary accuracy of the measurements and find the limit of applicability of an inverse method (which would be pretty short), and will not draw daunting conclusions from an ill-conceived idea, while the junk scientist will persist in applying a non-applicable method to support his prejudicial conclusions. Temperature reconstruction from boreholes is one such example. Estimation of “anthropogenic CO2” from measurements of total DIC is another. And the role of CO2 in global climate change is yet another. And, to no surprise, it never works.

    Comment by Al Tekhasski — 13 May 2008 @ 10:58 AM

  303. No, Al, a good scientist will figure out how to pose the problem differently so that he or she can get something out of it. One reliable guide is not to rely on single sources/types of data. And climate science gives a coherent account of what is going on in a noisy system precisely because it follows such guides.

    Comment by Ray Ladbury — 13 May 2008 @ 11:20 AM

  304. Ray Ladbury, I am confused :-) The problem at hand is about detailed climate reconstruction from paleorecords. Does your reply mean that now you agree with me (#292) that the problem is ill-posed, and climate scientists need to find another, different way to validate GCMs?

    [Response: Is it ill-posed that there was an ice age 20,000 years ago? Or that rainfall was greater in the Sahara 6000 years ago? Of course not. Therefore your apparently absolutist point is nonsense. The only issue remaining is one of degree - how much detail can we recover from past climates? That will vary as you go back in time and on the size of signal. But blanket claims that this is impossible are just bogus. - gavin]

    Comment by Al Tekhasski — 13 May 2008 @ 11:38 AM

  305. One was review author for one of the working groups for the latest IPCC report, did not dispute the idea that the sun had greater effect on climate than in the models, but simply said that there was no known mechanism so they couldn’t model it

    Uh, pointing out that there is no known mechanism IS disputing the point.

    Politely.

    Apparently too politely, in your case, because you seem to be interpreting the fact that there’s no known mechanism as support for your position that ummm … the KNOWN mechanism by which CO2 forces temp increases is somehow wrong. Because you know in your heart that UNKNOWN mechanisms related to the sun must be the culprit.

    Comment by dhogaza — 13 May 2008 @ 11:41 AM

  306. Rod B #301, #296

    I say it explains the actual physical process of surface radiation
    heating the atmosphere; I think that important in climatology.

    Yes, but not until you get past this:

    … per se (NOTHING ELSE happening) has no effect on the temperature of the
    gaseous environment where the CO2 lives

    …so which will it be?

    … I agree as long as the collision imparts energy to
    vibration and not translation — which is unlikely but not as unlikely as
    I initially thought

    You can ballpark these probabilities, as Ray pointed out. LTE, the Maxwell distribution. And compute the population levels for the excited states, knowing only temperature. Under stationarity, the number of collisional excitations will match the number of collisional de-excitations… otherwise the CO2 would be exchanging net heat energy with the non-greenhouse gases.
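    Martin’s “knowing only temperature” ballpark can be made concrete. A minimal sketch (assuming the CO2 bending mode near 667 cm-1 and a simple two-level Boltzmann ratio with degeneracy ignored; the framing is illustrative only, not anyone’s actual calculation):

```python
import math

# Physical constants (CODATA values, rounded).
H = 6.626e-34   # Planck constant, J s
C = 2.998e10    # speed of light in cm/s, since the mode is given in cm^-1
K = 1.381e-23   # Boltzmann constant, J/K

def excited_fraction(wavenumber_cm, temp_k):
    """Boltzmann ratio n_excited/n_ground for one vibrational quantum."""
    quantum = H * C * wavenumber_cm     # energy of one vibrational quantum, J
    return math.exp(-quantum / (K * temp_k))

# CO2 bending mode (the 15-micron band) at a typical surface temperature:
print(f"n1/n0 at 288 K: {excited_fraction(667.0, 288.0):.3f}")  # a few percent
```

    Near surface temperatures a few percent of CO2 molecules sit in the excited bending state, and under LTE collisional excitations and de-excitations balance to maintain that ratio, which is Martin’s stationarity point.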

    Only because I was trying … to … figure this out piece by piece.

    Sure, understood.

    Comment by Martin Vermeer — 13 May 2008 @ 12:34 PM

  307. Hey, Gavin, I absolutely would not mind if your model would reproduce glaciation-deglaciation cycles for the last 600,000 years, even if their quasi-period or rate of deglaciation were slightly off. However, your approach to validation seems to follow the fine principle of climatology – it is good enough when you feel it is good enough. I know you do not believe that global climate evolves around a natural chaotic attractor, but let me ask this: do you have a model of atmosphere dynamics that calculates (not forces) cloud cover that matches current observations to 1% accuracy?

    [Response: The reason talking to you is pretty pointless is because of comments like this. You started above with an interesting point, you pushed it to a nonsensical extreme and then when called on it, you switch the topic to something completely different. This might be your idea of a dialog, but for most people it's a waste of time. Me included. - gavin]

    Comment by Al Tekhasski — 13 May 2008 @ 12:36 PM

  308. Gavin, the distinction between qualitative modeling (as you said, “ice age”, or “rainfall was greater”, all without any quantification) and quantitative prediction is not a “nonsensical extreme”. Therefore, I was not “called on it”, nor did I switch to a different topic. In case you forgot, various climate “administrations” are pushing _quantitative_ predictions of global temperatures on us, mentioning certain specific numbers. This is quantitative modeling, and it must conform to quantitative standards. If you consider this distinction a waste of time, then that just confirms my description of your methodology. Therefore, please respond to my question about cloud cover in #307. This question is critical for quantitative modeling of climate and the role of CO2.

    [Response: No it isn't. It's just some arbitrary metric you picked so that you don't have to address the other issues. This is a thread on paleo-climate, not cloud modelling, and so either stick to the topic or go elsewhere. - gavin]

    Comment by Al Tekhasski — 13 May 2008 @ 1:11 PM

  309. One issue often discussed regarding the climate in the far past is the so-called Ordovician glaciation. In the late Ordovician, glaciation occurred while CO2 levels are said to have been around 4000 ppm. Hearing this confused me. I tried searching for articles about this period, but was confused even further – many sources write about it, but so far I haven’t found a satisfying explanation of this weird event.

    Could anyone please shed some light on this subject?

    Comment by Vincent van der Goes — 13 May 2008 @ 2:01 PM

  310. Is rainfall a proxy for cloudiness in paleo records?
    I’ve been trying to think of anything that would be.

    Comment by Hank Roberts — 13 May 2008 @ 2:01 PM

  311. No, Gavin, it is not an arbitrary metric, and it is not off topic, since cloud cover is an important part of a good climate model. How important? Look at the full-spectrum model of atmosphere offered here:
    http://forecast.uchicago.edu/Projects/full_spectrum.html
    You can see that a 1.4% change in low cloud cover negates the whole effect of CO2 doubling. Clearly, changes in the process of cloud formation are two orders of magnitude more important than changes in CO2. Do you have paleorecords of cloud cover 100,000 years ago, with about 1% accuracy?

    [Response: Of course not. Frankly, we don't even have that for today's climate. Thus we can know nothing. Brilliant. I'll just pack up and go home then..... Seriously though, the issue is to always take the information that you do have and do your best to make sense out of it, not to sit around wishing for perfection. I think there is useful information in paleo-records and I have written many papers exploiting it - it could be done better of course, but it is pretty amusing to hear over and over that something can't possibly be done when it already is being. You want examples? Try Otto-Bliesner et al (Science, 2006), or Legrande et al (PNAS, 2006) or Schneider von Deimling et al (2005). - gavin]

    Comment by Al Tekhasski — 13 May 2008 @ 2:31 PM

  312. Completely on-topic: I devised an unusual (for me) statistical test to look for fairly short period oscillations (quasi-periodic) in the record of the Holocene central Greenland GISP2 ice core temperatures by Alley. Short period means from 35 to 190 years.

    The method works, in effect, by cutting the entire record into shorter pieces and studying all of those. I found oscillations in the 45–90 year bands in 81–87% of the short periods. For the longer period bands, the oscillations never appear so often.
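    A test of that general kind can be sketched in a few lines (a hypothetical reconstruction using synthetic data, not the actual analysis or the GISP2 record):

```python
import numpy as np

def band_fraction(series, dt, seg_len, period_band):
    """Fraction of segments whose strongest spectral peak falls in period_band.

    series: evenly sampled values; dt: sampling interval (years);
    seg_len: segment length in samples; period_band: (min_yr, max_yr).
    """
    lo, hi = period_band
    hits, total = 0, 0
    for start in range(0, len(series) - seg_len + 1, seg_len // 2):
        seg = series[start:start + seg_len]
        seg = seg - seg.mean()
        power = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(seg_len, d=dt)
        peak = freqs[1:][np.argmax(power[1:])]   # skip the zero frequency
        period = 1.0 / peak
        hits += lo <= period <= hi
        total += 1
    return hits / total

# Synthetic stand-in for an ice-core series: a 65-year oscillation buried in
# noise, sampled every 5 years over 10,000 years.
rng = np.random.default_rng(0)
t = np.arange(0, 10000, 5.0)
x = np.sin(2 * np.pi * t / 65.0) + 0.5 * rng.standard_normal(t.size)
frac = band_fraction(x, dt=5.0, seg_len=80, period_band=(45.0, 90.0))
print(f"Segments with dominant 45-90 yr period: {frac:.0%}")
```

    Counting the fraction of short segments whose dominant period lands in a band is one simple way to make “appears 81–87% of the time” operational.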

    I take this 45–90 year band as being the effect of PDO on the central Greenland temperatures.

    [Response: Why? You need to show a) that there is a signature of the PDO at Summit, and b) that this calibrates to the known variations over the instrumental period. Vague associations based on the frequency domain are very unsatisfactory unless you have a really good reason to expect it (i.e the annual cycle, milankovitch frequencies etc.) - gavin]

    Do note the wider band than is usually attributed to the PDO and the fact that about 15% of the time (by this method) it does not appear at all during the Holocene.

    Comment by David B. Benson — 13 May 2008 @ 4:48 PM

  313. Ah, I thought I remembered this. There’s paleo information suggesting increasing clouds and rain may be near the end of the last really fast CO2 increase — a sudden increase in large scale rapid erosion:
    http://ic.ucsc.edu/~jzachos/eart120/readings/Schmitz_Puljate_07.pdf

    Comment by Hank Roberts — 13 May 2008 @ 4:57 PM

  314. Gavin replied “You need to show a) that there is a signature of the PDO at Summit, and b) that this calibrates to the known variations over the instrumental period.” Summit? Does this mean the location of the GISP2 ice core? Is there a long enough surface temperature record there? (I had thought not.)

    Calibrates? I fear I don’t understand what is expected to calibrate with what.

    Yes, vague associations are indeed unsatisfactory. But since the PDO is supposed to modulate El Nino/La Nina, I thought it ought to show up. It does, sorta. A more thorough study would use a better statistical method and consider several ice cores, not just one.

    Comment by David B. Benson — 13 May 2008 @ 5:48 PM

  315. David, there’s a lot of reading available, e.g.:
    http://adsabs.harvard.edu/abs/2004AGUFMGC44A..04M

    Comment by Hank Roberts — 13 May 2008 @ 6:19 PM

  316. Ray Ladbury #300 said “Think about this in a logical, linear fashion: CO2 forcing is constrained by multiple (>5), independent lines of evidence. For what you are asserting to be true, all of these constraints would have to be wrong and all in the same way and by about the same amount. How likely do you think that is? … You would also have to come up with a new mechanism for heating the planet that gave rise to a 30 year warming trend. And it needs to explain why the stratosphere is cooling etc.”

    I have explained why they are all wrong in the same way and by the same amount. At some point they all work off the same basic assumption and tailor their parameters to the same observations.

    I personally don’t have to come up with a mechanism. The information given by the IPCC makes it clear that such a mechanism has to exist, but that they don’t know what it is. At some point, someone will no doubt work out what the mechanism is. In the meantime, the IPCC have some data on the effect that the unknown mechanism has, so it is unacceptable for them to simply ignore it.

    If you read my reply to Al Tekhasski below, you will see that a warming influence other than CO2 did indeed operate over the last part of the 20th century. Given that its behaviour tallies better with solar variation than with CO2 levels, it may be a clue to the missing mechanism.

    dhogaza #305 said “Pointing out that there is no known mechanism IS disputing the point”.

    Not so. The point not disputed was that the sun had a greater effect on climate than was allowed for in the models. The fact that there is no known mechanism is not disputed by them or me or anyone else. The statement shows that the sun’s effect was ignored. I discussed this with the last scientist that I referred to before – all the time I’m trying to test my ideas to make sure they are robust or to find the flaws in them – and he had quite a lot to say about it, including “Just to say we can’t model something cuts no ice with me at all”.

    —–

    Al Tekhasski #311 said “You can see that 1.4% change in low cloud cover negates the whole effect of CO2 doubling”.

    This is confirmed in the IPCC report, para 1.5.2.

    Al and Gavin, we do have global albedo records for the last decade or so, and by proxy for a previous decade or so, and by much weaker proxy back to about 1900. They show that global albedo declined rapidly over the last part of the 20th century. They also show that global albedo has been increasing since then. See http://www.iac.es/galeria/epalle/reprints/Palle_etal_EOS_2006.pdf The final sentence is : “Accounting for such [global albedo] variations in global climate models is essential to understanding and predicting climate change.”.

    There are a number of papers on the subject, from the same stable, and they spell out some of the complexities in interpreting the information and relating it to global temperature etc. For example, high clouds and low clouds have different net climate effect.

    Comment by Mike — 13 May 2008 @ 6:27 PM

  317. OK, Mike, I have tried to at least get you to learn some of the physics before you take on your crusade against all 150 years of climate science, but you are bound and determined that you, with, what, 6 months of experience, can overturn what physics has built in 150 years. Good luck with that.

    Comment by Ray Ladbury — 13 May 2008 @ 6:46 PM

  318. Hank Roberts (315) — Thanks. Using hint you provided I found this graph

    http://www.ncdc.noaa.gov/paleo/pubs/biondi2001/fig4c-lg.gif

    which seems to give good-enough agreement with my amateur attempt. So this (and some other papers) indeed suggest that the PDO signal ought to show up in GISP2. That’s all I intend to do on this matter.

    It is enough to indicate, to me at least, that paleoclimate data might well be used to further constrain at least some aspects of computerized climate models.

    Comment by David B. Benson — 13 May 2008 @ 6:46 PM

  319. Mike, the Search box (top of page) will save much retyping; for example: http://www.realclimate.org/index.php/archives/2006/02/cloudy-outlook-for-albedo/

    Comment by Hank Roberts — 13 May 2008 @ 7:06 PM

  320. Not so. The point not disputed was that the sun had a greater effect on climate than was allowed for in the models.

    Meanwhile, Leif Svalgaard argues that the models allow for a greater effect on climate than is allowed for by real data (and he’s a solar physicist).

    So, apparently, the models must be wrong, because one of the two proofs given above must be true, right?

    Comment by dhogaza — 13 May 2008 @ 7:18 PM

  321. Martin (306), “……so which will it be?”

    I think we’re pretty close, so to be brief (and repetitive…, sorry): Looking at it stepwise, a photon of energy absorbed into a vibration state of CO2, as a self-contained and unique 1st step, does not alter the classic temperature of that molecule, or of the plethora of molecules in the local environment. This is an important piece of physics as it is one of the many critical individual molecular steps taken toward global atmospheric warming/cooling. Glossing over the pieces, especially if maybe not understood, to get to the end is not sufficiently rigorous and gives AGW a tinge of HPFM; or as the cartoon of the professor explaining his algorithm captions, “and now a miracle happens.” ;-)

    Comment by Rod B — 13 May 2008 @ 8:19 PM

  322. Ray Ladbury #317. My attention is focussed much more on the modelling error than on the physics. That is why I have been in contact with climate scientists, to make sure I’m getting the science bit right – but more importantly perhaps to make sure that solar variance really was left out of the models. I was at a symposium where scientist A said that the effect of solar variation on the Earth’s climate was heavily underestimated in the IPCC models, probably by a factor of about 4, and scientist B (the IPCC one) confirmed that the full effect had been left out of the IPCC models simply because there was no known mechanism. I emailed this to scientist C (another IPCC one) who replied “Precisely. How do you include a mechanism that is unknown?”.

    It is not my intention to work out what the mechanism is, that is physics which is not my domain. But the fact that a possibly very important factor has been left out of the models is in the domain of modelling and is worth pursuing. I have pursued it using every avenue that I can think of, and received confirmation from several sources, including the IPCC report itself and scientists associated with the IPCC, that this factor was indeed left out of the models.

    Hank Roberts #319. Thx.

    Comment by Mike — 13 May 2008 @ 9:24 PM

  323. Hank Roberts #319. Thx. I had a look at the paper you linked. It certainly makes a case for treating ‘Earthshine’ results with caution.

    There is an interesting graph in the ISCCP paper, Part 1
    http://isccp.giss.nasa.gov/zD2BASICS/B8glbp.anomdevs_t.jpg

    It clearly shows the global cloud cover decreasing from 1995-98 to 1999-01, by what appears to be a climate-significant amount, then putting in and maintaining a small rise to 2007.

    I would suggest that it would be worth investigating this as a possible link to solar variance, since it maps much better to solar activity than to CO2 levels.

    There are three possible non-exclusive links between solar variance and climate that have been mooted, that I am aware of : magnetic field, ultraviolet and clouds. I also couldn’t help noticing that the loss of Arctic ice has been ascribed to winds, and that winds got a mention in I think a RealClimate article as possibly accelerating the paleo cycle when it turned down (I should be able to find the links if wanted).

    I would be interested in keeping an eye on these, if anyone has some relevant links.

    Comment by Mike — 13 May 2008 @ 10:08 PM

  324. Rod B #321:

    Looking at it stepwise, a photon of energy absorbed into a vibration
    state of CO2, as a self-contained and unique 1st step, does not alter
    the classic temperature of that molecule, …

    No Rod… you’re inventing your own physics here.

    Temperature is defined (simplifying a little) as the average amount of random motion energy per degree of freedom. Defined as an average, you are not really allowed to apply it to a single molecule, but if we apply it anyway, then, yes, absorption of a photon into a vibrational state, exciting that state in a CO2 molecule, raises the temperature of that molecule. Vibration is a form of random molecular motion. It is part of the definition of temperature. (Why do you think this degree of freedom contributes to the specific heat of the gas?)

    …, or of the plethora of
    molecules in the local environment.

    By the time the added energy is equipartitioned, through collisions, with the local environment, that temperature will have gone up too.

    Comment by Martin Vermeer — 13 May 2008 @ 11:00 PM

  325. Just saw a paper on the GISS website on aerosols. 85% of sulfates are in the NH. Hmmm. No significant warming in SH for the past 30 years where sulfate loading is low and relatively constant. There is warming in the NH, particularly between 1993 and 1998. Sulfate emissions in the NH likely decreased after the 1990 Clean Air Act and the fall of the Former Soviet Union. Coincidence? I think not!

    Comment by Chris N — 13 May 2008 @ 11:52 PM

  326. Mike writes:

    My point is that the models have taken away most of the Sun’s warming, by simply ignoring the evidence, and given it to the CO2 warming, by overestimating ECS.

    We understand your point thoroughly. We just think it’s wrong.

    First of all, for hindcasts the models use the known values of TSI from someone’s compilation (e.g. Lean 2000 or Wang et al. 2004), or the known sunspot counts. For forecasts they can simply impose a reasonable 11 and 22 year cycle.

    Second, and more importantly, the models do NOT take forcing from the solar column and add it into the CO2 column. The CO2 forcing is decided independently of the solar forcing or any other forcing. If you could prove solar had a larger effect, it would NOT prove that CO2 had a smaller effect. It would just mean that some third-party effect had to be countering the CO2.

    Comment by Barton Paul Levenson — 14 May 2008 @ 6:29 AM

  327. Vincent van der Goes posts:

    One issue often discussed regarding the climate in the far past is the so-called Ordovician glaciation. In the late Ordovician, glaciation occurred while CO2 levels are said to have been around 4000 ppm. Hearing this confused me. I tried searching for articles about this period, but was confused even further – many sources write about it, but so far I haven’t found a satisfying explanation of this weird event.

    Could anyone please shed some light on this subject?

    The “carbonate-silicate cycle” acts as a very-long-term stabilizer of temperature on Earth. CO2 in the atmosphere is drawn down by weathering, but raised by volcanic and metamorphic emission. If the Earth heats up, weathering increases, which draws down CO2. If the Earth cools down, weathering decreases, and CO2 builds up in the atmosphere.

    The control mechanism is not perfect. There have been episodes of widespread glaciation in Earth’s history, some possibly so severe as to have created a “snowball Earth” situation (most notably 2300, 800, and 600 million years ago). When that happens, the Earth’s albedo is very high, its temperature is very low, and CO2 has to build up to quite high levels before the ice begins to melt back again.

    Thus it is not sufficient to know what the levels of temperature and CO2 were at different times. You must also know the history. CO2 is not the only thing that affects temperature; albedo and cloud cover and solar input and a host of other factors matter as well. Temperature and CO2 correlate well now because none of the other factors are changing substantially.
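    The thermostat described above can be caricatured numerically. A toy sketch (every number here is invented for illustration; this is not a real geochemical model):

```python
# Toy carbonate-silicate thermostat: volcanic outgassing is roughly constant,
# while silicate weathering draws CO2 down faster on a warmer planet.
# Units and coefficients are made up; only the feedback structure matters.

def step(co2_ppm, dt=1.0, outgassing=1.0):
    """Advance the toy CO2 reservoir by one (very long) time step."""
    temp = 288.0 + 3.0 * (co2_ppm / 280.0 - 1.0)   # crude "climate": warmer when CO2 is high
    weathering = 1.3 ** (temp - 288.0)             # weathering speeds up with warmth
    return co2_ppm + dt * (outgassing - weathering)

co2 = 1000.0   # start far above the equilibrium value
for _ in range(2000):
    co2 = step(co2)
print(f"CO2 relaxes toward ~280 ppm: {co2:.0f} ppm")
```

    Because weathering outpaces outgassing whenever the toy planet is warm, the reservoir relaxes back toward its equilibrium at 280 ppm, which is the stabilizing (but slow and imperfect) behavior described above.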

    Comment by Barton Paul Levenson — 14 May 2008 @ 6:37 AM

  328. Mike posts:

    a warming influence other than CO2 did indeed operate over the last part of the 20th century. Given that its behaviour tallies better with solar variation than with CO2 levels…

    Except that it doesn’t. Sunlight has shown no trend for the past 50 years:

    http://members.aol.com/bpl1960/LeanTSI.html

    Comment by Barton Paul Levenson — 14 May 2008 @ 6:47 AM

  329. Rod B., if you want to apply a stick of dynamite to the log jam that has developed in your mind regarding temperature, energy, degrees of freedom and so on, consider this: In a laser, where you have a population inversion, with more molecules in a high-energy state than in the ground state, the temperature is negative! This makes absolutely no sense if you look at the temperature in terms of average kinetic energy. It makes perfect sense if you look at the physics in terms of the Maxwell distribution. It makes perfect sense if you look at the temperature in terms of the partial of energy wrt entropy. Add more energy, and you will promote even more molecules from the ground to the excited state, so the population is even more inverted. Cogitate on that awhile and hopefully you’ll see that the concept of temperature is a lot more general than simply average KE per dof.

    Comment by Ray Ladbury — 14 May 2008 @ 8:33 AM

  330. #325 Chris N,

    If the observed warming is due to a drop in “dimming”, then how is it that by 2000 surface insolation had not recovered to 1960s levels according to the BSRN & GEBA datasets? I.e., ~0.6 degC of GW even as surface insolation dropped overall….
    Dimming is almost certainly hiding the full impact of CO2-driven warming. For a start, check out Wild & Ohmura’s work.

    By the way, the warming carried right on through 1998 which was an outlier. So that doesn’t work either.

    #328 Barton Paul Levenson,
    Lest we forget…
    Neutron Counts (as you know, a proxy for GCRs) http://cr0.izmiran.rssi.ru/clmx/main.htm
    No trend.
    Nothing.
    Nada.
    Zilch.

    #309 Vincent van der Goes,
    It’s been a few years since I read about the Ordovician and CO2, but I’m not sure CO2 was actually high at the time of glaciation:
    http://www.sciencedaily.com/releases/2006/10/061025185539.htm
    If I remember rightly, the time resolution of the CO2 proxy record is such that CO2 levels may have been a lot lower when the glaciation actually happened than during most of the Ordovician.

    Comment by CobblyWorlds — 14 May 2008 @ 11:15 AM

  331. Barton Paul Levenson #326 said “We understand your point thoroughly. We just think it’s wrong.”. I’m happy to leave it there. When the IPCC are proved to be absolutely right, I’ll write to you all if you’re still around, and concede.

    Barton Paul Levenson #328 said “Sunlight has shown no trend for the past 50 years”.
    Well. I would have been happy to leave it there. I wrote it before reading this next post.

    From the Judith Lean data posted (I presume it was Judith Lean):
    1900 1364.458
    2000 1366.674

    That’s an increase of 2.2 W m-2 over the 20th century. (I checked that these years were in kilter with the years nearby, and you can see the trend clearly in the graph).

    The IPCC report Summary for Policymakers says that “the global average net effect of human activities since 1750 has been one of warming, with a radiative forcing of +1.6 [+0.6 to +2.4] W m–2″.

    So if I read the figures that you posted correctly, the increase in direct solar irradiance over the 20th century exceeded that claimed for AGGs since 1750.

    But we still haven’t allowed for the multiplier effect in the IPCC report para 1.4.3 : “There is increasingly reliable evidence” that a variation of about 0.2 W m-2 in solar irradiance “could cause surface temperature changes of the order of a few tenths of a degree celsius.”

    Now I know that this whole climate thing is complex. I know that there are disputes about the temperature measurements, that there are very significant time lags, that there are lots of things that don’t behave in a linear manner, that there are things whose effect changes in combination with other things, etc etc etc, so expected effects don’t necessarily show up straight away or at the exact amount expected. [BTW this answers CobblyWorlds #330 too]

    But I think the figures you posted provide extremely strong support for my case.

    The sentence at the end of the item you posted “Therefore, increased sunlight cannot be driving the sharp global warming of the last 30 years.” – was that your comment, or Judith Lean’s?

    PS. Given the importance of the Sun to climate, you would expect that there would be a number of solar physicists on the IPCC team. I have been told that Judith Lean was the only one. Is that right?

    [Response: The mistake you are making is a very common one, and you are in good company in making it, but it is still wrong (take your solar number and multiply by 0.7 and divide by 4. And look up Foukal et al (2006) for a more up-to-date assessment). - gavin]

    Comment by Mike — 14 May 2008 @ 6:01 PM

  332. #330 Cobblyguy,

    To make my contention clear: dimming is lower today than in 1960, which is why 2000 was warmer in the NH than 1960. For some reason, most people assume dimming increases along with CO2 concentration. I contend it has been the opposite: lower CO2 concentration in the past with higher dimming, and higher CO2 concentration today with lower dimming. Also, it resulted in less sea ice reflecting irradiance as a compounding effect. Regarding your other point, the vast majority of the warming occurred between 1993 and 1998 (see post 237 above, and yes, there were no error bars provided).

    Comment by Chris N — 14 May 2008 @ 9:53 PM

  333. Martin (324), I’ll admit to finding references that say or imply both sides, though most references just ignore the detail we’re discussing — why is anybody’s speculation. A relatively simple reference is in Wikipedia, though I can’t vouch for its authenticity — I presume it is good though less than peer reviewed. Go see http://en.wikipedia.org/wiki/Thermodynamic_temperature

    Or just buy the following brief excerpts: “The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average—or “mean”—kinetic energy of a specific kind of particle motion known as translational motion.”

    “The kinetic energy stored internally in molecules does not contribute to the temperature of a substance (nor to the pressure or volume of gases).”

    Most refer to internal molecular energy as having “characteristic temperature”, which is not “real” (sensory) temperature, just a convenient construct, though can be confusing.

    So, unless and until the radiation absorbed into vibration or rotation modes gets partitioned out to translation, thermodynamic (real) temperature is not changed. And, this precisely (or close enough…) explains and justifies the variance in specific heat — some energy being added to a molecule(s) with zero change in temperature because of that specific delta-E.

    Comment by Rod B — 14 May 2008 @ 10:33 PM

  334. ps I think it is “characteristic temperature” that Ray is referencing (329). It is also used in reference to radiation and other stuff. And, BTW, it is a very helpful construct — and can mathematically give you negative temperature — so I’m not refuting it at all. I’m simply talking of “real” “thermodynamic” “sensory” temperature. This is all about global warming and “real temperature” is all that counts here. If the temperature can not be felt by anybody or anything, then global warming is a no-op!

    Comment by Rod B — 14 May 2008 @ 10:48 PM

  335. pps and guys who work with “characteristic temperature” just say “temperature” ’cause it’s easier and efficient — they all know of which they speak.

    Comment by Rod B — 14 May 2008 @ 10:51 PM

  336. Re: #331 (Mike)

    I’d really like an answer to this question: why is it that when BPL states that there’s been no solar increase for 50 years, you reply with estimates 100 years apart?

    Here are some more numbers from Lean:

    1950: 1366.022
    2000: 1366.674

    The difference is 0.652. And as Gavin (correctly) points out, that’s the change in solar irradiance, not climate forcing. You have to divide by 4 (for the sphericity of the earth), then multiply by 0.7 (to account for albedo), which gives 0.114 W/m^2 climate forcing change. That’s only one fourteenth the estimated anthropogenic forcing.
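
    The conversion above can be checked in a couple of lines (a sketch; the irradiance values are Lean's as quoted in this comment, and the ~1.6 W/m^2 anthropogenic figure is implied by the "one fourteenth" comparison, not stated directly):

    ```python
    # Sketch of the forcing arithmetic, using the values quoted above.
    tsi_1950 = 1366.022   # Lean reconstruction, W/m^2
    tsi_2000 = 1366.674

    delta_tsi = tsi_2000 - tsi_1950   # change in solar irradiance
    forcing = delta_tsi / 4 * 0.7     # divide by 4 (sphere vs disk), times 0.7 (albedo)

    print(round(delta_tsi, 3))    # 0.652
    print(round(forcing, 3))      # 0.114
    print(round(forcing * 14, 2)) # ~1.6, the implied anthropogenic forcing
    ```
    
    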

    Comment by tamino — 14 May 2008 @ 11:06 PM

  337. Mike, you’re confusing a slight change in _surface_temperature_ with heating the planet up.

    And you’re confusing effects of the very slight solar variation — a fraction of a watt out of a total of some 1366 watts per square meter — with turning up the heat.

    This may affect how and where the winds move a bit. It likely changes the surface temperature, not the temperature of the whole planet — for example, by changing wind patterns so that more cold ocean water mixes with the surface water. Look for a possible connection that fits the magnitude of the energy transfer involved. A small difference can change _how_ the heat moves around on the planet.

    Aug 14, 2007 … north-south position of the atmospheric jet stream …. drives intraseasonal wind variations …
    http://www.pnas.org/cgi/reprint/104/33/13262.pdf

    Comment by Hank Roberts — 14 May 2008 @ 11:52 PM

  338. Rod B #333

    A relatively simple reference is in Wikipedia, though I can’t vouch for its authenticity — I presume it is good though less than peer reviewed. Go see http://en.wikipedia.org/wiki/Thermodynamic_temperature

    “The kinetic energy stored internally in molecules does not contribute to the temperature of a substance (nor to the pressure or volume of gases).”

    OK. This (if not a mistake) differs from the definition of temperature that I learned in school… but note that for LTE in a gas, it is inconsequential.

    Most refer to internal molecular energy as having “characteristic temperature”, which is not “real” (sensory) temperature, just a convenient construct, though can be confusing.

    Well, with an old-fashioned thermometer, heat always reaches the mercury bulb through translational molecular motion… but such temperature measurement is based on LTE anyway, and meaningless without it.

    I assume that the differences you find between sources are simply due to starting from different definitions. The physics is the same.

    A problem I see with the Wikipedia definition here is that with solids, you’re in a bit of ambiguity trouble. In a molecular solid (ice) you could say that kinetic temperature is defined by the vibrations of the H2O molecules. Now look at salt, NaCl. No molecules. Do you have to look at Na and Cl separately? Or diamond, one gigantic molecule. Why treat the C’s in a diamond separately if you’re not allowed to do the same for the C’s in an organic molecule? This is arbitrary. Needless to say I am unhappy with the Wikipedia definition (if that’s what it is).

    BTW not even Wikipedia is completely unambiguous. It says

    “At its simplest, “temperature” arises from the kinetic energy of the vibrational motions of matter’s particle constituents (molecules, atoms, and subatomic particles).” Would seem to agree with me.

    “The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average—or “mean”—kinetic energy of a specific kind of particle motion known as translational motion.” My emphasis. It does not say “is defined by”. For LTE the statement is correct.

    I like my definitions clean and general. Ray’s definition d(energy)/d(entropy) certainly is, but not very intuitive.
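
    For reference, Ray's d(energy)/d(entropy) definition written out (the fixed-volume and particle-number subscripts are the usual thermodynamic convention, added here):

    ```latex
    \frac{1}{T} \;=\; \left(\frac{\partial S}{\partial U}\right)_{V,N}
    \qquad\Longleftrightarrow\qquad
    T \;=\; \left(\frac{\partial U}{\partial S}\right)_{V,N}
    ```

    A population inversion (as in a laser) is a state where adding energy decreases entropy, which is how this definition can yield the negative temperatures mentioned earlier in the thread.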

    Comment by Martin Vermeer — 15 May 2008 @ 1:50 AM

  339. Rod B #334:

    ps I think it is “characteristic temperature” that Ray is referencing (329).

    Yes. Note that a laser is far from LTE.

    It is also used in reference to radiation and other stuff.

    That’s again a bit different.

    And, BTW, it is a very helpful construct — and can mathematically give you negative temperature — so I’m not refuting it at all. I’m simply talking of “real” “thermodynamic” “sensory” temperature.

    Under LTE they are all the same. Up in the atmosphere, the “sensory” temperature is actually the least interesting of them all… it’s all about interaction with radiation, not with thermometers or with people’s skins :-)

    Comment by Martin Vermeer — 15 May 2008 @ 2:33 AM

  340. RE: 327 (Barton Paul Levenson)

    > The control mechanism is not perfect. There have been episodes of
    > widespread glaciation in Earth’s history, some possibly so severe
    > as to have created a “snowball Earth” situation (2300, 800, and 600
    > million years ago most noticeably). When that happens, the Earth’s
    > albedo is very high, its temperature is very low, and CO2 has to
    > build up to quite high levels before the ice begins to melt back again.

    So albedo is a strong positive feedback, in both directions. However, since the period prior to this glaciation was hot (and CO2 rich), something other than albedo must have initiated the ice age.

    > Thus it is not sufficient to know what the levels of temperature and
    > CO2 were at different times. You must also know the history. CO2 is
    > not the only thing that affects temperature; albedo and cloud cover
    > and solar input and a host of other factors matter as well. Temperature
    > and CO2 correlate well now because none of the other factors are changing
    > substantially.

    Solar input was a little lower back then. And one should also take the geographical positions of the continents into account. Still, I find it amazing that such factors could have initiated an ice age, when (if) CO2 levels were indeed this high. After all, 4000 ppm is almost four doublings above pre-industrial levels.
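
    That "almost four doublings" figure checks out (a quick sketch, using the 4000 ppm value from this comment and the usual ~280 ppm pre-industrial level):

    ```python
    import math

    # Number of CO2 doublings between pre-industrial (~280 ppm) and 4000 ppm.
    doublings = math.log2(4000 / 280)
    print(round(doublings, 2))  # 3.84
    ```
    
    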

    RE: 330 (Cobblyworlds)

    Thanks for the link. Unfortunately, I could not find any quantitative estimates in this article of the CO2 levels prior to and during the ice age. The source I am relying on now is Wikipedia: http://upload.wikimedia.org/wikipedia/commons/7/76/Phanerozoic_Carbon_Dioxide.png
    This information could be outdated or incomplete, of course.

    Comment by Vincent van der Goes — 15 May 2008 @ 3:32 AM

  341. Re: #336

    Here are some more numbers from Lean:

    1950: 1366.022
    2000: 1366.674

    Here are some numbers from GISS

    1944: 0.21
    2000: 0.33

    an increase of just over one tenth of a degree.

    Is it not possible that solar forcing actually has a cumulative effect over time. Now let me think where might it store heat …. what about the oceans??

    Comment by John Finn — 15 May 2008 @ 5:44 AM

  342. This is a good overview:

    http://hyperphysics.phy-astr.gsu.edu/Hbase/thermo/temper2.html

    Comment by Martin Vermeer — 15 May 2008 @ 6:30 AM

  343. Mike posts:

    Barton Paul Levenson #328 said “Sunlight has shown no trend for the past 50 years”.
    Well. I would have been happy to leave it there. I wrote it before reading this next post.

    From the Judith Lean data posted (I presume it was Judith Lean):
    1900 1364.458
    2000 1366.674

    That’s an increase of 2.2 W m-2 over the 20th century.

    Do you not understand what “the last fifty years” means? What year is it now? What year was Lean’s article published in? How did you get from “the last fifty years” to “the 20th century?”

    Comment by Barton Paul Levenson — 15 May 2008 @ 7:38 AM

  344. Chris N., The data say you are wrong:

    http://www.realclimate.org/index.php/archives/2007/11/global-dimming-and-global-warming/

    Comment by Ray Ladbury — 15 May 2008 @ 8:02 AM

    John Finn, Gee, if the warming is coming from a source of heat within the Earth, how do you get zero warming for 30 years, followed by 30 years of accelerating warming? That seems to me to be a rather tall order.
    The game you are playing here is explaining what we don’t know by what we don’t know. That isn’t what scientists do — they explain the unknown in terms of the known. However, you are free to prove me wrong: construct a model that hides heat in the oceans for 30 years and then suddenly dumps it into the world we can see.

    Oh, and for extra credit, show me why the greenhouse effect due to CO2 magically stops at 280 ppmv.

    Comment by Ray Ladbury — 15 May 2008 @ 8:07 AM

    Gavin’s response to #331 : “take your solar number and multiply by 0.7 and divide by 4″. Of course. I should have checked. Apologies. Let’s do the arithmetic with the correct numbers:

    1900 1364.458
    2000 1366.674

    That’s an increase of 2.2 W m-2 over the 20th century; multiplying by 0.7 and dividing by 4 gives 0.385 W m-2.
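
    Redoing the same conversion without rounding the difference first (a sketch; the irradiance values are those quoted above):

    ```python
    # The century-long span, same divide-by-4, times-0.7 conversion as before.
    tsi_1900 = 1364.458
    tsi_2000 = 1366.674

    delta = tsi_2000 - tsi_1900   # 2.216 W/m^2, quoted as "2.2"
    forcing = delta * 0.7 / 4     # albedo and sphericity factors

    print(round(delta, 3))    # 2.216
    print(round(forcing, 3))  # 0.388 (the 0.385 above comes from rounding to 2.2 first)
    ```
    
    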

    Now let’s add in the IPCC report, para 1.4.3: “There is increasingly reliable evidence” that a variation of about 0.2 W m-2 in solar irradiance “could cause surface temperature changes of the order of a few tenths of a degree celsius”. Now it might not be linear, but we’re probably looking at about double “a few tenths of a degree Celsius”. The total global warming for the 20th century was 0.74 deg C (IPCC).

    That was a lot more than I needed for my case, but I can accept that.

    tamino #336 said “I’d really like an answer to this question: why is it that when BPL states that there’s been no solar increase for 50 years, you reply with estimates 100 years apart?”.

    I have been referring to the 20th century for quite a long time. Why did BPL suddenly switch to 50 years? Simple, just look at the graph he posted. 50 years was much more convenient for him than the full century. Given that we all agree that short periods are not very relevant in climate discussions, it seems to me that there is merit in working with longer periods when possible.

    Hank Roberts #337 said “Mike, you’re confusing a slight change in _surface_temperature_ with heating the planet up.”. Well, no. I’m referring to the same _global_ surface temperature that the IPCC reference for their ‘global warming’ figures. The slight solar variation you refer to is the same idea as the solar variation I did the sums for above. It looks slight, but as the IPCC points out, there is evidence that it can punch above its weight.

    Thanks for your comments on winds etc, I haven’t looked at the link yet, but I will.

    Vincent van der Goes #340 said “So albedo is a strong positive feedback, in both directions. However, since the period prior to this glaciation was hot (and CO2 rich), something else than albedo must have initiated the ice age.”.

    There is also the possibility that albedo is not a feedback at all. That it operates independently of CO2. That would mean that it definitely is possible that albedo did initiate the ice age. This idea opens up the whole of the paleoclimate cycle to a _much_ simpler explanation than James Hansen’s.

    Comment by Mike — 15 May 2008 @ 9:12 AM

  347. “There is also the possibility that albedo is not a feedback at all. That it operates independently of CO2.”

    News Flash: Albedo does operate independently of CO2.

    Albedo change is a feedback to warming or cooling, regardless of the initial cause of that warming or cooling.

    What ever made you think otherwise?

    Comment by Jim Eager — 15 May 2008 @ 9:48 AM

  348. Re: #346 (Mike)

    50 years was much more convenient for him than the full century.

    It’s not because 50 years is more convenient. It’s because it’s more relevant. You’re the one who’s choosing time periods because they’re convenient.

    Comment by tamino — 15 May 2008 @ 10:21 AM

  349. Re #340 where Vincent wrote:

    So albedo is a strong positive feedback, in both directions. However, since the period prior to this glaciation was hot (and CO2 rich), something else than albedo must have initiated the ice age.

    The Ordovician glaciation was restricted to the land near the south pole, which would have been cooler than the non-polar regions. This was also during the start of the Caledonian orogeny, and the mountain tops could have become snow covered, since at high altitudes there is little water vapour and, although the concentration of CO2 was high, its density would have been much lower in the rarefied atmosphere at high altitude.

    The glaciation possibly coincided with the Trans African mountains passing over the pole, which could have become ice covered just as the Himalayas are today at a lower latitude.

    Albedo is raised, not only by the ice but also by clouds. An ice covered pole would have initiated a polar vortex that would have created a cloud band around the polar region. The cold air flowing out from the ice covered mountains would have under cut the warm marine air coming from the sub-tropics.

    The lesson here is that without the Arctic sea ice there will be no northern polar vortex. The resulting loss of clouds will cause a decrease in planetary albedo and a massive rise in global temperatures.

    HTH,

    Cheers, Alastair.

    Comment by Abbe Mac — 15 May 2008 @ 11:07 AM

  350. #340 Vincent van der Goes,

    I’m fairly sure I have nothing more than you have from Wikipedia.

    Paleo-climate is not my primary interest, and I’m not a scientist, just a former-sceptic turned hobbyist, so don’t take this as “gospel”:

    I noted your implied question “something else than albedo must have initiated the ice age.” Just to make sure, I hope I’m not pointing out the obvious (although obvious is subjective): The reason I posted that article was because the hypothesis discussed is that a tectonic accident (the rise of the Appalachians) exposed rock to weathering, and it was the chemical weathering that used up atmospheric CO2. So the CO2 in the atmosphere was much reduced, allowing temperatures to fall enough for an ice age to start, at which point ice albedo would become a factor. As to what ended it, probably volcanic CO2, possibly in some major event (flood basalt perhaps).

    I’d advise against following the curves in the Wikipedia image too closely. Note that the Royer compilation is a set of spot “samples” of CO2 levels (detail may be lost between samples), with large error bars, and the 30M-year filtering just smooths that same data. Furthermore, as you can see, the other readings are models. Royer (which is the only actual proxy-based dataset) starts just before the Ordovician so doesn’t help anyway.

    And yes ice-albedo feedback can be a very strong feedback: Over sea, exposing water gives a net gain of the order of +60% increased absorption of the incident solar radiation.

    #332 Chris N,

    To make my contention clear, dimming is lower today than 1960, thus the reason for 2000 to be warmer in NH than in 1960.

    No need to clarify, I understand you.

    But GEBA/BSRN indicate that dimming in the 1960s was less than in 2000. As dimming reduces surface insolation, that’s why I said “insolation had not recovered to 1960s levels”, insolation is incoming solar radiation.

    So what you’re claiming is that the warming is due to an overall drop in solar radiation at the surface!

    Comment by CobblyWorlds — 15 May 2008 @ 12:04 PM

    Mike, 50 years, because up until the middle of the 20th century the forcings attributable to solar and CO2 were indeed comparable. Since then, not so.

    http://upload.wikimedia.org/wikipedia/en/a/a2/Climate_Change_Attribution.png

    Comment by Hank Roberts — 15 May 2008 @ 1:11 PM

  352. Mike, another useful chart
    http://www.scs.carleton.ca/~schriste/data/Carbon-Atmosphere-Mass_files/State%20of%20the%20World_28025_image001.gif

    Comment by Hank Roberts — 15 May 2008 @ 1:22 PM

  353. > without the Arctic sea ice there will
    > be no northern polar vortex.

    Alastair, you’re approaching a bettable statement. What about during the months when there’s the least sea ice — will there be less of a North polar vortex? Or are you saying after X years with zero sea ice summer and winter, the vortex will change by Z amount?

    Just trying to work up some betting activity, it’s fun to watch.

    Comment by Hank Roberts — 15 May 2008 @ 4:54 PM

  354. Jim Eager #347 said “News Flash: Albedo does operate independently of CO2. Albedo change is a feedback to warming or cooling, regardless of the initial cause of that warming or cooling. What ever made you think otherwise?”.

    Thanks for that. I trust and believe that you are correct in saying that albedo operates independently of CO2. However, the IPCC report shows that they believe that clouds (i.e., albedo) are driven by CO2. See my post #211 :
    “You could start with the IPCC report, section 8.6.3.2. There, they say that the basic Equilibrium Climate Sensitivity (ECS) is 1.2 (this compares OK with the 1.1 in Stephen E Schwartz’s paper http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf). They then bump it up to 1.9 for “water vapor feedback”, and from there to 3.2 for “cloud feedback”. They also make it abundantly clear that they have no idea how clouds work (that permeates the whole IPCC report), and that the cloud “feedback” they use for ECS is a major uncertainty.”.

    tamino #348 said “It’s not because 50 years is more convenient. It’s because it’s more relevant. You’re the one who’s choosing time periods because they’re convenient.”.

    I have been referring to the 20th century consistently. As I said before, this climate thing is complex, and among other things there are some big time lags. The longer the periods we deal with, the more likely we are not to be confused by short term fluctuations. However, I’m prepared to look at both periods.

    The full 20th century supports my case.

    1950-2000 : The graph posted by BPL won’t now come up (BPL, is it still on your website?), but from memory it showed an extended period of increasing sun strength from, I _think_, well before 1900 but certainly from 1900 up to 1950. From 1950 to 2000 it showed significant fluctuations but no obvious overall trend. In searching for an alternative source, I came across this article in New Scientist, vol 178, April 2003, page 14: http://www.restena.lu/meteo_lcd/globalwarming/sun/hyperactive_sun/sun_fuels_debate_on_climate_change.html
    It shows that there is or was a raging argument about whether there was in fact an increase over that period.
    Given the substantial time lags that can occur, it is possible that the sun’s increase before 1950 would take time to come through. Alternatively, given that during the whole of 1950-2000 there was unusually high sun strength, it would not be at all surprising if it had an overall warming effect that didn’t all show up straight away.

    Either way, my contention is only that the IPCC was not entitled to simply ignore it.

    CobblyWorlds #350 : I saw that bit about rocks weathering some time ago. I thought it rather unlikely that they would manage to weather regularly every 150,000(?) years. Albedo is such a powerful force, it seems a much more likely candidate for delivering such a strong change. And it has the potential to explain both ends of the cycle.

    Hank Roberts #351, 352. Those two charts that you posted should both have CO2 on a logarithmic scale not a linear scale, in order to make a meaningful comparison. It would make what looks like a spectacularly brilliant fit into something much more ho-hum.

    I am not trying to explain the whole world here. I am simply saying that the IPCC has ignored something potentially significant, and they were not justified in doing so. The way the climate is behaving at the moment – several years of cooling – it is starting to look like the missing factor really does make a difference. The next few years could be interesting.

    Comment by Mike — 15 May 2008 @ 6:03 PM

  355. Hank,

    I am saying that when the multiyear ice is gone and the Arctic is clear of ice during the summer, the seasonal ice will not reform in the winter. An inversion will form over the Arctic and the clouds will keep the Arctic warm during its six month night.

    Comment by Alastair McDonald — 15 May 2008 @ 6:43 PM

  356. Alastair, for what value of “gone” — when will the observation be timely to look and see? It’s a solid prediction, if you define the terms clearly enough.

    Comment by Hank Roberts — 15 May 2008 @ 9:48 PM

    Martin, I think if NaCl is ionized, as in solution, each atom counts; if not, as in crystals or molten salt, the molecule counts as one. Though I may be wrong… A solid’s molecular vibrations are just like a gas’s center-of-mass molecular translation, just with a much shorter path (and never really colliding) and more of them.

    The single step scenario (photon energy in a molecular vibration energy state) is not at LTE, to the best of my knowledge.

    Comment by Rod B — 15 May 2008 @ 10:05 PM

    Trying to follow the CO2/solar/albedo thread, I have a simple question I’d like to verify. Are the changes in polar ice really that significant vis-à-vis albedo? Less than 10% of incoming insolation reflects off the surface, and I would think the effective watts/m^2 reflected from polar ice would be a very small portion of that — solar rays coming in at very oblique angles and everything. (In some cases solar rays not hitting the ice at all for long periods.)

    [Response: Locally yes, though mostly in spring. - gavin]

    Comment by Rod B — 15 May 2008 @ 10:15 PM

  359. Re # 346 Mike, and responses by Tamino (348)and Hank Roberts (351, 352)

    Mike,
    This paper might help you understand why the past 50 years is considered more relevant than the past 100 years:

    David J. Karoly et al (2003) Detection of a Human Influence on North American Climate. Science 14 November 2003: Vol. 302. no. 5648, pp. 1200 – 1203
    Several indices of large-scale patterns of surface temperature variation were used to investigate climate change in North America over the 20th century. The observed variability of these indices was simulated well by a number of climate models. Comparison of index trends in observations and model simulations shows that North American temperature changes from 1950 to 1999 were unlikely to be due to natural climate variation alone. Observed trends over this period are consistent with simulations that include anthropogenic forcing from increasing atmospheric greenhouse gases and sulfate aerosols. However, most of the observed warming from 1900 to 1949 was likely due to natural climate variation.

    http://www.sciencemag.org/cgi/content/abstract/302/5648/1200

    Comment by Chuck Booth — 15 May 2008 @ 10:51 PM

  360. I am saying that when the multiyear ice is gone and the Arctic is clear of ice during the summer, the seasonal ice will not reform in the winter. An inversion will form over the Arctic and the clouds will keep the Arctic warm during its six month night.

    I disagree.

    The transition to a seasonally ice-free state is likely within 10 years or so (in fact I’ll be stunned if there isn’t at least 1 such occurrence by 2018). After the first occurrence of an ice-free state in the late summer we may well see the next year with a bit more September ice (not ice-free). Because of winter (e.g. this last winter) there will be a degree of decoupling between successive summer minima.

    The switch to a seasonal ice cap will be a rapid process, due to the aggressiveness of the processes causing the rapid loss of perennial ice. But the next transition, to a year-round ice-free state, will be more gradual. As the Arctic region warms, the ice-free period of the Arctic ocean during the summer will start earlier and earlier. But we’ll still have a winter ice cap for decades to come, subject to the amount of IR blocking by the increasing levels of GHGs (mainly CO2/CH4).

    I don’t see reason to anticipate the sort of massive and abrupt (~1 year) climatic impact(transition) outside of the Arctic that your suggestion seems to imply.

    Comment by CobblyWorlds — 16 May 2008 @ 1:34 AM

  361. I would just like to thank you guys for the explanations – there is a saying one madman can ask more questions than ten wise men can answer. Thanks for your time.

    The ordovician glaciation period is rather ill-understood by many (including me, until very recently) and sometimes being (mis)used in discussions, so that’s why I asked.

    Comment by Vincent van der Goes — 16 May 2008 @ 3:59 AM

  362. Re: #345

    John Finn, Gee, if the warming is coming from a source of heat within Earth, how do you get zero warming for 30 years, followed by 30 years of accelerating warming. That seems to me to be a rather tall order.

    Cool PDO in the first 30 years.

    Ask yourself this question: based on the Lean TSI reconstruction (as is often quoted here)do you think temperatures should be higher at the end of the 20th century than in the mid-1940s? the same? or lower?

    Comment by John Finn — 16 May 2008 @ 7:03 AM

    If the AO were to be conducive to retention for several years, how could there not be a recovery of some sort for the perennial ice?

    [Response: That already happened. But the ice did not recover. - gavin]

    Comment by JCH — 16 May 2008 @ 8:37 AM

  364. Rod B,
    Be very careful when transitioning from the behavior of gases to solids. Solids are a collective entity; vibrational energies are quantized as phonons. Likewise, the electronic levels interact and distort, giving rise to energy bands — e.g. valence and conduction bands. This gives rise to some very odd collective behavior. (Google “high-electron-mobility transistor” or “quantum Hall effect”.) Likewise in solution, the solute interacts with the solvent, especially if both are ionic.

    The energy levels of an atom/molecule, etc. depend on what matter is around and how much and how strong the interactions between atom/molecule and surrounding matter are.

    Comment by Ray Ladbury — 16 May 2008 @ 9:01 AM

  365. John Finn, TSI is not changing much, and insolation is actually decreasing. Given the increase in ghgs, which is the dominant change, I would expect increasing temperatures. You are of course free to try construct a climate model that explains current climate without an enhanced greenhouse effect. Good luck with that. Of course, you’d still have to explain why the influence of CO2 magically stops at 280 ppmv, but I think the model task ought to keep you busy for now.

    Comment by Ray Ladbury — 16 May 2008 @ 9:12 AM

    Ray (364), I meant to offer just a simple comparison, and went a bit too far. Thanks for the correction.

    Comment by Rod B — 16 May 2008 @ 1:06 PM

  367. Hank,

    In #356 you wrote:

    Alastair, for what value of “gone” — when will the observation be timely to look and see? It’s a solid prediction, if you define the terms clearly enough.

    Earth science is fractal, so for every rule there is an exception. I am saying that the Arctic will be ice free but there may be some ice forming each year off the coast of Greenland. And what about calving glaciers and ice shelves? Do they count?

    Anyway I am not a betting man, and I see Gavin et al.’s betting challenge as trivialising the dangers of climate change, especially with the catastrophe that will unfold when the Arctic sea ice goes.

    More to follow,

    Cheers, Alastair.

    Comment by Alastair McDonald — 16 May 2008 @ 1:25 PM

  368. Rod B #357:

    The single step scenario (photon energy
    in a molecular vibration energy state)
    is not at LTE, to the best of my
    knowledge.

    Yes, I would agree, sort of. And certainly see your point. But you could meaningfully say so only about a large number of molecules, not one. LTE, or non-LTE, are meaningful concepts only statistically. But in the real Earth atmosphere (except very high up) you will have LTE and a mix of vibrational and translational states.

    What you’re trying to do is a bit like trying to study the moment between a cartoon character stepping off a cliff, and beginning to fall. Not necessarily very wrong, but not very helpful for understanding either :-)

    BTW there are no molecules in a salt crystal — it’s a 3-D checkerboard grid of Na+ and Cl- ions. You see my problem now?

    Comment by Martin Vermeer — 16 May 2008 @ 2:41 PM

  369. Rod,

    Re #358, where Gavin responded: “Locally yes, though mostly in spring.” I think he was referring to your comment:

    … I would think the effective watts/m^2 reflected from polar ice would be a very small portion of that — solar rays coming in at large obtuse angles and everything.

    Your question was:

    Is the changes in polar ice really that significant viz-a-viz albedo? Less than 10% of incoming insolation reflects off the surface …

    The Arctic Ocean covers an area of 14 M km^2. The area of the globe is about 510 M km^2. Taking the average albedo of the surface as 0.1 and the average albedo of the Arctic ice as 0.9, the Arctic contributes 0.9 * 14 = 12.6 (in albedo-weighted area), compared with an overall surface figure of 510 * 0.1 = 51. In other words, about a quarter of all surface reflection, or 2.5% of incoming insolation. That is (12.6/510) * 348 ≈ 8.6 W/m^2, more than twice the forcing from doubling CO2! And it is going to happen much faster as well :-(

    As Gavin pointed out, the oblique angle of the sun’s rays applies mainly in the spring, and it increases the albedo because the rays glance off the ice. In the autumn, when much of the ice has melted, the rough sea surface absorbs the solar rays, reducing the albedo. Without the formation of seasonal ice during winter, any spring wind will roughen the sea surface and its albedo will be much lower.

    One point not often realised is that in the Arctic the sun shines for the same fraction of the year as everywhere else – 50% of the time. In the Arctic it shines continuously for six months, with no chance for the surface to cool down overnight. This constant radiation will raise the sea surface to a high temperature, without ice to reflect the radiation or melting ice to absorb the heat.

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 16 May 2008 @ 3:53 PM

  370. Re #360 where Cobblyworlds says:

    The transition to seasonally ice-free state is likely within 10 years or so (in fact I’ll be stunned if there isn’t at least 1 such occurrence by 2018).

    I won’t be surprised if the Arctic is ice free all year round by 2018, although the collapse of the Arctic sea ice has taken longer than I expected :-)

    After the first occurrence of an ice-free state in the late summer we may well see the next year with a bit more September ice (not ice-free). Because of winter (e.g. this last winter) there will be a degree of decoupling between each summer minima.

    There is no guarantee that in the winter following an ice free Arctic that the ice will grow to a greater extent than the previous year. There may well be a tipping point below which when the summer ice is less than a minimum the winter ice is always less than that of the previous year.

    For instance, the increase in winter ice this year over last may be due to the fresh water produced by the melting of the multi-year ice last summer. When there is no multi-year ice left to melt, then the winter ice will not extend over that of the previous year.

    … the next transition to a year-round ice free state will be more gradual. As the Arctic region warms, the period of ice free arctic ocean during the summer will start earlier and earlier. But we’ll still have a winter ice cap for decades to come, subject to the amount of IR blocking by the increasing levels of GHGs (mainly CO2/CH4).

    At present there is enough solar radiation, despite the high albedo of ice, to melt about 0.1 m (4 inches) of the multi-year ice per year. Without the ice, the albedo will decrease and the solar flux will warm the ocean surface rather than melt ice. That means instead of melting the top 0.1 m of ice, it will be able to raise the temperature of the top 0.1 m of the ocean by roughly 80 degrees Celsius, or 144 F degrees. Of course much of the heat will be lost to the latent heat of evaporation, but this will create water vapour, which is a greenhouse gas. It is the runaway effect of water vapour we should be fearing, not that of CO2 or CH4.
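
    The ~80 degree figure follows from standard constants: the latent heat of fusion of ice is roughly the heat needed to warm an equal mass of liquid water by 80 K (a sketch; this ignores the ~10% density difference between ice and water):

    ```python
    # Heat that melts 1 kg of ice vs heat that warms 1 kg of liquid water.
    L_fusion = 334.0  # kJ/kg, latent heat of fusion of ice
    c_water  = 4.18   # kJ/(kg*K), specific heat of liquid water

    delta_T = L_fusion / c_water  # temperature rise from the same heat input
    print(round(delta_T))         # ~80 K (i.e. 144 Fahrenheit degrees)
    ```
    
    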

    I don’t see reason to anticipate the sort of massive and abrupt (~1 year) climatic impact (transition) outside of the Arctic that your suggestion seems to imply.

    It has happened in the past, when the Younger Dryas ended in just three years. I suspect last year was year one of the end of the Holocene. Let’s hope I am wrong :-(

    Cheers, Alastair.

    Comment by Alastair McDonald — 16 May 2008 @ 5:14 PM

  371. Chuck Booth #359 your paper said “Comparison of index trends in observations and model simulations shows that North American temperature changes from 1950 to 1999 were unlikely to be due to natural climate variation alone.”.

    That’s fine. But that statement clearly suggests that at least some of the temperature change came from natural forces, otherwise it would have said something like “… it was likely that none of the temperature change was natural …”.

    All I am arguing is that there is likely a significant factor that the IPCC was unjustified in ignoring, and that if this factor was allowed for properly then the effect of CO2 on climate in the models would likely be reduced (because of the way the models were built). I don’t think I have stated it explicitly, but this would undoubtedly weaken the IPCC’s case.

    We have an interesting situation right now, with
    cooling surface http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt,
    cooling oceans http://www.agu.org/pubs/crossref/2006/2006GL027033.shtml http://www.npr.org/templates/story/story.php?storyId=88520025,
    (I don’t know what the troposphere is doing – anyone?),
    record high S.H. sea ice area http://nsidc.org/data/seaice_index/,
    winter N.H. ice this year beating 2004,5,6,7, and even matching 1996 http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/current.area.jpg,
    and the Sun resolutely refusing to give us another sunspot http://www.swpc.noaa.gov/ftpdir/forecasts/SRS/0516SRS.txt.

    The natural factors are clearly outweighing the dramatically increasing CO2, at least in the short term. If nothing else it tells us that maybe solar variation is more powerful than was thought, and that we should look very carefully to see if the IPCC really was justified in ignoring it in the computer models. There is support for this from many places, e.g. http://www.sciencedaily.com/releases/2002/11/021113070418.htm http://www.abc.net.au/news/stories/2008/04/24/2225980.htm

    Hank Roberts #337. I read the attached paper on winds. Thx. The paper only dealt with intraseasonal fluctuations, so I don’t know that it throws much light on anything to do with long timescales???

    PS. http://www.spaceweather.com/ says that three new sunspots are emerging.

    Comment by Mike — 16 May 2008 @ 5:23 PM

  372. Alastair McDonald (370) wrote “I suspect last year was year one of the end of the Holocene. Let’s hope I am wrong”. Beyond hope. The only quibble is what year to pick as the last one of the Holocene and then the next is the first of the Anthropocene.

    Comment by David B. Benson — 16 May 2008 @ 5:51 PM

  373. Rod B. #357, Think about this. How do you get anything like thermal equilibrium with a single molecule–equilibrium is a purely statistical concept. So is temperature. It only makes sense when viewed in terms of a statistical mechanical state.

    Comment by Ray Ladbury — 16 May 2008 @ 6:22 PM

  374. Mike, What part of La Nina do you not understand? That is all the “natural factors” you need to explain what is going on. It is virtually certain that there are missing factors, and just as certain that they aren’t all that important. If you are going to propose another factor, you first have to explain why the constraints on CO2 forcing are wrong.

    Comment by Ray Ladbury — 16 May 2008 @ 6:26 PM

  375. I think it’s fair to say that the Holocene is long gone. The Anthropocene started in, what, 1850? 1900 at the latest.

    Re: the transition to an ice-free winter Arctic. I tend to agree with Cobblers. The Arctic heat budget includes a large (though decreasing as GHGs increase) negative term during winter. For every summer that’s ice-free, heat will accumulate in the upper ocean and it will take longer for the winter freeze-up to begin. That’s what will have the immediate impact on NH climate. Lots of heat and water vapour hanging around into early winter. The last NH winter perhaps gives an idea of what to expect…

    With virtually all the multi-year ice gone, the freeze-thaw cycle will probably look a lot more like Antarctica’s, and I wouldn’t be surprised if increased freshening of the Arctic from river run-off meant that for a few years we had unusually high winter extents as well as record summer lows.

    However, as long as the positive terms in the heat budget exceed the winter negative, the Arctic will warm. Over the last decade(s), much of that has gone into melting multi-year ice. When that’s all gone, the medium/long term outlook is for an ice-free Arctic all year round. The only question is how long will it take…

    Comment by The Tuatara — 16 May 2008 @ 6:40 PM

  376. Mike, you quote Lyman.

    That suggests a useful exercise.

    Put the name in the Search box at the top of the page here.

    And, put the reference into Google Scholar, like this:
    http://scholar.google.com/scholar?q=doi%3A10.1029%2F2006GL027033

    (Note the top three hits, read them, consider what difference it makes to what you posted above.)

    General advice — whenever you talk about a study, always look up the original, you can get at least the basic info, and usually the abstract. Then click the link usually available for ‘cited by’ or ‘related’ to see newer papers that reference the one you found.

    This is why you have to look at the original work, to track it forward in time.

    Comment by Hank Roberts — 16 May 2008 @ 6:52 PM

  377. Ray Ladbury #374 said “you first have to explain why the constraints on CO2 forcing are wrong”.

    I have done that, at some length, in some detail, and over many posts. I really doubt that anyone here wants me to go through it all again.

    Hank Roberts #376. Thx. I’ve not done it yet, but I will.

    Comment by Mike — 16 May 2008 @ 9:57 PM

  378. Arctic (my current obsession),

    Water vapour is a greenhouse gas, but its concentration in the atmosphere is limited by temperature. So when insolation disappears as the Arctic “night” begins, cooling will occur and atmospheric water vapour concentrations will be reduced. The reason I focus on CO2/CH4 levels as key to the winter ice’s fate is that their atmospheric levels will not be reduced by a reduction in temperature. Although of course water vapour will still be able to act as an amplifier.

    For me water vapour is the source of the primary climatic impact in summer because of baroclinicity and latent heat impacts being able to extend up into the arctic troposphere (and subsequently stratosphere). It seems to me that any wider climatic impact will come from this rather than (direct) sensible surface heat changes due to insolation. The risk of a wider climate impact outside the Arctic seems to me to be linked to impacts on the Polar Vortex, affecting the AO/NAO. Also the possibility of Rossby waves teleconnecting to pressure patterns in the Northern Pacific seems to me to suggest potential for influence on ENSO (and needless to say, US weather). The most dangerous impacts could be from changes in timing/amounts of precipitation. Manageable in themselves, but worrying in view of the current food-price situation and risks to political stability. In my post #174 I allude to what I see as the most worrying single implication, to which I would add that the Arctic Basin has become more stormy due to increased baroclinicity.

    All that said, I am on a very steep learning curve, so may have completely the wrong end of the stick. ;)

    Feel free to correct me if that’s the case. (I can provide references for the statements above – but it’s early Saturday morning and I’m too bushed right now to go trawling for them.)

    When did the Anthropocene begin?

    Good question: for me it’s about 1975, when the first key indicator (global average temperature) clearly started to deviate well outside natural variability.

    Comment by CobblyWorlds — 17 May 2008 @ 3:56 AM

  379. Hank Roberts #376. Thanks for your advice re finding papers etc. I do always try to access original papers, but sometimes can’t get access. I’ve looked at ‘Lyman’, and it appears that the oceans are not cooling, and they’re not warming either. That’s OK. It still suggests that something more powerful than CO2 is operating out there – surface cooling, oceans not warming, global ice growing, etc.

    I asked about the troposphere a while ago, and I don’t think anyone replied. I have gone looking for it, and found this page http://www.remss.com/msu/msu_data_description.html

    I read the site’s explanations, and it looks to me, from the 4 line graphs about 3/4 down the page, like all the layers of the troposphere and stratosphere have been cooling since about 2002. Have I misread it?

    Comment by Mike — 17 May 2008 @ 3:56 AM

  380. Alastair writes:

    It is the runaway effect of water vapour we should be fearing, not that of CO2 or CH4.

    Not really. Water vapor rains out quickly. The average molecule of water vapor stays in the atmosphere nine days.
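    [The nine-day figure follows from a simple stock-over-flux estimate; a sketch using widely quoted round numbers (≈1.3×10^16 kg of water held in the atmosphere, ≈1 m/yr global-mean precipitation), which are assumptions here rather than anything from the comment itself:]

```python
# Residence time of water vapour = atmospheric stock / rain-out flux.
ATMOS_WATER = 1.3e16   # kg of water vapour held in the atmosphere (round figure)
EARTH_AREA = 5.1e14    # m^2, surface area of the Earth
PRECIP_RATE = 1.0      # m/yr, global-mean precipitation (round figure)
RHO_WATER = 1000.0     # kg/m^3

flux_per_day = PRECIP_RATE * EARTH_AREA * RHO_WATER / 365.0  # kg rained out per day
residence_days = ATMOS_WATER / flux_per_day

print(f"residence time: {residence_days:.1f} days")  # -> about 9 days
```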

    Comment by Barton Paul Levenson — 17 May 2008 @ 6:17 AM

  381. Mike posts:

    All I am arguing is that there is likely a significant factor that the IPCC was unjustified in ignoring, and that if this factor was allowed for properly then the effect of CO2 on climate in the models would likely be reduced (because of the way the models were built).

    We know. And we’ve been telling you that you’re wrong and that the models aren’t built the way you think they are. And you keep on repeating what your point is.

    We know what your point is. Your point is wrong.

    Comment by Barton Paul Levenson — 17 May 2008 @ 6:19 AM

  382. #377
    No, Mike, you have not explained (at least convincingly) why you think estimates of CO2 sensitivity are wrong. You have merely asserted that it is the case and suggested that there might be some climate scientists (unnamed) who say that is possible. You would also need to explain how said factor could affect multiple independent constraints on CO2 forcing (from very different times, etc.) and somehow magically lead to the same range of values. You’ve posited no mechanism for the effect. If it is a feedback, you also need to explain why it reinforces solar forcing but not greenhouse forcing. If it is an external source of energy, you need to identify it.

    Comment by Ray Ladbury — 17 May 2008 @ 7:17 AM

  383. Ray (373), isn’t molecular equipartition a sort of (or a bit like) local “thermal equilibrium”? Or is that stretching too far?

    Comment by Rod B — 17 May 2008 @ 10:39 AM

  384. Mike, look again, the trend line is drawn for each of the four channels. (Last letter of the acronym is T for Troposphere or S for Stratosphere, to explain for later readers.) Here’s the TMT:
    http://www.remss.com/data/msu/graphics/plots/sc_Rss_compare_TS_channel_tmt.png

    This and the links in the first few lines of it may help:
    http://julesandjames.blogspot.com/2008/04/has-global-warming-stopped.html

    There’s a basic step in a first course in statistics that is very hard to take — that’s letting go of the strong human ability to find patterns in anything whether they’re there or not, and instead relying on doing the math before claiming any pattern or what pattern is there in the data.

    Looking at charts without the numbers makes this pattern-finding ability even harder to let go of.

    We’re built to expect patterns. Our ancestors thought tigers were lurking in the jungle far more often than tigers were there. Our non-ancestors, early in life, failed to notice just one tiger.

    Comment by Hank Roberts — 17 May 2008 @ 11:39 AM

    Rod, yes, but how do you get equipartition without a large assembly of molecules?

    Comment by Ray Ladbury — 17 May 2008 @ 12:19 PM

  386. I’ll answer all your recent posts in one way rather than individually. I’ve obviously reached the point where I’ve said just about everything I can say. It doesn’t look like I’ve convinced anyone of anything, but it was worth a try.

    Hank Roberts #384 said “We’re built to expect patterns.”.

    The problem with the IPCC models is that they are built to expect the patterns already seen, and they are all built with the same underlying assumption. This is why they agree with each other so well, and why it is possible for them all to be wrong. If you look at the latest IPCC report (and I’m not, I’m relying on memory), it expects a certain result for the 1st decade of this century, followed by a temperature gradient as delivered by their calculations for AGGs.

    But at the time the report was printed, the result for at least half of that decade was already “in the bag”. The way the models are constructed, natural forces are so weakly represented that it doesn’t matter what recent pattern you feed in, the models must ALWAYS turn upwards again fairly soon.

    Under my logic, the downturn in recent years is expected, because there has been a decline in solar activity, which would probably only be partly offset by AGGs. (It is complicated by a La Nina too, but I don’t know how strong that is). Because of the time lags, certain doubts about temperature measurements, and lack of knowledge about the mechanism, I can’t possibly predict exactly when, where or how much temperatures will change. But temperatures falling below the model calcs for AGGs, at some time, is exactly what I would expect, and it is happening now. It won’t go in a straight line, so it will take a long time before it convinces you that it is real. No matter what it does, if you feed it into the models they will turn up again for future years.

    We all know that a decade is a short time in climate, and a decade’s events do not signify much. But if you check the IPCC report, you will find at least one place where natural forces are dismissed simply on the grounds that they do not match the last decade of the 20th century. I call that double standards, and bad science.

    This page began with the paleoclimate. I’m not quite sure how to put this, because Hansen is such a respected scientist, but, frankly, his paper on the paleoclimate cycle is not very impressive. He tries valiantly to explain everything in terms of CO2. He turns himself inside out trying to come up with extraordinarily unconvincing mechanisms that can explain the changes of direction of the cycle. Compare what he has written with papers that propose changes in insolation (that I believe includes both solar activity and albedo?) as the major mechanism, and simply by applying Occam’s razor he is in big trouble.

    Natural forces are clearly overriding AGGs in the short term, which makes it very likely that they really are more powerful than allowed in the models. Natural forces clearly overrode GGs quite comfortably in the paleoclimate cycle. I believe I am right in saying that GG concentrations were sometimes higher back then than they are now (?). GGs have only increased 40% since 1750, and the scale is logarithmic, so it is approaching the absurd to argue that AGGs will now overpower the natural forces.

    Comment by Mike — 17 May 2008 @ 3:09 PM

  387. Sorry, Mike, stuff like

    “… and the scale is logarithmic so …”
    “… explain everything in terms of CO2…”
    “… simply by applying Occam’s razor …”
    “… there has been a decline in solar activity …”

    is hodgepodge, not from any science journal. It’s PR stuff.

    Read Spencer Weart’s history. Read some primary sources.

    And cite sources for what you believe and tell us why you believe what you’re reading, and why you trust the places you’re reading it.

    Comment by Hank Roberts — 17 May 2008 @ 4:56 PM

  388. Hank Roberts #387 : “cite sources for what you believe and tell us why you believe what you’re reading, and why you trust the places you’re reading it”.

    My main source of information is and always has been the IPCC report, as I believe I have made clear in my posts. I have been through it many times, and it is a source of what appears to me to be reliable scientific information.

    Unfortunately, as a person with computer modelling experience, I felt there was a fundamental error in the way they had used the scientific information in the computer models. In order to take this further, I had to work through the report to find out how much of it flowed from the science and from valid modelling and was therefore likely to be reliable, and how much flowed from the invalid modelling and was therefore likely to be unreliable.

    I found some places in the report, which I have quoted in previous posts, which gave me a handle on the likely scale of the error, and found that it was, to say the least, non-trivial. I then went looking for external information that could confirm or deny what I had found. That brought me to this page, because obviously the paleoclimate cycle was a very sensible place to look for information on the long term relationship between climate and GGs.

    I find that in all areas, debate is raging. That means that there are plenty of papers that support my case, and plenty of papers that support the IPCC. The latter reduces if I eliminate anything that starts with the assumption that the IPCC view of AGGs is correct – that is why I have generally avoided looking at any partisan website on either side. It is also why a lot of the external information that I have been relying on most has been the actual climate – temperatures, albedo, etc. This information is much less susceptible to manipulation and distortion than theoretical papers.

    There has been a decline in solar activity this century. There was an increase in solar activity over the last century, including a sustained period of high solar activity in the 2nd half. There was a decline in cloud cover from around 1980-2000, and there has been a recovery since. Lots of things add up, that are not driven by CO2.

    I’ll stick my neck out and make what I have said falsifiable : Given the recent decline in solar activity, then if the Sun remains relatively inactive I have to expect at the very least a decrease in the rate of warming to show up. If over say two decades this does not happen, then my findings are falsified and I can admit it and go away.

    I’ll go further, in the interest of trying to achieve a faster result: if the Sun remains at a low level of activity for one more year, then there will be a significant decline in Northern Hemisphere temperature within the next 2-3 years.

    I can’t be more specific than that, because the mechanism is not known.

    Comment by Mike — 17 May 2008 @ 6:50 PM

  389. Mike, if you’re submitting a paper for possible publication, I realize you need to keep it private til it’s considered-Just say so.

    But if not, citations, please?

    “external information that I have been relying on …”

    “Given the recent decline in solar activity …”
    Cite?

    “a decline in solar activity this century”
    (activity: sunspots? TSI? insolation? measured and counted where?)
    (century: the last 100 years? Last 7-1/2 years?)

    “falsifiable … a decrease in the rate of warming”

    Falsifiable would mean an unambiguous prediction that can distinguish — how are you distinguishing your prediction from the ‘Global Cooling’ studies discussed in threads here? Two different forward-looking climate models suggest a decrease in the rate of warming. What’s different about yours?

    “plenty of papers that support my case …”
    List, pointer to your website, something?

    Comment by Hank Roberts — 17 May 2008 @ 7:49 PM

  390. Mike,
    http://scholar.google.com/scholar?num=100&q=%22recent+decline+in+solar+activity%22&as_ylo=2003&btnG=Search

    What’s up? Where are you finding your information?

    Comment by Hank Roberts — 17 May 2008 @ 8:00 PM

  391. Hank Roberts #389. I’m not preparing a paper for publication, and I don’t have a website. I’m just an individual who has put a lot of time and effort into this topic.

    For most of the things I have referred to, I have provided a link at the time. However, I will go through all your questions and try to retrieve all the relevant links. This will take me a day or two.

    Hank Roberts #390. You won’t find anything I have said anywhere else on the web, except by coincidence, or if my comments have been posted on by someone. Whenever I use material from anywhere else I try to attribute it. I have posted comments on some other websites, but every post has been written individually – little or no cut-and-paste from post to post.

    Comment by Mike — 18 May 2008 @ 3:24 AM

  392. Mike writes:

    The problem with the IPCC models is that they are built to expect the patterns already seen

    No, Mike, you have no idea how the models are built. No idea at all. Why don’t you look up a book on climate modeling at your local university library? Ann Henderson-Sellers and coauthor’s (can’t remember the name) A Climate Modeling Primer is a good place to start. Or try here:

    http://en.wikipedia.org/wiki/Global_climate_model

    Comment by Barton Paul Levenson — 18 May 2008 @ 6:10 AM

  393. Mike (388) says, “….as a person with computer modelling experience,…”
    Barton (392) says, “…Mike, you have no idea how the models are built. No idea at all….”

    Gettin’ weird.

    Comment by Rod B — 18 May 2008 @ 2:28 PM

  394. Rod B., Parse the two phrases. Just because Mike has experience with computer modeling does not imply that he has any understanding of how the [climate] models are put together. Indeed, he has demonstrated quite the contrary.

    Comment by Ray Ladbury — 18 May 2008 @ 3:12 PM

  395. Mike, Until your “logic” becomes a “model,” all you have is handwaving. For someone with “computer modelling experience,” constructing a simple model should not be too taxing. Why don’t you see if you can match the climate record without assuming significant warming from CO2? What is not clear to me is what level of CO2-induced warming you assume at pre-industrial levels, and how you get it simply to stop increasing while CO2 concentrations increase by 38%.

    Comment by Ray Ladbury — 18 May 2008 @ 3:18 PM

  396. Mike, I’m just another reader here.

    I don’t recommend you spend days documenting everything you’ve posted here and try to post it here — it’d go way off topic!

    These threads aren’t about us readers. The scientists posting here generally do cite their statements (and we can check their Publications pages easily to fill in any conceptual leaps).

    I mean to suggest that when you post a factual statement in a topic, including a cite helps us all to not only know where you got it, but to be able to check the cite ourselves and see who else has cited the study. Often, there are new studies worth reading to catch up on.

    Comment by Hank Roberts — 18 May 2008 @ 3:38 PM

  397. Ray (or Barton), I just thought that at the fundamental level all computer modelling was quite similar, while maybe differing significantly in the implementation. Is climate modelling a whole different animal?

    Ray (385): a single molecule can not redistribute its energy among its own different stores (DoFs)?

    I don’t want to resurrect our debate on single molecule capabilities and characteristics (probably no one else does either :-) ), but I still can not accept your premise until you verify that a single molecule can not have either mass or velocity.

    Comment by Rod B — 18 May 2008 @ 9:58 PM

  398. Hank Roberts #396. I have tried to post a link to anything that I have got from elsewhere, unless I thought it was common knowledge.

    I’ll let a number of recent comments by others go through to the keeper, and concentrate on your #389:

    By “this century” I mean the 21st century.
    —–
    “The recent decline in solar activity” was not very clearly defined. We had been talking about TSI, so let’s start there. The last graph in ESA/NASA SOHO’s
    http://www.pmodwrc.ch/pmod.php?topic=tsi/virgo/proj_space_virgo#VIRGO_Radiometry
    shows TSI decreasing from about 2002 onwards.

    But it could equally well relate to sunspots, as there is a correlation. See the graph in
    http://www.global-greenhouse-warming.com/solar-irradiance-measurements.html

    There is also a link to a Judith Lean paper
    http://lasp.colorado.edu/sorce/news/other/SORCEwebsite_News_Solar_Cycle.pdf
    which spells out a bit more detail.
    —–
    “plenty of papers that support my case …” :
    My case is built on the premises that (a) the IPCC said that solar variation has more effect on climate than is built into the models, (b) they said that they ignored this in the models because they couldn’t model it.
    My case is that they were not justified in ignoring it, and there should be some allowance for it.

    There really are plenty of papers that provide support for the view that either the Sun has more effect and/or the models don’t reflect it. Apart from the IPCC report itself, see the Judith Lean paper above: “Comparisons of the empirical results with model simulations suggest that models are presently deficient in accounting for these pathways [of solar irradiance impacts from atmosphere to surface and climate].” … “Even relatively small changes in the Sun’s output could impact the Earth because of potential amplifying effects in how the atmosphere responds to those changes.” … “Before we can truly interpret the role that humans are having in changing Earth’s climate we must first more accurately assess the role that natural ‘forcings’ have on the climate system, and the Sun is by far the most significant of those natural ‘forcings’.”
    or
    http://www.agu.org/pubs/crossref/2007/2006GL028764.shtml
    “If changing brightnesses and temperatures of two different planets are correlated, then some planetary climate changes may be due to variations in the solar system environment.”.
    or
    http://www.physorg.com/news129483836.html
    “absence of solar activity may have been partly responsible for the Little Ice Age” [NASA]

    I haven’t kept links to the various papers like these that have flowed past my eyes over the last few months. You can appreciate that finding them again will be time-consuming. Obviously, I have avoided citing anything that comes from a “skeptic” stable.
    —–
    Coming up with a falsifiable prediction is more difficult. We all acknowledge that the mechanism is not known so a detailed prediction is not possible. The Sun has to put in a period of lower activity in order for any prediction to be effective, but we are still necessarily working off a short period. Given the recent decrease in solar activity, and the delay in build-up of cycle #24, there probably is just such an opportunity right now. I’ll work a bit more on the detail, but basically I expect the N.H. over the next year or three to have significantly lower temperatures than the last decade.

    I can’t help it if two climate models make the same prediction, because I can’t dictate the Sun’s behaviour to the point that it can falsify everyone except me. However, if only two models predict this, then presumably all the others don’t? Getting a distinguishing prediction against all but two models would be a nice start.
    —–
    PS. I seem to remember Kevin Trenberth (IPCC) saying the models didn’t do predictions, they only gave “what-if” projections. Are you able to say on what basis the two models’ “predictions” were made?

    Comment by Mike — 18 May 2008 @ 10:43 PM

  399. Hank Roberts #387. About “the scale is logarithmic” being hodgepodge.

    I was referring to the warming effect of CO2 being proportional to Log(CO2 concentration). I understood that to be common knowledge so didn’t cite anything for it. It’s in the IPCC report, 2.3.1.
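    [For concreteness, the logarithmic relation usually cited (e.g. the simplified expression in Myhre et al. 1998, used in IPCC reports) is ΔF = 5.35 ln(C/C0) W/m². A quick sketch of what “logarithmic” actually implies — each doubling adds the same forcing, which is very different from the forcing saturating:]

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling from pre-industrial levels:
print(f"280 -> 560 ppm: {co2_forcing(560):.2f} W/m^2")  # -> 3.71
# The ~40% rise discussed in the thread already gives about half of that:
print(f"280 -> 392 ppm: {co2_forcing(392):.2f} W/m^2")  # -> 1.80
```

    [So a logarithmic dependence does not make further increases negligible; it just means exponential CO2 growth maps to roughly linear forcing growth, which is William’s point in the response quoted later in the thread.]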

    Comment by Mike — 19 May 2008 @ 12:35 AM

  400. Ray Ladbury #395 – “see if you can match the climate record without assuming significant warming from CO2″.

    That’s not what I’m saying at all. I’m saying that the models don’t make enough allowance for natural factors.

    Comment by Mike — 19 May 2008 @ 12:39 AM

  401. Mike, it’s the “so” part of your statement following ‘logarithmic’ that ain’t so. Look this up, here and elsewhere. You wrote:

    > so it is approaching the absurd to argue that
    > AGGs will now overpower the natural forces.

    Where do you get that conclusion? Why are you relying on whoever told you that? You’re being fooled. It’s a bogus claim, it’s an old, familiar, bogus claim.

    Try putting the term in the search box, along with climate.

    One response you’ll find here, I’ll quote in full as an example of how you can check your beliefs before you post them:

    http://www.realclimate.org/index.php/archives/2006/10/attribution-of-20th-century-climate-change-to-cosub2sub/

    “[Response: If the temperature-CO2 relation were as simple as Lubos suggests, all would indeed be simple. But it isn’t, as he knows full well. That the T-CO2 relation is approximately logarithmic is no surprise, its why future T increases tend to be approximately linear when CO2 increases exponentially - see for example http://www.grida.no/climate/ipcc_tar/wg1/fig9-5.htm - William]”

    Comment by Hank Roberts — 19 May 2008 @ 4:45 AM

  402. Rod B, I tried, but I assure you it can’t be done as you describe.

    I created an empty universe. Okay, no mass, no velocity.

    I created a single particle in that universe. Hmmm, how can we measure its mass or velocity in relation to ….. um …. nothing.

    Try it yourself (grin).

    Comment by Hank Roberts — 19 May 2008 @ 4:49 AM

  403. Mike posts:

    That’s not what I’m saying at all. I’m saying that the models don’t make enough allowance for natural factors.

    How much would be enough? How much allowance are they actually making? Can you put numbers to those two questions? If not, your question may be incoherent.

    Comment by Barton Paul Levenson — 19 May 2008 @ 7:03 AM

  404. Rod B., OK, let’s think about equipartition. For a single molecule, we have 3 types of energy–potential energy (Coulomb, vibrational, etc.), kinetic energy about the center of mass and kinetic energy of the center of mass. Kinetic energy about the center of mass and potential energy associated with a particular degree of freedom freely interchange. However, you can’t turn motion ABOUT the center of mass into motion OF the center of mass. Statistical mechanics and therefore thermodynamics really only work when you have a large assembly of molecules. How large? Well, it depends on how well you want thermo/stat. mech. to work–i.e. what sort of fluctuations you can tolerate. When you look at nonequilibrium stat mech, the fluctuations ARE the physics, but that is a much tougher nut to crack.
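    [The “how large an assembly?” question can be illustrated numerically. A sketch (not from the thread): draw velocity components from a unit Maxwell-Boltzmann distribution, so the true “temperature” is 1, estimate the temperature from the mean kinetic energy of N molecules, and watch the spread of the estimate fall off roughly as 1/√N:]

```python
import random
import statistics

def temperature_estimates(n_molecules, trials=2000, seed=42):
    """Temperature inferred from mean kinetic energy of n molecules,
    repeated over many trials. Velocity components ~ N(0,1), so kT/m = 1."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        ke = sum(0.5 * (rng.gauss(0, 1)**2 + rng.gauss(0, 1)**2 + rng.gauss(0, 1)**2)
                 for _ in range(n_molecules))
        estimates.append(2.0 * ke / (3.0 * n_molecules))  # equipartition: <KE> = (3/2)kT
    return estimates

results = {}
for n in (1, 10, 1000):
    t = temperature_estimates(n)
    results[n] = statistics.stdev(t) / statistics.mean(t)
    print(f"N={n:5d}  relative fluctuation ~ {results[n]:.3f}")
```

    [For a single molecule the inferred “temperature” fluctuates by ~80% of its own value from sample to sample; only for large N does it settle down, which is the sense in which temperature is a purely statistical quantity.]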

    Comment by Ray Ladbury — 19 May 2008 @ 8:34 AM

  405. Mike, you are still handwaving. Take a look at the magnitudes of the effects you are talking about. Take a look at the time dependence of each. There’s a reason why Raypierre said, (to paraphrase)–solar effects go up and down… Temperature trends go up. And yes, the trend is still up.

    Comment by Ray Ladbury — 19 May 2008 @ 8:36 AM

  406. Re #404 A single molecule has more than 3 types of energy.

    “Roughly speaking, a molecular energy state, i.e. an eigenstate of the molecular Hamiltonian, is the sum of an electronic, vibrational, rotational, nuclear and translational component, such that:

    E = E{electronic} + E{vibrational} + E{rotational} + E{nuclear} + E{translational},

    where E(electronic) is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.”

    See: http://en.wikipedia.org/wiki/Energy_level#Molecules

    The temperature (measured with a thermometer) of a gas is directly related to the average of the E(translational) energy of the molecules. If there is only one molecule then the average value corresponds to its value :-)

    Moreover, the other energies can also be represented by temperatures, which in thermal equilibrium are all equal to the E(translational) temperature due to the Principle of Equipartition of Energy.

    In that case, it does not make sense to talk of thermodynamic equilibrium of only a single molecule, since the equipartition of energy is achieved by collisions and one molecule cannot collide with itself. In fact even local thermodynamic equilibrium breaks down at high altitudes where the gas is so rarefied that collisions no longer drive the other energy levels.

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 19 May 2008 @ 10:49 AM

  407. Re #380 where Barton Paul Levenson Says:


    Alastair writes:

    It is the runaway effect of water vapour we should be fearing, not that of CO2 or CH4.

    Not really. Water vapor rains out quickly. The average molecule of water vapor stays in the atmosphere nine days.

    It is not the length of time that a molecule spends in the atmosphere which determines the greenhouse effect. It is the number of molecules in the atmosphere. The problem with the long atmospheric lifetime of CO2 molecules is not that they will cause more heating, but that even after we stop emitting them the heating will continue.

    It is the short lifetime of water vapour which creates the danger of a runaway. Using your figures, the water in the atmosphere is replaced every nine days. That means that any alteration to the water cycle could be complete within nine days. Melting of the permafrost and the release of CO2 and CH4 takes decades at least, but a change in global humidity patterns with alterations in the greenhouse effect of water vapour and clouds could happen with a timescale of weeks rather than decades.

    Comment by Alastair McDonald — 19 May 2008 @ 11:13 AM

  408. Alastair (369). To test your math (though mine is probably not that rigorous) I’ll start with about 30 w/m^2 reflecting off the surface. I would also use more like 0.75 (vs. 0.9) for Arctic albedo. Fresh snow is at best 0.8-0.85+; old snow is less; oddly, ice is not that high, ~0.35. I would also use more like 0.15 (vs. 0.1) for the rest of the surface; much of the land surface falls in the 0.1-0.4 range; the ocean is probably less than 0.1, though there is vagueness and variability there that might matter, particularly at angles far off the normal.

    Taking your 14M Arctic area + 496M all else = 510M tells me that “all else” accounts for ~0.875×30= 26+ watts, the Arctic ~4 watts. This would say losing all of Arctic ice and snow would create at best a 4 watt, not 8 (though still nothing to sneeze at, I guess) equivalent forcing. Even this doesn’t account for the residual albedo of the now uncovered Arctic waters, at least in winter with its very high angle of incidence, nor the ocean’s albedo vagueness due to wave motion.

    I don’t know if this is a quibble or not. And even if correct I don’t know if it changes Gavin’s “very important” assessment. But I’d be interested in your or Gavin’s thoughts.

    Comment by Rod B — 19 May 2008 @ 2:55 PM
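    Rod’s arithmetic in #408 can be reproduced with a short script (a sketch only; the areas and albedos are the comment’s own round numbers, not measured values — the “0.875” turns out to be the area×albedo weight of the non-Arctic surface):

```python
# Sanity check of the area/albedo split in comment 408.
# All inputs are the round numbers quoted in the comment (hypothetical).
arctic_area = 14.0    # million km^2
rest_area = 496.0     # million km^2
arctic_albedo = 0.75  # old Arctic snow/ice, per the comment
rest_albedo = 0.15    # everything else, per the comment

reflected_total = 30.0  # W/m^2, assumed global-mean surface reflection

# Apportion the reflected flux by area x albedo weighting.
arctic_w = arctic_area * arctic_albedo
rest_w = rest_area * rest_albedo
rest_share = rest_w / (arctic_w + rest_w)

print(round(rest_share, 3))                          # ~0.876, the "0.875" in the comment
print(round(rest_share * reflected_total, 1))        # ~26.3 W/m^2 from "all else"
print(round((1 - rest_share) * reflected_total, 1))  # ~3.7 W/m^2 from the Arctic
```

    So the “~4 watts, not 8” conclusion does follow from the stated inputs; the real question is whether those inputs are right.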

  409. Hank, I’ve admitted before that its velocity (or temperature) cannot be measured with anything much better than 100% error. But that doesn’t mean it doesn’t exist!

    Comment by Rod B — 19 May 2008 @ 3:04 PM

  410. Alastair, I was trying to be quite general–electronic, nuclear and vibrational would all be associated with potential energy. My main point was that you can’t turn motion about the CM into motion of the CM without an external action.

    Comment by Ray Ladbury — 19 May 2008 @ 4:28 PM

  411. Ray and Alastair: one point you gentlemen seem to agree on is that in normal circumstances a single molecule will not shift its energy around its own stores (DoFs) to accomplish equipartition; the only way it can get to the even-ness it desires is through collision with another molecule. Correct? Is there any remote goofy quantum mechanical thing where it might… once in a great while? (not that this matters, though…)

    A related question: does the energy of the photon (hf) match one of the vibration energy levels of, say, CO2? I did a back of the envelope calc that told me the photon energy was much greater than vibration energy levels, which I assumed strongly incented the molecule to spread the energy around. (Ray, if not temperature, does a molecule have a soul? ;-) ) Though I couldn’t understand how/why it gets absorbed. Maybe my math was just off….

    Comment by Rod B — 19 May 2008 @ 10:01 PM

  412. Hank Roberts #401. The RealClimate link you posted is all about attribution within models and how complex it is.

    My starting point is that there is “increasingly reliable evidence” for a substantial effect from solar variance [IPCC 1.4.3], and that this has been left out of the models. The evidence is, or appears to be, empirical, so the issue of model complexity does not arise. I believe that I have provided enough links and information that the part of my assertion concerning the omission of the full effect of solar variance from the models is beyond dispute. I.e., I believe that you could validly try to argue that it is too small to be concerned about (evidence would be needed), but not that it was not omitted.

    My quotes from the Judith Lean (Woods and Lean) paper in #398 demonstrate that my view is very clearly supported. I don’t know about Woods, but Lean was a lead author for chapter 2 of the latest IPCC report. The paper is not an old paper, it appears to have been published in NASA’s ‘The Earth Observer’ Jul/Aug 2007 http://eospso.gsfc.nasa.gov/eos_observ/pdf/Jul_Aug07.pdf.
    (The link I gave originally appeared not to contain a date)

    What no-one can do at this moment is to quantify the missing mechanism. The empirical data does suggest that solar variation is several times more powerful than allowed for in the models. This was confirmed by BPL #279.

    I argue that the models lack credibility if significant empirical evidence has been ignored purely on the grounds that it was not possible to model it. That is quite simply not a valid excuse – if this was traditional science, not a modelling exercise, can you imagine a scientist ever getting away with the statement “we didn’t understand that part, so we ignored it”?

    [Response: Which models ignore this? (try reading Shindell et al, 2006 for instance). The implication that there 'must' be a missing mechanism is not clearly supported at all. Within the uncertainties in the solar forcing, climate sensitivity and reported impacts the model results fit. It would take substantial improvements in the accuracy of all three components to determine if there really is a mismatch. Finally, no physical model can include effects whose physical justification is unknown. How can they? If proponents of 'missing mechanisms' ever get round to quantitatively working out the implications of their mechanisms, they'll be included and their implications assessed. Until then, they are just hand waving. - gavin]

    Comment by Mike — 20 May 2008 @ 1:26 AM

  413. Re #408 where Rod B wrote “I don’t know if this is a quibble or not. And even if correct I don’t know if it changes Gavin’s “very important” assessment. But I’d be interested in your or Gavin’s thoughts.”

    [edit]

    Suggesting that my figures are out by 100% is just a quibble! That is because of something that I have not yet told you. Only about 10% of the radiation from the surface escapes directly to space, so when more incoming solar radiation is absorbed, the surface temperature has to increase considerably in order to bring the incoming and outgoing radiation back into balance. This problem is made worse because the greenhouse effect increases with temperature, applying a positive feedback to the surface temperature.

    It is this positive feedback which determines the new surface temperature, not the amount of the change in albedo.

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 3:49 AM

  414. Re #410 & 411

    I do not agree that a collision is required for a molecule to change state. All high-energy states have half-lives, and eventually an excited molecule will return to its rest state even without any external activation. In the Old Quantum Theory, emissions could be either stimulated (as the result of a collision with a photon) or spontaneous due to the half-life being exceeded. Since then it has been discovered that radiationless transitions can occur, so a vibrationally excited molecule will first relax into a rotationally excited molecule, and then its energy will collapse into translational energy.

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 4:03 AM

  415. #407 Alastair: why do you think the amount of water vapour would suddenly increase and keep on increasing? As soon as there’s a temperature dip, the holding capacity of the atmosphere drops. This happens overnight for example. If something else causes warming the average amount of water vapour would be expected to go up, amplifying the warming effect. If there is no external impulse, what would cause a runaway water-driven warming cycle as you are suggesting?

    Comment by Philip Machanick — 20 May 2008 @ 4:04 AM

  416. Re #411

    Only the photons with the energy of the vibration will be absorbed, so the easiest way to calculate the energy of the vibration is to find the energy of the photon that caused it.

    Otherwise you need to know the mass of the atoms and the strength of the atomic bonds. Where did you find those?

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 4:13 AM

  417. Alastair (413), yes, I understand that, but it isn’t my line of inquiry which is: Does the loss of Artic ice and snow cover really decrease the albedo of the solar rays striking the area and presumably then increase the solar absorption to the large level that has been assumed and stated. My question (not really a contention — I’m just asking and wondering) stems from the maybe inappropriateness of 1) using averages (1/4 of incoming to spread it evenly around the globe, or evenly spread over an intercepting disk when it’s really stronger at the peak of the intercepted hemisphere and weaker at the edges — where the Arctic is), and 2) the persistence of some albedo when all the ice/snow is gone by the reflection off the ocean water due to the angle of incidence or wave motion.

    Comment by Rod B — 20 May 2008 @ 8:36 AM

  418. Re #415

    Philip,

    The temperature does dip after 12 hours in the tropics, but that is only after the concentration of water vapour has become so great that there is a daily tropical storm.

    In the Arctic the day can last up to half a year. During the middle two months (seven nine-day periods) of summer the solar flux at the pole is equal to that at the winter tropic. What that means is that during the summer the Arctic will become subtropical. This is not as outlandish as it seems because there is geological evidence that those conditions did exist there in the past. Fossils of crocodiles have been found on Ellesmere Island, and there are coal measures in Alaska which were formed when it was as close to the North Pole as it is now.

    Although that may allow the Inuit to become vegans, it bodes ill for the Pacific Islanders, Bangladeshi, Floridians and New Yorkers, since the Greenland ice sheet is unlikely to last long in those warmer conditions.

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 8:43 AM

  419. Alastair (416), Actually I don’t recall where I got that — turns out I didn’t. What I did was compare the energy of a 15 micrometer photon with the expected translation energy a la 1/2mv^2 at some rule-of-thumb temperature of the molecule (Ray, relax; just talking here) of around 300K. I’ve lost my envelope but I recall the photon energy being much greater than the translation energy, which struck me as odd, though I have no science to question it myself. The photon matching a vibration level makes total sense.

    Comment by Rod B — 20 May 2008 @ 8:52 AM
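    Rod’s back-of-the-envelope comparison can be redone explicitly (a sketch with standard physical constants): a 15 µm photon carries roughly twice the mean translational kinetic energy (3/2)kT of a molecule at 300 K, which is consistent with the point that absorption requires a matching vibrational level rather than translational energy.

```python
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

E_photon = h * c / 15e-6   # energy of a 15 micron photon, ~1.32e-20 J
E_trans = 1.5 * k * 300.0  # mean translational KE at 300 K, ~6.2e-21 J
ratio = E_photon / E_trans

print(E_photon, E_trans, round(ratio, 1))  # ratio ~2.1: photon > mean translational KE
```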

  420. Rod, your math was off, the energy of the absorbed photon has to be very close to the energy difference of the excited and ground state or there is no absorption cross section. Also, Alastair’s allusion to a nonradiative transition is not particularly germane here, as such transitions are rare–rotational states rarely have the same energy as a vibrational state. What he says about half-life is spot on, but the radiative half-life of CO2′s excited vibrational state is sufficiently long that collisional relaxation is more likely than radiative relaxation.

    Comment by Ray Ladbury — 20 May 2008 @ 9:12 AM

  421. Re #417

    In my post I asked (perhaps rather clumsily) what you meant by

    Gavin’s “very important” assessment.

    but it was edited out. I am still not sure to which of Gavin’s very many important assessments you were referring. Rather than wait for your reply I will assume it was this one:

    [Response: Locally yes, though mostly in spring. - gavin]

    I still can’t work out what Gavin means. But if he means that the change in albedo will only be significant locally then he has a point, though he is ignoring the tele-connections which drive the climate.

    For instance over the last few days I have pointed out that:
    1) A warmer Arctic will lead to a melting of the Greenland ice and a global rise in sea levels.
    2) The warmer ocean will take longer to freeze in winter leading to less seasonal ice and an earlier summer melt.
    3) The loss of the polar vortex will lead to less wind shear, so there will be more hurricanes and fewer tornadoes.
    4) Without the cold polar air flowing south and lifting the moist subtropical air there will be no mid-latitude clouds. Thus the subtropical deserts will move northwards and the tropical jungles will expand, provided they are not destroyed by man for agricultural use.

    This disruption to the northern hemisphere is bound to alter the southern hemisphere climate, if only by changing the path of the ITCZ.

    The atmosphere is like a water bed. If you push down on one spot it will pop up somewhere else. If you stop pushing down at the north pole with the polar vortex and have rising air there powered by natural convection, then that air has to descend somewhere. And those changes could all happen in a period of nine days!

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 9:59 AM

  422. Re #419
    The energy of the photons absorbed by CO2 corresponds to ~2 kcal/mole, so if that were converted to translational energy of those same molecules they would become rather hot! Since in our atmosphere they’re surrounded by a ‘bath’ of N2 & O2 molecules, they share that energy with the whole atmosphere.

    Comment by Phil. Felton — 20 May 2008 @ 10:20 AM
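    Phil’s ~2 kcal/mole figure is easy to verify (a sketch with standard constants): multiply the 15 µm photon energy by Avogadro’s number and convert joules to kilocalories.

```python
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, 1/mol

E_per_mole = h * c / 15e-6 * N_A  # J/mol for one 15 micron photon per molecule
kcal_per_mole = E_per_mole / 4184.0

print(round(kcal_per_mole, 2))  # ~1.9, i.e. the "~2 kcal/mole" in the comment
```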

  423. Re #419, 420 & 422

    Rod, your calculations were correct. In the troposphere the air is too cold to fully excite the 15um band. The technical term is to say that the band is “frozen out.”

    Since the effective temperature of the Earth is 253 K, it is hard to see how the IPCC mechanism, with the outgoing longwave radiation originating in the upper troposphere where that temperature occurs, can be correct!

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 12:09 PM

  424. Alastair, even at 253 K, roughly 2% of molecules will have an energy equal or greater to the energy of the vibrational state of CO2 corresponding to the 15 micron line. Even at 200 K, its ~0.8% of molecules having that energy. I don’t call that frozen out. Rather it’s what gives rise to the blackbody (or in this case greybody) spectrum.

    Comment by Ray Ladbury — 20 May 2008 @ 1:10 PM
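    Ray’s figures match the simple Boltzmann factor exp(−E/kT) for the 15 µm (667 cm⁻¹) vibrational quantum (a sketch; degeneracy is ignored, and this is the single-state population factor rather than a full Maxwell-tail integral):

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
E = h * c / 15e-6  # energy of one 15 micron vibrational quantum, ~1.32e-20 J

def excited_fraction(T):
    """Boltzmann factor for the 15 micron state at temperature T (degeneracy ignored)."""
    return math.exp(-E / (k * T))

frac_253 = excited_fraction(253.0)
frac_200 = excited_fraction(200.0)
print(round(100 * frac_253, 2))  # ~2.3%, Ray's "roughly 2%" at 253 K
print(round(100 * frac_200, 2))  # ~0.8%, Ray's "~0.8%" at 200 K
```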

  425. Re #424

    Ray,

    Yes but …. If only 2% of the molecules are excited then the radiation will only have an intensity of 2% of B(T). The models seem to use 100% of B(T).

    Can you explain how you calculate the 2%, please. It is higher than I expected, but lower than the evidence from the heat capacity of CO2. One answer to that might be that it is 2% of the 1000 collisions during the relaxation time, and so 20% of the molecules that are excited.

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 4:07 PM

  426. Gavin’s response to my #412 “no physical model can include effects whose physical justification is unknown. How can they?”.

    Q.E.D. – Not modelled.

    Comment by Mike — 20 May 2008 @ 5:06 PM

  427. Ray (420), et al, thanks. Does this also say that a molecule can not transition rotation or vibration energy to its own translation energy, which is not inhibited by discrete large-spaced energy bands/lines? I think this is what was stated earlier but I wished to verify. To re-summarize, this says a molecule with an absorbed photon (in say vibration) can relax in two and only two ways: 1) re-emit an equivalent energy photon, or 2) collide with another molecule, with the latter being far more likely in the troposphere. True?

    Comment by Rod B — 20 May 2008 @ 5:48 PM

  428. Ray and Alastair: I’m confused. 2% may not be frozen out but it sure sounds like considerably less than open-armed welcome! Does this say that at 253K (about mid Troposphere) the quantum probabilities say that 2% of CO2 molecules will absorb a photon and 98% will not?

    Comment by Rod B — 20 May 2008 @ 5:57 PM

  429. Alastair (421), Basically I was inquiring about the statement (paraphrasing) that if the Arctic ice (and snow) goes, albedo goes to pot and all of that energy now gets absorbed. I basically asked Gavin if it really was that significant; he in essence said yes. I’m still trying to home in on the degree, with the question stemming from 1) the insolation hitting the Arctic surface (getting reflected or absorbed) is less than what one gets by using the model averages, and 2) the “lost” albedo will in fact not all be lost but some will remain by virtue of the water, angle of incidence, and waves. (Though this gets a little messy in the summer with the reduced actual insolation going on all day or for very long days. On the other hand, in the winter there is zero reflection or absorption above the Circle whether there is ice and snow there or not.) I suspect this will still boil down to a level of degree.

    BTW, I am not (yet?) questioning all of the other (non-albedo stuff) results that you summarize.

    Comment by Rod B — 20 May 2008 @ 6:14 PM

  430. Re #428

    Rod,

    No, it says that only 2% of the molecules are excited and able to emit (Forget Kirchhoff’s Law e = a for the moment. It only applies if radiation is the sole flow of energy.) 100% of the CO2 molecules will still be able to absorb a photon.

    But the constant volume specific heat of CO2 at 250K is 26.5 J/mol K. This implies that there are 3 translational, 2 rotational, and 0.9 vibrational degrees of freedom operating. In other words only 10% of the vibrational DoF is frozen out.

    So how do we get from 2% to 80%?

    Comment by Alastair McDonald — 20 May 2008 @ 7:05 PM

  431. Rod and Alastair, the 2% comes from the Maxwell distribution–2% of which is above the vibrational energy. Not sure about the specific heat, but I’m not sure it is relevant.

    Comment by Ray Ladbury — 20 May 2008 @ 8:17 PM

  432. #425, #414 Alastair:

    If only 2% of the molecules are excited then the radiation
    will only have an intensity of 2% of B(T). The models seem to use 100%
    of B(T).

    No… it should be 100% if the gas is optically thick and absorbing. Which it very much is due to the resonance in the band.
    The Planck curve B(T) and the Maxwell distribution are directly related.

    …and then its energy will collapse into translational energy.

    Not without the help of another molecule, because of conservation of momentum (look at it in the rest frame of the molecule… where you gonna get the momentum to change the molecule’s translational motion?)

    Comment by Martin Vermeer — 20 May 2008 @ 10:13 PM

  433. Alastair #430:

    This implies that there are 3 translational, 2 rotational, and 0.9 vibrational degrees of freedom operating. In other words only 10% of the vibrational DoF is frozen out.

    Actually, cf. http://www.wag.caltech.edu/home/jang/genchem/infrared.htm , the vibrational (or bending) mode at 15µm is doubly degenerate: in the molecule O=C=O the C can move both up-down and out-of-plane forward-backward. So, not 10% but 55% of the vibrational DoF (for this mode) is frozen out.

    Comment by Martin Vermeer — 21 May 2008 @ 1:45 AM
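    Both the heat-capacity figure in #430 and Martin’s degeneracy correction can be checked with the Einstein model of vibrational heat capacity (a sketch; the mode wavenumbers — the doubly degenerate 667 cm⁻¹ bend plus the 1333 and 2349 cm⁻¹ stretches — are standard spectroscopic values for CO2):

```python
import math

h, c_cm, k, R = 6.626e-34, 2.998e10, 1.381e-23, 8.314  # c in cm/s for wavenumbers

def einstein_cv(wavenumber_cm, T):
    """Heat capacity contribution (J/mol/K) of one vibrational mode, Einstein model."""
    x = h * c_cm * wavenumber_cm / (k * T)
    return R * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

T = 250.0
modes = [667.0, 667.0, 1333.0, 2349.0]  # doubly degenerate bend + two stretches
cv_vib = sum(einstein_cv(nu, T) for nu in modes)
cv_total = 2.5 * R + cv_vib  # (5/2)R from 3 translational + 2 rotational DoF

bend_frac = einstein_cv(667.0, T) / R
print(round(cv_total, 1))   # ~26.5 J/mol/K, the value quoted in comment 430
print(round(bend_frac, 2))  # each bend mode only ~1/3 active at 250 K, cf. comment 433
```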

  434. Rod,

    If you are determined that global warming is not a threat it is not possible for me to make a cast-iron case proving I am correct. This is because earth science is fractal and for every rule there is an exception. For instance, the rule that there is no albedo effect in the polar regions in winter because night there lasts 6 months can be contradicted by pointing out that there is weak sunshine near the Arctic Circle. So when I say there is an ice albedo effect which causes a positive feedback, that is a rule with the minor exceptions that you are pointing out.

    Here are a couple of press releases which seem relevant to the questions you are asking.

    1) the insolation hitting the Arctic surface (getting reflected or absorbed) is less than what one gets by using the model averages

    Models Underestimate Loss of Arctic Sea Ice
    http://nsidc.org/news/press/20070430_StroeveGRL.html

    2) the “lost” albedo will in fact not all be lost but some will remain by virtue of the water, angle of incidence, waves.

    Arctic ice more vulnerable to sunny weather
    http://www.agu.org/sci_soc/prrl/2008-13.html
    http://www.ucar.edu/news/releases/2008/arcticice.jsp

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 21 May 2008 @ 6:25 AM

  435. Re #432

    Thanks for the point about conservation of momentum. I had forgotten that.

    However, although the atmosphere will behave as a black body to the CO2 band, while the optical depth of absorption will depend on the concentration of CO2, the optical depth of emission will depend on the percentage of molecules that are excited. In other words, Kirchhoff’s Law of e = a will not hold close to the Earth’s surface as well as not holding at the TOP of the atmosphere where LTE breaks down.

    Cheers, Alastair.

    Comment by Alastair McDonald — 21 May 2008 @ 6:39 AM

  436. #435 Alastair McDonald:

    However, although the atmosphere will behave as a black body to the CO2 band, while the optical depth of absorption will depend on the concentration of CO2, the optical depth of emission will depend on the percentage of molecules that are excited. In other words, Kirchhoff’s Law of e = a will not hold close to the Earth’s surface as well as not holding at the TOP of the atmosphere where LTE breaks down.

    I beg to differ :-) . The absorptivity a(f) of a layer of air will depend on frequency f, absolute concentration of CO2, and the thickness of the layer. The total emission coming from this same layer will be equal to a(f)*B(f), as Kirchhoff requires.

    A thin layer will also be optically thin (a(f) << 1.0) and emit less than Planck states. An optically thick layer (which in the core of the CO2 band doesn’t have to be geometrically very thick!) will emit 100% of Planck. Proximity to the Earth surface has nothing to do with it. The percentage of excited molecules indeed controls emission, but is itself tightly regulated through collisional transfer in accordance with LTE, leading to this result.

    You are correct that when LTE breaks down (as it does in the thermosphere), all bets are off.

    Comment by Martin Vermeer — 21 May 2008 @ 8:45 AM

  437. Martin,

    You wrote:

    I beg to differ :-) . The absorptivity a(f) of a layer of air will depend on frequency f, absolute concentration of CO2, and the thickness of the layer. The total emission coming from this same layer will be equal to a(f)*B(f), as Kirchhoff requires.

    I beg to agree :-) and differ :-(

    The absorptivity a(f) of the lowest layer of air will depend on frequency f, absolute concentration of CO2, and the thickness of the layer. The radiation it absorbs will equal a(f)*B(f,T), where T is the surface temperature.

    The total emissions will come from those molecules which are excited. Unexcited molecules cannot emit. Since much of the molecular excitation is frozen out, the emission from the layer will be less than B(f,T) where T is the air temperature. (For illustrative purposes we can assume that the surface and the lowest layer air temperatures are the same.)

    This implies that the layer is receiving more radiation than it emits, and so warms. That is why you have to open the ventilation in a greenhouse on a sunny day. The air heats up.

    Cheers, Alastair.

    Comment by Alastair McDonald — 21 May 2008 @ 9:51 AM

  438. Martin, I think that what Alastair is saying is that at the boundaries, you will have a net flux of energy in the form of IR photons, so right there you will not have LTE. Near the surface, you will have more energy in IR photon flux than would be expected at LTE (and for a blackbody), so the atmosphere must heat up in response. (Actually, this applies at all levels of an adiabatically cooling atmosphere, right?)
    At the edge of the inky darkness of space, you have a cooling gradient, so there is a paucity of IR photons. Correct me if I’m wrong.

    Comment by Ray Ladbury — 21 May 2008 @ 10:13 AM

  439. Re #437 Alastair:

    The total emissions will come from those molecules which are excited. Unexcited molecules cannot emit. Since much of the molecular excitation is frozen out, the emission from the layer will be less than B(f,T) where T is the air temperature. (For illustrative purposes we can assume that the surface and the lowest layer air temperatures are the same.)

    Emission will be a(f)*B(f,T), as Kirchhoff says. a(f) is the absorptivity of the layer. If you choose the layer thin enough, a(f) < 1.0. Bringing up the “freezing out” of excited states is particularly unfruitful. It is not the reason emission is less than B(f,T) — on the contrary, the freezing-out effect is implicit in B(f,T), namely in its high-frequency turn-down.

    This implies that the layer is receiving more radiation than it emits, and so warms. That is why you have to open the ventilation in a greenhouse on a sunny day. The air heats up.

    This is a totally unrelated thing. That’s visible light, totally different f, and it gets absorbed by the ground and transferred to the air.

    Yes, the layer also absorbs IR coming from above — and from the ground. But Kirchhoff stands. I still don’t get why you want to disagree with me :-)

    Ray #438: Yes I disagree with you too :-) . Not with the net flux thing… sure there will be flux imbalances all the time; but, you know, it’s hard to create non-LTE conditions. It takes a well-equipped lab. It happens up at 100 km altitude, where the air is so thin that molecules can absorb photons and hang around in the excited state without meeting any other molecules for a non-trivial amount of time. Then you get population inversions and potentially even lasing action.

    Within 99% of the atmosphere, the mean free path is so short that even in small air parcels, all energy states will reach equilibrium populations on a time scale short wrt their natural decay times. That means LTE. And that means Kirchhoff.

    For clarity, be aware that LTE is defined only considering massive particles, not photons; a common misconception:

    http://en.wikipedia.org/wiki/Thermodynamic_equilibrium#Local_thermodynamic_equilibrium

    Comment by Martin Vermeer — 21 May 2008 @ 11:03 AM

  440. Martin,

    You write “I still don’t get why you want to disagree with me :-) ”

    The greenhouse effect operates by the carbon dioxide and water vapour absorbing the thermal radiation from the Earth’s surface and so heating the air. This was first shown by Horace de Saussure, before the discovery of infra-red radiation, using what is now called a hot box. That is what Fourier refers to in his pioneering paper, translated here by Raypierre. If the air in the atmosphere behaved as you described, that effect would not be possible.

    So I am afraid that we must agree to differ :-(

    Cheers, Alastair.

    Comment by Alastair McDonald — 21 May 2008 @ 5:15 PM

  441. Martin, my language may have been imprecise–I’m considering equilibrium with the photon gas as well as the greenhouse.

    Alastair–the radiation is not just from the surface, but also from the rest of the atmosphere.

    Comment by Ray Ladbury — 21 May 2008 @ 6:24 PM

  442. my brain is starting to hurt… but I’ll catch up; this is great stuff!

    Comment by Rod B — 21 May 2008 @ 9:02 PM

  443. #441 Ray: but by that concept the whole atmosphere is in disequilibrium: sunlight at 6000K passing through it everywhere!
    #440 Alastair:

    If the air in the atmosphere behaved as you described that
    effect would not be possible.

    No, that’s a misconception… seems I’ve run out of easy ways to explain it… anybody else? These things are really well understood, but I don’t want to appeal to authority here.
    #442: Rod: never stop learning :-)

    Comment by Martin Vermeer — 21 May 2008 @ 10:58 PM

  444. Re #441,

    Ray, I agree that the thermal (infra-red) radiation is both from the surface and from the atmosphere. What I am saying is that the intensity from the surface is greater than that from the atmosphere on a line-by-line basis. It is the IPCC who are ignoring the radiation from the surface because they assume that the surface and the surface air temperatures are the same, and that both the surface and the air emit as blackbody radiators.

    But obviously I have a little more research to do before I can write my paper :-)

    Cheers, Alastair

    Comment by Abbe Mac — 22 May 2008 @ 11:57 AM

  445. Alastair (434), I’m simply trying to get an accurate view of the increased solar absorption caused by the loss of Arctic ice and snow. It still might be significant, as Gavin stated, but I’m just trying to find out what it is really. The loosey-goosey ballpark gut estimates presented as gospel are a bit disconcerting, as is the little bit of glibness (and contradictions!) even in the professional studies you referenced. My poor choice of the term “model” is badly misleading and didn’t help my question. I was not referring to climate models, just simple mathematical algorithms. Simply that the standard process of taking the incoming insolation and dividing by 4 to spread it over the full surface of the globe, e.g., is not accurate when assessing absorption/reflection over a small region.

    It’s generally stated like (paraphrasing) “Arctic ice reflects 90% of solar, and when the ice is gone the water absorbs 95-100% of the solar.” For starters, the angle off the normal at say 70 degrees N. Lat. (roughly the normal extent of Arctic ice/snow) varies from about 50 degrees to 90 degrees depending on the season. This says the regional insolation in watts/meter^2 has to be scaled down by the cosine of that angle, i.e. by a factor ranging from 1.0 down to ~0.63. (Though the correct base has to be determined, which might mitigate this adjustment.) Then you have to use an accurate reflectivity for ice/snow albedo, which is likely more like 0.6+ average, not 0.9, and can be as low as 0.35 (and as high as 0.85+). Next, a more accurate reflectivity for water, which is close to zero at high sun but in fact reaches near 0.4 at an elevation angle of 10 degrees — halfway down the ice extent in spring and fall — and ~0.15 at 20 degrees, covering all of the normal ice extent.

    I don’t know if any of this is going to much change the assertions. My hunch is probably not — much. But I’m bothered with the significant lack of rigor that goes into these assertions. Makes me wonder: were they (the assertions) scientifically and mathematically assessed? Or did they just sound pretty good and satisfy gut feel (not to mention the party line)?

    Comment by Rod B — 22 May 2008 @ 2:50 PM

  446. Taking just one bite at a time: I hadn’t thought of the momentum aspect, either; interesting. This says (to repeat an earlier question) that within a molecule there can never be an energy exchange between translation and vibration or rotation — in either direction. True? Problem: I could swear I have seen (but not read) studies that assessed just such transitions. Could you have these transitions and still maintain momentum within a system of (many) molecules? Or is this idea just goofy? (Ray probably likes it :-) .)

    Comment by Rod B — 22 May 2008 @ 3:03 PM

  447. Rod B., It all depends on the interactions, but basically you need to conserve: 1) energy (including mass, of course), 2) momentum, 3) angular momentum, 4) charge, 5) other conserved quantum numbers (e.g. spin, electron number, baryon number…).
    If you have a collection of many, many molecules, you can get some collective effects, but all these things are still conserved. If you bring in the weak nuclear force (which unifies with electromagnetism at some mass scale), you can violate conservation of electron number (e.g. electron goes to muon, etc.), but you always have energy, momentum, angular momentum (including spin) and charge. Rod, there are whole textbooks dedicated to what kinds of interactions are possible under what conditions and how often, but unless you’re a laser jock, it’s a handful of rules that govern transitions.

    Comment by Ray Ladbury — 22 May 2008 @ 4:11 PM

  448. #445 Rod,

    Like you I’ve been following the physics with interest (like you my brain hurts). But if I can interject on the ice issue…

    Why do you seek such exactitude?

    There are always going to be uncertainties. The ballpark figures I personally use for albedo feedback (80/20) still give a +60% gain in absorption of insolation; whether it’s 60% (80/20) or 80% (90/10), the message is the same: a substantial gain. I will have sourced that 80/20 from somewhere, and although I can’t remember where, I do recall choosing those figures precisely because the source was trying to make a conservative estimate. Despite my using 80/20, I wouldn’t waste time arguing with someone for using 90/10.

    If we’re talking precision in this respect you’d have to be specific in both time and space (Lat/Long). For example that would allow even the factoring in of details like the angle between the Sun and the average direction of waves, not to mention weather (sea surface roughness).

    Furthermore there are plenty of factors in what’s going on in the Arctic, from Arctic Oscillation related ice export through the Fram Strait (Zhang/Wallace), to the propensity of thicker ice to lose mass more rapidly than thin (Bitz/Roe). But if you want to see the impact of ice albedo feedback, check out the summer trend in extent as opposed to the other seasons: http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seasonal.extent.1900-2007.jpg

    I’m watching the coast off the Canadian Archipelago like a hawk right now, because I suspect a prominent role for mechanical failure in the final stages of the perennial ice. E.g. today’s QuikSCAT: http://manati.orbit.nesdis.noaa.gov/ice_image21/D08142.NHEIMSK.GIF

    Check out this animated gif: http://ice-glaces.ec.gc.ca/content_contenu/SIE/Beaufort/ANIM-BE2007.gif
    Pay particular attention to just before 9 January 2008. Ice is not strong under tension, so with the right ocean/wind forcing it can go fast when it has an edge to open sea. Now imagine an increase of just 50% of total available insolation in the summer in each of those cracks. Whatever the melt rate was there, it just doubled.

    Yet, all that said, I’m not even confident enough to bet against William Connolley that this year’s minimum will beat last year’s. Year to year is the ice’s equivalent of “weather” in terms of the minima. But in the longer run, I’m pretty well convinced that the ice cap will move rapidly into a seasonally ice-free state within 10 years.

    If you’re interested feel free to ask for the bracketed references. Otherwise I’m back to reading, and lurking.

    Cobbly.

    Comment by CobblyWorlds — 22 May 2008 @ 4:57 PM

  449. Rod, here’s a bit of computer code to simulate the ice-albedo feedback. The language is Just Basic, but it should be easy to translate to C or Fortran or whatever you prefer to use:

    toRads = 3.14159265358979 / 180

    topLat = 90
    bottomLat = 60

    top = topLat * toRads
    bottom = bottomLat * toRads

    icePart = sin(top) - sin(bottom)
    groundPart = 1 – icePart

    Aice = 0.6
    Aground = 0.26

    iceContrib = icePart * Aice
    groundContrib = groundPart * Aground

    A = iceContrib + groundContrib

    F = (1366.1 / 4) * (1 – A)
    Te = (F / 5.6704e-08) ^ 0.25

    print
    print topLat, using("#.###", icePart), Aice, using("#.####", iceContrib)
    print bottomLat, using("#.###", groundPart), Aground, using("#.####", groundContrib)
    print "Earth albedo:", , using("#.####", A)
    print "Te: "; using("###.#", Te)
    end

    If your ice and snow extend from 60 degrees North (or South) to 90 degrees N (S), and assuming albedo is 0.6 for polar caps but 0.26 everywhere else (I’ve conflated cloud albedo with ground albedo, of course), you get:

    90 0.134 0.6 0.0804
    60 0.866 0.26 0.2252
    Earth albedo: 0.3056
    Te: 254.3

    If you then make your lower polar boundary 61 degrees, due to melting at the edges due to global warming, you then get:

    90 0.125 0.6 0.0752
    61 0.875 0.26 0.2274
    Earth albedo: 0.3026
    Te: 254.6

    I.e., roughly 0.9% of Earth’s surface (0.134 − 0.125 of the cap fraction) has gotten darker, and the effective temperature of the Earth has gone up by 0.3 K.
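    Since the choice of language above was just for convenience (“easy to translate to C or Fortran or whatever you prefer to use”), here is the same calculation translated to Python, reproducing the same numbers:

```python
import math

def effective_temp(bottom_lat_deg, a_ice=0.6, a_ground=0.26,
                   solar_const=1366.1, sigma=5.6704e-8):
    """Planetary albedo and effective temperature with an ice cap from bottom_lat_deg to the pole."""
    ice_part = math.sin(math.radians(90)) - math.sin(math.radians(bottom_lat_deg))
    ground_part = 1.0 - ice_part
    albedo = ice_part * a_ice + ground_part * a_ground
    flux = (solar_const / 4.0) * (1.0 - albedo)    # globally averaged absorbed flux, W/m^2
    t_eff = (flux / sigma) ** 0.25                 # invert Stefan-Boltzmann law
    return albedo, t_eff

for lat in (60, 61):
    albedo, t_eff = effective_temp(lat)
    print(lat, round(albedo, 4), round(t_eff, 1))  # 60 -> 0.3056, 254.3 ; 61 -> 0.3026, 254.6
```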

    Comment by Barton Paul Levenson — 23 May 2008 @ 8:00 AM

  450. Re #448
    Looking at today’s Quikscat it looks like there’s a large lead opened up along the shore up towards Ellesmere Island.

    Comment by Phil. Felton — 23 May 2008 @ 8:24 AM

  451. CobblyWorlds (448): Certainly it doesn’t pay to get really precise, e.g. by trying to estimate ocean turbulence’s effect on solar reflection, though it wouldn’t hurt to at least address it. It just seems that the level I am questioning (even though, within that, scientific estimates still have to be made) is pretty rudimentary and doesn’t take a lot of effort. The scientists who simply blab their gut feel to convince people the loss of ice is imminent (and bad) do themselves a disservice. There is no need for them to make themselves look stupid when it is so easy to do otherwise, especially if the outcome might change little.

    I’m not addressing nor refuting (yet, and maybe not ever – unless and until I get smarter) all of the other impacts and effects on Arctic ice/snow.

    Comment by Rod B — 23 May 2008 @ 1:00 PM

  452. #446 Rod B:

    Problem: I could swear I have seen (but not read) studies that assessed just such transitions. Could you have these transitions and maintain momentum within a system of (many) molecules? Or is this idea just goofy.

    You could, but it would involve a collision with a second molecule. It can be understood classically: just imagine a molecule rapidly spinning or resonating like a church bell, and you throw another molecule against it. Can you see it coming back with a greater speed than it went in with, at the expense of the rotational/vibrational energy of the other molecule? Or coming out with less energy than it went in with, having spun up the other molecule or set it vibrating?

    Any snooker experience?

    Comment by Martin Vermeer — 23 May 2008 @ 1:58 PM

  453. Barton (449), With my 8 years in the computer industry (coupled with a zillion yrs. in telecom) I successfully avoided knowing much about coding. If pressed I could put out a DOS batch pgm, but that’s about it. Nonetheless I pretty much followed it, and while I would have a hard time accepting a 0.3K worldwide temperature increase for an ice/snow mass that receded from 60 to 61 degrees without getting into the specifics, you are doing, correctly or not, what I’ve been saying should’ve been done all along – a rigorous assessment rather than what appears to be pure handwaving. (To be fair, I don’t know that the others didn’t use rigor – they just didn’t present it as such.)

    Looks neat. I’ll try to wade through it and see if I have any scientific or mathematical questions or disagreements.

    Comment by Rod B — 23 May 2008 @ 4:00 PM

  454. Martin, Ray, et al: In a long-ago time I played just a little pool: rotation, 8-ball, 9-ball, a little less billiards, and hardly any snooker… though I have been snookered a time or two.

    Without prejudice to some esoteric low probabilities as Ray implied, for general purposes, and ignoring angular momentum: can it be said that a molecule’s momentum exists only by virtue of its translation velocity, and the only way that momentum can change is through a collision with another molecule? It cannot change by shifting momentum within itself, because it has no comparable momentum to share. Then it follows that neither can energy be transferred from rotation/vibration DoFs to translation (nor the reverse), because if it did the molecule’s velocity (and temperature… couldn’t resist ;-) ) would have to increase by virtue of 1/2mv^2, but it cannot increase by virtue of conservation of momentum. True?

    The one fly in the ointment, as Martin hinted, is the ignoring of angular momentum. Is that a valid assumption? Or can a body (or self-contained system) convert some of its angular momentum to tangential linear momentum and maintain momentum equilibrium? Is the angular momentum of, say, a molecule sufficient to matter (I have no idea what a normal molecule’s ω (omega) is)? I would guess that coming from vibration the molecular momentum averages to zero — though would this depend on the mode of vibration and the precise moment of any transition?

    Comment by Rod B — 23 May 2008 @ 9:15 PM

  455. #454 Rod B:

    … True?

    False.

    Energy is a scalar, momentum is a vector, and they have different formulas: momentum is (mv_x, mv_y, mv_z), its magnitude mv. The momentum of a particle pair can be zero with both flying fast in opposite directions and kinetic energy being large.

    Think baseball: when the swinging batter hits the ball, he gets pushed backward by the ball — and would be pushed over if not actively equilibrating. The ball, and the whole system, increases its linear kinetic energy from the batter’s muscles, but total momentum doesn’t change.
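    Martin’s point that total momentum can vanish while kinetic energy stays large is easy to check with numbers (a throwaway sketch; masses and speeds arbitrary):

```python
m, v = 1.0, 5.0                                   # two equal masses flying apart in 1-D
p_total = m * v + m * (-v)                        # vector sum of momenta
ke_total = 0.5 * m * v**2 + 0.5 * m * (-v)**2     # scalar sum of kinetic energies
print(p_total, ke_total)                          # momentum 0.0, energy 25.0
```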

    Comment by Martin Vermeer — 24 May 2008 @ 1:16 AM

  456. #450 Phil Felton,

    The initial fissuring happened previously in April, I did a video to illustrate a point I was making in another discussion.
    http://www.youtube.com/watch?v=l8vi–s1T8E
    It’s from day 90 to 106, and it implied to me that the whole of that dense white mass was “lifting off” the Canadian Archipelago.

    Looking at timelapses of QuikSCAT, this “lifting” happens when the AO is in negative mode, though not all the time that it is. I think it’s from wind forcing off the Archipelago into the Arctic Ocean. But aside from IR/visible HRPT satellite imagery (cloud movement) and some spot reporting of winds off the net, I can’t find something like a traditional synoptic polar-centred pressure map. What I can find is way over my non-meteorologist’s head.

    Fissures in this region have happened in previous years. For example, there’s a dark line visible across the Archipelago on QuikSCAT around day 100 of 2006. Stressing is to be expected there: it’s the boundary between fast ice and ocean ice, the latter being mobile and more affected by tides. However, the recent open water there is very early, and not at all typical for mid May from what I can see. QuikSCAT only goes back to 2006, but Cryosphere Today has shown the polynya off Banks Island, and that record goes back for decades. So despite similar past behaviour, what is happening now really looks more extreme and worthy of note.

    The Banks Island polynya has frozen over in today’s QuikSCAT, but as we’re in May that ice will offer little resistance to melt. Even if the polynya closes again from ocean currents or wind, the new ice over it will offer some resistance to compression as it ridges and compacts. So there will still be a region of new ice in there, and it will still be susceptible to melting.

    Above I said:

    I’m watching the coast off the Canadian Archipelago like a hawk right now, because I suspect a prominent role for mechanical failure in the final stages of the perennial ice.

    Here is what is concerning me.

    With the melt set to continue throughout June, July, August, and much of September, there’s now a spot that’s likely to be open water, in such a position that it could cause melting to propagate to open water along the coast of the Archipelago, not to mention inroads of open water into the perennial mass.

    And on the other side, the Pole could be ice free this year (Serreze) because it’s covered with first-year ice over 1.2 metres thick (National Ice Service). First-year ice melts at a lower temperature than perennial ice because it contains salt and brine (perennial ice doesn’t). In a regime where the temperature is moving up from well below zero, the lower-melting-point first-year ice preferentially melts before the perennial ice (think of road gritting “in reverse” in terms of temperature).

    So it looks possible to me that a substantial chunk of the perennial mass could float “loose” in the Arctic Ocean. That’s why I brought up the gif of the Beaufort ice fracture as an example of the sort of thing we could see: if such a thing were to happen, you could have an Arctic Ocean full of bergs, and effectively no ice cap. I am aware that there’s a great physical mass of ice, and that consequently (due to inertia) the movement of an entire ice pack drifting in the ocean would be only a small fraction of any wind speed, so it would not necessarily “float out” as such. But smaller bergs are more mobile, and rapid propagation of cracking as seen in the Beaufort ice sheet could turn much of what is presently behaving like a coherent mass off the Canadian Archipelago into a mass of bergs, each one (more or less) surrounded by waters warming in the sun (direct or through clouds).

    QuikSCAT:
    http://manati.orbit.nesdis.noaa.gov/cgi-bin/qscat_ice.pl
    Environment Canada’s Satellite page (scroll down to “HRPT NOAA (Polar Orbiting)”:
    http://www.weatheroffice.gc.ca/satellite/index_e.html
    National Ice Centre:
    http://www.natice.noaa.gov/products/arctic/index.htm
    And for the casual browsers here:
    Cryosphere Today:
    http://arctic.atmos.uiuc.edu/cryosphere/
    NSIDC’s “Official Blog” on this summer’s Arctic Melt:
    http://nsidc.org/arcticseaicenews/index.html

    #451, Rod B,

    As far as I can see the scientists are being conservative in the face of the evidence, as they should. There’s a real process happening here (as indeed with AGW), groundless overstatement will be exposed by reality, so such overstatement is pointless.

    Comment by CobblyWorlds — 24 May 2008 @ 3:55 AM

  457. Rod, Molecular collisions can become arbitrarily complicated, but when you are dealing with a large assemblage of molecules, what matters is how often they occur. Imagine that we come up with a game of billiards where the balls are connected via springs of varying rigidity. When ball 1 collides with ball 2, ball 2 may just take off, dragging any ball(s) it is attached to behind it. Or it may start oscillating wrt the balls it is attached to, or if we hit it at a glancing angle, we might start it rotating with its attached balls around their common center of mass.
    If we want another level of complication, we can simulate (sort of) emission of IR photons by imagining that one of the balls will spontaneously shoot off a smaller ball with a certain energy and momentum (remember photons have momentum h/wavelength). Maybe this will help you visualize things a bit. Quantum mechanics complicates things a bit, but not irreparably. At the very least, I think I’ve designed an interesting video game.

    Comment by Ray Ladbury — 24 May 2008 @ 8:25 AM

  458. Martin (455), but if you transfer scalar energy from, say, vibration to translation that increased translation energy has to be manifested by increased center-of-mass velocity (larger 1/2mv^2); then how can you get increased center-of-mass velocity but not increase the molecule’s vectored momentum? I see only two ways: 1) the energy can not be transferred because of the dilemma, or 2) the molecule somehow also transfers internal vibration (or rotation/angular) momentum to its translation momentum, though I’m having a tough time getting my arms around that.

    I understand a passive ball striking a passive bat (or another ball), but it seems a batter swinging the bat will increase the momentum of 1st the bat, then the ball by virtue of force = time derivative of momentum. True? False?

    Comment by Rod B — 24 May 2008 @ 9:46 AM

  459. PS to my above post:

    BBC’s David Shukman:

    Scientists travelling with troops found major new fractures in giant ice shelves in Canada’s far north. David Shukman reports.

    http://news.bbc.co.uk/1/hi/sci/tech/7418041.stm

    Comment by CobblyWorlds — 24 May 2008 @ 10:03 AM

  460. Ray (457), Thanks. I understand the conversion of translation momentum to angular momentum ala imparting spin to the struck billiard ball (or the standard: measuring the velocity of a bullet), but in your good example it is two separate bodies interacting. However the question is transferring momentum within a single body (molecule). Also the molecular analysis is further complicated, as you imply, by the quantum nature of the various “stores” of energy and momentum in the molecule. Some stores being near frozen out at various times, or some stores picking up (or losing) energy or momentum in only discrete quanta — none of which applies to billiards.

    Comment by Rod B — 24 May 2008 @ 10:09 AM

  461. ps interesting insight, Ray (which now seems totally obvious): the vibration DoFs must have a net positive momentum (though I don’t readily see how) if, from time to time, a photon is emitted.

    Comment by Rod B — 24 May 2008 @ 10:15 AM

  462. Rod B #458

    but if you transfer scalar energy from, say, vibration to translation that increased translation energy has to be manifested by increased center-of-mass velocity (larger 1/2mv^2); then how can you get increased center-of-mass velocity but not increase the molecule’s vectored momentum? I see only two ways: 1) the energy can not be transferred because of the dilemma, or 2) the molecule somehow also transfers internal vibration (or rotation/angular) momentum to its translation momentum, though I’m having a tough time getting my arms around that.

    (This is getting a bit of a private classical mechanics lesson, I hope there are other interested readers. Don’t take this badly, but I have a bit of a problem getting my head around your stated intent of meaningfully critiquing elements of climate science esp. physical modelling :-) )

    1) is not the case. 2) is closer, though angular momentum is not momentum. It is a whole separate quantity also to be conserved.

    Consider the following situation. A and B are two balls connected by a rod, rotating in the xy plane around the center point of the rod. At a certain point in time, ball A moves in the x direction with velocity v, while ball B moves in the -x direction with velocity -v. Total momentum of AB is mv – mv = 0, where m is the mass of one ball. AB has no net linear movement.

    Now ball C comes from the right, velocity -v, and hits ball A. It bounces off with velocity +v (moving right), kinetic energy unchanged. Ball A comes off with velocity -v (moving left, just like B). In fact the whole AB now moves left with velocity -v, kinetic energy 1/2*(2m)*(-v)^2 = mv^2. Its (linear) kinetic energy has increased. It no longer rotates though; that’s where the energy came from.

    Please check that total (linear) momentum both before and after collision is -mv.
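    That check can be done mechanically. A quick Python sketch of the bookkeeping in Martin’s example (1-D, equal masses, arbitrary units):

```python
m, v = 1.0, 1.0

# Before: A moves +v, B moves -v (the rotating pair has zero net motion); C moves -v.
p_before = m * v + m * (-v) + m * (-v)
ke_before = 3 * (0.5 * m * v**2)

# Elastic head-on collision of C with A: equal masses exchange velocities,
# so C leaves at +v while A comes away at -v; B is still at -v.
p_after = m * (-v) + m * (-v) + m * v
ke_after = 3 * (0.5 * m * v**2)

print(p_before, p_after)    # both -1.0: momentum conserved (-mv with m = v = 1)
print(ke_before, ke_after)  # both 1.5: energy conserved, rotation traded for translation
```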

    Comment by Martin Vermeer — 24 May 2008 @ 11:48 AM

  463. Martin (462) also Ray and Rod;

    I for one am lurking along and following with interest.

    I doubt I am the only one.

    Thank you all for your time and patience with the private lesson in classical mechanics. ;-)

    Comment by Arch Stanton — 24 May 2008 @ 3:44 PM

  464. #462 correction: in the last sentence, the momentum should be the vector (-mv, 0, 0), i.e., only the x component nonzero.

    Thank you for your kind words Arch.

    Comment by Martin Vermeer — 25 May 2008 @ 1:54 AM

  465. Martin (462), et al, My rationale for all of this questioning is understanding (some of) the heat and temperature transfer mechanism ala climate warming at the atomic/molecular level. I’m not sure if it really is all that important to anybody else (and if a few complain that I’m taking up valuable space, I’ll desist), but what I find interesting is that the actual atomic-level mechanism of heating the atmosphere is not (or doesn’t seem to be…) fully understood.

    To make matters worse, I find myself jumping from pillar to post as the onion is peeled back (I think mixed metaphors are greatly over criticized.), which must make it sound even more helter skelter. This new (to me) business of momentum is a case in point. To repeat: If a molecule can transfer energy between DoFs (the interest is in translation to/from one of the other two) how does the momentum also get transferred? Or must it transfer along with energy (still being intra-molecular, not inter)?

    How is vibrational momentum evidenced? I can understand rotation and translation, but, with my limited vision, can’t see vibrational momentum. Simply, if the two (for simplicity) atoms are oscillating in opposite directions along the atomic bond, it would seem the net instantaneous momentum must be zero. If oscillating in the same direction, instantaneous momentum ought to be sinusoidal, with an average of zero, and what momentum gets transferred depends on the exact instant of transfer. Bending poses similar scenarios, but with goofy-seeming complications.

    When a CO2 molecule absorbs a photon and its energy is picked up in a vibration DoF, where is the (granted mucho smaller) momentum picked up, and, if in a vibration, how do it do dat? The same question in reverse: when CO2 emits a photon where does it get its momentum?

    Or is it impossible for molecules to transfer intra-molecular energy and/or momentum to/from translation? (I don’t have the same difficulty understanding inter-molecular (collision) energy and momentum transfer.) Somewhat related (and again…), if a CO2 molecule at 250K absorbs a photon into a vibration DoF, does that make it out of kilter re the quantum distribution of energy among DoFs and, as I asked earlier, does it want to equipartition things out with some sort of intra-molecular transfer? Or does a single molecule being out of equipartition kilter make any difference ala the quantum mechanical distribution over a large number of molecules – given it will return to normal soon enough through collision or emission?

    Comment by Rod B — 28 May 2008 @ 3:56 PM

  466. RodB, OK, think of the billiard balls on springs again. First, we can have motion where the oxygens on either end of the carbon oscillate longitudinally along the axis passing through all 3 atoms. We can also have a situation where the oxygen atoms oscillate back and forth transverse to that axis (2 such motions). So we have our oxygen atoms actually moving physically–they have real momentum that is minimum at the extremes of their range of motion and maximum when they pass through what would be their equilibrium positions if they weren’t moving.
    Now let’s say we have a single billiard ball (call it Ar) sitting there minding its own business. Classically, the CO2 molecule could strike it with one of the oxygen atoms so that it transfers just enough momentum to stop the motion of the O atoms, with the Ar atom picking up the extra momentum. Quantum mechanically, it has to transfer the full quantum of motion–and given the large number of collisions during the relatively long vibrational life of the CO2 mode, such a transfer is likely.

    As to radiative transfer of momentum–when a molecule absorbs or emits a photon, you change the momentum of the molecule to compensate. The same thing happens in radioactive decay when you give off, say, an alpha particle. And again, the concept of equilibrium only applies to ensembles of molecules.

    Finally, all of this is understood in extremely gory detail–it’s how laser jocks put bread on the table. If my explication is imperfect, it’s because it ain’t my day job. I’m delving back over 20 years when I had to learn this stuff. I recommend the following site for visualization aids:

    http://www.ems.psu.edu/~bannon/moledyn.html

    I think it requires Quicktime.

    Comment by Ray Ladbury — 31 May 2008 @ 7:50 AM

  467. Rod, Ray’s link looks good.

    Some direct responses:

    When a CO2 molecule absorbs a photon and its energy is picked up in a vibration DoF, where is the (granted mucho smaller) momentum picked up, and, if in a vibration, how do it do dat?

    Like a bullet hitting a target. Target+stuck bullet recoil, now containing the same momentum as bullet before impact. Kinetic energy after impact is (much) less than before; the difference is dissipated in the target, causing damage (as bullets are supposed to do).

    Vibration or rotation of a molecule as a whole (not looking at individual atoms) contains no momentum, just energy. Only the linear motion of the whole molecule contains momentum.

    [And just to keep you out of trouble: angular momentum has nothing to do with momentum. Totally different quantity, separately conserved.]

    The same question in reverse: when CO2 emits a photon where does it get its momentum?

    Like a bullet fired from a gun, causing the gun to recoil.

    Before firing, momentum of bullet+gun is zero; after firing, this is still the case, momenta of bullet and recoiling gun being equal and opposite.

    The kinetic energy of bullet + gun has increased from zero to a large value, most of it in the lighter, faster bullet (remember 1/2 mv^2). This energy comes from the explosive in the cartridge; for a molecule, from, e.g., a decaying vibrational state.
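    Putting rough numbers on the recoil (my own back-of-envelope, not part of Martin’s argument): a 15 μm photon carries momentum h/λ, and a CO2 molecule has a mass of about 44 u, so the recoil speed is minute next to thermal speeds:

```python
h = 6.626e-34            # Planck constant, J s
lam = 15e-6              # wavelength, m
m_co2 = 44 * 1.66e-27    # mass of a CO2 molecule, kg

p_photon = h / lam              # photon momentum, ~4.4e-29 kg m/s
v_recoil = p_photon / m_co2     # recoil speed, ~6e-4 m/s
print(p_photon, v_recoil)       # compare with ~400 m/s thermal speed at 300 K
```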

    if a CO2 molecule at 250K absorbs a photon into a vibration DoF, does that make it out of kilter re the quantum distribution of energy among DoFs and, as I asked earlier, does it want to equipartition things out with some sort of intra-molecular transfer? Or does a single molecule being out of equipartition kilter make any difference ala the quantum mechanical distribution over a large number of molecules – given it will return to normal soon enough through collision or emission?

    Yes it will be “out of kilter” in that way, but for a single molecule it means thermodynamically — nothing. Yes, it tries to equipartition the energy, both inter- as well as intra-molecularly.

    It does make a difference if it happens to a lot of molecules within a very short time span (compared to molecular collisions / excited state decays), like when firing at them with a nanosecond pulse laser. Not relevant for the (lower) Earth atmosphere.

    Comment by Martin Vermeer — 31 May 2008 @ 12:57 PM

  468. Martin,

    You wrote:

    Yes it will be “out of kilter” in that way, but for a single molecule it means thermodynamically — nothing. Yes, it tries to equipartition the energy, both inter- as well as intra-molecularly.

    It does make a difference if it happens to a lot of molecules within a very short time span (compared to molecular collisions / excited state decays), like when firing at them with a nanosecond pulse laser. Not relevant for the (lower) Earth atmosphere.

    In the lower atmosphere there are a lot of CO2 molecules being blasted with thermal radiation from the surface of the earth. They will lose their absorbed vibrational energy to the translational energy of the air molecules through collisions which preserve momentum but transfer energy.

    In other words, the surface air will be radiatively heated, just as food is heated in a microwave oven.

    Cheers, Alastair.

    Comment by Alastair McDonald — 1 Jun 2008 @ 4:51 AM

  469. Thanks for the help. I’ll try to summarize what I still believe and/or what I’ve learned and concluded.

    The transfer (or absorption) of a photon’s momentum is still a little loose, but doesn’t matter. It can’t manifest itself in vibration, since it seems there is zero net momentum within vibrational movements. Plus the concept of the momentum of one “body” being transferred in a non-elastic collision to just a part of another sounds odd. It might be converted to angular momentum and show up in molecular rotation, though I have no idea if the numbers make sense. (I still think linear momentum can be converted to angular momentum, as in a bullet fired off-center into a stationary rotatable absorbent mass.) Predominantly (always?) it gets picked up by the molecule’s linear momentum (translation), increasing the translation velocity, its energy, and temperature. When a molecule emits a photon its translation momentum (and energy) decreases. However, the photon’s momentum is so minuscule that none of this really matters one way or another. Which is the same conclusion I arrived at by default when contemplating (and getting a headache from) a photon’s angular momentum — who cares?!?

    For all practical purposes the energy from a photon at 15 microns is absorbed into a CO2 molecule’s vibration. This energy is more than twice the kinetic energy of a molecule’s translation at a nominal 300K, and does not affect the temperature, other than through the likely trivial momentum transfer just mentioned. By equipartition quantum probabilities it will strongly want to relax. This relaxation is predominately a transfer of energy to another atmospheric molecule’s translation via collision, at other than very low pressure (density). This does raise the atmosphere’s temperature. It can, though unlikely, transfer to its own translation; however this becomes trivial since the molecule will still quickly relax its new-found translation energy again via collision. It can also re-emit a photon instead, the probability of which increases as pressure decreases (among other factors). Conversely, a CO2 can sometimes collide with another molecule, pick up some kinetic energy and immediately relax via photon emission, transiting through vibration, and provide a net cooling of the atmosphere.

    Some of my calculations as a reference (and a check if anyone is so inclined):
    The energy of a 15-micron photon is 1.325×10^-20 joules; its momentum is 4.4×10^-29 kg·m/s [you’d think by now physicists would have come up with a name for momentum units; I suggest OOMPHS!]
    The kinetic energy of a CO2 molecule at 300K is 6.214×10^-21 joules (3742 joules for a mol); its momentum is 3.0×10^-23 kg·m/s; its velocity (or the average velocity for a mol) is 412.5 m/sec.
    One photon’s energy going into one molecule’s translation will raise its temperature from 300K to 630K; the mol’s avg. temp would increase to 300.038K.
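    Taking up Rod’s invitation to check, a short Python script with standard constants reproduces his energy, momentum and velocity figures (within rounding):

```python
import math

h = 6.626e-34            # Planck constant, J s
c = 2.998e8              # speed of light, m/s
k = 1.381e-23            # Boltzmann constant, J/K
n_a = 6.022e23           # Avogadro's number, 1/mol
lam = 15e-6              # wavelength, m
m_co2 = 44 * 1.66e-27    # mass of a CO2 molecule, kg

e_photon = h * c / lam           # ~1.325e-20 J per 15-micron photon
p_photon = h / lam               # ~4.4e-29 kg m/s
ke = 1.5 * k * 300               # ~6.21e-21 J translational KE per molecule at 300 K
v = math.sqrt(2 * ke / m_co2)    # ~412.5 m/s
p_mol = m_co2 * v                # ~3.0e-23 kg m/s
print(e_photon, p_photon, ke, ke * n_a, v, p_mol)
```

    The ratio e_photon / ke comes out at about 2.1, consistent with the “more than twice the kinetic energy” statement above.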

    The standard relaxation process has a complex formula of probabilities. Of interest: vibration-to-translation transfers usually require a large number of collisions before occurring. 10,000 to 100,000 is often quoted, but that is usually at high temperatures (500-1000K) where the molecule is more “comfortable” with its vibration. At atmospheric 200-300K (and normal pressure) it is less: 100-10,000 I would guess (haven’t done the cumbersome math), and more likely to make the transfer. The lower rotation energy takes 5-100 collisions to make a translation transfer and is highly likely — H2O to N2 or O2, e.g. Going the other direction — translation to vibration to emission — I would think (no math again) at low temperature and pressure, e.g. the stratosphere, somewhere around the 10,000-100,000 collision range is more the norm, and not as likely to occur.

    I haven’t sorted through the ramifications of all this yet. Maybe there is none. But it’s a small piece of the puzzle: a clearer understanding of the molecular process of energy transfers and temperature changes. I’m sure I have left something out, but I’ve already worn out my welcome. (Plus the other stuff like blackbody radiation….to do ;-) )

    Comment by Rod B — 8 Jun 2008 @ 12:41 AM

  470. Rod B., Your biggest problem is still conceptual–you still persist in equating temperature with kinetic energy of molecules–even of a single molecule. This will inevitably lead to contradictions and confusion. I’ve pointed out, for instance, that in a laser, the medium in which the population inversion occurs will have a negative temperature. The conceptual difficulties increase the more one departs from equilibrium or moves toward quantum mechanics.
    In addition, relaxation of the CO2 molecule has the same (long) lifetime regardless of how it is excited. And when it decays collisionally, the most likely outcome will be that the other molecule (probably N2) and the CO2 will both increase in kinetic energy.
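    A toy check of that last point: in the center-of-mass frame, if one 15-micron quantum is released entirely into translation, momentum conservation forces both partners to recoil, so both come away with kinetic energy (an illustrative sketch, not anyone’s actual calculation here):

```python
import math

# Center-of-mass toy model: a CO2 molecule sheds one 15-micron vibrational
# quantum in a collision with N2.  Total momentum stays zero, so both
# molecules must recoil, and both end up with positive kinetic energy.
u = 1.66053906660e-27             # atomic mass unit, kg
m_co2, m_n2 = 44.01 * u, 28.014 * u
E = 1.325e-20                     # one 15-micron quantum, J

# Solve m1*v1 + m2*v2 = 0  and  (1/2)m1*v1^2 + (1/2)m2*v2^2 = E
v_co2 = math.sqrt(2 * E * m_n2 / (m_co2 * (m_co2 + m_n2)))
v_n2 = -m_co2 * v_co2 / m_n2

ke_co2 = 0.5 * m_co2 * v_co2**2   # ~5.2e-21 J
ke_n2  = 0.5 * m_n2 * v_n2**2     # ~8.1e-21 J; together they sum to E
print(ke_co2, ke_n2)
```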

    You’re getting there. The interpretation of thermodynamics is inherently statistical–and there are a lot of subtleties that even most physicists don’t appreciate.

    Comment by Ray Ladbury — 8 Jun 2008 @ 6:49 AM

  471. Alastair,

    In the lower atmosphere there are a lot of CO2 molecules being blasted with thermal radiation from the surface of the earth

    “Blasted” definitely conveys the wrong impression — “slowly roasted” is more like it. Those molecules leisurely equipartition their received energy; LTE is never in danger. Same in a microwave oven.

    Comment by Martin Vermeer — 8 Jun 2008 @ 8:37 AM

  472. Rod,

    you’ve been working hard!

    > the mol’s avg. temp would increase to 300.038K

    Not reasonable… you just demonstrated that you know Avogadro’s number, and it’s huge :-)

    Otherwise numbers seem OK.

    > It can, though unlikely, transfer to its own translation;

    But what about momentum conservation? It would need help from another molecule.

    > (I still think linear momentum can be converted to angular
    > momentum as in a bullet fired into a stationary rotatable
    > absorbent mass off center

    What happens there is that the incoming bullet has angular momentum relative to the rotation axis (how AM is always defined!) by virtue of being off-center; this gets transferred to the absorbent. As does, separately, the linear momentum, producing the overall recoil.

    Otherwise seems OK. Remember with interaction probabilities that they are equal in opposite directions. This is why their precise values are immaterial under LTE.

    Comment by Martin Vermeer — 8 Jun 2008 @ 9:24 AM

  473. Thanks guys. Ray (470), we will always disagree on the temperature-molecule thing. I contend the translation kinetic energy is the only thing that manifests (real) temperature. The other “containers” have a construct called “characteristic temperature” which is useful, handy, easily understood, yet still a construct. And until you convince me that a single molecule can not have 1/2 mv^2 kinetic energy, I will contend it has (real) temperature, even if unmeasurable. Maybe we should just quietly maintain our respective beliefs ;-) .

    I agree that the relaxation of vibration energy through collision can (and does) distribute translation energy to both the CO2 and the N2 (e.g.) molecules.

    Martin (471 & 472): one minor clarification: the relaxation of CO2’s vibration energy is not nearly as leisurely as H2O’s rotation energy.

    Avogadro’s number IS the difference between mol stuff and molecular stuff. If one CO2 molecule in a mol of CO2 at 300K absorbs one 15 micron photon and relaxes, the mol will then be 300.038K. ???

    “…But what about momentum conservation? It would need help from another molecule…”

    I hadn’t thought of that, but it makes sense ala Ray’s collision above.

    The linear moving bullet has angular momentum by virtue of the rotating blob it is about to hit but presently has no cognizance of its existence? What if the aim is bad and it misses? Does it still have the AM? Does the AM change as the bullet flies toward the blob by virtue of changing tangential velocity and “radius”? I’m having a hard time getting wrapped around this. Are you sure??

    Comment by Rod B — 8 Jun 2008 @ 3:03 PM

  474. Rod:

    You use one operational definition of temperature, which is not wrong but gets cumbersome as you dig deeper into statistical mechanics. Those “characteristic temperatures” are real temps too, as theory will tell you once you study it. If it barks like a dog… Same with the “temperature” of one molecule. You are inventing your own terminology there, differing from accepted usage. Not quite illegal, but expect confusion down the road.

    Avogadro’s number IS the difference between mol stuff and molecular stuff. If one CO2 molecule in a mol of CO2 at 300K absorbs one 15 micron photon and relaxes, the mol will then be 300.038K. ???

    Show me the computation… pretty sure it’s wrong. More like
    300K + 330K/(6.022×10^23), which is extremely close to 300K.
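    The arithmetic behind that correction, as a sketch (assuming for simplicity the photon energy is shared among translational degrees of freedom only; with a (3/2)k per-molecule heat capacity the single-molecule jump comes out near 640 K rather than 330 K, but the per-mol conclusion is the same):

```python
# One 15-micron photon thermalized in a whole mol of gas: the temperature
# rise is a single-molecule-sized jump divided by Avogadro's number.
h, c = 6.62607015e-34, 2.99792458e8
kB, NA = 1.380649e-23, 6.02214076e23

E_photon = h * c / 15e-6               # ~1.325e-20 J
dT_single = E_photon / (1.5 * kB)      # if ONE molecule kept it all: ~640 K
dT_mol = E_photon / (1.5 * kB * NA)    # shared over a mol: ~1e-21 K

print(dT_single, dT_mol)               # 300 K + dT_mol is indistinguishable from 300 K
```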

    The linear moving bullet has angular momentum by virtue of the rotating blob it is about to hit but presently has no cognizance of its existence?

    Huh? It has AM relative to the axis of rotation of the target, which we choose (for convenience) as the origin of reference to describe AM on. You always have to choose such an origin and use it consistently, and then AM will be conserved.

    What if the aim is bad and it misses? Does it still have the AM?

    Yes… it just continues with its AM (relative to the chosen reference origin) intact.

    Does the AM change as the bullet flies toward the blob by virtue of changing tangential velocity and “radius”?

    No, those changes precisely cancel each other for linear uniform motion (as you may easily prove for yourself). This even remains true in a central (relative to the reference origin) force field… you get Kepler’s Law of Areas, a geometric way to state conservation of AM. [Moving from a molecule to a bullet to a planet... welcome to the universality of physical law.]

    I’m having a hard time getting wrapped around this. Are you sure??

    Oh yes. These are not matters of opinion :-)

    Comment by Martin Vermeer — 8 Jun 2008 @ 9:59 PM

  475. Rod B., If you insist on equating temperature and translational kinetic energy, you are going to run into some pretty interesting philosophical problems. For instance, your definition of temperature will become temperature dependent. We’ve already talked about the problems that arise wrt a population inversion–where adding energy to the system makes the temperature more negative. You also wind up having some issues as temperature rises and more and more degrees of freedom become unfrozen. The idea of a single molecule having a temperature is even more problematic.

    As to angular momentum, the definition is the cross product of the position and momentum vectors (r and p, or L = r x p). Any system in which the elements have momenta perpendicular to lines joining the elements will have angular momentum.

    Comment by Ray Ladbury — 9 Jun 2008 @ 7:23 AM

  476. Re 475.

    Ray,

    Rod’s equating of temperature with translational kinetic energy is its normal use. It is the temperature measured by a (mercury or gas) thermometer. Because gaseous molecules have other forms of internal energy (rotational, vibrational and electronic), they also have three other temperatures: Tr, Tv, and Te.

    The way that the rotational temperature Tr is determined is explained here.

    Because of the Equipartition Theorem we would expect all four temperatures to be the same. However, in the Earth’s atmosphere the temperature is far too low for any atoms to become electronically excited hence Te is zero. Tv is also less than T, but Tr will be equal to T.

    Since LTE only applies when Kirchhoff’s Law of blackbody radiation, I_lambda = k B_lambda(T), is obeyed, then LTE does not exist in the Earth’s atmosphere (see Eric Weisstein’s World of Physics).

    HTH (but it probably won’t :-)

    Cheers, Alastair.

    Comment by Alastair McDonald — 9 Jun 2008 @ 10:57 AM

  477. If I may veer back onto the topic for a moment, away from the replay:

    This is an interesting scenario, I wonder if the modelers here have anything to add to it? The “Cited by” links suggest it’s had a bit of attention, but most of them are paywalled.

    A change in the slow geochemical sink for CO2 can, and did, cause a rise in CO2, followed by extreme warming. That was the fastest event we know of before the present excursion we’re causing now. It was far slower than what’s happening now as we burn fossil carbon:

    http://dx.doi.org/10.1016/S0031-0182(03)00667-9
    Palaeogeography, Palaeoclimatology, Palaeoecology, Volume 203, Issues 3-4, 15 February 2004, Pages 207-237

    Causes and consequences of extreme Permo-Triassic warming to globally equable climate and relation to the Permo-Triassic extinction and recovery

    David L. Kidder Corresponding Author
    and Thomas R. Worsley

    ——excerpt from abstract, breaks added for readability——-

    Permian waning of the low-latitude Alleghenian/Variscan/Hercynian orogenesis led to a long collisional orogeny gap that cut down the availability of chemically weatherable fresh silicate rock resulting in a high-CO2 atmosphere and global warming. The correspondingly reduced delivery of nutrients to the biosphere caused further increases in CO2 and warming.

    Melting of polar ice curtailed sinking of O2- and nutrient-rich cold brines while pole-to-equator thermal gradients weakened. Wind shear and associated wind-driven upwelling lessened, further diminishing productivity and carbon burial. As the Earth warmed, dry climates expanded to mid-latitudes, causing latitudinal expansion of the Ferrel circulation cell at the expense of the polar cell.

    Increased coastal evaporation generated O2- and nutrient-deficient warm saline bottom water (WSBW) and delivered it to a weakly circulating deep ocean. Warm, deep currents delivered ever more heat to high latitudes until polar sinking of cold water was replaced by upwelling WSBW. With the loss of polar sinking, the ocean was rapidly filled with WSBW that became increasingly anoxic and finally euxinic by the end of the Permian. Rapid incursion of WSBW could have produced [approx.] 20 m of thermal expansion of the oceans, generating the well-documented marine transgression that flooded embayments in dry, hot Pangaean mid-latitudes. The flooding further increased WSBW production and anoxia, and brought that anoxic water onto the shelves.

    Release of CO2 from the Siberian traps and methane from clathrates below the warming ocean bottom sharply enhanced the already strong greenhouse. Increasingly frequent and powerful cyclonic storms mined upwelling high-latitude heat and released it to the atmosphere. That heat, trapped by overlying clouds of its own making, suggests complete breakdown of the dry polar cell.

    Resulting rapid and intense polar warming caused or contributed to extinction of the remaining latest Permian coal forests that could not migrate any farther poleward because of light limitations. Loss of water stored by the forests led to aquifer drainage, adding another [approximately] 5 m to the transgression. Non-peat-forming vegetation survived at the newly moist poles.

    Climate feedback from the coal-forest extinction further intensified warmth, contributing to delayed biotic recovery that generally did not begin until mid-Triassic, but appears to have resumed first at high latitudes late in the Early Triassic.

    Current quantitative models fail to generate high-latitude warmth and so do not produce the chain of events we outline in this paper…..

    Comment by Hank Roberts — 9 Jun 2008 @ 11:24 AM

  478. Martin (474), I think we have a semantic difference (I thought we had resolved this already, but…). When I say “real temperature” I actually mean temperature as we sense it: hot water, cold ice, measurable (theoretically) with a thermometer. Rotational and vibration energy has a “characteristic temperature”, commonly shortened to just “temperature”, which is not “real” by the above definition but is real in terms of a usable concept or construct for analyzing things and making accurate calculations. Take a mole of H2O vapor at 300K as measured by my thermometer. Now allow me to magically raise every molecule’s rotation energy to its first quantum level without any relaxation. The temperature of the mole is now…. tada…300K. There is energy, oft-called heat, sometimes called kinetic, in the rotation of the molecules, and one can define a “temperature” for that energy level that makes it easier to think about and calculate with, and is therefore ‘real’ in that sense, but it ain’t “warm”!

    Microwaves (the boxy type, not the wave type — don’t want to raise the latter ’cause it makes Ray mad :-) ) impart energy to the rotation levels of water molecules. But the food does not increase its “temperature” until the H2O relaxes and transfers rotation energy to translation/kinetic energy via molecular collisions.

    Ray, as more degrees of freedom free up as the temperature rises, more and more energy gets added that goes into those degrees of freedom and does not increase the “real temperature” as defined above.

    I second Alastair (476) with teeny clarification. The Tv and Tr are not only different numbers (can be), they are different physical entities, of a sort.

    Comment by Rod B — 9 Jun 2008 @ 12:09 PM

  479. Alastair,

    No it doesn’t :-) Your quoted abstract clearly refers to a non-LTE situation. Under LTE, all these temps are equal (and T_e is not zero but undefined/meaningless for low temps).

    As for Eric Weisstein, I noticed too that he uses a different definition of LTE that does include the radiation field. So do some astrophysicists. Babel :-(

    If you google for “local thermodynamic equilibrium” you’ll find many references agreeing with the definition I gave (which I find more fruitful).

    Comment by Martin Vermeer — 9 Jun 2008 @ 12:48 PM

  480. Alastair says, “Since LTE only applies when Kirchhoff’s Law of blackbody radiation, I_lambda = k B_lambda(T), is obeyed, then LTE does not exist in the Earth’s atmosphere”

    Kirchhoff’s law is an idealization–it never applies perfectly in any real physical system. However, given the long lifetime of the CO2 vibrational state, energy transfer from the radiation to kinetic energy is pretty efficient–so it’s a pretty good approximation. What matters is the physical temperature–the derivative of energy wrt entropy. Stick with that and you won’t go wrong. The reason I object to equating temperature and translational kinetic energy is because it leads to absurd ideas like assigning temperature to individual molecules.

    Comment by Ray Ladbury — 9 Jun 2008 @ 1:04 PM

  481. Martin,

    I was only using the abstract to show that talking about vibrational and rotational temperatures was legitimate. I don’t think the abstract had any other relevance to the current discussion, but then I haven’t even read it :-)

    LTE is an astrophysical term. The concept was first proposed by Karl Schwarzschild in 1906 for the interior of the sun, which does emit as a blackbody. It was his brother-in-law Robert Emden, best known for his book on stars “Gaskugeln”, who proposed that it could be applied to the earth’s atmosphere. AFAIK, it was the astrophysicist E. A. Milne who first named it, when describing gas nebulae. The great astrophysicist Chandrasekhar seems to have assumed it also applied to terrestrial atmospheres, and his prestige has decided the matter.

    So not only is it a term invented and used by astrophysicists, it seems to have been adopted by the climate modelers who were in awe of them.

    As you suggested I have googled yet again for “local thermodynamic equilibrium” with the same result as before. Most results are for non-LTE. The rest, apart from Wikipedia, are for its astrophysical use.

    One is interesting which is a paper by H Roos entitled “On the Problem of Defining Local Thermodynamic Equilibrium”. I will try to read it yet again, but perhaps in the mean time you could provide your definition of LTE?

    Cheers, Alastair.

    Comment by Alastair McDonald — 10 Jun 2008 @ 5:50 AM

  482. Alastair writes:

    Since LTE only applies when Kirchhoff’s Law of blackbody radiation, I_lambda = k B_lambda(T), is obeyed, then LTE does not exist in the Earth’s atmosphere (see Eric Weisstein’s World of Physics).

    Every authority in atmospheric studies says local thermal equilibrium applies to the troposphere and even the stratosphere, and that Kirchhoff’s law applies as well. Every computer model of the atmosphere uses it and gets the right answers.

    Readers should note that Alastair has his own unique view of radiation physics.

    Comment by Barton Paul Levenson — 10 Jun 2008 @ 8:44 AM

  483. Alastair:

    > but perhaps in the mean time you could provide your definition of LTE?

    The Wikipedia one. Energy in molecules equipartitioned over degrees of freedom, Maxwell-Boltzmann statistics, Saha law, etc. The radiation field is irrelevant to the definition. Every point in the distribution of matter has a single well defined temperature.

    Rod:

    > Now allow me to magically raise every molecule’s rotation
    > energy to its first quantum level without any relaxation.

    …meaning non-LTE. “Magic” is the operative word. What you’re doing is stepping off the cliff and pondering gravity for a moment before starting the journey down, like the cartoon figure. Doesn’t happen in the real Earth atmosphere (below 100 km or so). Or in the Solar atmosphere below the corona.

    For LTE, all those different temperatures are the same. That’s one definition of LTE I have seen (in the Roos paper?): a state in which every point (or small environment) has a single well defined number called temperature, which it will share with a thermometer if you stick one in at that point.

    The “thermometer” can be pretty much anything, including natural processes that are already there: observing doppler broadening of spectral lines will give you the kinetic temperature, observing the vibration spectrum the Tv, and so on. And under LTE they are all the same.
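    As a sketch of one such “thermometer” (illustrative Python; CO2’s 667 cm^-1 bending mode assumed, with Boltzmann level populations under LTE):

```python
import math

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

# CO2 bending-mode quantum: 667 cm^-1, i.e. the 15-micron band.
E = h * c * 667e2                      # J; E/kB ~ 960 K "characteristic temperature"

T_true = 300.0
ratio = math.exp(-E / (kB * T_true))   # Boltzmann population ratio n1/n0 under LTE

# "Reading the thermometer": invert the observed line-intensity ratio
T_read = E / (kB * math.log(1.0 / ratio))
print(ratio, T_read)                   # recovers 300 K when LTE holds
```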

    The distinction you make between these different temperatures is completely artificial. They are all just as real. The origin of defining temperature in terms of the kinetic temperature is historical: the gas thermometer was the first operational way to produce an accurately linear temperature scale. Now we have many more. And as I earlier tried to point out, many solids don’t even have a well defined kinetic temperature.

    Comment by Martin Vermeer — 10 Jun 2008 @ 10:23 AM

  484. Ray, Martin, I’m wearing out my welcome with this momentum stuff but it’s racking my brain. While I continue my search, a couple of Qs: 1) If the bullet has angular momentum of r X P referenced to the axis of the spherical absorbent blob, it would seem the angular momentum of the bullet must vary with time as r and the angle between r and P vary. But there is no noticeable force/torque on the bullet, which means momentum is not being conserved. You can define the system as the bullet, the bullet and blob, or the bullet, blob, and gun and get the same results. How is this ostensible non-conservation of angular momentum explained?

    2) Why not this: The bullet’s PB (=mv) goes from zero to something from the force of the gunpowder ala F = d(mv)/dt. (Gun gets same negative P by virtue of equal and opposite reaction.) When struck the blob applies an inertial force on the bullet, changing its linear momentum back to zero. The equal and opposite reactive force applies a torque of FR X r on the blob, which accelerates the blob in a rotation, giving it angular momentum L
    where L = d(PB)/dt X r.

    Maybe I’m dense, but to quote Red Skelton, “…it just don’t look right to me.”

    Comment by Rod B — 10 Jun 2008 @ 10:29 AM

  485. Re #477 Where Hank quotes the abstact to a paper which concludes:

    Current quantitative models fail to generate high-latitude warmth and so do not produce the chain of events we outline in this paper.

    I find it hard to reconcile that with Barton’s assertion in #482:

    Every computer model of the atmosphere uses it [LTE] and gets the right answers.

    Just because my views are unique does not make them wrong, and just because all computer models use LTE does not make them right. It is not as if all computer modelers have come to that conclusion independently. Those who are now involved in that code believe they need only copy what was written before.

    In the abstract for Santer et al. 2005, of which Gavin was a co-author, they conclude:

    “These results suggest that either different physical mechanisms control amplification processes on monthly and decadal timescales, and models fail to capture such behavior, or (more plausibly) that residual errors in several observational datasets used here affect their representation of long-term trends.”

    Despite the passage of three years and the publication of yet more scientific papers trying to find “residual errors in” all the relevant “observational datasets” , “The Tropical Lapse Rate Quandary” still exists.

    It seems that the climate modelers are just like any other computer programmers, and will never admit that there is a bug in their program :-(

    Returning to Hank’s problem, it should be possible to get the models to give the polar amplification needed to reproduce the paleoclimate data if the absurd theory based on LTE, that greenhouse warming operates by affecting the lapse rate high in the troposphere, is abandoned. The greenhouse effect is due to the absorption of infrared radiation from the surface of the earth, and it is the air there which warms. This raises the latitude and altitude of the snow line. It is the change in albedo from loss of snow and ice cover which drives the polar amplification. That would explain the Philipona Dilemma; see: A Busy Week for Water Vapour

    Cheers,Alastair.

    [Response: Alastair, please no more nonsense about LTE. Your notions about LTE are flatly incompatible with direct observations of the outgoing infrared spectrum of Earth and Mars, which confirm Kirchhoff's law to a high degree of accuracy. That should convince you, even if the basic physics isn't good enough (see e.g. Chamberlain and Hunten's textbook on atmospheric physics, which has a particularly good explanation of why calculations based on Einstein transition coefficients give the same answer as straight thermodynamic equilibrium, even when the full photon spectrum is not blackbody). As for polar amplification in paleoclimate data, please note that the revision of tropical temperatures eliminates a lot of the problem. The PETM warmth from the new polar core could put some of the problem back, but even if that relatively new data holds up, there are lots more plausible mechanisms than abandoning LTE -- things to do with cloud feedbacks, changes in aerosol concentration, hurricane-induced ocean mixing, and so forth. --raypierre]

    Comment by Alastair McDonald — 10 Jun 2008 @ 10:56 AM

  486. Rod–the cross product depends only on the component of the r that is perpendicular to p–in magnitude it is |r||p|sin(theta), where theta is the angle between r and p. L remains constant throughout the trajectory of the bullet and, indeed after collision. And you’ve almost got the collision part right–indeed, the bullet and blob are accelerated by the same force acting over the same distance, but the angular momentum of the system is conserved. Now think about the special case where the bullet passes through the center of mass of the system–there is no perpendicular component to r, so angular momentum is zero throughout–and although the blob will accelerate, it will not rotate wrt its center of mass.

    Hey, at least this is science–it’s better than talking about corporate ethics.

    Comment by Ray Ladbury — 10 Jun 2008 @ 11:39 AM

  487. Re: #484 (Rod B)

    There’s no conflict in the definition of angular momentum. As the bullet moves, the radius changes according to r = r(closest) / sin(angle), where “r(closest)” is the radius at closest approach and “angle” is the angle between the position vector and the direction in which the bullet is travelling. The vector cross product in “r x p” gives |r| times |p| times sin(angle). So the magnitude of the result is r(closest) / sin(angle) times |p| times sin(angle) = r(closest) times |p|, which is constant through time.

    Comment by tamino — 10 Jun 2008 @ 11:44 AM

  488. Rod:

    1) If the
    bullet has angular momentum of r X P referenced to the axis of the
    spherical absorbent blob, it would seem the angular momentum of the
    bullet must vary with time as r and the angle between r and P vary.

    No, Rod. It remains strictly constant. The variations in r and the sine of the angle cancel precisely.

    Look at it this way: the size of the bullet’s angular momentum is

    |r x P| = m |r x v| = m |r| |v| sin (angle r,v)

    perpendicular to the (r,v) plane. In a fixed unit of time dt this is

    |r x P| dt = m dt |v| |r| sin (angle r,v) = m |dr| |r| sin (angle r,dr) = 2 m * area,

    where “area” is the surface area swept out by the line connecting origin and bullet over the amount of time dt, and dr = v dt the travelled distance in that time. We have

    area = 1/2 |dr| |r| sin(angle r,dr).

    Now look at this surface area. Make the distance dr travelled by the bullet the base of the triangle, and the shortest distance between the origin and the bullet path, i.e.,

    |r| sin(angle r,dr),

    its height. [I'm trying to draw an ASCII picture here...] This is a constant. Also |dr| is a constant. It follows that “area”, and thus |r x P| dt, and thus |r x P|, is a constant. This is Kepler’s second law and conservation of angular momentum. At least, of its size. Its direction is also constant as it is the direction perpendicular to both r and v, as you will agree from geometric intuition.

    Phew.
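    The cancellation is easy to verify numerically; a sketch with arbitrary illustrative numbers, checking that |r x P| for straight-line motion is constant and equal to m |v| times the closest-approach distance:

```python
# A bullet on a straight path, referenced to an origin (the blob's axis).
# |r x p| stays constant and equals m * |v| * (closest-approach distance).
m = 0.01                  # bullet mass, kg (illustrative)
vx, vy = 400.0, 0.0       # uniform straight-line motion along x, m/s
r_closest = 0.05          # closest approach to the origin, m

def ang_mom(t):
    # position at time t: path runs parallel to the x-axis at height r_closest
    rx, ry = -10.0 + vx * t, r_closest
    px, py = m * vx, m * vy
    return rx * py - ry * px        # z-component of r x p

L_values = [ang_mom(t) for t in (0.0, 0.01, 0.02, 0.05)]
print(L_values)                     # all identical: -(m * 400 * 0.05)
```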

    L = d(P[B])/dt X r.

    If you want L to be angular momentum, it should be

    L = P(B) x r and thus dL/dt = d(P(B))/dt x r = m dv(B)/dt x r.

    If you call the torque N = F(R) x r, you have

    dL/dt = N.

    (And linear and angular momentum are still separately conserved…)

    Hope this gets you a little further. I miss a blackboard.

    Comment by Martin Vermeer — 10 Jun 2008 @ 1:53 PM

  489. Alastair (#485): You say, “It seems that the climate modelers are just like any other computer programmers, and will never admit that there is a bug in their program.” First of all, I hardly think that 3 years is enough time to resolve all of the problems that exist with the data…and progress has been made on that front. As for admitting a bug, I think it is probably pretty challenging to come up with an intelligent way to change the climate models so that they do a better job of agreeing with the data on multidecadal timescales while not messing up the agreement on the shorter timescales (where we know the data is actually more trustworthy!) I’m not saying that no reasonable mechanism can be found that would do this but I haven’t seen any proposals along this line.

    And, by the way, as someone who does computational modeling for a living (not in the climate science field), I would say that contrary to your assertion about never admitting a bug, I do admit bugs…and when the experiments and models disagree, I try to give my best assessment of which is more likely to be incorrect. Furthermore, I can tell you that, if anything, previous history has shown that I have erred more often on the side of having too little confidence rather than too much confidence in my modeling. And, in the case of the tropical tropospheric amplification, I would say that there are some very good reasons to believe that most of the problem is likely with the data and not the models.

    Comment by Joel Shore — 11 Jun 2008 @ 10:58 AM

  490. Ray, tamino, Martin: Well, that’s a helluva note! I was responding to Ray with a curt calculation of the L bullet at two different locations as it speeds toward the blob. Problem: L came out exactly the same at all positions. I got to get my calculator fixed (a wonderful old HP with reverse Polish stuff, btw…).

    This blows my whole fundamental concept of angular momentum. It’s still on topic (I think) though pretty shaky, so I’ll be brief. All of this tells me that angular momentum as a numerical quantity of something, associated with a mass (or virtual mass), is virtual, pseudo. For the same individual unique bullet at the same instant I can calculate many (an infinite number!) angular momenta: it will have angular momentum relative to any fixed mass with an axis. One L for my blob; a different L for the bottle on a stick 10 meters distant from my blob; a different L for the globe sitting in the library; a different L for the moon, etc. Each of those Ls, themselves, will be conserved. Is this accurate?

    [Actually this might help my long-held secret doubts about angular momentum: "a mass moving in a straight line will want to continue moving along that line" is an intuitively easy concept; "a mass moving around in a circle will want to continue moving around the circle" ain't near that easy!]

    Comment by Rod B — 11 Jun 2008 @ 3:44 PM

  491. ps, still one tiny missing piece. I didn’t follow Martin’s stuff on how linear momentum is conserved. Is it that since linear momentum converts to a torque (F = d(mv)/dt) when stopped by the blob, but not into angular momentum… though the angular momentum of the bullet gets applied to mass of both the bullet and the blob at impact…

    Comment by Rod B — 11 Jun 2008 @ 3:55 PM

  492. Martin (483), I’m not sure if you agree or disagree. I claim that only the 1/2mv^2 translation energy has “real” temperature, and that the term temperature applied to a molecule’s rotation or vibration energy, or E-M radiation (oops!!), is a construct that’s very useful and understandable but it’s not the same essence as translation temperature. These might also be “real” in the sense that “there they are! right there on the paper of my calculations!”, but not “real” as I’m using it — the old thermometer-measured stuff. (btw, how would you stick your thermometer probe into the rotation of a molecule, or the vibration of the atomic bonds?) You said

    “The distinction you make between these different temperatures is completely artificial. They are all just as real. The origin of defining temperature in terms of the kinetic temperature is historical: the gas thermometer was the first operational way to produce an accurately linear temperature scale. Now we have many more….”

    which I think you meant as disagreement, but what you said in essence is that we used to have kinetic energy = temperature and then we went and came up with more “temperatures” — pretty much describes inventing a construct to me.

    In any event, you had realism problems with my entire mole of CO2 having excited rotation and no relaxation — despite relaxation being a quantum mechanical probability function. How about if the mole is completely excited for just an instant (and I say at 300.000000K). You pick the timing of molecules relaxing (let’s say all through collision for simplicity, but you can divvy it up between emission and collision), tell me how many are relaxed at any instant and I will calculate the “real” temperature of the mole. The energy that still resides in rotation (or that has been emitted) WILL NOT enter into the calculation of the “real” temperature.

    I thought we agreed once, ala the Wikipedia reference. I suspect you might just be hung up with the semantics (though maybe I’m wrong here). We call “A” temperature and we call “B” temperature, but A and B might be totally different. “Blind” can be something that covers a window; “blind” can be lack of eyesight. Would you insist that lack of eyesight is exactly the same as the thing on the window?

    Comment by Rod B — 11 Jun 2008 @ 4:37 PM

  493. Rod #492

    “(btw, how would you stick your thermometer probe into the rotation of a molecule, or the vibration of the atomic bonds?)”

    Simple. A “thermometer” is anything outputting a reading allowing you to derive a temperature value (cf. Einstein: time is what you read off a clock). And you don’t stick a thermometer into a single molecule but always into a macroscopic amount of molecules (as Ray argued too).

    The thermometer probe here is the molecules themselves, emitting radiation from transitions between vibration/rotation levels. Every intensity ratio between two spectral lines is one “thermometer”.

    So it doesn’t have to be a liquid-in-glass thingy. The principle of thermometry is, as you say “sticking it in” so temperature can be shared between sample and probe… but that can mean many things, like sample and probe being the same molecules. And the “reading” can travel even over interstellar distances.

    About LTE (in my, not Alastair’s meaning), it is really valid almost everywhere, except in extreme situations like the Solar corona. So there is a single well-defined temperature at every point you care to measure it, shared by linear molecular motion, vibrations, rotations, electronic excitations (and ionization where appropriate).

    The historical role of kinetic temperature is due to metrological considerations (the art of measuring). The gas thermometer based on the ideal gas law was the first thermometer known on theoretical grounds to measure correctly and linearly. You can then calibrate other thermometers (mercury…) to measure on the same scale, creating a traceability chain. Astrophysical measurements of temperature are similarly theoretically based and not part of the same traceability chain. If that’s what you mean by “artificial”, then yes, I suppose you have a point. But in science, most “thermometers” do not contain mercury, and you wouldn’t recognize them if you saw them… as with a mass spectrograph measuring oxygen isotope ratios in an ice core sample ;-)
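
    Martin’s “every intensity ratio between two spectral lines is one thermometer” can be sketched numerically: in LTE the level populations, and hence line intensities, follow a Boltzmann distribution, which can be inverted for T. The level spacing, degeneracy ratio, and measured ratio below are made-up illustration values, not data for any real molecule:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def line_ratio_temperature(ratio, dE, g2_over_g1=1.0):
    """Invert the Boltzmann factor: ratio = (g2/g1) * exp(-dE / (k T))."""
    return dE / (K_B * math.log(g2_over_g1 / ratio))

# Illustrative: two levels 0.01 eV apart, measured intensity ratio 0.68
dE = 0.01 * 1.602176634e-19           # level spacing in joules
T = line_ratio_temperature(0.68, dE)  # roughly 300 K for these made-up numbers
print(f"inferred temperature: {T:.0f} K")
```

    The same inversion works whether the emitting gas is in a lab cell or light-years away, which is Martin’s point about readings travelling over interstellar distances.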

    “Blind” is a homonym. Not a valid metaphor.

    Sorry if this is a bit rambling; preparing to travel.

    Comment by Martin Vermeer — 12 Jun 2008 @ 1:35 AM

  494. “How about if the mole is completely excited for just an instant (and I say at 300.000000K). You pick the timing of molecules relaxing (let’s say all through collision for simplicity, but you can divvy it up between emission and collision), tell me how many are relaxed at any instant and I will calculate the ‘real’ temperature of the mole. The energy that still resides in rotation (or that has been emitted) WILL NOT enter into the calculation of the ‘real’ temperature.”

    What you are calculating and calling the “real” temperature is apparently the kinetic temperature. OK. Highly nonstandard, but not as such wrong — for gases. It will get you into trouble down the line in your study, though.

    What happens is that the moment the mole is excited, it moves from LTE into (serious!) non-LTE. In my temperature language, at that moment it ceases to have a well defined temperature. It has measurable kinetic, rotation and vibration temperatures, all different; and if you wish to call anything artificial here, I would say it is these three temperatures, including the kinetic temperature.

    Then, when all molecules relax, the sample is restored to LTE and has a real temperature again.

    That’s my take on it.

    Comment by Martin Vermeer — 12 Jun 2008 @ 2:12 AM

  495. Raypierre,

    Thank you for your response and the recommendation of Chamberlain and Hunten’s textbook. I have ordered it. Hopefully it will answer at least some of the questions I have.

    You responded “Alastair, please no more nonsense about LTE.”

    Should that not have been “Alastair, please no more about that nonsense LTE.”? :-)

    It is quite clear from the replies to my recent posts that LTE can be defined in various ways, and that defenders of the paradigm that is used to explain greenhouse-gas-induced climate change have no clear idea of what is meant by it.

    So you will hear no more from me at present. It is obvious that I am not going to have any success in explaining that the mechanism of the greenhouse theory of climate change, which originated in the nineteenth century with Arrhenius, is wrong. Where that mechanism does give the correct results, they are used to ‘prove’ that it is correct, and when it gives the wrong results they are brushed aside or explained away. That is the way old invalid paradigms are defended.

    I realise that since you too are a believer in the old paradigm it is impossible to convince you that it is wrong. But if I do not reply to your disparaging comment, then it will be construed by others that I have conceded defeat. This is not correct, so I will briefly outline my arguments before resting my case.

    Kirchhoff’s Law in its simplest form is just a restatement of the Law of the Conservation of Energy. In other words, in thermal equilibrium the radiation absorbed must equal the radiation emitted, since they are the only energy flows. In the troposphere not only do we lack thermal equilibrium, we also have an additional flow of energy from latent heat. Therefore, a balance in absorption and emission is impossible there.

    It is only in the upper atmosphere above the stratopause that radiation is the sole energy transfer mechanism, so there Kirchhoff’s Law must be obeyed. Since this is where the radiation to space originates, it is unsurprising that the spectra of outgoing longwave radiation measured by satellites agree with the calculations of the current paradigm. This does not mean that similar calculations in the troposphere are correct!

    In the troposphere the lapse rate does not match the calculations, as is shown by the MSU and radiosonde data. This is because the heating effect there is by the absorption of radiation by the surface air, not by surface heating from Kirchhoffian back emission of greenhouse gases as proposed by the current paradigm.

    The point to realise is that the match with the OLR does not prove that the current model is correct. The lack of match with the tropical lapse rate, and many other difficulties with the models, does prove that they are wrong.

    This error in the models has important consequences. They have been unable to predict the rapid melting of the Arctic sea ice. They are unable to explain rapid climate change, and so are unable to predict the rapid warming that will happen when the Arctic sea ice disappears.

    (That event could even happen this year. The Arctic sea ice extent is already less than it was at this time last year. See: NSIDC Arctic Sea Ice Extent. That year the ice hit a record minimum extent in September. It now seems extremely likely that there will be another record low in three months time.)

    So you see I really ought to be acquiring more evidence that my ideas are correct, rather than arguing with the upholders of the conventional wisdom here. They will never be convinced :-(

    Cheers, Alastair.

    Comment by Alastair McDonald — 12 Jun 2008 @ 7:30 AM

  496. Rod, Of course if you switch origins, you wind up with a different angular momentum. Likewise, if you change the velocity of your origin, you get different momenta and kinetic energies, too. The momentum, energy and angular momentum are not properties of the bullet, but of the bullet wrt the coordinate system in which they are calculated. This in no way decreases their utility, since I know how to transform the momentum/energy/angular momentum for any coordinate system I need them in. Moreover, while you have different values in different reference frames, there are clearly some frames that make the physics easier.
    As to the issue with temperature–you need to be careful. What do you mean by “excited to a temperature of 300 million K”. If the gas is to be at equilibrium at that temperature, then equipartition will apply. OTOH, if you mean that somehow Maxwell’s demon has gotten busy with his tennis racket and accelerated the linear momentum of each molecule to the right distribution of velocities for a gas at 300 million K, then you are not in equilibrium, and there will be an exchange of energy flowing into the modes that were not excited (rotation, vibration, electrical excitation, ionization…). And eventually, you will come to equilibrium at some lower temperature. Until it is in equilibrium, the system will not even really have a well defined temperature. We really only know how to treat systems that are either in equilibrium or near equilibrium. There are a few systems where the energetics are sufficiently simple that we can treat them far from equilibrium (e.g. the population inversion in a lasing medium), but the thermodynamics yields some really odd results, such as negative temperature. Trust me on this–there’s a reason why physicists do things the way they do.
    The thing is that your definition of temperature purely in terms of kinetic degrees of freedom doesn’t buy you anything. For instance, what happens when you think of the temperature of a solid? Here, the motions of atoms about their lattice positions are restricted, so most of the energy is in vibrational modes. Do you contend that a thermometer brought into contact wouldn’t register a temperature?

    Comment by Ray Ladbury — 12 Jun 2008 @ 7:46 AM

  497. Martin, (sigh!), maybe we will just always disagree; but one more shot:
    To sum it up, I was claiming “temperature” is a homonym, not using “blind” as a metaphor. My “real temperature”…wait, I’m going to change its name since the other “temperatures”, as I said, are also “real” (homonyms) in some sense. I’m calling it “sensory temperature” — what you measure with the standard thermometer; or if you touch it you get a sensation commonly known as “temperature” by the public at large though maybe not (all) physicists. The other characteristics of certain molecular phenomena and radiation are not the same at all; physicists just call them “temperature”, a homonym, because it’s easier, handy, shorter, aids the understanding of the phenomena, and is understood by physicists within the context.

    A far out but hopefully insightful (as opposed to inciteful) example. I have a 2000kg vehicle going 45MPH (20.115m/sec). If I want a descriptive parameter that would help me understand what’s going on physically, I could first calculate its “real” kinetic energy (as a unit body), which is 404,613.2 joules. Now I’m going to define a parameter analogous to temperature (the sensory kind) calculated as if it were a relaxed mole of gas: kinetic energy = 3/2kT. T = 1.955×10^28 degrees. I’m going to call this number the car’s “temperature” because the calculation is somewhat similar to other calculations of temperature, and also, well…, the units are degrees Kelvin. (btw, when the car is stopped its “T” is zero.) If this is helpful to me as, say, a crew chief of a race car because it gives me some measure that combines speed and mass, I can ask my timer, “what’s the temperature?” because it’s understood by the crew within its context (and easier and shorter than asking, “what’s the equivalent of what the temperature of a mole of gas at STP would be assuming the car has three degrees of freedom (though hopefully using only one at any instant!) at its current kinetic energy?” By the time I get the question out, the car has completed another lap!) The timer says “about 2×10^28 degrees.”, which in our vernacular is probably shortened to “2 – 28.” I can use this information. I know what it means. I also know what it doesn’t mean. It’s not THE (sensory) temperature, which is probably around 80 degrees Celsius or so (and which my driver is very happy about), even though I call it “temperature”.
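
    The car arithmetic can be reproduced in a couple of lines (same 2000 kg and 45 mph figures as above; with the standard Boltzmann constant the equivalent temperature comes out near 2×10^28 K):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

mass = 2000.0            # kg
v = 20.115               # m/s (45 mph)
ke = 0.5 * mass * v**2   # "real" kinetic energy of the car as a unit body, ~4.046e5 J

# Treat the whole car as one "molecule" of a relaxed gas: KE = (3/2) k T
T_equiv = 2 * ke / (3 * K_B)
print(f"KE = {ke:.1f} J, equivalent T = {T_equiv:.3e} K")
```

    Which illustrates the point: the number is formally a temperature (it has the right units and the right formula) but has nothing to do with what a thermometer in the cockpit would read.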

    You can measure a molecule’s emission frequency and associate it with a temperature related to Planck’s function, and then refer to the energy that generated the photon as having “temperature”. Everybody in the circle knows what’s being talked about. But, 1) the emission is NOT ala blackbody/Planck, and 2) does NOT have (sensory) temperature. If you could somehow stick a regular thermometer, not just in the molecule, but in the area within the molecule just where the atoms are vibrating back and forth, it would read zero. You can stick in a spectroscope (which you can even call a “temperature probe”) and in fact measure the frequency of the emission and relate that to an equivalent blackbody temperature, and call it temperature, and get a lot of understanding and figure stuff out about molecular energy levels, emission, equipartition, etc. But it is not really temperature, which by itself should be a no-op and have no effect whatsoever on your endeavors — until one starts to think of it as sensory temperature because it’s called “temperature”.

    I haven’t thought the following through in detail, but I think LTE and equipartition are two separate and different qualities. When a molecule gets excited either vibrationally or rotationally the molecule is, for a while, out of whack equipartition-wise. But it is not off ala LTE; in fact I don’t think LTE applies at all to an individual molecule (and especially intramolecularly). The temperature of individual molecules in a mole — call it kinetic energy to assuage Ray — varies all over the place, yet the mole can be in LTE with its next door neighbor. Last, to repeat (sorry), since H2O relaxation is in part dependent on quantum mechanical probability, it is certainly mathematically possible to have all molecules in a mole excited and make my thought experiment reasonable.

    Lest we forget ala RC, all this means is that when infrared radiation is absorbed by CO2 or H2O into their vibration or rotation modes, the (sensory) temperature of the atmosphere does NOT change — until the absorbed energy is transferred to another molecule’s translation energy through a collision.

    The real problem we both might be facing is the law of science that says, “the longer the explanation, the less likely it’s true.” ;-)

    Comment by Rod B — 12 Jun 2008 @ 1:35 PM

  498. Ray (496), if you define a fixed frame or coordinate system then linear momentum can be explicitly and inherently associated with the bullet, pure and simple. The bullet has a fixed mass and a fixed velocity in the fixed coordinate system and has a definite momentum equal to mv. I understand the case of different or moving frames, and have no problem with it. But, if my blob, bottle, globe and the moon are in the same fixed coordinate system, my bullet, it seems, has different angular momentums which, unlike the linear momentum, are not explicit or inherent but vary depending on what it is related to. I don’t think I disagree or misunderstand this; it’s just a new foreign concept that I have to get used to. Are we on the same page here?

    re the temperature stuff, you missed the (obscure) decimal point. My base case was a mole at a normal 300.(point)00000000K (or so) to emphasize the precision. In any case I think your first point agrees with my point that internal modes do not exhibit (sensory — see my reply to Martin) temperature, which is why the (sensory) temperature drops as the molecules start to spread the massive abnormal energy around into vibration and rotation, in part.

    Don’t get me wrong. When I say physicists invented a construct and called things “temperature” that weren’t really temperature, I’m not being critical in the least. It makes eminent sense and I’m all in favor of it and support it for the reasons mentioned. They’re just not real (sensory) temperature. But I do not care if the term is used; I’m not insisting, nay even suggesting, nay would even want, that translation kinetic energy have exclusive rights to the word.

    It seems that the vibration of molecules in a crystal lattice solid is not at all the same as vibrating atoms within a molecule, (other than there’s a whole lotta vibratin’ goin’ on).

    Please let me know if you wish to revise and amend given my misleading “300” degrees.

    Comment by Rod B — 12 Jun 2008 @ 2:14 PM

  499. Re #495: Alastair says “So you see I really ought to be acquiring more evidence that my ideas are correct, rather than arguing with the upholders of the conventional wisdom here.”

    Yes…You really ought to. And, you also ought to remember that probably 99.9% of the people who believe they are overturning an “old invalid paradigm” have simply come to a wrong conclusion. So, you need to constantly be asking yourself, “Am I really so sure that I am in that special 0.1%?” Or…As Clint Eastwood would put it, “So, you gotta ask yourself kid, do you feel lucky?” ;)

    Comment by Joel Shore — 12 Jun 2008 @ 3:49 PM

  500. Rod, I think you must be making an error–the magnitude of the angular momentum is |r||p|sin(theta), where theta is the angle between them. This will not change during the bullet’s flight, and the angular momentum of the system is conserved even after the collision. You can’t keep the coordinate system fixed for the linear momentum and allow it to change for angular.
    Rod, until the modes start spreading energy around to achieve equipartition, the temperature of the system is not well defined because it is not in equilibrium. The temperature will seem to change even though energy is not changing in the system–you’re starting in a low-entropy state and temperature = partial of E wrt S. No, the vibration is exactly the same whether it’s in a solid, a liquid or a gas–just the allowed modes are different. There is absolutely no advantage in the way you are thinking about this. When you say “sensory temperature,” what you are actually feeling is the flow of energy–which depends on temperature difference.
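
    Ray’s earlier formula |r||p|sin(theta) can be checked numerically: for a bullet on a straight-line path past a fixed pivot, |r| and theta both change, but |r|sin(theta) (the perpendicular miss distance) does not, so L is constant throughout the flight. Mass, speed, and geometry below are made-up illustration values, not numbers from the thread:

```python
import numpy as np

m = 0.01                               # 10 g bullet (illustrative)
velocity = np.array([300.0, 0.0])      # flying along +x at 300 m/s
pivot = np.array([0.0, 0.0])           # axis through the origin
start = np.array([-100.0, 2.0])        # starts 100 m away, 2 m off-axis

for t in [0.0, 0.1, 0.2, 0.3]:
    r = start + velocity * t - pivot   # position relative to the pivot
    p = m * velocity                   # linear momentum: constant
    L = r[0] * p[1] - r[1] * p[0]      # z-component of r x p
    print(f"t={t:.1f} s  L={L:.3f} kg m^2/s")
# L is the same at every instant: -(2 m)(0.01 kg)(300 m/s) = -6 kg m^2/s
```

    Both |r| and the angle change along the flight; only their product |r|sin(theta) stays fixed, which is why L is conserved before, during, and (for the system) after the collision.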

    Comment by Ray Ladbury — 12 Jun 2008 @ 5:04 PM

  501. > angular momentum … varies depending on
    > what it is related to.

    Rod, can you take a bucket of water, tie a rope onto it, start swinging it so it’s circling around you, then relate it to something else so the water falls out?

    Comment by Hank Roberts — 12 Jun 2008 @ 8:41 PM

  502. Rod,
    Temperature predates atomic interpretations of thermodynamics by hundreds of years. It was known to be a valid concept for gases (where degrees of freedom are largely kinetic), liquids (where things are still largely kinetic, but motion is more restricted) and solids (where motion is mostly vibrational). And the temperature was measured in the same way: insert a “thermometer” and allow it to come into thermal equilibrium with the material you want the temperature for, and read how much the material in the thermometer has expanded. Alternatively, you can measure how much the resistance of a thermistor changes. The thing is that it doesn’t matter whether the energy is kinetic, rotational, vibrational, electronic, etc. What matters is that the thermometer and the medium will exchange energy until they are at the same temperature–that is, until they come into equilibrium. Note that the concept of equilibrium is critical here. Two bodies in contact and in equilibrium will have the same temperature. Likewise, if a system is not in equilibrium–e.g. if equipartition does not apply–then you’ll get ambiguous answers for its temperature.
    I have emphasized that temperature is defined as the partial derivative of energy wrt entropy, holding all other variables (e.g. volume, numbers of particles, etc) constant. This definition actually predates the kinetic interpretation of temperature by several decades. In fact, the only reason the kinetic interpretation was introduced was because the physics of billiard balls is easier to interpret than that of particles with internal degrees of freedom. Where you are running into trouble is in trying to apply a kinetic interpretation of temperature to a single particle/molecule. This quite simply is not valid. Statistical mechanics only works in the limit of large numbers of particles–and that includes any definition of temperature. This is one reason why nanoparticles are such a hot topic–how small does the particle have to get before you start seeing significant departures from normal thermodynamic behavior? The other frontier in stat mech is nonequilibrium stat mech. This is very difficult. Fortunately, planetary atmospheres are sufficiently close to equilibrium that the normal rules of stat mech apply.
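
    The definition T = ∂E/∂S can be sanity-checked for the simplest textbook case. For a monatomic ideal gas at fixed N and V, S(E) = (3/2)Nk ln(E) + const, so differentiating should recover the familiar T = 2E/(3Nk). A finite-difference sketch (the energy value is chosen arbitrarily to land near 300 K):

```python
import math

K_B = 1.380649e-23
N = 6.02214076e23                 # one mole of monatomic ideal gas

def S(E):
    # entropy up to an E-independent constant (fixed N and V)
    return 1.5 * N * K_B * math.log(E)

E = 3742.0                        # J; chosen so T comes out near 300 K
dE = E * 1e-6
T_def = dE / (S(E + dE) - S(E))   # T = dE/dS at constant V, N
T_kin = 2 * E / (3 * N * K_B)     # kinetic-theory result for comparison
print(T_def, T_kin)               # both come out near 300 K
```

    The two numbers agree, which is the point: for an equilibrium system the entropy definition and the kinetic interpretation give the same temperature; they only come apart when equilibrium fails.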

    Comment by Ray Ladbury — 13 Jun 2008 @ 8:52 AM

  503. Ray, Hank re angular momentum: First, Hank, we’re discussing a mass with linear velocity and momentum and not physically tied to a disc-blob or its axis but with angular momentum about that axis. Getting angular momentum from a rotating (about an axis) mass relative to another different axis, ala the swinging bucket of water, is beyond my comprehension, so I need to go one step at a time.

    I think we concluded and agreed 1) the bullet has angular momentum ( L ) about its target (a fixed rotatable disc). 2) That L is constant through time and distance from where the bullet was shot (100m from the disc blob say) all the way to the disc. 3) When the bullet strikes the disc (off center) and its applied torque spins the disc, the combined L_subT of the spinning disc with its embedded bullet will be the same as the bullet’s initial L. 4) if the bullet is mis-aimed and misses the target disc, it still has L relative to the disc. And, 5) the mis-aimed L is slightly different from the original L by virtue of a slightly different angle between P and R. The latter two might be in contention….

    I contend: The bullet does not have to actually strike (and end up with a physical connection with) its rotatable target to have L about it. It can miss it big. If it can miss the original target and still have L about it, it ought to be able to also miss another different fixed rotatable target significantly distant from the first target (just to make it easier to draw), though in the same fixed unmoving coordinate system. The bullet then would have L_2 about the second target (from the same single shot, computed at the same instants and positions at which it has L_1 about the first target), wouldn’t it? This L about the second “target” is numerically different from the first. This tells me that a single bullet with specific, explicit and inherent linear momentum can have two different numerical angular momentums depending on the relative location of two different targets. What is wrong with this? Can’t have L about target #2? Then how can it have L about target #1, at least if the aim is a bit off?

    Finally, if it has two Ls, it’s a no-op jump to 3, 4, 20, 87, 1,000,000, etc.
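
    The two-target scenario can be put in numbers: one bullet, one linear momentum, but a different (and individually conserved) angular momentum about each choice of pivot. The masses and positions below are illustrative only:

```python
import numpy as np

m = 0.01                               # kg (illustrative)
p = m * np.array([300.0, 0.0])         # linear momentum: fixed, pivot-independent

def L_about(pivot, position):
    """z-component of r x p about a chosen pivot."""
    r = position - pivot
    return r[0] * p[1] - r[1] * p[0]

pos = np.array([-50.0, 2.0])           # bullet position at some instant
print(L_about(np.array([0.0, 0.0]), pos))   # pivot at target 1: -6.0
print(L_about(np.array([0.0, 10.0]), pos))  # pivot at target 2: 24.0
```

    Same bullet, same instant, same linear momentum, two different L values; and one could add as many pivots (3, 4, 20, 87…) as one likes, each with its own conserved L.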

    Comment by Rod B — 13 Jun 2008 @ 4:00 PM

  504. Ray (502), I don’t think we are going to resolve this, but let me ask about just one point for now. You said “… Likewise, if a system is not in equilibrium–e.g. if equipartition does not apply–then you’ll get ambiguous answers for its temperature….”

    I thought equipartition was an intramolecular (one) property. If equipartition is off kilter, why would one get an ambiguous temperature, as (you claim — though as you know, I disagree) temperature for a single molecule is a no-op and doesn’t exist? Or did I misread your statement?

    Comment by Rod B — 13 Jun 2008 @ 11:04 PM

  505. ps to my #503: Angular momentum characteristics are actually more obvious than the complicated stuff I got bogged down in. My bad. In a forceless constant frame/coordinate system containing bodies and a fixed observer, a body’s numerical linear momentum never varies no matter what. A body’s numerical (and vector) angular momentum can vary infinitely depending on what axis of rotation the observer chooses — my linearly moving bullet relative to any axis of all those other bodies in the example, or, say, a single cylinder rotating about 1) its centerline, or 2) an axis through its cross-section, or 3) an axis anywhere.

    Different thought: can I assume the angular momentum of a photon, as small as it is, has to appear only in the rotation of a molecule after absorption; and likewise has to be taken from rotation when a photon is emitted? Does this somehow (or noticeably) alter the rotational energy pickup of a photon’s energy?

    Comment by Rod B — 15 Jun 2008 @ 12:48 PM

  506. Re #505

    Rod,

    In classical physics, linear momentum (as opposed to angular) is m * v, i.e. the product of mass and velocity. As Einstein showed, photons have no mass but they do have momentum, which is equal to h / wavelength, where h is Planck’s constant.

    Thus, when photons collide with molecules the law of conservation of momentum applies, and the molecules do recoil.
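
    To put a number on that recoil: for a photon in CO2’s 15 µm band (a representative wavelength, chosen here for illustration) absorbed by a single CO2 molecule, p = h/λ gives a recoil speed well below a millimetre per second, negligible next to thermal speeds of hundreds of m/s:

```python
H = 6.62607015e-34       # Planck constant, J s
AMU = 1.66053906660e-27  # atomic mass unit, kg

wavelength = 15e-6       # m, representative of CO2's bending-mode band
m_co2 = 44 * AMU         # molecular mass of CO2

p_photon = H / wavelength       # ~4.4e-29 kg m/s
v_recoil = p_photon / m_co2     # recoil speed from absorbing one photon, ~0.6 mm/s
print(f"p = {p_photon:.2e} kg m/s, recoil v = {v_recoil * 1000:.2f} mm/s")
```

    So momentum conservation holds, but the translational kick from a single IR photon is tiny; essentially all of the photon’s effect goes into the internal (vibration/rotation) energy.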

    If two mono-atomic argon atoms collide, then the result is similar to that of billiard balls. However, if two tri-atomic carbon dioxide molecules collide, their shape makes the result much more complicated. For instance, if they meet head on oxygen to oxygen, then the result will be a stretching oscillation, but if they meet head on oxygen atom to middle carbon atom, the result will be a bending oscillation. If it is a glancing blow, the result will be a rotation, but in general it will be a combination of all three.

    However, when CO2 is only 300 parts per million of the atmosphere, CO2–CO2 collisions are unlikely. CO2 molecules are more likely to collide with diatomic molecules such as O2 and N2. The geometry of the collisions with these molecules will also affect the resulting excitation of the CO2 molecule. It is all very complicated. It is all a matter of quantum mechanics :-(

    HTH,

    Cheers, Alastair.

    Comment by Alastair McDonald — 15 Jun 2008 @ 4:11 PM

  507. Alastair, thanks for waking me up. I once knew photons have linear but not angular momentum; must be the senility setting in :-P . This means the photon momentum manifests in translation movement of the molecule, while photon energy appears in vibration or rotation molecular energy, except for a teeny amount that has to be shared/added to translation to cover the increased momentum. True?

    Thanks also for the geometry of collisions explanation.

    Comment by Rod B — 16 Jun 2008 @ 11:41 AM

  508. Re #507

    Hi Rod,

    The photon has very little momentum and so has very little direct effect on the translation (kinetic temperature) of the molecule. What it does have lots of is electromagnetic energy, and this goes into the dipole vibrations of the greenhouse gas molecule. It is only molecular vibrations and rotations which produce an oscillating electric dipole field that can absorb photons.

    For instance CO2 is a linear molecule, and rotation about that line as an axis does not produce an oscillating field, but because H2O is bent, its rotation is IR active. Rotation of the CO2 molecule about the central carbon atom also produces a field, so CO2 has two IR-active rotation modes, actually one – doubly degenerate.

    I better not add more as I am almost out of my depth :-)

    Cheers, Alastair.

    Comment by Alastair McDonald — 16 Jun 2008 @ 1:31 PM

  509. Typically, rotational bands are in the microwave, vibrational bands in the IR.

    Comment by sidd — 16 Jun 2008 @ 3:58 PM

  510. Rod, remember that the photon has momentum h/lambda and energy hc/lambda. c is a pretty big number. The photon will also have angular momentum, but since it will not interact much beyond its wavelength (small perpendicular distance), the angular momentum isn’t particularly relevant.
    As to equipartition–it does not apply to single molecules, but really to large assemblies of molecules, and if you want to get technical to ensembles (many repetitions) of such assemblies. I think somewhere in my collection of technical books, I have a history of statistical mechanics if you are interested. I’m visiting relatives for my niece’s graduation right now, so I don’t have the title, but it might provide you with a little bit of background on why physicists define things as they do. Energy (or kinetic energy) and temperature are not equivalent concepts. Energy is extensive. Temperature is intensive. Temperature tells us about the flow of energy. It is only when you are dealing with equilibrium systems that you can come up with unambiguous relations. Expecting those to hold for all systems is asking for trouble.
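
    Ray’s relations E = hc/λ and p = h/λ (so E = pc) are easy to compare with thermal energy directly; for the same illustrative 15 µm photon used above, the energy is a few times kT at 300 K, even though the momentum is negligible on a molecular scale:

```python
H = 6.62607015e-34     # Planck constant, J s
C = 2.99792458e8       # speed of light, m/s
K_B = 1.380649e-23     # Boltzmann constant, J/K

lam = 15e-6            # m, illustrative thermal-IR wavelength
E = H * C / lam        # photon energy
p = H / lam            # photon momentum; note E = p * c
print(f"E = {E:.3e} J, E / kT(300 K) = {E / (K_B * 300):.1f}")
```

    The factor of c between momentum and energy is exactly why a photon can carry a thermally significant quantum of energy while its momentum kick is insignificant.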

    Comment by Ray Ladbury — 16 Jun 2008 @ 4:13 PM

  511. So the massless photon which might or might not even exist has both angular and linear momentum. Who would’ve thunk it! ;-)

    Comment by Rod B — 16 Jun 2008 @ 10:08 PM

  512. So the free market global economy, which defeated world communism, has hit both Peak Oil and the climate change catastrophe. Who would’ve thunk it! :-(

    Comment by Alastair McDonald — 18 Jun 2008 @ 11:06 AM

  513. Free market global economy is efficient, not necessarily far sighted!

    Comment by Rod B — 18 Jun 2008 @ 1:31 PM

  514. So we are about to discover!

    Comment by Alastair McDonald — 18 Jun 2008 @ 3:20 PM

  515. Ray, since neither one will budge for technical/physics rationale, let me try philosophical: Why can a massless and possibly imaginary entity like a photon easily be ascribed physical characteristics like momentum, angular momentum, velocity, energy, “characteristic” temperature (ala Planck), and polarization, but individual molecules (real entities with real mass and structure, you know), you seem to argue, cannot?

    Ray, Alastair, et al: A follow-up curious/interest question, re a photon’s angular momentum (L): What axis is its angular momentum about? Would it be like the equivalent of a single rotating sphere with L about its own axis of spin? Second, is it correct to say that an absorbed photon’s L is conserved and has to manifest itself only in molecular rotation? And if about the/a molecule’s central axis of rotation, is that consistent with the photon’s axis? And, since incredibly small, would the added rotational energy come anywhere near the allowable rotational energy levels (though it would seem to fall within the narrow band(s) of the levels..?? — is there a band around the zeroth level?)? Or is the photon’s L more of a virtual, imaginary, or characteristic quality (nonetheless with effects) and not actually “real” (like electron spin??)? Or does anybody really care?

    Comment by Rod B — 20 Jun 2008 @ 10:25 AM

  516. Rod, relativistically, a photon has momentum because it has energy. Look at the relationship of momentum and energy relativistically–E=hc/lambda, p=h/lambda. And, no, a single photon does not have a temperature.
    Also, you are getting wrapped around the axle wrt coordinate transformation. It doesn’t matter where you put the axis. The physics (what happens when two bodies interact) won’t change. In your example of the photon and the molecule, the photon won’t interact unless it passes within a wavelength of the molecule–so the change in angular momentum will be within the quantum mechanical uncertainty of the excited state. Again, which axis you take is not the interesting part–the physics is invariant.

    Comment by Ray Ladbury — 20 Jun 2008 @ 11:25 AM

  517. Ray says, “…so the change in angular momentum will be within the quantum mechanical uncertainty of the excited state.”

    That makes sense. So does your comment re L’s axis. Thanks.

    Comment by Rod B — 20 Jun 2008 @ 12:54 PM

  518. To the main theme, read: “Tipping elements in the Earth’s climate system” (Timothy M. Lenton, Hermann Held, Elmar Kriegler, Jim W. Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber): “Society may be lulled into a false sense of security by smooth projections of global change. Our synthesis of present knowledge suggests that a variety of tipping elements could reach their critical point within this century under anthropogenic climate change. The greatest threats are tipping the Arctic sea-ice and the Greenland ice sheet, and at least five other elements could surprise us by exhibiting a nearby tipping point. This knowledge should influence climate policy, but a full assessment of policy relevance would require that, for each potential tipping element, we answer the following questions: Mitigation: Can we stay clear of ρcrit? Adaptation: Can F̂ be tolerated?”

    Seems to me that society “may be lulled into a false sense of security” not so much by “smooth projections” (which frankly most powerholders don’t even bother to understand beyond the point of politically correct gobbledegook, insofar as they think that’ll help them catch some extra voters) as by a very modern human tendency to automatically ignore what doesn’t fit into the megalomaniac expectations stemming from the dogmatic and scolastic exercise of economic “science” by the real priesthood of our times ex media cathedra.

    Re #1: Certainly mankind will never be able to do the job of
    “Laplace’s demon”:

    “In the history of science, Laplace’s demon is a hypothetical “demon” envisioned in 1814 by Pierre-Simon Laplace such that if it knew the precise location and momentum of every atom in the universe then it could use Newton’s laws to reveal the entire course of cosmic events, past and future.”

    http://en.wikipedia.org/wiki/Laplace's_demon

    But that doesn’t mean we can’t say anything about climate change past and present. There are lots of possible positions between total determinism and total uncertainty/scepticism.

    As for “the free market global economy” (#512): around 50 pct. of the global economy is internal trade/transactions within fewer than a hundred transnational corporations. Not very much “free market” there, except in the media mythologies, but certainly lots of oligopoly/monopoly capitalism and centralistic planning (without democracy, as in the Soviet Union).

    Comment by Karsten J — 26 Jun 2008 @ 2:21 PM

  519. > megalomaniac expectations stemming from the
    > dogmatic and scolastic exercise of economic
    > “science” by the real priesthood of our times
    > ex media cathedra.

    This deserves some sort of writing award, I think.

    Comment by Hank Roberts — 26 Jun 2008 @ 2:59 PM

  520. What Hank Roberts said in #519.

    Comment by David B. Benson — 26 Jun 2008 @ 4:07 PM

  521. Hank,

    I think he meant “conventional wisdom.” If so he is right :-(

    Cheers, Alastair.

    Comment by Alastair McDonald — 26 Jun 2008 @ 6:50 PM

  522. Dear Hank and David,

    He must have been practising for the Bulwer-Lytton Contest.

    Comment by Tenney Naumer — 26 Jun 2008 @ 6:52 PM

  523. Not exactly on topic, but relating to much of the above discussion:

    I have found your site very informative despite the technical nature of much of the discussion, even for someone like me with no science since high school (I’m not counting math, economics and statistics).

    I would like to commend you for your efforts in dealing with the distractions of the so-called “skeptics”.

    While you will clearly never convince them (there comes a point when the distinction between ignorance and wilful ignorance can no longer be overlooked), your patient response to their faux-naif questions and repetitive red-herring-dragging is not to no avail.

    It has been less than a month since I started trying to educate myself about climate change.

    In this short space of time I have reached the point where even I can spot almost all the inconsistencies and fudges in posts from the gusbobbs of this world, without having to wait for your expert replies. I am sure there are many other interested non-experts in my position.

    While dealing with such distractions has no doubt interfered with this site’s function as a clearing house for genuine ideas exchanged between experts in your field, it has been beneficial in highlighting the paucity of many of the wrong-headed arguments being bandied around.

    If nothing else, the quality of one point of view can often be judged by the quality of the arguments ranged against it.

    Thank you.

    Comment by Garry S-J — 21 Jul 2008 @ 4:03 AM

Sorry, the comment form is closed at this time.
