
The tropical lapse rate quandary

Filed under: — gavin @ 11 August 2005

Guest commentary by Steve Sherwood

There are four independent instrumental records of sufficient length and potential accuracy to tell us about 20th-century climate change. The two longest ones are of temperature near the Earth’s surface: a vast network of weather stations over land areas, and ship data from the oceans. While land surface observations go back hundreds of years in a few places, data of sufficient coverage for estimating global temperature have been available only since the end of the 19th century. These have shown about a 0.7 C warming over land during the last century, with somewhat less increase indicated over oceans. The land records contain artifacts due to things like urbanization or tree growth around station locations, buildings or air conditioners being installed near stations, etc., but laborious data screening, correction procedures, and a posteriori tests have convinced nearly all researchers that the reported land warming trend must be largely correct. Qualitative indicators like sea ice coverage, spring thaw dates, and melting permafrost provide strong additional evidence that trends have been positive at middle and high northern latitudes, while glacier retreat suggests warming aloft at lower latitudes.

The other two climate records, so-called “upper air” records, measure temperatures in Earth’s troposphere and stratosphere. The troposphere—that part of the atmosphere that is involved in weather, about 85% by mass—is expected to warm at roughly the same rate as the surface. In the tropics, simple thermodynamics (as covered in many undergraduate meteorology courses) dictates that it should actually warm faster, up to about 1.8 times faster by the time you get to 12 km or so; at higher latitudes this ratio is affected by other factors and decreases, but does not fall very far below 1. These theoretical expectations are echoed by all numerical climate models regardless of whether the surface temperature changes as part of a natural fluctuation, increased solar heating, or increased opacity of greenhouse gases.

It turns out that the upper-air records have not shown the warming that should accompany the reported increases at the surface. Both the Microwave Sounding Unit (MSU) satellite (analyzed by the University of Alabama in Huntsville by John Christy and Roy Spencer) and weather balloon data (trends reported by a number of researchers, notably Jim Angell at NOAA) have failed to show significant warming since the satellite record began in late 1978, even though the surface record has been rising at its fastest pace (~0.15 C/decade) since instrumental records began. On the other hand both records have shown dramatic cooling in the stratosphere, where cooling is indeed expected due to increasing greenhouse gases and decreasing ozone (which heats the stratosphere due to its absorption of solar ultraviolet radiation). The sondes in particular have shown a lot more cooling than the satellites, almost certainly too much, leading one to wonder whether their tropospheric trends are also too low.

The non-warming troposphere has been a thorn in the side of climate detection and attribution efforts to date. Some have used it to question the surface record (though that argument has won few adherents within the climate community), while others have used it to deny an anthropogenic role in surface warming (an illogical argument since the atmosphere should follow no matter what causes the surface to warm). The most favored explanation has been that the “lapse rate,” or decrease in temperature as you go up in the atmosphere, has actually been increasing. This would contradict all of our climate models and would spell trouble for our understanding of the atmosphere, especially in the tropics.

This assumes that the observed trends are all real, which is reasonable when two independent measurements agree. But both upper-air observing systems are poorly suited in many respects for extracting small, long-term changes. These problems are sufficiently serious that NOAA’s satellite service (NESDIS) adjusts satellite data every week to match radiosondes, in effect relying upon radiosondes as a reference instrument. This incidentally means that the NCEP/NCAR climate reanalysis products are ultimately calibrated to radiosonde temperatures. Recent developments concerning the MSU satellite data are discussed in a companion piece.

What can the Radiosonde data tell us?

Radiosondes themselves have significant problems and were also not designed for detection of small climate changes. These problems have been well documented anecdotally, and have been dutifully acknowledged by those who have published trends in radiosonde temperatures. The cautions urged by these researchers in interpreting the results have not always been taken on board by others, however.

Few if any sites have used exactly the same technology for the entire length of their record, and large artifacts have been identified in association with changes from one manufacturer to another or design upgrades by the same manufacturer. Artifacts have even been caused by changing software and bug fixes, balloon technology, and tether lengths. Alas, many changes over time have not been recorded, and consistent corrections have proven elusive even for recorded changes. While all commonly used radiosondes have nominal temperature accuracy of 0.1 or 0.2 K, these accuracies are verified only in highly idealized laboratory conditions. Much larger errors are known to be possible in the real world. The most egregious example is when the temperature sensor becomes coated with ice in a rain cloud, in which case upper tropospheric temperatures can be as much as 20 C too warm. This particular scenario is fairly easy to spot and such soundings can be removed, but one can see the potential problems if many, less obvious errors are present or if the sensor had only a little bit of ice on it! Another potential problem is pressure readings; if these are off, the reported temperature will have been measured at the wrong level.

The Sherwood et al. study in Science Express concerns one particular type of long-recognized radiosonde error, that caused by the sun shining on the “thermistor” (basically, a cheap thermometer easily read by an electric circuit). This problem has been documented, notably by Luers and Eskridge (1995, 1998), but correcting for it in the past has proven difficult and previously its magnitude was poorly known except under controlled conditions. The most popular radiosonde manufacturer worldwide today is the Vaisala corporation, whose strategy for coping with solar heating is to concede that it will happen and try to correct for it: the thermistor is mounted on a “boom” that sticks into the air flow where the sun can shine on it, but the heating error is estimated from the measured ascent rate and solar zenith angle and subtracted from the reported temperature. The magnitude of this correction can be several degrees, has varied with changing designs, and may not always have been properly applied in the past, especially if time of day, station location, or instrument version were incorrectly coded. The US radiosonde, until recently made exclusively by the VIZ corporation and now under contract to two separate manufacturers, has followed the strategy of trying to insulate the thermistor from solar effects by ducting it inside a white plastic and cardboard housing. However, this strategy is unlikely to completely prevent solar heating. The first US radiosonde designs, which had less effective shielding and lacked the white coating subsequently applied to the sensor to limit its solar absorption, showed obvious signs of solar heating error. Many other radiosonde designs exist; larger countries historically designed and built their own sondes, but some countries have abandoned their national sondes and started buying from (usually) Vaisala.

The Sherwood et al. study is the first to try to quantify the solar-heating error over time. We recognized that the true difference between daytime and nighttime temperatures through the troposphere and lower stratosphere should, on average, be rather small, and moreover should have changed very little over the last few decades. We also recognized that this difference could be observed quite accurately by examining consecutive daytime and nighttime observations. Nighttime observations at many stations are much rarer than daytime ones, so this strategy means throwing out most of the daytime data; this is one reason why previous, less focused investigations did not detect this particular problem. This data-treatment technique revealed that, as you go back farther in time, the daytime observations become progressively warmer compared to nighttime observations. This is a clear indication that, back in the 1960s and 1970s especially, the sun shining on the instruments was making readings too high. This problem had disappeared by the late 1990s.
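The logic of the consecutive day/night comparison can be sketched with synthetic data (illustrative numbers only; the bias magnitudes and noise levels here are assumptions, not values from the study). Because the true temperature is shared by a day/night pair, it cancels in the difference, leaving the daytime solar-heating bias plus noise:

```python
import random

random.seed(0)

def day_night_difference(solar_bias, n_pairs=500, sensor_noise=0.5):
    """Mean day-minus-night temperature difference (K) from consecutive soundings.
    The shared true temperature cancels in each difference, isolating the bias."""
    diffs = []
    for _ in range(n_pairs):
        truth = random.gauss(0.0, 2.0)                  # true anomaly, common to both
        day = truth + solar_bias + random.gauss(0.0, sensor_noise)
        night = truth + random.gauss(0.0, sensor_noise)
        diffs.append(day - night)
    return sum(diffs) / len(diffs)

early_bias = day_night_difference(solar_bias=0.5)   # hypothetical 1970s-era instrument
late_bias = day_night_difference(solar_bias=0.1)    # hypothetical 1990s-era instrument
print(round(early_bias, 2), round(late_bias, 2))
```

With a few hundred pairs, the recovered day-night differences sit close to the assumed biases, and the early-era estimate comes out clearly larger, which is the signature the study looked for as it moved back in time.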

The key thing here is not simply the existence of this problem, but the change over time. It turns out that in the tropics the artificial boost in the early readings was just about equal, on average, to the increase in surface temperature over the 1979-97 period (the trend in solar heating bias was -0.16 K/decade averaged from 850-300 hPa). In other words, this effect by itself could explain why reported temperatures did not increase–the increases in actual air temperature were nearly balanced by decreases in the (uncorrected) heating of the instrument by the sun. This effect was large in the tropics because of heavy reliance on daytime data in previous climatologies, and because the daytime biases there changed the most. Correcting for this one effect does not bring trends into perfect agreement with those predicted based on the surface—they still fall slightly short in the tropics during the last two decades, and are too strong in the southern hemisphere extratropics when measured over the last four decades—but these remaining discrepancies are well within what would be expected based on other errors and the poor spatial sampling of the radiosonde network.
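The near-cancellation described above amounts to simple arithmetic (a back-of-envelope check, treating the reported tropical tropospheric trend as roughly zero):

```python
# Illustrative numbers from the text (K/decade, tropics, 1979-97):
surface_trend = 0.15        # observed surface warming rate
reported_trend = 0.0        # roughly flat reported tropospheric trend
bias_trend = -0.16          # trend in the solar-heating bias, 850-300 hPa average

# The reported trend is (true trend + bias trend), so removing the bias gives:
true_trend = reported_trend - bias_trend
print(true_trend)  # 0.16, close to the surface trend
```

In other words, a declining spurious warmth of about 0.16 K/decade is just the size needed to hide a real warming of about 0.15 K/decade.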

An important caveat is that, when instrument designs change, this can affect not only the daytime heating of the thermistor but can also affect the accuracy at night. Thus, correcting for this effect alone does not guarantee an accurate atmospheric trend. The other errors are, unfortunately, not as easy to quantify as the solar heating error. It is not clear what direction they may have pushed trends. Thus we are still in the dark as to the exact amount of warming that has occurred in the atmosphere. The one thing we do know is that we should not hang our hat on the trends in the reported observations until this, and all other problems, are sorted out.

Conclusion

The most likely resolution of the “lapse-rate conundrum,” in my view anyway, is that both upper-air records gave the wrong result. The instrument problems uncovered by these papers indicate that there is no longer any compelling reason to conclude that anything strange has happened to lapse rates. From the point of view of the scientific method, the data do not contradict or demand rejection of the hypotheses embodied by models that predict moist-adiabatic lapse rates, so these hypotheses still stand on the basis of their documented successes elsewhere. Further work with the data may lead us to more confident trends, and who knows, they might again disagree to some extent with what models predict and send us back to the “drawing board.” But not at the present time.

References:
J. K. Luers, R. E. Eskridge, J. Appl. Meteor. 34, 1241 (1995).
J. K. Luers, R. E. Eskridge, J. Climate 11, 1002 (1998).


64 Responses to “The tropical lapse rate quandary”

  1. 51
    Lynn Vincentnathan says:

    RE #50, a 40 year forecast may be all right for scientists, but for laypersons living in the world and concerned about it (even into the far distant future, as it relates to today’s activities), we want much more & are willing to sacrifice the sacred cow of .05 significance. This makes for a creative tension between scientists (who are doing their best) and concerned laypersons (who want more information — and quite frankly re GW, we have more than enough info now on which to base our actions).

    There should not, however, be this additional tension between scientists and selfish-motivated contrarians, who care little for life on earth beyond their own lives (&, it seems, don’t even want to reap savings & other benefits from efficiency and conservation). The .05 significance level is caution enough to protect scientific reputations and industrial profits. Protecting life (even into the distant future), I think, is a higher, more noble goal & should come before a life of extreme inefficiencies, excesses, and thoughtlessness that entails harm to others and oneself.

    From what I’ve seen, the “climateaudit.org” site is a contrarian site (not merely a science-skeptic site) which falls short of bona fide science. So I would suggest you stick to this webpage & avoid that one.

  2. 52
    Bob Moore says:

    Sydney, Australia, has just had its warmest winter on record, with average maximum temperature 1.6 C above the long-term average and average mean temperature about 1 C above. The last 5 years are all among the top 6 warmest (the remaining spot was shared with 1988). What are the chances of that being purely by chance or by some natural cycle (as opposed to GW)? Is there any way to put a figure on these? There may be some heat island effect, but the figures were similar at a number of quite different sites across a big city.
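    One rough way to put a figure on it, under the generous null assumption that years are independent and identically distributed with no trend (real years are autocorrelated, so this understates the true null probability), is a simple combinatorial calculation; the record length of 100 years below is assumed for illustration:

```python
from math import comb

def p_recent_all_in_top(n_record, n_recent=5, top=6):
    """Probability, under an i.i.d. no-trend null, that n_recent specific years
    (e.g. the most recent ones) all rank among the `top` warmest
    of an n_record-year record."""
    return comb(top, n_recent) / comb(n_record, n_recent)

print(p_recent_all_in_top(100))
```

    For a 100-year record this gives about 8 x 10^-8, i.e. vanishingly unlikely by chance alone; serial correlation and urban effects would raise the true probability, so treat it only as an order-of-magnitude guide.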

    Also it has been very dry and the question is, is the drought a result of the rising temperatures or the temperatures the result (partially at least) of the drought?

  3. 53
    Murray duffin says:

    Re # 46 I am not a modeller so can only comment on the available data. Again, assuming terrestrial uptake closely matches land use emissions, and knowing that as anthropogenic emissions rise the atmospheric fraction declines, the only place for the missing C to go is ocean uptake. The present ocean uptake is then 2x the rate presented by the IPCC in their illustration of C sources and sinks, and the ocean sink is increasing as noted by Hans Erren, regardless of what the ocean “budget” is assumed to be. I suspect there are major unknowns relative to ocean uptake of C. Murray

  4. 54
    Murray duffin says:

    Further on # 46
    I have tried for a day now to find consistently conclusive agreement on isotopic reconstructions, and failed, so I doubt if a model can be made to “fit all available data”. Also reviewed my past collection on ice cores. Law Dome takes about 68 years to close. Siple takes about 105 years to close. Vostok takes from 4000 years in warm periods to 6000 years in cold periods to close. Therefore any data is a 70-year moving average for Law, 100+ yr for Siple and 4000-6000 yr for Vostok. Siple shows CO2 concentrations of 390 ppm at about 140k yrs bp, at about Termination II. This is a very inconvenient number, and rather than being accepted as real is explained as evidence that CO2 has been imported from somewhere. Vostok shows 290 ppm for the same period, but with 4000 years of smearing a peak to 390 ppm lasting a few hundred years would disappear. Quite apart from probable losses of pressurized CO2 during coring, storage and preparation for analysis, this data shows us that ice cores are not a completely reliable proxy. Numerous studies show us that tree rings are not even a useful proxy for historic temperature, so the whole issue of building models that “fit all the data” is rather fraught. The data we do have clearly show that the models used for C modelling in the TAR are already very wrong. Have you tried comparing model projections from ca 1999/2000 of surface temperature for the early part of the 21st century with the published record of the last 7 years? The only ones I can find published on the web provide another “snicker test” Murray

  5. 55
    Tom Fiddaman says:

    Re #53 & #54

    “The data we do have clearly show that the models used for C modelling in the TAR are already very wrong” I keep seeing this kind of statement with nothing to back it up. The logic seems to run, “IPCC carbon cycle models predict sink saturation, but a linear trend fit to Mauna Loa data doesn’t show it, therefore IPCC models are gravely wrong.” There are problems with this logic.

    First, no one actually demonstrates that the IPCC models don’t fit the Mauna Loa data; in fact graphs on the Daly site show no divergence in model behavior through the present. I don’t know where definitive TAR Bern output is, but archived output from 1990 here fits the data pretty well. Certainly low-order box models with sink saturation and time constants >100yrs fit the Mauna Loa data extremely well. That sink saturation is not evident in Mauna Loa data is not surprising, given that it doesn’t become important in the nonlinear models until later in the century.

    Second, the observations of the airborne fraction are conditioned on “assuming terrestrial uptake closely matches land use emissions,” which is not necessarily a good assumption given the large error bounds (>50%) on land use emissions data.

    Third, “knowing that as anthropogenic emissions rise the atmospheric fraction declines” is a statistically weak observation given the short Mauna Loa record and the high emissions uncertainty. More importantly it neglects the stock flow distinction between emissions and atmospheric concentration and thus has no causal meaning. Consider the reverse – emissions fall tomorrow to 1 ton/year; this statement would lead us to expect a rise in the atmospheric fraction, when in fact it would become a large negative number. Uptake is a function of atmospheric and sink concentrations, not emissions.
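    The stock-flow point can be made concrete with toy numbers (all assumed purely for illustration): sink uptake depends on the accumulated excess concentration, not on this year’s emissions, so a collapse in emissions would send the "airborne fraction" strongly negative rather than upward.

```python
# Toy, assumed numbers: ~100 ppm of accumulated excess CO2 draining into
# sinks with a ~50-year effective time constant.
excess = 100.0                 # ppm above the preindustrial baseline
tau = 50.0                     # yr, assumed effective sink time constant
uptake = excess / tau          # 2 ppm/yr removed, regardless of current emissions
emissions = 0.001              # ppm/yr-equivalent after a hypothetical collapse

dC = emissions - uptake        # atmospheric concentration now falls
airborne_fraction = dC / emissions
print(airborne_fraction)       # a large negative number, not a rise
```

    The ratio blows up because the denominator (emissions) has gone to nearly zero while the sink flow, set by the accumulated stock, continues unchanged.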

    Fourth, the comments above are rather loose with terminology about sink flows and the airborne fraction. Sink saturation doesn’t imply decreasing sink flows. Increasing absolute sink flows reflecting a greater gap between atmospheric and oceanic CO2 don’t refute the notion of saturation. A constant or diminishing airborne fraction also doesn’t necessarily reflect rising sink potential; more likely it reflects the deceleration in emissions since the 70s.

    My comment about isotope ratios pertained to use of bomb C14 and other recent stuff (CFCs) for measuring ocean vertical structure and C inventories, not the very long term records. It’s absurd to suggest that “the ocean sink is increasing as noted by Hans Erren, regardless of what the ocean “budget” is assumed to be” The “budget” refers to measurements with fairly tight confidence bounds compared to the biosphere, not some fanciful target (see The Oceanic Sink for Anthropogenic CO2). And nothing about Hans’ data says anything about where the carbon is going.

    “the whole issue of building models that “fit all the data” is rather fraught” Instead let’s throw out all the inconvenient data and just work with things that have nice linear trends? Setting temperature arguments aside since I was talking about carbon, we should be building models that fit all available data in a broad sense, including C12/C13 ratios, oxygen, spatial patterns, known features of ocean chemistry (Revelle factor), etc. Bern and other models do this; skeptic models don’t even address the issues.

    “Have you tried comparing model projections from ca 1999/2000 of surface temperature for the early part of the 21st century with the published record of the last 7 years? The only ones I can find published on the web provide another ‘snicker test.’” I’d love to see a link for this.

  6. 56
    Murray duffin says:

    “A constant or diminishing airborne fraction also doesn’t necessarily reflect rising sink potential; more likely it reflects the deceleration in emissions since the 70s.”
    Decade                                        1        2        3        4        5
    Years                                     ’54-’63  ’64-’73  ’74-’83  ’84-’93  ’94-’03
    Ave. annual fuel emissions (Gt/yr)           2.4      3.4      5.0      6.0      6.7
    Percent change decade to decade               -        42       47       20       12
    Ave. annual atmos. conc’n delta (ppm/yr)     0.8      1.1      1.4      1.5      1.8
    Atmos. conc’n delta per Gt emission (ppb)    333      324      280      250      270
    Implied atmospheric retention (Gt)           1.7      2.3      2.9      3.1      3.7
    Airborne fraction (%)                         71       68       58       52       55
    Ocean uptake from fuel (Gt)                  0.7      1.1      2.1      2.9      3.0
    Deforestation factor (%) guestimate*         1.03     1.06     1.09     1.12     1.15
    Total emissions (Gt)                         2.5      3.6      5.5      6.7      7.7
    Airborne fraction of total (%)                68       64       53       46       48
    Ocean uptake total (Gt)                      0.8      1.3      2.6      3.6      4.0
    Sorry about the way the table copies. What do you mean by “a deceleration in emissions”? Is this a deceleration in the % rate of increase? What would that prove?
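    The arithmetic in the table above can be approximately reproduced with the standard conversion of roughly 2.13 GtC per ppm of atmospheric CO2 (the exact factor behind the table appears to differ slightly, so the rounded values come out a point or two apart):

```python
PPM_TO_GTC = 2.13   # assumed conversion: ~2.13 GtC raises atmospheric CO2 by 1 ppm

# Decadal means from the table above ('54-'63 through '94-'03):
emissions = [2.4, 3.4, 5.0, 6.0, 6.7]   # Gt C/yr, fossil fuel
dppm = [0.8, 1.1, 1.4, 1.5, 1.8]        # ppm/yr, mean atmospheric increase

# Airborne fraction (%) = carbon retained in the atmosphere / carbon emitted
fractions = [100.0 * d * PPM_TO_GTC / e for e, d in zip(emissions, dppm)]
print([round(f) for f in fractions])    # declines, then ticks up in the last decade
```

    The qualitative pattern matches the table: the fraction falls through the middle decades and rises slightly in the last one.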
    http://www.aoml.noaa.gov/ocd/gcc/co2research
    The key quote from this url is “The global oceanic CO2 uptake using different wind speed/gas transfer velocity parameterizations differs by a factor of three (Table 1)”, i.e., a 3:1 range depending on model assumptions.
    http://www.hamburger-bildungsserver.de/welcome.phtml?unten=/klima/klimawandel/treibhausgase/carbondioxid/surfaceocean.html
    Here we find a nice description of atmosphere/ocean interchange mechanisms, with the diagram and values like the IPCC equivalent, and with the major fault that it gives the impression that the exchange magnitudes are well known. While this was published sometime after 2001, the net ocean uptake from the atmosphere shown would be roughly correct for about the mid-’70s, and has since well more than doubled (see table above), despite surface warming. This would suggest that a near-surface increase in ocean carbon concentration considerably upsets the exchange between the surface and deeper ocean waters. It seems possible that carbon fertilization plus warming considerably accelerate growth of ocean biota. The IPCC downplay this possibility, but do not outright deny it, which suggests a fairly high degree of probability to me.

    “Second, the observations of the airborne fraction are conditioned on “assuming terrestrial uptake closely matches land use emissions,” which is not necessarily a good assumption given the large error bounds (>50%) on land use emissions data”
    It’s the IPCC assumption.

    Third, “knowing that as anthropogenic emissions rise the atmospheric fraction declines” is a statistically weak observation given the short Mauna Loa record and the high emissions uncertainty.
    See the table above. First I’ve heard about “emissions uncertainty”. I thought the AGW folks were quite certain.

    ” The “budget” refers to measurements with fairly tight confidence bounds compared to the biosphere, not some fanciful target (see The Oceanic Sink for Anthropogenic CO2)”
    I don’t have a subscription for your url. However:
    From the IPCC TAR we read: “In principle, there is sufficient uptake capacity (see Box 3.3) in the ocean to incorporate 70 to 80% of anthropogenic CO2 emissions to the atmosphere, even when total emissions of up to 4,500 PgC (4500 Gt) are considered” (Archer et al., 1997). That’s a 3400 Gt sink capacity, and we are talking about sinking less than another 1000 Gt at a rate of about 4 Gt/yr peak, for a very few years at peak rate. However, the 3400 Gt additional capacity, which would add less than 10% to the ocean inventory, seems like a very low value for 3 reasons. First, the equilibrium concentration is more than 3x the present concentration. Second, atmospheric concentrations were at least 5 times higher 100 million years ago, so seawater concentrations can be that much higher also. Third, experiments to test CO2 clathrate hydrate formation show formation at dissolved CO2 concentrations two orders of magnitude higher than the present concentration.

    “Sink saturation doesn’t imply decreasing sink flows.” I don’t understand this assertion. What does saturation imply?

    “we should be building models that fit all available data in a broad sense, including C12/C13 ratios, oxygen, spatial patterns, known features of ocean chemistry (Revelle factor), etc”
    Just to provide one example of the uncertainties, consider the IPCC contention of slow mixing due to the thermocline and see: http://www.aip.org/pt/vol-55/iss-8/captions/p30cap4.html See fig. 4
    The first thing to note about Fig. 4 is that there is no evidence at all of a thermocline barrier at near 200 m depth. At 30 degrees S in the Pacific the 50 umol/kg concentration extends to beyond 400 m and at about 20 degrees N in the N Pacific the 40 umol/kg concentration gets to 400 m. The mid-latitude Pacific is relatively warm, has relatively low saline concentration and can therefore be expected to have relatively low total CO2 concentration. Forty umol/kg would be about 2% anthropogenic CO2. The surface share of anthropogenic CO2 is about 2.5% in this region. Even though this is the zone that should have the strongest permanent thermocline, the anthropogenic concentration is well mixed way below the expected thermocline depth. In the colder and saltier N Atlantic, in the region which should at least have seasonal thermoclines (30 to 60 degrees N), we find the anthropogenic share at 1.7% (65% of surface share) at a depth of 1200 m.
    We didn’t get to an ocean uptake equal to 10% of the last decade until about 1900, and yet we find the anthropogenic share equal to 10% of the surface share at a depth of >5000 m in the N. Atlantic.

    I’d love to see a link for this.
    Sorry, I can’t find the specific url again. It was a nice curve of a model temperature projection with the x axis time scale stretched all the way across the screen so one could pick out temp. changes vs time very well. However, eyeballing 3 different model projections we find very near 0.3 degrees C increase from 1997 to 2005, when the actual is about 0.03 degrees C. Snicker. Murray

  7. 57
    Tom Fiddaman says:

    What do you mean by “a deceleration in emissions”? Is this a deceleration in the % rate of increase? What would that prove?
    Yes, a decline in the rate of increase, as in your table. If you drive a first-order system dCO2 = a*E – (CO2-CO2(0))/tau with growing emissions, then slow or reverse the growth rate, the observed airborne fraction declines because you’ve changed the relationship between E and (CO2-CO2(0)), which would otherwise be constant along a steady-state growth path. I think that’s a general behavior that remains true even if you properly model the system as higher-order and nonlinear.
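    That behavior is easy to check in a minimal sketch of the first-order system (the values of a and tau here are illustrative choices, not parameters of the Bern or any other published model):

```python
def run(years, E0, growth, C=280.0, C0=280.0, tau=120.0, a=1.0/2.13):
    """Integrate dC/dt = a*E - (C - C0)/tau one year at a time.
    C in ppm, E in GtC/yr; a converts GtC to ppm; tau is an assumed sink time constant."""
    E, af = E0, 1.0
    for _ in range(years):
        dC = a * E - (C - C0) / tau   # ppm retained in the atmosphere this year
        af = dC / (a * E)             # apparent airborne fraction
        C += dC
        E *= 1.0 + growth
    return C, E, af

# Sustained fast emissions growth: the airborne fraction settles near a constant.
C1, E1, af_fast = run(60, E0=1.0, growth=0.04)
# Growth then slows: the same model's airborne fraction declines.
C2, E2, af_slow = run(30, E0=E1, growth=0.01, C=C1)
print(round(af_fast, 2), round(af_slow, 2))
```

    Along a steady exponential growth path the fraction is constant; once the growth rate drops, the accumulated excess keeps feeding the sink while emissions flatten, and the fraction falls, with no change in sink physics at all.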

    http://www.aoml.noaa.gov/ocd/gcc/co2research
    The key quote from this url is “The global oceanic CO2 uptake using different wind speed/gas transfer velocity parameterizations differs by a factor of three …

    Based on a quick look, that level of uncertainty strikes me as a consequence of the openness of the model. As soon as you require closure of the global budget and constrain it with other measurements, the bounds have to close – either you get a good estimate of the flux, or you discover something else that has to covary negatively to offset the large uncertainty. The Sabine et al. 1800-1994 ocean budget measurement of 118 ± 19 GtC is not 3x uncertain.

    The IPCC downplay this possibility, but do not outright deny it, which suggests a fairly high degree of probability to me.
    Would outright denial suggest certainty? :)

    “Second, the observations of the airborne fraction are conditioned on “assuming terrestrial uptake closely matches land use emissions,” which is not necessarily a good assumption given the large error bounds (>50%) on land use emissions data”
    It’s the IPCC assumption.

    Roughly constant airborne fraction may be an IPCC observation. The IPCC may also assume the terrestrial/land use match in constructing emissions trajectories, but the models are used to test that idea, not to blindly instantiate it. Neither the IPCC models Bern/HILDA/ISAM nor their predecessors Oeschger/Goudriaan & Kettner/Siegenthaler etc. assume anything about the airborne fraction because it’s not a physical parameter of the models; it emerges from the interaction of other features. The skeptics on the Daly site by contrast treat it with the reverence usually reserved for Planck’s constant or pi.

    ” The “budget” refers to measurements with fairly tight confidence bounds compared to the biosphere, not some fanciful target … From the IPCC TAR we read: “In principle, there is sufficient uptake capacity (see Box 3.3) in the ocean to incorporate 70 to 80% of anthropogenic CO2 emissions to the atmosphere…
    True, but irrelevant because the time constant is so long. “Over the long term (millennial time scales) the ocean has the potential to absorb as much as 85% of the anthropogenic CO2 that is released into the atmosphere” (Feely et al.)

    Second, atmospheric concentrations were at least 5 times higher 100 million years ago, so seawater concentrations can be that much higher also.
    I don’t think we want to go there!

    “Sink saturation doesn’t imply decreasing sink flows ” I don’t understand this assertion. what does saturation imply?
    Decreasing marginal uptake, i.e. as concentration grows, the effective time constant for storage lengthens, e.g. due to buffer chemistry as in Feely cited above or to dissolution of biota at depth from acidification.

    Just to provide one example of the uncertainties, consider the IPCC contention of slow mixing due to the thermocline and see: http://www.aip.org/pt/vol-55/iss-8/captions/p30cap4.html See fig. 4
    The first thing to note about Fig. 4 is that there is no evidence at all of a thermocline barrier at near 200 m depth. … We didn’t get to an ocean uptake equal to 10% of the last decade until about 1900, and yet we find the anthropogenic share equal to 10% of the surface share at a depth of >5000 m in the N. Atlantic.

    Again, though, a critique of the IPCC wording is not as good as a critique of the models. The wording often abstracts to broad features that aren’t necessarily reflected in model structure, so identifying sub-thermocline C doesn’t overturn the models. For a good discussion see Cao & Jain.

    It was a nice curve of a model temperature projection with the x axis time scale stretched all the way across the screen so one could pick out temp. changes vs time very well. However eyeballing 3 different model projections we find very near 0.3 degrees C increase from 1997 to 2005, when the actual is about 0.03 degrees C. Snicker.
    A short time horizon like that seems like weather, not climate, to me. Over that time scale the correct question would be whether temperature remained within the envelope of natural variability around the mean of an ensemble of GCM runs. Seems premature to snicker.

  8. 58
    Joseph O'Sullivan says:

    “It seems possible that carbon fertilization plus warming considerably accelerate growth of ocean biota.” My short time in the scientific community was in marine biology so I will attempt to respond to this.

    Anthropogenic climate change is very unlikely to have a positive effect on ocean ecosystems.

    First, Carbon fertilization
    Because the majority of photosynthesis occurring in the oceans is done by phytoplankton, they would be the organisms fertilized. Additional CO2 would not have a fertilizing effect because CO2 is not a limiting factor in phytoplankton growth and reproduction. In most areas of the oceans there is an excess of CO2 to support phytoplankton but a lack of other nutrients. Iron is often the key one, thus leading to iron fertilization proposals.

    Even if CO2 did have a fertilizing effect, increasing primary productivity (plants, in this case mostly algae) could have a negative effect on ecosystems. Anthropogenic phosphate and nitrogen have allowed algae to overgrow corals and have harmed coral reefs. In enclosed estuaries anthropogenic urban and agricultural runoff has caused explosive algae growth followed by a population crash (eutrophication), destroying local ecosystems. In addition, carbon fertilization will not affect all species equally, and this could lead to cascading effects through an ecosystem that are difficult to predict and could be harmful.

    The increasing amounts of CO2 in the oceans are changing the chemistry of the oceans and acting as a pollutant. There has been a drop in the pH of the oceans caused by anthropogenic CO2. The effects of this are only beginning to be understood, but the preliminary data are not good. See the Acid Ocean post here on RC.

    Second, Warming Accelerating Biota Growth
    There have been short-term local cyclic temperature changes in the oceans (NAO and ENSO), and ecosystems have not reacted to warming by individual species growing faster. Instead there have been changes in the relative abundances of individual species, due primarily to geographical shifts. Ecosystems and species vary greatly in their ability to move and have at times been unable to; this has resulted in widespread destruction of Indo-Pacific coral reefs during recent ENSO events.

    The predicted AGW changes differ from these natural changes in their predicted greater geographical range, duration, and amount of warming. The scope of these changes could, and in the case of some ecosystems like coral reefs probably will, overwhelm species' ability to respond by shifting their geographical ranges. There is a very real possibility of species being driven to local or global extinction by AGW. There is a large body of fossil data on species that have become extinct due primarily to climate changes.

    Some of the warmest areas of the oceans are among the least biologically productive, the ocean's equivalent of deserts. As with fertilization, warming will not affect all species equally, and this could lead to cascading effects through an ecosystem that are difficult to predict and could be harmful.

    Good sources on this are the scientific literature summaries from Pew
    http://www.pewclimate.org/global-warming-in-depth/environmental_impacts/reports/

  59. Steve Latham says:

    Regarding 56 and 57. Wow, that looks like a good discussion. I don’t feel that I have time to understand it. Can (at least) one of you summarize your positions? I think 56 is saying that sinks are not approaching saturation, so despite the continually increasing CO2 concentrations, AGW won’t be so bad for very long. I think 57 says that the time before the extra anthropogenic CO2 gets taken up by the ocean will be very long, so AGW won’t be ameliorated very much on a meaningful time scale even if sinks are essentially infinite.

    Just to support #58 (IMO a good summary, or at least I understand it) — carbon fertilization certainly doesn’t seem to be moving up any food web and helping fisheries thus far!

    And finally, apparently there’s a new Nature paper coming out or already out suggesting that a lot of CO2 has been released from soils (doesn’t say in this article how much has been taken up by terrestrial plants…)
    http://www.enn.com/today.html?id=8740
    I’ll be interested to read about how this study meshes with this earlier post by Corinne Le Quere on RC http://www.realclimate.org/index.php?p=160

  60. Murray duffin says:

    Re 58: How does the following factor in

    http://earthobservatory.nasa.gov/Newsroom/MediaAlerts/2005/2005081820000.html And now we find the most abundant microbe in the ocean “plays a huge role in the cycling of carbon”, and is “a major consumer of the organic carbon in the ocean”. It is almost certain that the full role of SAR11 is not well understood, and is not factored into GCMs, nor into the IPCC estimates of ocean uptake. It might be a major mechanism for the accelerated C uptake by the ocean already noted. Murray

  61. Joseph O'Sullivan says:

    Re #60
    Why do you think SAR 11 might be “a major mechanism for the accelerated C uptake by the ocean”?

    The role of bacteria, especially very small ones, in ocean ecosystems is something that only really started to be examined in the late eighties, so most of this science is fairly new.

    SAR 11 (Pelagibacter ubique, strain HTCC1062) is a heterotroph: it consumes organic carbon and cannot use the inorganic carbon from the CO2 that dissolves in ocean waters. These bacteria are still dependent on photosynthetic organisms to convert the inorganic carbon from CO2 into organic carbon.

    Since SAR 11 cannot take out the CO2 itself can you posit a mechanism or pathway from photosynthetic organisms to SAR 11 that would account for the increased uptake?

  62. J. Sperry says:

    Re #59:

    You mention a new Nature paper on CO2 release from soils. A summary can be found at
    http://www.silsoe.cranfield.ac.uk/nsri/research/carbonloss.htm

    The summary includes such unsupported statements as “The rate of loss increased linearly with soil carbon content, and this relationship held across all forms of land use as shown in the next figure, suggesting a link to climate change.” No such suggestion is actually shown. Maybe the full study is more revealing.

    Also, the author states that “with the present data, we cannot say where the carbon has gone, whether to air or drainage.” So in other words, this paper may or may not be relevant to climate change; we simply do not have enough information.

    Another consequence not mentioned in the summary: if this extra natural CO2 over the last 25 years went to the atmosphere (as the author seems to suppose), then anthropogenic CO2 had that much less of an impact. The rate of loss is estimated to be about 8.6% of industrial emissions.
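    As a back-of-envelope check of where a figure like "about 8.6% of industrial emissions" comes from, here is the arithmetic with illustrative numbers (the actual values are in the Bellamy et al. paper, not reproduced here): assume soils lose roughly 13 Tg C per year against roughly 150 Tg C per year of fossil-fuel emissions for the same region.

    ```python
    # Hypothetical, order-of-magnitude figures for illustration only.
    soil_loss_tg_c = 13.0    # assumed annual soil carbon loss, Tg C/yr
    emissions_tg_c = 150.0   # assumed annual industrial emissions, Tg C/yr

    fraction = soil_loss_tg_c / emissions_tg_c
    print(f"{fraction:.1%}")  # ~8.7%, i.e. the same ballpark as the quoted 8.6%
    ```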

  63. Steve Latham says:

    Thanks J. Sperry for the link and discussion. I hope to get to a library and see the full study sometime this month.

  64. Stephen Berg says:

    “Retreating glaciers, melting permafrost threaten Arctic lifestyle”:

    http://us.cnn.com/2005/TECH/science/09/12/greenland.arctic.thaw.ap/index.html

