Part II: What Ångström didn’t know

The absorption factor, so defined, is shown in the following figure, based on the thousands of measurements in the HITRAN spectroscopic archive. The "fuzz" on this graph arises because the absorption actually takes the form of thousands of closely spaced, partially overlapping spikes. If one were to zoom in on a very small portion of the wavelength axis, the fuzz would resolve into discrete spikes, like the pickets on a fence. At the coarse resolution of the graph, one sees only a dark band marking out the maximum and minimum values swept out by the spikes. These absorption results were computed for typical laboratory conditions, at sea level pressure and a temperature of 20 Celsius. At lower pressures, the peaks of the spikes get higher and the valleys between them get deeper, leading to a broader "fuzzy band" on absorption curves like the one shown below.
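The picket-fence picture can be sketched numerically. Below is a deliberately simple toy model, not HITRAN data: evenly spaced Lorentzian lines with made-up strengths and widths, showing how the narrower lines at lower pressure push the peaks up and the valleys down, broadening the min-to-max "fuzzy band" seen at coarse resolution:

```python
import numpy as np

def lorentzian_band(nu, centers, strength, width):
    """Absorption coefficient from a sum of Lorentzian lines
    (a toy stand-in for the real HITRAN line list)."""
    k = np.zeros_like(nu)
    for c in centers:
        k += strength * (width / np.pi) / ((nu - c) ** 2 + width ** 2)
    return k

nu = np.linspace(0.0, 10.0, 20001)       # arbitrary spectral axis
centers = np.arange(0.5, 10.0, 0.5)      # evenly spaced "picket fence" of lines

k_sea = lorentzian_band(nu, centers, strength=1.0, width=0.05)  # ~sea level width
k_low = lorentzian_band(nu, centers, strength=1.0, width=0.01)  # lower pressure

# Narrower lines at lower pressure: higher peaks, deeper valleys, so the
# min-to-max "fuzzy band" on a coarse plot gets broader.
print("sea level:     ", k_sea.min(), "to", k_sea.max())
print("lower pressure:", k_low.min(), "to", k_low.max())
```

Plotting either curve at coarse resolution would show only the dark band between those minimum and maximum values.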

We see that for the pre-industrial CO2 concentration, only the wavelength range between about 13.5 and 17 microns (millionths of a meter) can be considered saturated. Within this range, it is indeed true that adding more CO2 would not significantly increase the amount of absorption. All the red M&M’s are already eaten. But waiting in the wings, outside this wavelength region, there are more goodies to be had. In fact, since the graph has a logarithmic axis, the atmosphere still wouldn’t be saturated even if we increased the CO2 to ten thousand times the present level. What happens to the absorption if we quadruple the amount of CO2? That story is told in the next graph:
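A back-of-envelope way to see why saturation never quite completes: the far wings of a Lorentzian line fall off like one over the squared distance from line center, so the spectral interval over which the line is optically thick grows only like the square root of the amount of gas. The sketch below uses a single made-up line with an assumed strength and width, not real CO2 numbers:

```python
import numpy as np

def optically_thick_width(concentration, strength=1.0, gamma=0.05):
    """Spectral width over which optical depth exceeds 1 for a single
    Lorentzian line (made-up strength and width, for illustration only)."""
    nu = np.linspace(-5.0, 5.0, 200001)
    tau = concentration * strength * (gamma / np.pi) / (nu ** 2 + gamma ** 2)
    thick = nu[tau > 1.0]
    return thick.max() - thick.min()

w1 = optically_thick_width(1.0)
w4 = optically_thick_width(4.0)
print(w4 / w1)   # roughly 2: quadrupling the gas about doubles the thick region
```

Multiplying the concentration by ten thousand would widen the optically thick interval by only about a factor of a hundred, which is why even that would not blanket the whole spectrum.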

The horizontal blue lines give the threshold CO2 needed to make the atmosphere optically thick at 1x the preindustrial CO2 level and at 4x that level. Quadrupling the CO2 makes the portions of the spectrum in the yellow bands optically thick, essentially adding new absorption there and reducing the transmission of infrared through the layer. One can relate this increase in the width of the optically thick region to the "thinning and cooling" argument determining infrared loss to space as follows. Roughly speaking, in the part of the spectrum where the atmosphere is optically thick, the radiation to space occurs at the temperature of the high, cold parts of the atmosphere, and that flux is practically negligible compared to what would be radiated at temperatures comparable to the surface temperature. In the part of the spectrum where the atmosphere is optically thin, the planet radiates at near the surface temperature. Increasing CO2 then increases the width of the spectral region where the atmosphere is optically thick, which replaces more of the high-intensity surface radiation with low-intensity upper-atmosphere radiation, and thus reduces the rate of radiation loss to space.
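The bookkeeping in that argument can be shown with a deliberately crude sketch: split the spectrum into an optically thick fraction radiating at a cold upper-atmosphere temperature and an optically thin remainder radiating at the surface temperature. The temperatures and thick fractions below are assumptions of mine for illustration, not numbers from the article:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def toy_olr(thick_fraction, t_surface=288.0, t_upper=225.0):
    """Crude outgoing radiation: the optically thick fraction of the spectrum
    radiates at the cold upper-atmosphere temperature, the rest at the
    surface temperature. Temperatures here are assumed, not from the article."""
    return (thick_fraction * SIGMA * t_upper ** 4
            + (1.0 - thick_fraction) * SIGMA * t_surface ** 4)

# Hypothetical optically thick fractions at 1x and 4x CO2:
olr_1x = toy_olr(0.20)
olr_4x = toy_olr(0.30)
print(olr_1x - olr_4x)   # positive: a wider thick band means less radiation to space
```

Any widening of the thick fraction trades bright surface emission for dim upper-atmosphere emission, so the difference is always positive.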

Now let’s use the absorption properties described above to determine what we’d see in a typical laboratory experiment. Imagine that our experimenter fills a tube with pure CO2 at a pressure of one atmosphere and a temperature of 20C. She then shines a beam of infrared light in one end of the tube. To keep things simple, let’s assume that the beam of light has uniform intensity at all wavelengths shown in the absorption graph. She then measures the amount of light coming out the other end of the tube, and divides it by the amount of light being shone in. The ratio is the transmission. How does the transmission change as we make the tube longer?
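The tube measurement is just the Beer-Lambert law applied wavelength by wavelength and then averaged over the beam. A minimal sketch, with an assumed spread of absorption coefficients standing in for the real CO2 spectrum:

```python
import numpy as np

# Assumed spread of absorption coefficients (per meter) across the beam's
# wavelengths, from nearly transparent wings to a strong band core.
k = np.logspace(-3, 2, 500)

def transmission(length_m):
    """Beer-Lambert transmission exp(-k*L) at each wavelength, averaged
    over a beam of uniform intensity at all wavelengths."""
    return np.exp(-k * length_m).mean()

for length in [0.1, 1.0, 2.5, 10.0]:
    print(length, "m:", transmission(length))
```

The averaged transmission falls as the tube is lengthened, but more slowly than a single exponential would: the strongly absorbed wavelengths black out first, while the weakly absorbed ones keep leaking through.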

To put the results in perspective, it is useful to keep in mind that at a CO2 concentration of 300 ppm, the amount of CO2 in a column of the Earth’s atmosphere with cross-sectional area equal to that of the tube matches the amount of CO2 in a tube of pure CO2 2.5 meters long, if the tube is at sea level pressure and a temperature of 20C. Thus a two and a half meter tube of pure CO2 under lab conditions is, loosely speaking, like "one atmosphere" of greenhouse effect. The following graph shows how the proportion of light transmitted through the tube goes down as the tube is made longer.
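The 2.5 meter figure can be checked with the ideal gas law and hydrostatic balance. All constants below are standard values I am supplying, not numbers from the article:

```python
# Checking the 2.5 meter equivalence for a 300 ppm atmosphere.
P = 101325.0        # sea level pressure, Pa
G = 9.81            # gravitational acceleration, m/s^2
T = 293.15          # 20 C in kelvin
K_B = 1.380649e-23  # Boltzmann constant, J/K
N_A = 6.022e23      # Avogadro's number, 1/mol
M_AIR = 0.02896     # mean molar mass of air, kg/mol

air_column = (P / G) * (N_A / M_AIR)   # air molecules above 1 m^2 of surface
co2_column = 300e-6 * air_column       # CO2 molecules in that column at 300 ppm

n_pure = P / (K_B * T)                 # number density of pure CO2 at 1 atm, 20 C

tube_length = co2_column / n_pure      # tube of pure CO2 holding the same amount
print(tube_length)                     # about 2.6 m, close to the quoted 2.5 m
```

The small spread around 2.5 m depends only on the exact values chosen for temperature and the mean molar mass of air.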
