On a related note, is a “runaway greenhouse” effect impossible, given the current data and understanding about global warming? In other words, is this something that you can “rule out” as being extremely unlikely, or impossible?
[Response: This is one of the few perils I think we can rule out with essentially 100% confidence. If you were to build glass walls and keep all the heat in the tropics, a saturated tropics would actually be over the limit for the runaway greenhouse. However, heat leakage would be inevitable, and if you allow that, you find that you don’t get a runaway even if you force the tropics to be saturated with water vapor. I did this calculation in the AGU Chapman volume water vapor article, which you can find on my publications site. I didn’t actually show the 8xCO2 calculation there, but that amount of CO2 isn’t enough to throw the system into a runaway. While you don’t get a runaway, the tropics does get very warm in a saturated scenario — around 50C. See also the saturated GCM calculation in my more recent water vapor article, from the Caltech general circulation book. I had to stop that calculation before it reached equilibrium, but not because of a runaway: the warm ocean next to Antarctic ice that hadn’t yet melted caused gale-force winds that crashed the model through numerical instabilities. –raypierre]
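The "rule it out" argument above can be reduced to a simple flux comparison: a runaway greenhouse requires the absorbed sunlight to exceed the limiting outgoing longwave radiation that a water-vapor-saturated atmosphere can emit (the Komabayashi-Ingersoll limit). A minimal sketch, using round illustrative numbers (the ~310 W/m2 limit is an approximate textbook value, not a model output):

```python
# Compare Earth's globally averaged absorbed solar flux with the approximate
# Komabayashi-Ingersoll limit: the maximum outgoing longwave radiation a
# water-vapor-saturated atmosphere can emit. A runaway requires absorbed
# solar to exceed this limit, so the planet cannot shed the heat it absorbs.
S0 = 1361.0      # solar constant at Earth's orbit, W/m^2
ALBEDO = 0.3     # Earth's planetary albedo (rough value)
KI_LIMIT = 310.0 # approximate radiation limit, W/m^2 (illustrative)

absorbed = S0 * (1.0 - ALBEDO) / 4.0  # spread over the sphere: ~238 W/m^2

runaway_possible = absorbed > KI_LIMIT  # False: Earth sits well below the limit
```

Earth's ~238 W/m2 of absorbed sunlight leaves a comfortable margin below the limit, which is why even an 8xCO2 warming cannot tip the system into a Venus-style runaway.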
So what is actually being stated here? Unless climate scientists can nail down with reasonable accuracy the implications of climate change, for good or bad, who is really listening? I have recently found out that climate scientists state that 60% of existing carbon emissions must be cut, but Kyoto only calls for about 5%, I believe. Montreal last week probably added another 5% after 2012; what use is that to humankind in reality?
It’s about time that climate scientists started going on the offensive with regard to the impact of climate change; otherwise we are going to keep losing ground to the vested interests.
I mean, look at the recent results and findings for the Gulf Stream around Europe. This is bad news, but no climate scientist stuck their neck out and stated that this was even potentially catastrophic, just that more research needs to be done.
It is about time someone got serious about Climate Change and not just reported findings that do not make good reading.
You continue a pattern here at RC of confusing an opinion column that appears in the media with “science journalism.” This is not only a mischaracterization but a great insult to people who actually make their living reporting on science and issues involving science — a group which would not include Steve Milloy. RC has every right to call out cherry picking, but you will also better serve your readers by knowing what it is you are criticizing. Milloy’s work is not “propaganda masquerading as science journalism”; it is just “propaganda”, which is defined by askoxford.com as follows:
“information, especially of a biased or misleading nature, used to promote a political cause or point of view. – ORIGIN originally denoting a committee of Roman Catholic cardinals responsible for foreign missions: from Latin congregatio de propaganda fide – congregation for propagation of the faith.”
[Response: Oh, come on, Roger. You’re the one who knows how to get a read on public perceptions. Ask 1000 Fox viewers if they can tell the difference between a piece like Milloy’s and what you consider journalism, and knock me over with a feather if as many as a half of them can. For that matter, even if Milloy’s piece should be considered “opinion,” like an op-ed, that doesn’t absolve him from using correct arguments. Propaganda, on the other hand, is a wilful distortion of truth made in order to advance an agenda. Milloy tries to set himself up as a source of information about what is good science, and in trying to put global warming in the same class as copper bracelets, crystals and pyramids he is doing a great disservice to public discourse. As for rights, of course Milloy has every right, in the constitutional sense, to publish what he does. That doesn’t mean that what he publishes is any less morally reprehensible than various kinds of hate speech, which are also constitutionally protected.–raypierre]
Comment by Roger Pielke Jr. — 15 Dec 2005 @ 8:21 AM
Well, it’s good to know Roger doesn’t side with Milloy, but as a science-background journalist myself, albeit unemployed, the difficulty is finding a journalist who knows the nuances of the field. Moreover, you could get an editor who vetoes certain quotes. That, and in some circles Milloy represents the truth they want to hear. It’s very insidious, and RC should be commended, not scolded, for what they are doing to dispel the so-called controversy. Real controversies, please. The mainstream media often fail to know one from the other.
Raypierre: Excellent analogy. You raise a point about the tropical temperature difference between the present and the LGM that has long been a matter of interest and uncertainty on my part. I had a chance in the mid-1980’s to work with Manabe, Broecker and Denton on the CLIMAP temperatures and how to independently check them in the tropics. I proposed that we examine alpine glacier snowline data from today and the LGM, from Tierra del Fuego to the Brooks Range. In doing this I was able to complete a transect with more than 50 data points. In fact the changes with latitude were nearly identical along the entire cordillera. Precipitation could not be the culprit, since the change was the same on maritime and continental alpine glaciers. The change in snowline was almost always (90%+ of sites) between 600 and 900 m. The LIA snowline lowering was 100-150 m; thus the LGM represents a five- to six-fold greater temperature change, and it occurred even in the tropical mountains of the Andes. This work then appeared in the Scientific American conveyor-belt article by Broecker and Denton in the early 1990’s, and I published all the data in Paleo cubed. Since publication, the ocean data has come around in support of this larger tropical-latitude temperature change. My question is, how do you expect to be able to maintain a much higher temperature gradient between tropics and high latitudes during the LGM than we have today, since this would tend to increase heat flux? Further, the glacier data suggest we did not, at least for a portion of the LGM, since there is no way to note how long the glaciers were at the depressed snowline location.
[Response:You’re absolutely right to point out that the mountain snowline data was a critical part of unraveling the tropical LGM puzzle. I think Dave Rind, Dorothy Peteet and Peter Webster had a big role in bringing its importance to the fore also. Essentially, Lindzen assumed that both CLIMAP and snowlines were right about temperature, and that demanded a mechanism which increased lapse rate in cold climates (decreasing it in warm climates). That wasn’t a bad assumption at the time, and Lindzen’s work was good science. What happened is that CLIMAP was wrong, and extrapolating snowline data to the surface was right. A case of a beautiful theory (Lindzen’s) shot down by an ugly fact. The fact that Lindzen’s ideas have always been considered seriously by the climate research community, despite going against the grain, makes me proud of the functioning of my field. Now, as for the implications for meridional gradient during LGM times, this is a really interesting subject. The story from Manabe and Broccoli is that sensible (“dry”) heat transport goes up with the temperature gradient but that is compensated by a reduction in latent (“moist”) heat transport. That’s certainly part of the story, but it’s still an active research field and while models agree pretty well on net flux out of the tropics the lack of a good theory of baroclinic eddy transport inhibits asking “why” questions. Inhibition of eddies by strong horizontal shears plays a role, as well as shifts in storm tracks. It’s something I’m working on myself, as well as a handful of other people. The more the merrier.–raypierre]
Ray–Thanks for linking to my post on Milloy’s quote mining. I just wanted to say that I think Roger Pielke’s comment is unfair.
Take a look at the Milloy piece. Is there any evidence on the page that it is an opinion column? All you see is a header, “Junk Science,” which would suggest that the piece is going to blow the lid off of some pseudoscientific myth. The piece claims to contrast claims in the media and by “Kyoto believers” with scientific research, for which Milloy gives specific citations. I find it hard to see how one would get the impression that this article is presented as “just propaganda.” It is true that the Junk Science column is listed under the opinion section of the top banner (under the subheading of “views”). But that’s like burying a disclaimer in fine print. Most people who get their information online would probably just follow a link straight to the piece, and would assume that it is accurately describing how real science goes against Kyoto, etc. So I think it’s over the top to say that RealClimate is doing a disservice to readers in this post.
That’s right, Carl, but that’s what they all say. I’ve used RC in skeptic arguments and they were just called political by the naysayers. In other words, they buy the naysayers’ claims, like Dr. Gray’s and Milloy’s political “science”, but not the real thing. It’s sad, but these propaganda wars will continue. That’s just the sort of game-show world we live in now. Exposing them is the key, as discouraging as that is.
It may be of interest to some that in our “perspective piece” in Science last year (Osborn and Briffa, The real color of climate change?, Science 306, 621-622), where we discussed the von Storch et al. paper, we concluded with:
It is already clear, however, that greater past climate variations imply greater future climate change.
which is a similar conclusion to Ray’s (we had no space to explain why we reached this conclusion, but the reasons are those explained above by Ray). Esper et al. had more space to explain how they reached their opposite conclusion, but didn’t do so – which is a shame!
Could someone provide references discussing the radiative forcing equivalence of things like CO2? It is touched on in this article, but I’ve never seen a discussion of how effective this approximation is. Obviously CO2 is not just a lid on the atmosphere so there must be some effect of the vertical gradient (even if small), and since it depends on the absorption and reemission of radiation, there must be some effect by latitude. I would also expect some non-linearity in response as one moves toward saturating the infrared absorption window with higher CO2.
I’ll admit to not having looked at the question in any detail, but for at least some GCMs one gets the impression that they are just turning the same linearly varying knob at the top of the atmosphere everywhere. Maybe that’s an okay approximation to be making, but I’d like to see some discussion of it. Thanks for your help.
[Response: You’ll be pleased to know it isn’t a linearly varying knob at the top of the atmosphere! For an in depth discussion, I suggest Hansen et al, 2005 which goes into some detail about how well the forcings concept works for the very different physics of all the main effects. The basic conclusion is that the concept does work well enough for these kinds of comparisons to be done. – gavin]
[Response: To amplify on Gavin’s comment, the main inaccuracy in using a CO2-equivalent is not from the vertical distribution, but rather from the fact that bands of one gas overlap with bands of others, so there is a little non-additivity. You can play with this yourself using the online modtran or ncar radiation models (look here). The nonlinearity isn’t very strong. A much bigger issue in the often quoted “global warming potentials” is the question of how the effects of relative lifetimes are figured in (see Archer’s piece on this). This goes beyond the translation into radiative forcing equivalents, and attempts to factor in that a long-lived greenhouse gas will be around to do things to the climate longer than a short-lived one. It is also worth noting how and why such equivalents are used. It used to be (pre-1995) that most radiation models in GCMs only had CO2 and water vapor, so one had to translate, say, methane, into a CO2 radiative equivalent just to do the calculation. This is no longer true. Almost all radiation models now have the specific band structure of the individual greenhouse gases explicitly represented. However, for use with integrated assessment models, energy balance models, and — most importantly — for writing of treaties and legislation — it is still generally necessary to translate all GHG’s into some equivalent in terms of CO2. –raypierre]
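As a concrete illustration of the translation being discussed, here is the widely used simplified fit for CO2 radiative forcing from Myhre et al. (1998), ΔF = 5.35 ln(C/C0), run forward and backward. The 278 ppm pre-industrial baseline is an assumption for illustration, and the expression is a fit to detailed radiation codes, not a first-principles result:

```python
import math

C0 = 278.0  # assumed pre-industrial CO2 concentration, ppm

def forcing_co2(c_ppm):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(c_ppm / C0)

def co2_equivalent(delta_f):
    """Invert the fit: the CO2 concentration that would produce the same
    forcing, which is how a non-CO2 forcing gets quoted as CO2-equivalent."""
    return C0 * math.exp(delta_f / 5.35)

f_2x = forcing_co2(2 * C0)   # doubling CO2 gives ~3.7 W/m^2
c_eq = co2_equivalent(0.48)  # e.g. a ~0.48 W/m^2 methane forcing -> ~304 ppm
```

Note that this single-gas fit leaves out exactly the band overlap mentioned above; the full simplified expressions for methane and nitrous oxide carry an extra overlap term.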
[Response: A similar conclusion to the one cited by Gavin above was reached independently by a panel of scientists (of which I was a member) convened to report on these issues by the National Academy of Sciences last year, resulting in the NAS report “Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties (2005)”. The report also recognized the important role, for scientific understanding, in evaluating regional relationships involved in radiative forcing and response. – mike]
Comment by Dragons flight — 15 Dec 2005 @ 12:13 PM
Any possibility of providing a way to check what the many abbreviations stand for for us laymen?
Jim in Alaska
[Response: Our admittedly incomplete glossary should provide some assistance here. -mike]
“TUSK GROWTH INCREMENT AND STABLE ISOTOPE PROFILES OF LATE PLEISTOCENE AND HOLOCENE Mammuthus primigenius FROM SIBERIA AND WRANGEL ISLAND (L)
David L. FOX, Daniel C. FISHER, and Sergey VARTANYAN
To date, we have a full complement of analyses for about three years of growth for one tusk from the Taimir Peninsula of Siberia (the Jarkov mammoth; ca. 20,300 rybp) and growth increment and tusk apatite carbon and oxygen isotope ratios for two to three years of growth from two tusks from Wrangel Island (4,400 and 4,120 year bp). Our sampling focused on the last several years of life in these tusks, which are preserved in the dentin adjacent to the pulp cavity. The d18O values of structural carbonate in apatite from the Jarkov mammoth (15.3±1.3 permil VSMOW) are similar to published values for other Siberian mammoths and to mean values for high-latitude North American mammoths. The values for the two Wrangel specimens are higher (21.1±1.0 permil and 22.4±0.9 permil VSMOW) and more like values for mammoths from eastern Russia and Hot Springs, South Dakota. The higher d18O values in the Wrangel tusks relative to the Jarkov mammoth and others from Siberia suggest considerably warmer temperatures and/or major differences in moisture transport during the middle Holocene relative to the late Pleistocene.”
“The average annual temperature is ~ -11.3°C. Average July temperatures range from 2.4°C to 3.6°C on the south coast but notable differences in temperature occur with differences in terrain, and in the intermontane depressions, temperatures can reach 10°C. Fohn winds also occur. The frost-free period is about 2 to 3 weeks.”
You are right, Andy Revkin, Steve Milloy, what’s the difference? Similarly, in science there’s probably no difference between peer-reviewed and not peer-reviewed? After all, the public can’t tell the difference.
You claim a harm to public perceptions from opinions that you find “morally reprehensible” (hate speech and cherry-picking quotes, all the same, huh? Amazing.) — any evidence for this claim?
And among folks who study propaganda, in case you are interested in what people who actually study it think, it need not necessarily involve a willful distortion of the truth; see for a brief intro:
Comment by Roger Pielke Jr. — 15 Dec 2005 @ 4:34 PM
Whether Milloy is or isn’t a journalist is not a very interesting question to me, and I’m a little puzzled as to why Roger feels so strongly about it. The only issue we have any useful expertise to contribute here is whether his ‘scientific’ pronouncements have any validity or not (in this case, clearly not, as we all agree). The discussion might be better served if we refrain from over-extrapolating each others’ comments and avoid unnecessary distractions.
Note the ice in the picture. Everywhere. Obviously the Siberian climate was much warmer when the mammoth lived there. Quoting from the link above:
“[Dutch paleontologist Dick] Mol also notes that the mammoth lay atop clay soil filled with frozen prehistoric plants that “still had their original green color.” Mol says that these “smaller” clues, “this is very important, because it indicates a lake and pond 20,000 years ago, and might tell us about the climate and temperature at the time.””
Comment by nanny_govt_sucks — 15 Dec 2005 @ 5:31 PM
“You are right, Andy Revkin, Steve Milloy, what’s the difference? Similarly, in science there s probably no difference between peer-reviewed and not peer-reviewed? After all the public can’t tell the difference.”
Many can’t and reporting who is a political hack and who isn’t should be part of the story. It isn’t now, but I’m hopeful. To Milloy’s tribe Revkin is a liberal Times reporter and thus, anything he reports is false. This false impression must be countered somehow.
Just a couple of questions:
1. What could have been the mechanism that drove down the CO2 level at the LGM? We can see that anthropogenic causes are increasing CO2, but is there any research about what may cause the opposite? My guess would be a vast bloom of plants, possibly algae or photosynthetic plankton or something like this. Mind you, I am not advocating ocean seeding or anything, just curious about what could do this.
[Response: This is the most critical outstanding question in the theory of ice ages. Many theories have come and gone, with none actually standing up. Almost everybody agrees that it has to do with fluctuations in the carbon uptake by the oceans, with a number of theories relying on enhancement of the biological pump, much along the lines you suggest. Perhaps David Archer could be persuaded to do a post on the current state of the problem. –raypierre]
2. Is thermal runaway impossible with CO2 alone? There is evidence that thermal events have happened in the past, such as in the Eocene; however, then methane was in on the act.
[Response: What happened in the Eocene wouldn’t count as a runaway in the sense of the runaway greenhouse that brought Venus to its present toasty state. To get a runaway of that sort, you need a reservoir of greenhouse gas that goes into the atmosphere to ever greater extents as temperature increases. Oceans on Earth provide such a reservoir for water vapor, but there’s no corresponding reservoir for CO2. Clathrates potentially provide a reservoir for methane which could destabilize and dump into the atmosphere. That would be a kind of “mini-runaway,” I guess. Thinking more broadly, it is possible to get a pure CO2 runaway on a planet with a liquid CO2 ocean or with massive CO2 glaciers. Not an issue for Earth, though.–raypierre]
Thanks for that. Perhaps I should have been more specific about what I think thermal runaway is. I was previously under the impression that thermal runaway was like the Eocene thermal event, whereas it would seem that the term more accurately describes a Venus-like event – fair enough.
In the light of this, perhaps I should rephrase my question: do you think that a dangerous thermal event like the Eocene is probable with the degree of warming from anthropogenic causes?
[Response: When you say the “Eocene,” I presume you’re referring to the Paleocene-Eocene Thermal Maximum. For a while that event was attributed to methane release from clathrates, and that hypothesis still has its supporters. As to whether such a runaway could happen in today’s climate, I’ll refer you to Dave Archer’s recent post on the clathrate story. Regarding the Paleocene-Eocene event itself, the field seems to be coming around to the idea that it had something to do with a greatly accelerated oxidation of organic carbon stored on land, perhaps associated with the drying up of interior shallow seaways. Obviously, we don’t have interior shallow seaways to dry up today, but one could envision a feedback process where warming accelerates oxidation of soil carbon, which leads to more warming, and so forth. Something like this could rear up and exacerbate global warming but I’m not sure I’d classify it as a “runaway.” The question of whether accelerated carbon sinks on land can turn to accelerated carbon sources is something a lot of terrestrial carbon cycle modellers are interested in, but I couldn’t give you an accurate read on the state of the art there, except that some models do show the land sink turning into a land source given sufficient warming. –raypierre]
James C. Zachos, in his article “Rapid Acidification of the Ocean During the Paleocene-Eocene Thermal Maximum”, concludes with a question: “What, if any, implications might this have for the future? If combustion of the entire fossil fuel reservoir (~4500 GtC) is assumed, the impacts on deep-sea pH and biota will likely be similar to those in the PETM. However, because the anthropogenic carbon input will occur within just 300 years, which is less than the mixing time of the ocean(38), the impacts on surface ocean pH and biota will probably be more severe.” (James C. Zachos et al., 10 June 2005, Science)
Timing of input for operational events may be different than timing of input used in model calibration. Additionally, forecasting physical events often involves choosing whether models are off track in timing or volume, then making model adjustments. If the forecasters choose the wrong kind of adjustment, a poor prediction can result. Unfortunately, no one knows for sure if models are off in timing or volume (or both) until after the peak has occurred, which is too late (can’t use hindsight).
Regarding “runaway greenhouse”, we’ve found parameter values in our AGCM+slab ocean model that seem to generate a runaway warming under 2xCO2. That is, we got a near-linear increase in temperature to +16C after 60 years which showed no signs of tailing off, and didn’t pursue it further.
Needless to say, I don’t actually think such a result is realistic. But I’m not sure that it could easily be ruled out as impossible based on elementary physical considerations. The model in question didn’t give a particularly good simulation of the present-day climate, but one could say the same about every model if one was picky enough…
[Response: That’s interesting, James. Do you know if that behavior is associated with cloud feedbacks? As I mentioned, I can get similar-looking behavior in a GCM if I over-ride the convection scheme and force water vapor to be saturated throughout the troposphere. However, I don’t know of any reasonable way the normal GCM physics could do that, given the role of subsidence regions in creating
Some preliminary conclusions:
A distinction must be made between transient climate sensitivity, equilibrium sensitivity and the equilibrium time.
Equilibrium time is typically several centuries, so the equilibrium sensitivity does not matter for scenarios that only span a century.
Tom Rees in UKweatherworld:
I believe the CKO uses the Community Climate System Model (CCSM) v2.0. This has quite a low equilibrium sensitivity (2.1) and hence quite a low transient sensitivity. v3.0 has a higher equilibrium sensitivity (2.7 vs 2.2) and transient sensitivity. However, the transient climate response (1% annual increase at 70 yrs) is pretty much the same (about 1.0)! You only see a differential after 150 yrs plus.
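The transient-vs-equilibrium distinction above can be seen in a one-box energy balance model, C dT/dt = F(t) - (F2x/S)T, driven by a 1%/yr CO2 ramp. The slab heat capacity (a 200 m ocean mixed layer) and the 3.7 W/m2 doubling forcing are illustrative assumptions, not values from CCSM; the point is only that at year 70 the response lags well below equilibrium, so models with different equilibrium sensitivities differ less in their transient responses:

```python
import math

def transient_response(sensitivity_2x, years=70):
    """Euler-integrate C dT/dt = F(t) - lam*T under a 1%/yr CO2 increase.
    sensitivity_2x is the equilibrium warming for doubled CO2, in K."""
    f2x = 3.7                    # forcing for doubled CO2, W/m^2
    lam = f2x / sensitivity_2x   # feedback parameter, W/m^2/K
    heat_cap = 4.18e6 * 200.0    # 200 m water column, J/(m^2 K)
    dt = 365.25 * 24 * 3600.0    # one-year time step, s
    t = 0.0
    for year in range(1, years + 1):
        # forcing grows ~linearly because F ~ ln(CO2) and CO2 grows exponentially
        f = f2x * math.log(1.01 ** year) / math.log(2.0)
        t += dt * (f - lam * t) / heat_cap
    return t

# At year 70 the forcing has roughly reached f2x, but the warming is well
# short of the equilibrium value (which would equal sensitivity_2x).
low = transient_response(2.1)
high = transient_response(2.7)
```

With these parameters both runs end up well below their equilibrium sensitivities, and the gap between them is much smaller than the 0.6 K gap in equilibrium sensitivity; a deeper ocean (larger heat capacity) would compress the transient difference further.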
Is it possible to conclude from the increasing rate of warming since 1990 (including this year, with neutral ENSO, being as hot as 1998 with an intense El Nino) that climate sensitivity must be higher than, say, the lower end of figures suggested by models?
Comment by Almuth Ernsting — 16 Dec 2005 @ 6:02 AM
Back to the essence of the discussion. If there was larger natural variability in the past, you and other scientists presume that this points to a larger general sensitivity for all forcings, greenhouse gases included. Scientists like Esper, Moberg, Luterbacher and others disagree, and expect that a larger sensitivity for natural forcings (mainly solar and volcanic) comes at the cost of the sensitivity for man-made greenhouse gases.
Or to make an analogy with your example: instead of one spring and one platform, there are many springs and platforms interconnected with different levers (“interactions”) in the system. The overall effect of a small mouse jumping on one of the “sensitive” platforms may be just as large as putting a heavy brick on a “less sensitive” platform. In that case, the discovery of a doubling of the mouse’s effect says something about the sensitivity for the mouse platform, but nothing about the other platforms…
And there are differences in platforms. Solar has its largest direct effect in the stratosphere and further top down, as more and more is absorbed/reflected in different layers. Volcanic dust also has most direct effect in the stratosphere. Both have proven (opposite) effects on stratospheric temperatures, the Jet Stream position, wind and cloud positions and cloud amount and precipitation.
Greenhouse gases like CO2 and water vapour, and dust, have their largest effect at lower altitudes, and their effect diminishes from the bottom up. For methane and ozone, the maximum effect is somewhere in the upper troposphere/lower stratosphere. Thus there are different effects at different levels.
The main problem with current GCMs is cloud feedback. Clouds are responsible for most of the 1.5-5 K range in the different models’ projections for a CO2 doubling. Several models see a positive feedback of clouds when temperatures increase, but this seems to be wrong, at least in the tropics and the Arctic, where clouds form a strong negative feedback. See also the comments of Wielicki and Chen at the NASA page, and the next page about natural variability and the performance of the models in the tropics.
For volcanic forcing, there may be some overestimation of historical influences, as the influences of temperature and of reduced solar input (less insolation) on tree rings are hard to separate. For solar, there is a clear correlation between the sun cycle and cloud cover (+/- 2% over a cycle, no matter what the underlying physics may be), such that the original ~1.2 W/m2 (TOA) variability in the sun’s radiation is enhanced, which is underestimated in most models. What is difficult to know is how much change there was between e.g. the LIA and current solar strength, as we only have accurate measurements since the satellite age. Here we depend on the reconstructions, which give a wide range of 0.1-0.9 K for solar influences (with an average influence of 0.1 K for volcanic) for the period LIA-present, and thus a wide range for the real climate sensitivity for solar.
For sulphate aerosols, current models probably overestimate their influence, as there is no measurable effect of the large (over 60%) reduction in SO2 emissions in Europe at the places where the largest influence should be visible, according to the models. If the influence of aerosols is less than expected, then the influence of CO2 must be lower than expected, to fit the temperature trend of the 1945-1975 period. Unknown is what the overall effect of greenhouse gases/temperature was/is/will be on cloud cover. The measurements of cloud cover are much too short (and/or too coarse) to make any long-term correlation valid.
Thus, in summary, a change in sensitivity to one of the primary actors in climate variation only affects the general sensitivity of climate if all the feedbacks are essentially similar for all primary actors involved, which is very probably not the case…
[Response: In order to conclude that higher past variability meant lower sensitivity, one would have to demonstrate two things. First, one would have to show that sensitivity to known non-GHG forcing mechanisms (solar variability and volcanic aerosols) was greater than sensitivity to the same radiative forcing applied via GHG changes. Second, one would have to show that those non-GHG forcing mechanisms are operating today in such a way as to allow the recent warming to be matched despite a reduction in climate sensitivity to GHG changes. These things are not outside the realm of physical possibility, but nobody has demonstrated a physical mechanism that makes this scenario work. In contrast, the more conventional view, which I put forth in my article, has been turned into equations and analyzed quantitatively. –raypierre]
I think the essence of Raypierre’s argument is that if
T = c*FORCING
then absent any new information about forcing, new information suggesting larger variability in T implies larger c.
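That inference can be mimicked in a few lines: hold the forcing series fixed, scale up the temperature series, and the regression slope (the inferred sensitivity c) scales with it. The synthetic series below are purely illustrative stand-ins, not real reconstructions:

```python
import random

random.seed(42)
# A stand-in natural forcing history (arbitrary units); in the real argument
# this is the solar + volcanic reconstruction, assumed unchanged.
forcing = [random.gauss(0.0, 1.0) for _ in range(500)]

def fit_c(temps, forcings):
    """Least-squares slope through the origin: c = sum(T*F) / sum(F*F)."""
    return sum(t * f for t, f in zip(temps, forcings)) / sum(f * f for f in forcings)

c_true = 0.5
temp_old = [c_true * f for f in forcing]        # original reconstruction
temp_new = [2.0 * c_true * f for f in forcing]  # twice the variability

# Same forcing, doubled temperature variability => doubled inferred c.
```

This is the whole of Raypierre's point in miniature: with the forcing held fixed, any extra variability in T has nowhere to go except into a larger c.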
In the Esper et al. 2002 reconstruction paper, the authors conclude: “Therefore, the large multicentennial differences between RCS and MBH are real and would seem to require a NH extratropical forcing to explain them, one that attenuates toward the equator.” That sounds like a conclusion that more variable T implies more forcing, which is reasonable if you have adequate other information to constrain c.
Moberg et al. conclude: “This large natural variability in the past suggests an important role of natural multicentennial variability that is likely to continue.”
The argument that “larger sensitivity for natural (mainly solar and volcanic) goes at the cost of the sensitivity for natural and man-made greenhouse gases”, or that enhanced variability during pre-industrial times “would result in a redistribution of weight towards the role of natural factors in forcing temperature changes”, seems to rely on a model like the following:
T = a*ANTHRO + b*NAT
[Response: This indeed would seem to be the kind of thing Esper et al have in mind, but the problem is coming up with a physical explanation that would allow the system to behave this way. The difficulty with that is that both ANTHRO and NAT can be translated into equivalent radiative forcings, and you’d have to say why the system should respond more strongly to 1 W/m**2 from solar variability than 1 W/m**2 from greenhouse gas changes. Also, you’d have to show that your model was still able to fit the recent changes, where we know what the NAT forcing mechanisms are. I’m not saying that this is impossible, just that until somebody does it there’s no basis for concluding that higher past variability means lower climate sensitivity. –raypierre]
It’s not clear to me why one should conclude that more variable T implies bigger b and smaller a, when it could just as well imply bigger b and a. One possible mental model is that a is constrained by the instrumental record, while b is constrained by reconstructions. Then, if reconstructions turn out to be more variable, b is bigger absent new information on natural forcing, and some of the variability in the instrumental record that was thought to be due to a is really due to b. However, that presumes either an unknown natural forcing, or that some combination of known natural forcings fits the instrumental record to permit substitution of b for a. The former goes out on a limb; the latter should be easy to demonstrate as a statistical exercise with a simple energy balance model.
In any case, the statement that agreements such as the Kyoto protocol, which intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought isn’t the whole story. If sensitivity is lower, then it’s obvious that the deltaT created by GHGs at any level will be smaller, and thus that Kyoto will cause a smaller reduction in temperature from a lower baseline. However, if your definition of “effective” is staying below a given deltaT, low sensitivity could increase Kyoto’s chance of success – though it would be the sensitivity doing the heavy lifting, and the need for Kyoto would be less evident.
Tom, the Canadian model results are for proposed future CO2/SO2 emission/concentration levels. Compared to the past decades, the pattern (more emissions in South Asia) and the relative forcings are completely different, with much less relative influence of aerosols than today (due to faster increasing CO2 levels).
The huge change in SO2 emissions in Europe should be measurable, according to runs of the Hadcm3 model for the period 1990-1999, but it is not…
You said that climate can’t change without forcing, but what about El Niños? The global temperature increases, but it’s an entirely internal phenomenon. Or is there some forcing going on? I think this is an important point, because a skeptic might argue that the MWP was warm because of internal variability, and that it is this multi-century scale internal variability that’s driving the present-day warming. What’s your take on this argument?
[Response: Hi Andy! I’m looking forward to coming down to see you at A&M sometime. I’m glad you brought up the point about what might be called “internal” variability. The possibility of things like El Nino is why I left myself an out with the rather cryptic phrase to the effect that SOMETIMES the reason for the climate change can be set off from the collective behavior of the system and considered as an external forcing. In the case of the global temperature change caused by El Nino, there’s still a “reason” for climate change, to be found in the coupled air-sea interaction. It’s a reason that can be identified and studied, but the different links in the behavior are too intimately coupled to allow one to extract any part and call it a forcing. I didn’t bring this up in the context of the centennial Holocene or longer term LGM climate changes because nobody has yet put forth a viable mechanism accounting for such climate changes in terms of internally generated variability. All the quantified mechanisms involve forcings like volcanic and solar variability for the Holocene case, and CO2 and Milankovic (modified by the slow land-glacier response) for the LGM. It’s theoretically possible that some internal cycle in the ocean circulation could give Holocene temperature fluctuations as big as the LIA, but until one identifies such a mechanism, it’s essentially impossible to say what the consequences would be for climate sensitivity.
Just for the sake of illustration, though, here’s one scenario where higher Holocene variability could go along with lower climate sensitivity: Suppose that some unknown stabilizing mechanism makes the real world less sensitive to radiative forcing than our current models. Suppose also that — DESPITE THIS STABILIZING MECHANISM — some as-yet unknown ocean circulation cycle operates that is the sole cause of the Holocene centennial scale fluctuations, and that this cycle has reversed and is operating today, yielding a temperature change that happens to mimic what models give in response to radiative forcing changes. In that case, you could have a consistent picture with lower climate sensitivity. Aside from the fact that there’s no physical support for such a picture, this state of affairs is highly unlikely because you’d still have to account for things like the way the system responds to CO2 at the LGM, the observed radiative imbalance of the planet at present, the observed penetration of heat into the upper ocean, and so forth. I suspect a scenario like I’ve given is what people have in mind when they think that higher “natural” variability would indicate reduced sensitivity, but until somebody puts specific mechanisms on the table, it’s just science fiction. –raypierre]
T = Fsolar x FBsolar + Fvolcanic x FBvolcanic + FCO2 x FBCO2 + Faero x FBaero + …
where each F term is a radiative forcing and each FB term the corresponding feedback factor. To this must be added the other greenhouse gas forcings with their feedbacks, plus internal oscillations which may or may not be enhanced by the primary forcings. Further, the strength of the feedbacks depends on the initial conditions (e.g. ice age vs. interglacial). And last but not least, FCO2 and Faero include both natural and man-made CO2 and aerosol levels.
The basic problem thus is that we have (at least) four input variables and only one equation, where the output for the past century and a half is more or less exactly known. The constraint of the temperature trend thus applies to the sum of all components plus their feedbacks. That means that it is impossible to solve the equation without further information. Further information comes from proxies (ice cores, tree rings,…), which give (less exact) information about temperature and some of the primary actors of the past.
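The underdetermination described above can be shown with a toy least-squares exercise (all forcing series, coefficients and noise levels here are invented for illustration): when two forcing histories are strongly correlated, quite different pairs of sensitivities fit the one temperature record almost equally well.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # years of "record"

# Two made-up, strongly correlated forcing histories (W/m^2)
f_co2 = np.linspace(0.0, 2.4, n)
f_aero = -0.5 * f_co2 + 0.05 * rng.standard_normal(n)

# "True" temperature built from one choice of sensitivities (K per W/m^2)
T = 0.5 * f_co2 + 0.5 * f_aero + 0.05 * rng.standard_normal(n)

def rms_misfit(a, b):
    """RMS difference between the record and a candidate fit a*F_co2 + b*F_aero."""
    return float(np.sqrt(np.mean((T - (a * f_co2 + b * f_aero)) ** 2)))

# A rather different pair of sensitivities fits nearly as well, because the
# two forcings are close to collinear (the essence of the attribution problem).
print(rms_misfit(0.5, 0.5))   # the "true" combination
print(rms_misfit(0.8, 1.1))   # a different combination, similar misfit
```

Only an independent constraint on one of the forcings (exactly the role of proxies in the comment above) breaks the degeneracy.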
What Raypierre and others expect is that the feedbacks of the different forcings are essentially the same for the same change in forcing and the same starting conditions. Esper, Moberg and others expect different feedbacks for each individual forcing.
As there is only one temperature record, any change in forcing or feedbacks of one of the actors comes at the cost (or enhancement) of one or more of the others. If the influence of aerosols is less than expected, then the influence of CO2 must be decreased too, or it is impossible to explain the cooling period 1945-1975 with increasing CO2 levels. The graph of temperature vs. aerosol forcing on RealClimate makes that clear: if the aerosol forcing is near zero, then a CO2 doubling gives a 1.2 K increase in temperature. If the aerosol forcing is -1.5 W/m2, then the increase in temperature can reach 6 K! In both cases, you need to adjust the solar factor to fit the temperature trend of the past century.
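The trade-off behaves roughly as described; here is a back-of-envelope sketch (all numbers are illustrative round values, and equilibrium effects such as ocean heat uptake, which push the high-aerosol case toward the quoted 6 K, are deliberately ignored):

```python
# Implied warming for a CO2 doubling, inferred from a fixed observed warming
# while the assumed aerosol forcing is varied (illustrative numbers only).
dT_obs = 0.7        # K, rough 20th-century warming
f_ghg = 2.4         # W/m^2, rough greenhouse-gas forcing
f_2xco2 = 3.7       # W/m^2, forcing of a CO2 doubling

for f_aero in (0.0, -0.5, -1.0, -1.5):
    lam = dT_obs / (f_ghg + f_aero)   # K per W/m^2, implied sensitivity
    print(f"aerosol {f_aero:+.1f} W/m^2 -> 2xCO2 warming ~ {lam * f_2xco2:.1f} K")
```

The more negative the assumed aerosol forcing, the smaller the net forcing that must explain the same observed warming, and hence the larger the implied sensitivity.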
The same applies for variations in solar output (radiation) and/or insolation (Milankovitch cycles). Here too, one can expect that an increase of the solar influence, based on historical variations of the last millennium, will lead to a decrease of the influence of CO2, again necessary to fit the temperature trend of the last century in the temperature equation. Stott ea. have made variations with the Hadcm3 model to get a “best fit”, be it within the constraints of the model (like a fixed minimum influence of aerosols). It turned out that a doubling of the solar influence at the cost of 20% of the greenhouse gas influence gave the optimum fit…
In summary: there is a large uncertainty about the relative influences of the four main forcings (including their feedbacks), and it is quite certain that the feedbacks are different for the different forcings. Any change in the strength of natural (volcanic, solar) influences based on historical variations will have an opposite effect on the influence of greenhouse gases, and thus on man-made emissions.
About Kyoto: based on the effect (a few years delay before a CO2 doubling is reached) and the costs, I would prefer an enormous effort to search for and promote fossil fuel alternatives. That will have more effect on CO2 emissions in middle-long term than several Kyoto’s…
[Response: There’s some good thinking here, but I think you may have confused Gavin’s discussion of the attempts by Andreae et al to infer climate sensitivity from recent warming with the question of whether there’s a different sensitivity coefficient for aerosol vs GHG radiative forcing. There’s nothing in the material cited in Gavin’s post to support the latter. What Andreae et al do is very much in the spirit of the discussion given in my article. They, too, assume an equivalence in radiative forcing between GHG and aerosol. What they do is add different estimates of the aerosol radiative forcing to the GHG forcing, while keeping the temperature response fixed at the observed recent warming. That gives them various estimates of the climate sensitivity. In this case, there’s no uncertainty about the magnitude of climate variation, but uncertainty about the forcing. In the spirit of my analogy, they are talking about changing the estimated weight of the rats rather than the estimated displacement of the platform. Now, with regard to possibilities for different sensitivity coefficients, what we should really be thinking hard about is the implication of Shindell et al’s finding that the changes in solar UV can give an amplified stratospheric response, which can work its way into an amplified regional NH tropospheric response. One earlier comment tangentially alluded to this, but there are a lot of gaps that need to be filled in to say what such a result might mean for attempts at estimating climate sensitivity. Certainly, it would say that energy balance models are too crude a tool. One could build similar stories around the possibility that the solar effect is via a cosmic-ray and cloud connection, but I don’t think this is considered to be a viable hypothesis anymore, given the sloppiness uncovered in the way Svensmark et al analyzed their data. –raypierre]
There’s been discussion of RUNAWAY GW, and I think I can see the semantic problem. From a geologist’s view, it may mean “runaway from any earthly controls” (or negative feedback processes) – like what has happened on Venus. From a layperson’s (my) view, it may mean “runaway from any human controls.” That is what I mean when I use it (but the geologist’s view would be an extreme subset of that).
So, what I would propose, since “runaway” is a useful term, indicating a positive feedback scenario more succinctly, is to distinguish between “permanent runaway” and “temporary or limited runaway” GW (temporary on the geological time scheme of thousands or millions of years).
I had a horse when I was a kid. It would “runaway” with me. That is, when I got it into a fast canter or gallop, it would “take the bit” and run as fast as it could (we actually beat several Del Mar race horses on the beach that way). Whatever I would do, however hard I would pull on the reins, I could not stop or control that horse. However, eventually it would tire out on its own and stop. So, after the time it ran away with me into a forest and I ended up with broken ribs, I would never get it into a gallop, unless I was at the (empty) beach or on the race track.
[Response:Interestingly enough, though the Academie Française doesn’t have an approved term for “runaway greenhouse,” the term that has gained some currency in France is “Effet serre gallopant.”–raypierre]
I think GW is sort of like that. If we can reduce GHGs enough (slow down GW), we may be able to avoid triggering positive feedbacks that we may not have any control over. If not, those feedbacks may kick in, taking us up to a higher level of GW & other nasty effects — and we will have no ability to control it, even by reducing our GHGs to near zero. Then after much damage from a human (and animal & plant) perspective is done, the warming will level out (stabilize) and eventually come back down again over eons.
Sort of like the end-Permian GW & extinction period, though perhaps not quite so severe (but who really knows).
Comment by Lynn Vincentnathan — 17 Dec 2005 @ 11:34 AM
RE #24, Ferdinand you state, “Several models see a positive feedback of clouds when the temperatures increase, but this seems to be wrong, at least in the tropics and the Arctic, where clouds form a strong negative feedback.”
I’m not even an amateur climate scientist, but my logic tells me that if clouds have a stronger negative feedback in the Arctic, and I know (from the news) that the Arctic is warming faster than other areas, then the forcing GHGs (CO2, etc.) may have a stronger sensitivity than suggested, but this is suppressed by the cloud effect. Then what if we got to another “quantum-type” level where the cloud effect disappeared or reversed (I don’t know what I’m talking about here – skating on really thin ice), and all we had left was the unsuppressed forcing-GHG effect? Then it would really, really get hot.
RE the main points made in the post, I think I have also used the same logic to suggest that if natural variability is greater than thought, then our A-GHGs should also have a higher sensitivity.
I know I have made the argument that more info about natural forcings being really strong all the more makes it a matter of prudence to reduce A-GHGs as much as possible, and pronto, since we wouldn’t want a situation in which natural forcings (a bunch of volcanoes or greater solar output in the near future) piggy-back on our anthropogenic greenhouse forcings. That would really be bad. Since we can’t control the sun or volcanoes, it behooves us all the more to do what we can do and reduce our own GHGs.
Comment by Lynn Vincentnathan — 17 Dec 2005 @ 11:57 AM
Re #31: You say, “About Kyoto: based on the effect (a few years delay before a CO2 doubling is reached) and the costs, I would prefer an enormous effort to search for and promote fossil fuel alternatives.”
However, I don’t understand how you expect this search and promotion to happen. Some would recommend crash government programs but others, who believe more in markets, argue that government is not good at choosing the winning technologies. And, the best way to get the market involved is to internalize the cost associated with greenhouse gas emissions rather than making the earth’s atmosphere a free sewer for these gases. This is exactly what Kyoto does.
[Response: Yes, indeed. The stumbling block right now is that coal is cheap and is likely to remain so. It is cheap because the environmental damage caused by coal burning isn’t factored into the price. A profit-making private company not only has no reason to avoid burning coal, it in some sense has an obligation to burn coal if that produces the greatest profit without breaking any laws. There’s no reason to expect a company to behave any other way. The Kyoto protocol helps to address this by imposing a kind of extra cost on burning coal, but there is the problem that this cost is applied non-uniformly. It doesn’t affect the US or the developing world. Naturally, I would vastly prefer a global tax on coal burning, with some kind of mechanism to plow back revenues into developing world aid. The argument for Kyoto isn’t that it’s the best that can be done, but that it’s all we have right now, and it sets at least a few countries moving in the right direction. –raypierre]
To my mind, the main benefit of Kyoto is not the emission cuts per se but the technologies that will be developed in order to make these cuts. The supposed dichotomy between Kyoto and technology is a completely false one because in market economies, technologies are not developed to solve problems whose costs are externalized. If I can offload the cost on to everyone else, why should I bear it myself?
Thanks for the several responses (on my and others’ comments). Here is a reaction on the main points about the natural (solar, volcanic) vs. man-made (GHGs, aerosols) sensitivity:
– If there was a larger temperature variation in the past millennium, the mathematical consequence is that an increase of one of the terms of the temperature trend equation must come at the cost of one or more of the other terms. There is only one temperature trend, which is the result of all individual terms (forcings and feedbacks) and against which all proxies are calibrated, so a larger influence of solar in the past implies a larger influence at present. The same reasoning is used by Andreae and Gavin for aerosols vs. CO2 alone (Andreae) and for aerosols vs. all other sensitivities (Gavin), and by Stott ea. in a search for the relative strength of the different sensitivities in the Hadcm3 model. Both sulphate aerosols and CO2 have their influence in the (lower) troposphere, while solar and volcanic have their highest influence in the stratosphere; this is essential in the discussion.
– It is practically proven that tropospheric aerosols have (far) less influence on temperature than expected by current models, see my comment on aerosols here and the lack of increase in insolation, despite a huge reduction of aerosols in Europe, according to Philipona ea. This necessitates a reduction of the sensitivity for the CO2 forcing.
– Climate probably has a higher sensitivity for solar than for CO2, for the same change in forcing. This is based on the fact that, while the change in total energy is only 0.1% during a sun cycle, the change in UV is over 10%, which has its largest effect in the stratosphere. From Stott ea.:
“We find that climatic processes could act to amplify the near-surface temperature response to (non enhanced) solar forcing by between 1.34 and 4.21 for LBB [Lean ea.] and 0.70 to 3.32 for HS [Hoyt & Schatten], although degeneracy between the greenhouse and solar signals (especially HS; see earlier in this paper) could spuriously increase this upper limit.”
Note that the last remark can go either way, as the solar signal can even be more enhanced at the cost of the sensitivity for the greenhouse signal…
And from Hansen ea.:
“Solar irradiance change has a strong spectral dependence [Lean, 2000], and resulting climate changes may include indirect effects of induced ozone change [RFCR; Haigh, 1999; Shindell et al., 1999a] and conceivably even cosmic ray effects on clouds [Dickinson, 1975].
Furthermore, it has been suggested that an important mechanism for solar influence on climate is via dynamical effects on the Arctic Oscillation [Shindell et al., 2001, 2003b]. Our understanding of these phenomena and our ability to model them are primitive,…”
While there are doubts about the link between cosmic rays and cloud cover, there is an observed significant link between (low) cloud cover and solar radiation within the last two sun cycles. I don’t see any reason why this shouldn’t be included in current models (including a long-term factor for changes in solar radiation since the Maunder Minimum). After all, the (secondary) influence of aerosols (on clouds) is included in models too, and its sensitivity is far from certain…
About the models reproducing past temperature trends:
It is known that multivariable processes can fit trends with different sets of parameters. Climate is not different, as can be seen in the fact that a broad range of cloud feedbacks (compensated by other parameters…) or a range of combined aerosol/CO2 sensitivities is able to fit the temperature of the past century. Even an unrealistic tenfold increase of (H&S) solar (see Fig.1 in Stott ea.) does fit the temperature trend to an acceptable level, if one should reduce the sensitivity for CO2/aerosols far enough…
Current models also can reproduce other transitions (LGM-Holocene) with reasonable accuracy, but this is mainly in periods where there is a huge overlap between temperature (as initiator) and CO2/CH4 levels (as feedback). I am very curious whether the same models with the same parameters also reproduce the Eemian to 110,000-years-before-present period, where there is an almost total separation of the temperature and CO2 trends…
Lynn, the increase of temperatures in the Arctic is mainly the result of an inflow of warmer air from lower latitudes (with the current AO) and the change in albedo (mainly in summer). The influence of greenhouse gases is one order of magnitude lower in this case. The interesting part is that more clouds in summer as well as fewer clouds in winter both act as negative feedbacks: less warming in summer, with more clouds reflecting the sunlight, and more cooling in winter, from fewer clouds allowing more heat to escape to space. So much so that there is a cooling temperature trend in winter, large enough to refreeze almost all the ice that melted in the other seasons.
What will happen if the AO changes is an open question: on one side there may be less inflow of warmer air; on the other, this may result in opposite changes in cloud cover…
About natural variability and sensitivity for man-made GHGs, here I disagree with Raypierre in another (large) comment…
[Response: It is not at all established that the Arctic warming is due to the AO. For that matter, even if the AO is part of the Arctic climate change, one has to face the possibility that changes in GHG are affecting the AO, a point made by Palmer and Molteni. As for the points Ferdinand makes in his (large) comment, I still contend that Ferdinand is misinterpreting the work on climate sensitivity to various forcings, and the need to make the sensitivity inference consistent with what we know about the physics of the system. Even if it could be shown that climate is more sensitive to solar variability than the strict radiative forcing would suggest (along the lines of Shindell et al) one would still have to contend with the fact that we know the solar variability for the past fifty years quite well, and it does not do the kind of things necessary to give the present warming pattern. This is why Stott et al conclude that “Nevertheless the results confirm previous analyses showing that greenhouse gas increases explain most of the global warming observed in the second half of the twentieth century,” DESPITE their indications that HadCM3 underestimates the observed response to solar forcing. (Note also that Stott et al isn’t the final word on solar sensitivity, since their method doesn’t guarantee that what they are calling “solar response” is actually solar response and not simply something else that happens to be correlated with the solar cycle.) I also dispute the claim that there is a significant association between low clouds and cosmic rays. The analysis purporting to show this correlation is so highly suspect as to border on worthless (see Damon and Laut, Eos, Vol. 85, No. 39, 28 September 2004). –raypierre]
Joel, if you are convinced that there is a huge influence of GHGs on temperature, then Kyoto indeed is peanuts and one needs to reduce CO2/CH4 emissions to near zero within a few decades to prevent disaster. That will not be obtained by buying a Prius or by other low-to-medium cost measures in factories (many energy-intensive factories learned to be economical in the seventies – as a matter of survival). Promoting (if you wish, crash) research into all alternatives for generation (solar, geothermal,…) and cost-effective storage of energy, plus subsidies for (private) installations, will have far more effect.
With the current Kyoto, any energy-intensive factory will simply move out to developing countries if the cost of energy, due to taxes or carbon credits, is too high. Because of the difference in efficiency and emissions, the net effect will be more CO2 and more pollution…
Btw, how much of the current and emerging (European) energy taxes is/will be used for alternatives research or subsidies for installations?
where x is multiplication and * is convolution.
However, as H is unknown and derived from T (attribution), we are faced with the well-known problem of identifiability in closed-loop systems.
Identification of Closed Loop Systems – Identifiability, Recursive Algorithms and Application to a Power Plant, Henk Aling, 1990, Dissertation, Delft University.
This highly mathematical study tries to find the constraints under which an event in a power plant, say a pressure wave, can be traced back to a source fluctuation (fuel or oxygen).
One of his conclusions:
“In practice the estimated covariance function of the joint output/input signal obtained by a closed loop experiment will *never* have the structural properties associated with the feedback system. This is due to the finiteness of the dataset, model structure mismatch and other circumstances by which the ideal assumptions, used in the derivation of the identifiability results, are violated.”
In other words, closed loop systems contain signals that cannot be attributed to a given forcing.
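Aling’s conclusion can be illustrated with a toy closed-loop simulation (the plant gain, feedback gain and noise model below are all invented for illustration): a naive regression of output on input badly misestimates the plant gain, because the feedback loop correlates the input with the coloured disturbance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
a_true, k, rho = 1.0, 0.5, 0.9   # plant gain, feedback gain, noise colour

e = np.zeros(n); u = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.standard_normal()  # coloured disturbance
    u[t] = -k * y[t - 1]                           # feedback "controller"
    y[t] = a_true * u[t] + e[t]                    # plant

# Open-loop-style least squares on closed-loop data: biased, because u[t]
# depends on y[t-1] and hence on the correlated disturbance history.
a_hat = float(np.sum(u * y) / np.sum(u * u))
print(a_hat)  # far from a_true = 1.0
```

The same data, treated as if it came from an open-loop experiment, would lead to a confidently wrong attribution of the output variations to the input.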
Ferdinand: The Boer & Yu, 2003 paper shows that the correlation between the pattern of aerosol forcing and the pattern of temperature response has only 20% covariance, and that the covariance of the response to GHG and aerosol forcing is >60%. On your page, you show the results for HADCM3 aerosol and ozone (actually the difference between total forcing and GHG forcing, but it should be approximately the same). The similarities with GHG forcing are clear – mostly over the N hemisphere, mostly over land, and with polar amplification. The biggest effect is in the Barents Sea. This shows that the temperature response, even to a geographically defined forcing such as aerosols, shows little overlap with the spatial pattern of the forcing itself – although of course there is some overlap.
You say that “The huge change in SO2 emissions in Europe should be measurable, according to runs of the Hadcm3 model for the period 1990-1999, but it is not…”. However, the real world is not aerosol and ozone only. To check whether the model is accurate, you need to either strip out the other forcing effects (GHG etc) from the real world data (which we can’t), or add the other forcings into the model. When you do this, there is no major anomaly – see Stott et al, 2000. The model does not, in fact, predict the major negative anomaly that you say it does.
Regarding the feedbacks to different forcings: the models can and do show different responses to equivalent forcings from different sources. For example, the response to solar forcing in HADCM3 is 50% of the response to an equivalent GHG forcing (see Lambert et al, 2004 – last page). This is at the low end of the range. So, when Stott et al use HADCM3 to show that solar forcing may be underestimated, is this a revelation about models in general or just about HADCM3? Another point: although it’s true to a first approximation that forcings are linearly additive, it does not always hold true – e.g. see Meehl et al 2003. Finally, although there is uncertainty about solar forcing, this is also true for GHG and other forcings. Furthermore, the pattern of solar and volcanic forcing is uncertain (e.g. Hoyt vs Lean for solar, Robertson vs Crowley for volcanic). The temperature responses in reconstructions from the past millennium can be reproduced (approximately) without inferring novel solar mechanisms: therefore, they cannot be used as evidence for novel solar mechanisms.
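The point that models respond differently per unit forcing from different agents is often parameterized with per-forcing “efficacy” factors multiplying a single sensitivity; here is a minimal sketch (the sensitivity and efficacy values are invented, except that the solar entry of 0.5 echoes the HADCM3 number quoted above):

```python
# Global-mean response as an efficacy-weighted sum of forcings:
#   dT = lambda * sum_i(e_i * F_i)
lam = 0.6  # K per W/m^2, assumed global sensitivity (illustrative)

forcings = {            # (forcing in W/m^2, efficacy) -- illustrative values
    "ghg":   (2.4,  1.0),
    "aero":  (-1.0, 1.1),
    "solar": (0.3,  0.5),   # HADCM3-like reduced solar response
}

dT = lam * sum(f * e for f, e in forcings.values())
print(f"dT = {dT:.2f} K")
```

Setting all efficacies to 1 recovers the simple linearly additive picture; the debate above is essentially about how far the e_i depart from 1.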
Raypierre, according to Wang and Key in Science (unfortunately under subscription):
“Are these changes due to large-scale advective processes rather than to local radiative effects? The correlation between surface temperature and the Arctic Oscillation (AO) index (18), which can be used to represent large-scale circulation patterns, is shown in Fig. 5. The correlations are as expected: positive in northern Europe and northern Russia but negative over Greenland and northern Canada. Given the increasing cooling effect of clouds found here, the rise in surface temperature is clearly related to large-scale circulation.”
Of course, there may be a change in the AO index due to GHGs, maybe even as strong as the influence of solar on the AO… It remains to be seen what will happen with temperatures and cloud cover if the AO index changes.
For cloud cover and the solar cycle, this is not about cosmic rays and cloud cover, but about solar radiation (in general) and cloud cover. From Kristjanson ea. (caption of Fig. 2):
“Significance level of correlations: 67% for cosmic rays and low clouds, 98% for solar irradiance and low clouds… …30% for cosmic rays and [daytime] low clouds, 90% for solar irradiance and [daytime] low clouds.”
Further discussion about sensitivity in a response to several interesting points made by Tom Rees…
The amazing thing about the Eemian is that winter temperatures in Europe were comparable to 20th century values while summer temperatures were 4 degrees higher, yet rivers kept flowing during summer.
By using mutual climatic range methods, the thermal climate of the early phase of the Eemian Interglacial has been estimated quantitatively, showing that mean July temperatures were about 4°C above those of southern England today. Mean winter temperatures were not much different from those nowadays. This phase was probably the thermal maximum of the Eemian Interglacial. Precipitation levels are difficult to quantify but were adequate to maintain flowing rivers in England throughout the year. These results are in agreement with the presence of other fossils, both plants and animals, in the same deposits.
Sorry, this is a long response, but a rather fundamental discussion of the validity of sensitivities and forcings used in current climate models…
Tom, if you compare different models for the regional distribution of the anthropogenic aerosol forcing and/or temperature response, there are no two models which agree with each other. See the Hadcm3 model response to aerosols here, and compare that to the Canadian model Fig. 2 fa and Ta and Fig. 3, the Japanese model Fig. 3 (http://cfors.riam.kyushu-u.ac.jp/~toshi/research.html), Hansen ea. Fig. 3 for sulphate aerosols and, last but not least, what the IPCC expects in Fig. 6.7(d) and (h). Not directly convincing for the reliability of the regional resolution of the models…
[Response: Or a clear demonstration that regional climate is not controlled by purely regional forcing. This is indeed a statement about the predictability of regional climate (still a cutting edge topic), but your feeling that there must be a strong link is not based on any actual studies. – gavin]
The expected global average direct + indirect forcings for aerosols vary between -1.0 (Japan) and -1.4 W/m2 (Hansen, IPCC) for the past centuries and -0.9 to -1.3 W/m2 for future (2050, 2100) emissions (Canada). The Canadian model suppresses the influence of aerosols in the regional distribution far more, as the direct forcing of GHGs increases to 3.3 and 5.8 W/m2 for resp. 2050 and 2100 against 2.3 W/m2 in the other models which use past and current emissions.
The Hadcm3 model has calculated that the largest increase in temperature which may be attributed to the reduction of aerosol load (40%) over the period 1990-1999 is somewhere in NE Europe; other models place it more in Southern Europe. Anyway, the sum of aerosol decrease, GHG increase and positive NAO (all with a warming effect for W, NW and NE Europe) in the same period is only visible in the West and North European temperature trends as a stepwise change in 1990 and no trend thereafter. This clearly points to the stepwise change in the NAO. And as far as I remember, wasn’t acid rain (acids formed from SO2 emissions in rainwater) in Scandinavia caused by industry in England, thanks to the prevailing SW winds? As tropospheric aerosols have an average lifetime of only 4 days before raining out, the influence must be at and near the sources… So what is the real influence of aerosols?
This all points to a very low sensitivity for or a low forcing of aerosols. Consequently a lower sensitivity for CO2 forcing…
[Response: Illogical. The model you cite has similar sensitivity to both aerosols and CO2, how you can conclude its results are right for one, and wrong for the other makes no sense.]
Stott ea. 2003 is similar to Stott ea. 2000, except that they used a large forcing for solar (10x) and volcanic (5x) in separate runs to see if the relative influence of both might need to be adjusted, as the Hadcm3 model possibly underestimates the – relatively – weaker forcings. Which is what they discovered for solar. The problem with this test is precisely the constraint of a fixed aerosol forcing trend and a fixed sensitivity: without that, the adjustment for solar on one side and GHGs/aerosols on the other side might have been much larger, while maintaining the same (or a better) result.
Climate sensitivity to solar (for the same forcing) in the Hadcm3 model indeed is only 50% of that to other forcings in the same model. This is in contrast to the model that Hansen ea. 1997 used, where the general variation in sensitivity is within 20% (but what do other models show?), and contrary to what Raypierre expected (all forcings having the same sensitivity). Moreover, from the Hansen ea. 1997 abstract:
We show that, in general, the climate response, specifically the global mean temperature change, is sensitive to the altitude, latitude, and nature of the forcing; that is, the response to a given forcing can vary by 50% or more depending upon characteristics of the forcing other than its magnitude measured in Watts per square meter.
The principal mechanisms involve alterations of lapse rate and decrease (increase) of large-scale cloud cover in layers that are preferentially heated (cooled).
Sounds like differentiated solar influences (to what extent were these included in the GCM that Hansen used?)…
[Response: Different forcings can have different impacts (which can be measured by the efficacy – Hansen et al, 2005), and to some extent that is model dependent. But the differences are not orders of magnitude, more like a few tens of percent at most.]
By comparing precipitation observations to shortwave and long wave forcing time series, we find that most of the forced variation in precipitation appears to be driven by natural shortwave forcings.
Again solar influences, linked to clouds and precipitation…
[Response: What? He is talking about short wave surface forcing not ‘solar’ forcing at the TOA. ]
Further, I totally agree with you that there are a lot of unknowns in the forcings as well as in the sensitivities. With the current uncertainty, one can fit the past with different sets of forcings and sensitivities, making any prediction of the future rather questionable. Therefore we urgently need a more accurate reconstruction of climate in the pre-industrial millennium, to get rid of the large historical range in solar forcing/sensitivity of about 1:9, depending on the chosen reconstruction. That has nothing to do with inventing some novel solar mechanism (although the exact mechanism is not known), but with implementing in the models the observed changes in clouds that result from solar changes. The discovery of the exact mechanism (probably along the lines mentioned by Hansen) may be just a question of time.
[Response: The chance that models are underestimating solar forcing by an order of magnitude is very, very slim. What actual evidence is there for this? The ‘argument from personal incredulity’ is not a sound basis. – gavin]
Wow! Nice to see that the post-Eemian cooling indeed was possible without the help of CO2. It is a pity that they stopped the simulation before the CO2 decrease (111,000-106,000 BP), to see what the model produces for further cooling, compared to reality…
I thought that some models predict a reduced THC, even a shutdown, as a result of higher temperatures? But have a look at the Kaspar and Cubasch simulation and the reconstructed European Eemian temperature distribution (link thanks to Tom Rees); it looks more like a very strong NAO (AO/AMO?), albeit shifted more to the East.
Re: #39 (Hans Erren):
> For more on identifiablity see the work of Kitoguro Akaike.
It seems that you mixed up names of three mathematicians of the same group, Hirotugu Akaike, Genshiro Kitagawa and Makio Ishiguro. Prof. Kitagawa keeps records of the retired Prof. Akaike. The subject of Akaike’s works is essentially linear autoregressive models of time series. He found applications in controlling chemical plants. Autoregressive models are also found useful in studies of oscillations of the solid earth, but (in my opinion) not so much in atmospheric science. (I once hoped to apply them but gave up.) It is probably because oscillations of solids have discrete power spectra in the frequency domain while atmospheric phenomena have continuous spectra.
I agree that the issue of estimating climate sensitivity is conceptually something like “identifying” H from F and T in your formula. And, since the system function (H) is likely to be dependent on time scales, thinking in the frequency domain seems a good idea (especially in modeling studies where F can be specified in known forms), though we must also anticipate complication due to essential nonlinearity of the system. I feel, as you do, that it might not yield useful information from observations where F is not known precisely.
Now let’s go back to the original thread, everyone, equipped with Hans Erren’s formula (in a generalized sense). If the situation is as simple as T = F H, and F does not change and T turns out to be larger, then H also turns out to be larger. This is a paraphrase of Raypierre’s story. In a similarly simplified way, Esper’s story may be like this: T_1 = F_1 H and T_2 = (F_1 + F_2) H, and T_1 turns out to be larger but T_2 and F_2 do not change. Then it is likely (though also dependent on actual numerical values) that F_1 turns out to be larger and H smaller.
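To make the algebra concrete, here is a minimal Python sketch of the two stories above; all the numbers are invented purely for illustration and come from no actual reconstruction:

```python
# Minimal sketch of the T = F * H bookkeeping discussed above.
# All numbers are invented for illustration only.

def sensitivity(T, F):
    """Infer the response H from temperature change T and forcing F."""
    return T / F

# Raypierre's story: F fixed, reconstructed T larger -> inferred H larger.
F = 1.0                          # W/m^2, held fixed
H_old = sensitivity(0.2, F)      # smaller reconstructed amplitude
H_new = sensitivity(0.6, F)      # larger reconstructed amplitude
assert H_new > H_old

# Esper's story: T_1 = F_1 * H (pre-industrial, natural forcing only)
# and T_2 = (F_1 + F_2) * H (modern, natural + anthropogenic).
# If T_1 grows while T_2 and F_2 stay fixed, solving both equations
# gives H = (T_2 - T_1) / F_2 and F_1 = T_1 / H.
T_2, F_2 = 0.8, 2.0
for T_1 in (0.2, 0.6):
    H = (T_2 - T_1) / F_2
    F_1 = T_1 / H
    print(f"T_1={T_1}: H={H:.2f}, F_1={F_1:.2f}")
# T_1=0.2 gives H=0.30, F_1=0.67; T_1=0.6 gives H=0.10, F_1=6.00:
# a larger pre-industrial amplitude implies a larger natural forcing
# and a smaller sensitivity, exactly as described above.
```

The actual numerical outcome of course depends on the real values of T_2 and F_2, as noted above, but the direction of the trade-off is fixed by the algebra.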
One claim that I have come across frequently is that volcanic eruptions influence the climate much more than human activity. So what is the impact of volcanic activity on the climate in relation to human activity? I searched this site, but couldn’t find any satisfactory answers…
Gavin, here follows some reactions and clarifications on your comment…
– If there is a substantial forcing by sulphate aerosols, it is concentrated in three main areas (at least for the direct forcing). The global change of >1 W/m2 thus is much higher in those smaller areas. Forcing changes of similar magnitude due to water vapour variations are measurable as regional temperature changes in Europe (see Philipona), but aerosol changes are not…
– I agree that both CO2 and sulphate aerosols have similar sensitivity, but with opposite sign. Thus if one of them has a lower sensitivity (or, in the case of aerosols, a lower forcing), the other must follow, if only to match the 1945-1975 temperature trend. See your own work and that of Andreae…
– That different forcings can have different impacts is exactly the origin of the discussion. Some don’t believe that there may be different climate responses for equal forcings. The GISS model finds an efficacy for solar of 0.92, but doesn’t consider the secondary responses to solar changes. In the Hadcm3 model, the doubling of the (too low) sensitivity of solar leads to a 20% decrease of sensitivity for GHGs. Thus a manifold increase of the smaller forcings is not unthinkable…
– From Lambert et al. (page 4):
“The solar forced run exhibits a larger precipitation response per degree of warming than the CO2 forced run, as expected from the theory outlined earlier in this section, even though the precipitation response [note: this must be the temperature response] per unit forcing is smaller than for CO2.”
The run was done with TOA solar changes. And again with the (uncorrected!) Hadcm3 model, which has half the sensitivity for solar forcing compared to CO2 forcing…
– I don’t think that the influence of solar is an order of magnitude larger than incorporated in current models. The variation in effect of 1:9 for solar (0.1 K in MBH, 0.9 K in borehole reconstructions, 0.1 K for volcanic in both) is simply the result of different millennium reconstructions. Thus it is very important to know the real impact of historical solar changes: 0.1 K in the past implies a climate sensitivity for anthropogenic forcing at the high end, while 0.9 K implies a very low effect of anthropogenic forcing, if the instrumental temperature trend of the last 1.5 centuries is used as reference. I expect that reality is somewhere between those two extremes, towards the lower end…
There is an aspect that hasn’t been addressed until now: Esper et al. are talking about “a redistribution of weight towards the role of natural factors in forcing temperature changes”. This conclusion is not meaningful as a general statement. The weight of the different factors on temperature changes always depends on the specific time period and timescale you are observing. A general, time-independent statement would only be reasonable if you compared the largest possible influence on temperature, i.e. the largest possible temperature change a factor can produce, and these are neither known nor discussed to my knowledge (and even then, it would be timescale dependent).
An active forcing through a certain forcing factor at a given period and on a given time scale can only be expected if this factor changes its influence at that time and on that timescale. The temporal behaviour of most factors is very unsteady.
When drawing conclusions from a possibly higher temperature variability during the last millennium about the weight of different forcing factors, we therefore have to consider the time period (and possibly the timescale) we are talking about and the corresponding forcings of the different factors.
An enhanced variability of temperature during the last millennium, suggested by the work of Esper, Moberg, etc., relates mainly to the time frame 1000-1900 and the centennial timescale. The anthropogenic influence, the Kyoto protocol and future projections mainly concern the period 1950-2100 and the decadal to multi-decadal timescale. The difference in timescale is minor, but it’s a different time period.
I think we agree that the forcing from anthropogenic GHG concentrations starts at about 1800 and then increases, reaching its strongest influence after about 1950 (more or less steady GHG increase since 1980). Thus the GHG forcing before 1800, i.e. for most of the Esper/Moberg time frame, is near zero on the multi-decadal timescale, because the concentrations hardly changed. The GHG forcing only started to become important at the very end of the Esper/Moberg period. The weight of man-made GHG forcing therefore is very low (a few percent at most) and the weight of natural forcings is near 100 percent for this period.
However, for the period 1950-2000, there seems to be a consensus that natural forcing (solar, volcanic) as a whole is near zero or slightly negative (IPCC 2001) on the multi-decadal timescale. The weight of natural factors therefore is also near zero, whereas the weight of anthropogenic forcing (GHG minus aerosols) is very high. If the forcing due to a certain change of solar and/or volcanic activity had been higher than previously assumed, this wouldn’t change the weight of these factors much, neither for the Esper/Moberg period nor for the period since 1950. This is independent of the question whether the sensitivities to natural and anthropogenic forcings might differ. If a factor doesn’t change its properties, there is no forcing.
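The weight bookkeeping in this argument can be sketched numerically; the forcing-change numbers below are hypothetical stand-ins, not values taken from the IPCC tables:

```python
# Hypothetical forcing changes (W/m^2) for two periods; the point is the
# relative weights, not the absolute numbers, which are invented here.

def weights(forcings):
    """Fraction of the total absolute forcing change per factor."""
    total = sum(abs(f) for f in forcings.values())
    return {name: abs(f) / total for name, f in forcings.items()}

pre_industrial = {"solar": 0.3, "volcanic": 0.2, "ghg": 0.01}   # 1000-1900
late_20th = {"solar": 0.05, "volcanic": -0.1, "ghg": 1.5}       # 1950-2000

w1 = weights(pre_industrial)
w2 = weights(late_20th)
print(w1)   # natural factors carry ~98% of the weight
print(w2)   # GHG carries ~91% of the weight

# Scaling the natural forcings up by a common factor barely changes
# which factor dominates each period:
scaled = {"solar": 0.15, "volcanic": -0.3, "ghg": 1.5}
print(weights(scaled)["ghg"])   # still the dominant factor
```

The last lines illustrate the claim that a uniform rescaling of the natural forcings leaves the dominant factor of each period essentially unchanged.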
Moreover, if the natural forcing since 1950 is slightly negative, an enhanced natural forcing would mean that anthropogenic forcing must be greater than expected to explain the observed warming. And that would mean that future warming has been underestimated. The opposite, an overestimation of future warming, could only be suggested if the natural forcing since 1950 had been positive. However, the work of Esper, Moberg, von Storch, etc. has nothing to do with changes of natural forcing factors during the last decades and therefore does not alter the corresponding IPCC findings.
As long as the latter findings are upheld, I can’t see any logical reasoning by which the detection of enhanced variability before the instrumental period might lower the projected future warming.
I agree that time scale is important in the whole discussion. And I agree that there is very little influence from anthropogenic GHGs in the full Holocene period until approximately 1900. But I disagree that the influence of solar + volcanic in the period 1950-2000 is near zero, for the very reason that you forget the time frame.
Indeed there is little trend in solar radiation measured since the start of the satellite era (unfortunately only a few decades of data). For the period before that, we depend on indirect indicators of solar variation like sunspots, magnetic field data and cosmogenic isotopes. These are interconnected, but do not match 100%. What is clear is that the sun’s activity since 1930-1940 is higher than in any period of the preceding 1,000 years (see Usoskin, Solanki et al.), or even the preceding 8,000 years as approximated by 14C data and sunspot number, with a more than doubling of the sun’s magnetic field in the past 100 years.
It takes time for the earth to reach a new equilibrium (especially for the oceans), even if the extra forcing stays steady after a shift to one side or the other. That is true for GHGs as well as for solar disturbances…
First point: Esper et al. talk about natural forcing as a whole, not just solar forcing. Thus we have to include volcanoes. They compare anthropogenic forcing (GHG + aerosols + other forcings) to natural forcing (solar and volcanoes, in principle). If you only look at solar forcing, the picture might be somewhat different. During the second half of the 20th century there is only a very small solar forcing, if any, whereas during the same time there is a negative forcing by volcanoes, which probably gives a net negative natural forcing during that time (IPCC 2001, http://www.grida.no/climate/ipcc_tar/wg1/448.htm)
Second point: You seem to suggest a time lag of global temperature behind solar forcing, or a lagged reaction due to ocean inertia. This time lag or lagged reaction must be at least about 50 years to explain the recent warming, since the rise of solar activity ended in about 1940.
Shindell et al. (2001, http://pubs.giss.nasa.gov/docs/2001/2001_ShindellSchmidtM1.pdf) found a time lag of about 20 years for the solar influence on the AO/NAO, with impacts on the regional scale but hardly on the global scale. A time lag of about 20 years cannot explain the temperature rise after 1970, which started 30 years after the rise of solar activity had stopped.
If you compare the Usoskin et al. data with the temperature evolution of the last millennium, there is hardly any evidence of a 50-or-more-year time lag, even on the centennial time scale. The strongest change, i.e. the rise of solar activity and temperature at about 1900, is more or less synchronous. I can’t see room for a time lag of more than some years there.
Recent GCM calculations (Meehl et al. 2005, http://www.sciencemag.org/cgi/content/full/307/5716/1769, Fig. 1B) show that there is only a very weak remaining inertial response of temperature after 30-40 years for a forcing comparable to the solar forcing of 1900-1940, i.e. in the region of about 0.02ºC per decade. Far too little to explain recent warming.
However, since the warming after 1970 is comparable to the one of 1900-1940, an inertial response seems unlikely; it would have to be a time lag. But how would you then explain the temperature rise of 1900-1940?
Regrettably, I’ve been tied up with other things, and haven’t been able to actively monitor this interesting conversation in the past week or so. I will make a few last remarks concerning the issue of possible differences in sensitivity to solar vs. GHG radiative forcing, mostly prompted by Ferdinand’s musings.
First, note that in my article I stated that the equivalence of solar radiative forcing to GHG radiative forcing is not as precise as, say, the equivalence of methane to CO2 forcing. Solar forcing has a different pattern than GHG forcing. In particular, sunlight is absorbed mostly at the surface, and energy changes have to work their way into the atmosphere via changes in surface conditions. This can have a big effect on the short-time response over the oceans, where the surface doesn’t have time to catch up. For radiative-convective (column) models, the energy changes at the surface are rapidly mixed into the troposphere by convection, and so the equivalence of solar to GHG forcing seems pretty secure. This has been known since Manabe and Wetherald and Manabe and Strickler. Now, when you go to a full GCM, things, a priori, start to look fuzzy. The changes in solar luminosity cause a forcing pattern that is very non-uniform both in time and space, and not like the pattern of GHG forcing. You might think this would completely invalidate the concept of radiative forcing equivalents, but GCM simulations with large changes (equivalent to doubled CO2) show a remarkable degree of equivalence (see Govindasamy and Caldeira, GRL Vol. 27, 2001). I don’t know what is going on in the Hansen study cited by Ferdinand, but last time I read it I didn’t read it with this point in mind, and I haven’t had time to check that the results are being cited correctly.
Generally speaking, it’s not impossible for models to have a different sensitivity to solar vs. GHG, but there’s a lot of physical reasoning and numerical simulation supporting a near equivalence, so one has to argue carefully for why a given case should have different sensitivity.
Now, regarding the Stott et al paper, one has to be careful to read what they actually said in its entirety and not just look at the high end numbers for sensitivity. In their Table 1, they actually find that equal sensitivity to solar between the model and observations is within the confidence limits of their estimates.
The “factor of 3” figure is the high end of the confidence interval. The mid-range estimate does indicate 1.65, but the calculation isn’t incompatible with the null hypothesis that there’s nothing missing in the model sensitivity to solar. Now, when they break out the “Natural” forcing into separate volcanic and solar components, they do find support for enhanced amplification of solar forcing, BUT, as the authors themselves note, part or all of this result is likely to be spurious. The reason for thinking that is that when they apply their same regression analysis to the TOTAL model response against individual components (instead of data vs. model components), they find that a lot of the GHG response is mis-attributed to solar, so that one gets a completely spurious implied reduction of GHG sensitivity. This is known to be a misattribution because one knows why the model did what it did, though not necessarily the atmosphere. It is difficult to separate the observed solar response from the GHG response, because over part of the record the patterns look similar.
Note that the main issue treated by Stott et al is model sensitivity vs. observed sensitivity, which is a different thing from model sensitivity to solar vs GHG.
It is true that models with different climate sensitivities can equally well match the 20th century record, owing to compensating uncertainties in aerosol forcing. This is precisely why there is a range of IPCC forecasts and why we can’t at present say which of the IPCC models is most likely. This doesn’t render the prediction “questionable.” It renders it uncertain, which was always openly admitted. Note that nobody has yet produced a model which fits the recent data and which also has much lower sensitivity to CO2 than the bottom of the IPCC range. Ferdinand isn’t saying anything more than the IPCC reports say. From the standpoint of risk assessment, though, one must keep in mind that as far as present knowledge goes the top of the IPCC range is essentially as likely as the bottom of the range.
The whole reason for the interest in natural variability and climate sensitivity is that one would indeed like to say which of the forecasts is most correct. This is proving to be very difficult, but nothing has emerged to seriously question the range given by IPCC.
Regarding the Holocene variability, the issue isn’t a trade-off between CO2 and solar sensitivity. The CO2 fluctuations in that time were small. If the sensitivity of climate to solar changes is different from the sensitivity to CO2, that makes it even harder to infer CO2 sensitivity from the Holocene fluctuations, and darn near impossible if the supposed mechanism giving extra amplification to the solar forcing is completely unknown. My article gives the most straightforward interpretation of the data, and links an increased observed amplitude of response to increased sensitivity to the particular form of the forcing responsible (which sensitivity may or may not be transferable to CO2). The overall point is that inferring sensitivity from observations is HARD. The subsequent discussion here embellishes that point. However, I repeat that there’s nothing in this discussion that points to an appreciable downward revision of the current estimates of sensitivity of climate to CO2.
Now, there is certainly one general point on which I am wholeheartedly in agreement with Ferdinand. That is that we badly need better reconstructions of the past millennium — but be assured nobody is neglecting that problem. I would expand Ferdinand’s sentiment to say we badly need a better understanding of the response of climate in the deeper past, including glacial times and the Eocene. While there are many researchers interested in this question, it is regrettable that funding agencies don’t seem to see it as a big part of global change research. The funding for this area is pitiful compared to satellite data collection and modelling of the next century. The first draft of the US Strategic Plan for Climate research left out paleoclimate altogether. I see this as a major problem in funding priorities.
There’s no end to how long one could go on regarding these fascinating topics, but I’ll end by expressing my appreciation for all the interesting points people have raised. With this, I sign off for the holidays, wishing you all a Happy Holiday season and a fruitful and prosperous New Year.
Comment by R. T. Pierrehumbert (raypierre) — 22 Dec 2005 @ 4:27 PM
Re #48 (JH): One of the standard contrarian myths, I think still being actively promoted by at least the Idsos, is that volcanos emit a much larger quantity of CO2 than anthropogenic sources do. This claim was made up from whole cloth. Actually it has been discussed once or twice on this site, but I think in comments rather than as a post topic. Anyway, the upshot is that volcanos emit far less CO2 than anthropogenic sources do. Conveniently, there was a very large eruption quite recently (Pinatubo in ’91) that was studied very closely. Large amounts of CO2 would have been hard to miss. In fact, the main effect of Pinatubo was a temporary cooling (from the aerosols) that maxed out at about half a degree. Had there been a lot of CO2, this cooling effect would have been overwhelmed by an obvious warming pulse that would yet be with us (since CO2 remains in the atmosphere long-term rather than falling out quickly like the aerosols).
Re #53 (RP): Thanks, Raypierre, for taking the time to make this one of the most informative RC posts ever. Happy holidays!
Re #50 (UN): I had an earlier comment, apparently eaten by cyberspace, covering somewhat the same ground as your final paragraph, but I think there’s a further implication:
I think Esper was a little bit the victim of his own bad writing or of bad editing when he wrote the closing paragraph Raypierre quoted (reproduced here so folks don’t have to scroll back up):
“So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberg et al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought.”
Milloy spun this as something of an attack on Kyoto, but I don’t think it was intended as any such thing. Esper pointedly did not propose any reduction in the absolute amount of anthropogenic forcings (and if anything implies some degree of increase), but rather suggested that if natural forcings are larger, then the *relative* value of the anthropogenic forcings declines and that of the natural forcings increases. If this is the case, it is a truism that Kyoto and similar efforts to control anthropogenic forcings would be “less effective than thought” since the natural forcings are by definition uncontrollable by climate treaties.
At the same time (and this is the thought Esper really needed to add), climate treaties such as Kyoto (or more to the point its successors) become that much more essential since without them we have the potential of enhanced warming from a combination of natural and anthropogenic forcings considerably in excess of what would be possible from anthropogenic forcings alone. It would be more than a little ironic if this turns out to be the conclusion to which all the attacks on the flatter versions of the “hockey stick” lead.
Volcanic forcing in the last 50 years, as well as in the previous 600 years (Fig. 6), results in an average cooling of less than 0.1 K, with quieter and more active periods. The influence of GHG variations in pre-industrial times is very low. That means that the residual historical climate change (0.1 K for MBH98, 0.9 K for borehole reconstructions) is almost entirely from solar changes, though (multi)decadal internal natural variations/oscillations may play a role at all times.
If you have a look at the shape of the different solar reconstructions on the IPCC pages, you can see that solar fluctuations could explain nearly all of the variations of the last centuries, including the 1900-1940 warming, the 1945-1975 cooling and the 1975-2000 warming, if the sensitivity were large enough, and taking into account the inertia of the climate (which is larger for larger changes).
As the sum of all influences (solar, volcanic, GHGs, aerosols) results in the temperature record, a larger pre-industrial natural variation (in this case from a higher sensitivity to solar) will come at the cost of the sensitivity to man-made emissions in the current period. Thus Esper is right that the result of Kyoto would then be less than expected.
Besides the main forcings, in recent decades there is something natural happening which is hardly explainable by increasing GHGs. See the last two paragraphs of Wielicki and Chen (and the rest of the pages before and after).
Thanks Raypierre for the response and I wish you and other readers too Happy Holidays and all the best for the New Year.
About the attribution of sensitivities to CO2 and solar in all GCMs: as there is an overlap of CO2 and solar since the start of the industrial revolution, it is difficult to know the right attribution for each, unless there is near-equivalence of sensitivities. As most GCMs have near-identical sensitivities for the tandem GHG/aerosols, it is normal that the simulations have similar results (a necessary but not sufficient property for validation). But if there is a difference in sensitivities, the relative attribution can go either way.
About the response to CO2 in the pre-industrial Holocene: the variations are very low, which has the advantage that it is possible to deduce the sensitivity to mainly solar forcing, without the overlap of solar and CO2 changes.
In summary, the discussion is about the possibility that there may be differences in response to solar and other forcings, because of the differences in spectrum, the influence of these differences on several layers of the atmosphere (and the surface for land/oceans) and cloud responses.
The latter is clearly a very weak point in current GCMs and the main cause of the large range of future climate projections.
A second problem in current GCMs is the influence of (human-made) aerosols. Here there is an offset between aerosols and GHGs: if aerosols have a low sensitivity (or a low forcing), then GHGs have a low(er) sensitivity, and vice versa.
Further discussion somewhere in the New Year, I hope!
Re #58 (ER): Thanks for the fact check. Since my memory was clearly in error, I wasted way too much time tracking down what I suspect may be the source for this urban legend. See http://www.agelesslove.com/boards/archive/index.php/t-16764.html for the gory details. The claim made by the book isn’t quite that excess CO2 comes directly from volcanos, but rather that undersea volcanos warm the oceans which in turn emit the CO2. I’d say you can’t make this stuff up, but obviously you can. :)
Re #56: You can only explain the 1975-2000 warming by solar forcing if the sensitivity changed in the middle of the 20th century. The solar reconstructions you mention show an increase in TSI from 1900 to 1950 of about 1.5-3 W/m2 (depending on the reconstruction) and an increase of global temperature of about 0.4ºC. The following decrease (until ~1970) and increase (since 1970) of TSI are of about the same amount (0.5-1.5 W/m2) and thus compensate each other, while the temperature increase after 1970 is much greater (at least 5 times) than the cooling before (about 0.5ºC). I can’t see how you can explain the warming after 1970 by TSI changes without changing the climate sensitivity to solar forcing at that time.
Concerning Chen and Wielicki:
1. It is quite difficult to detect decadal variations in a 15 year time series.
2. It is physically plausible that global warming changes large scale circulation patterns (in the atmosphere as well as in the ocean).
3. C&W “feel” and “believe” that this variation is independent of global warming; any supporting arguments are missing.
First, the solar reconstructions are for the TOA (top of atmosphere) forcing. Any increase in sensitivity due to a change in cloud amount caused by the long-term (longer than the solar cycle) change in solar intensity is not included.
Second, the measured increase in 1900-1945 temperature is only a transient one; if solar had levelled off in 1945, the temperature would have increased further until a new equilibrium was reached (all other forcings being frozen too). But as there was a decrease in solar strength in 1945-1975, the net effect is a small cooling instead. After 1975, solar strength increased again to the current level, which is still higher than in the 1930’s…
You need to read the original works of Wielicki et al. and Chen et al. to see the basic point of what happened in the past decades in the tropics. CO2 increased in the period 1985-2000 from 345 ppmv to 370 ppmv. This should give a direct radiative gain of some 0.35 W/m2 in the period and area of interest. In contrast, due to a change in cloud cover (as a result of increased Hadley cell circulation), there is some 2 W/m2 more solar insolation at the surface and some 5 W/m2 more outgoing IR to space in the same area and period. The net effect is an extra loss of some 3 W/m2 to space…
That is an order of magnitude larger loss than the gain from the extra CO2, and with an opposite sign, which points to a natural cause.
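As a rough check on the ~0.35 W/m2 figure above, the standard simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C0) in W/m2 (Myhre et al. 1998), gives a similar number; note this is a global-mean expression, not one specific to the tropical area under discussion:

```python
import math

# Simplified CO2 radiative forcing expression (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  in W/m^2
C0, C = 345.0, 370.0   # ppmv in 1985 and 2000, as quoted above
dF = 5.35 * math.log(C / C0)
print(f"dF = {dF:.2f} W/m^2")   # dF = 0.37 W/m^2, close to "some 0.35"
```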
Neat! I hadn’t considered the warm water effect.
Now the claim can easily be checked because undersea volcanic eruptions are coincident with earthquakes. As seismicity has been monitored in detail ever since underground nuclear testing started in the 50’s, we have a reliable database available.
Just need some time to query it, or has somebody done this already?
…. Changes in solar forcing can potentially explain only about 2% of the observed increase in ocean heat content (Crowley et al. 2004). Geothermal heat escaping to the oceans from the great rifts may explain perhaps 15% of the observed change (W. Munk and J. Orcutt 2003, pers. com.) and thus sea floor heating is probably not a major factor. In contrast, estimates of changes in ocean heat content caused by anthropogenic warming provide a much closer fit to the observations ….
And of course volcanic warming is not a zero effect, but it’s interesting that someone figured out how to quantify it. Hans, I believe Louis Hissink has been going on about this for years, claiming it to be the dominant factor in ocean warming. You may take that for what it’s worth.
There are some scientists who disagree with the 2% attributed to solar changes. According to Scafetta and West (2005), the small increase (0.45 W/m2 according to ACRIM) in TSI (total solar irradiance at the top of the atmosphere) measured by satellites accounts for at least 10-30% of the observed increase in surface temperatures in the period 1980-2002. This is based only on recently measured TSI values and the observed effect of the ~11/~22 year solar cycles since 1850. It doesn’t include longer-term effects on surface temperature or ocean heat content due to the increase in solar radiation over the past century.
Further, the observed ocean heat content (since 1955; the first half of the century lacks sufficient sub-surface data) resembles the GHG forcing only in its linear increase. The cyclic behaviour of ocean heat content points to natural cycles of forcing (probably caused by changes in cloud cover) an order of magnitude larger than the changes attributable to GHGs. Thus we need to know what drives the natural cycles before we can make any real attribution to the different forcings…
[Response: As you are no doubt aware, there are two separate efforts to string the solar observing satellites together, and the other method (PMOD) doesn’t show any trend at all…. -gavin]
I had a look at the Scafetta and West paper (SW). The arguments and conclusions of this paper are, to put it mildly, very questionable. The main flaws are the following:
SW apply a band-pass filter to global temperature (GT) and total solar irradiance (TSI) for the last 150 years for two bands (7.3-14.7 years, centered at 11 years, and 14.7-29.3 years, centered at 22 years). They then compare the solar and temperature signals in these frequency bands.
To derive the climate sensitivity to the 11 and 22 year cycles, respectively, they compare the amplitudes of both signals in the above-mentioned frequency bands. In doing so they assume that 100% of the temperature signal in each band is due to the corresponding solar cycle.
This latter assumption is certainly not true. First, from figure 4 of their paper it is obvious that the (filtered) temperature signal and the solar signal have different frequencies: the temperature signal shows 16 cycles over the 150-year period compared to 14 for the solar signal, or a period of about 9 years for temperature over the last 5 cycles compared to about 11 years for the solar signal. The same holds for the 22-year band (7 temperature cycles compared to 6 solar cycles), not to mention that the amplitudes of the cycles they claim to match show no apparent correlation at all. The mere fact that both factors show a signal in the same wide frequency band (7-14 years!) is not very convincing, to say it politely. Second, the filtered signal certainly contains components of other forcing factors (like volcanoes or El Nino), since these signals surely have components in all frequency bands.
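For readers unfamiliar with the method being criticised: a band-pass filter keeps only oscillations in a chosen period range, removing both the slow trend and fast noise. Below is a minimal, hypothetical sketch in pure Python (SW used a more sophisticated filter; the window lengths and the synthetic 11-year cycle here are illustrative assumptions, not their setup):

```python
import math

def moving_average(x, window):
    """Centered moving average; windows shrink near the edges."""
    half = window // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def band_pass(x, short_window, long_window):
    """Crude band-pass: a short smooth removes high frequencies,
    then subtracting a long smooth removes the slow trend."""
    smoothed = moving_average(x, short_window)
    trend = moving_average(smoothed, long_window)
    return [s - t for s, t in zip(smoothed, trend)]

# Synthetic 150-year record: linear warming trend plus an 11-year cycle.
series = [0.01 * t + 0.1 * math.sin(2 * math.pi * t / 11) for t in range(150)]
filtered = band_pass(series, 3, 29)
# In the interior of the record the trend is gone and the ~11-year
# oscillation remains (with some attenuation).
```

The point of the critique above is that after such filtering, the temperature and TSI signals must still match in frequency and amplitude before one can be attributed to the other; merely sharing a wide band is not enough.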
SW assume that “our methodology filtered off volcano-aerosol and ENSO-SST signals from the temperature data because these estimates are partially consistent with already published independent empirical findings.” This is very peculiar logic. They compare their result for the lower frequency band (11 years) to, e.g., the climate sensitivity found by Douglass and Clader (2002) (DC) through a regression analysis including volcanoes and El Nino. However, their climate sensitivity for the upper band (22 years) is much higher than DC’s result (0.17 K/Wm-2 compared to 0.11), and it is this value they use to calculate the solar influence.
They then make another logical mistake: they compare the TSI increase between two 11-year cycles (the mean of 1980-1991 versus the mean of 1991-2002, i.e. periods separated by 11 years) to the global temperature trend over 1980-2002, which compares periods separated by 23 years!
This compensates somewhat for their use of the higher sensitivity…
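The mismatch can be seen with simple arithmetic: for a linearly trending quantity, the difference between the means of two consecutive 11-year windows reflects only the 11-year separation of the window centres, roughly half the span of the 1980-2002 trend. A toy illustration (the 0.02 units/year trend is an arbitrary assumption):

```python
# Linear series over 1980-2002 with an assumed trend of 0.02 units/year.
trend = 0.02
series = [trend * (year - 1980) for year in range(1980, 2003)]

mean_a = sum(series[0:12]) / 12    # mean of 1980-1991
mean_b = sum(series[11:23]) / 12   # mean of 1991-2002
separation = (mean_b - mean_a) / trend
print(separation)  # 11.0: the window centres are 11 years apart, not 23
```

So attributing the full 1980-2002 temperature change to a TSI difference measured over an effective 11-year baseline roughly doubles the apparent solar contribution, which is the compensation the comment refers to.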
I wonder how this paper could pass peer review?
Or: another paper claiming solar influence while passing lightly over the fact that the frequencies of the signals they claim to be linked unfortunately do not match at all (after Shaviv and Veizer 2003 in GSA Today; see Rahmstorf et al. 2004 in EOS)…
[…] have contributed 10-30% of the warming from 1980-2002 (Urs Neu comments on this particular study here), or 25-35% from 1980-2000. Dr. Rasmus Benestad notes that they “used some crude estimates of […]