Tiny point, I’m sure you’ve hit it before.
Peer review is important, so assumptions and data can be questioned. Replication is essential, to see if findings generalize. That’s the difference between micro-critiquing (very popular in some circles) and science, and what really makes this a collective enterprise.
[Response:Hopefully, this kind of exercise motivates authors to take greater care when they design their analytical approach. It’s not a tiny point if the process of peer review starts to slip slide… -rasmus]
If solar is responsible for half of global warming, then all the more reason to reduce our greenhouse gas emissions. We can’t control the sun, but we can alter the greenhouse effect through the amount of fossil fuels we burn.
Re #2: Alastair, if solar has a higher influence than currently implemented in GCMs, that implies that the effect of 2xCO2 is lower than expected. That means we have more time to shift to non-fossil alternatives… Thus there is no need to push the panic button, but a huge need for more subsidies for research into, and implementation of, alternatives.
[Response: No. it doesn’t. CO2 sensitivity is an independent calculation (as I have explained many times). -gavin]
Much depends on what the historical variability was. If one takes the MBH98/99 reconstruction as the baseline, the variation in the pre-industrial period was ~0.2 K, of which less than 0.1 K (on average) came from volcanic eruptions, the rest mostly from solar (I doubt that land-use changes had much influence on global temperatures). If one takes the Moberg reconstruction as the baseline, the variation was ~0.8 K, again of which less than 0.1 K was volcanic, thus 0.7 K from solar changes. That is a ratio of roughly 1:7 for the estimated solar influence on temperature between the two reconstructions.
As we only have one instrumental temperature trend, the difference between the two estimates of solar sensitivity means that a larger solar influence must be compensated by a smaller influence of the GHG/aerosol tandem to fit the temperature trend of the past century…
The pre-industrial historical correlation between temperature and CO2 levels was remarkably stable at around 10 ppmv/K, and shows up in the Law Dome ice core as a 10 ppmv change over the MWP-LIA transition, which points to a change of ~1 K in that period…
As I have said in the other thread, one can easily fit the past-century trend with different sets of sensitivities in a simple EBM (see here), especially by reducing (halving) the sensitivities for GHGs and aerosols in tandem, as the influence of aerosols probably is overestimated (and that of solar underestimated) in current GCMs…
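The point being made here, that quite different sensitivity splits can fit the same temperature trend when forcings co-vary, can be sketched in a few lines. This is a toy zero-dimensional model with made-up, illustrative forcing series, not the EBM the comment refers to:

```python
import numpy as np

# Toy zero-dimensional EBM: temperature anomaly is a weighted sum of forcing
# series convolved with an exponential relaxation (e-folding time tau years).
# All series and numbers below are illustrative, not real forcing data.
def ebm_response(forcings, weights, tau=10.0, dt=1.0):
    f = sum(w * np.asarray(s, float) for w, s in zip(weights, forcings))
    t = np.arange(f.size) * dt
    kernel = np.exp(-t / tau) * (dt / tau)      # approximately unit-gain impulse response
    return np.convolve(f, kernel)[: f.size]

x = np.arange(101, dtype=float)                  # years 1900-2000
solar = 0.002 * x                                # W/m2, illustrative ramp
ghg = 0.008 * x                                  # illustrative ramp
aerosol = -0.004 * x                             # negative, co-varies with GHGs

t_ref = ebm_response([solar, ghg, aerosol], [1.0, 1.0, 1.0])
t_alt = ebm_response([solar, ghg, aerosol], [2.0, 0.5, 0.5])  # double solar, halve GHG/aerosol tandem
print(np.allclose(t_ref, t_alt))                 # True: same fit, different attribution
```

With these (deliberately constructed) ramps, doubling the solar weight while halving the GHG/aerosol tandem reproduces exactly the same temperature curve, which is the attribution ambiguity the comment describes.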
Re 3 (response)
“No. it doesn’t. CO2 sensitivity is an independent calculation (as I have explained many times)” Gavin, do you have a link on this subject? I still do not correctly understand how the calculation of CO2 sensitivity in past or present climates can be independent of the estimation of the other forcings related to temperature trends (and of natural/chaotic variability). Thanks in advance.
[Response: We’ve discussed climate sensitivity many times (see here) – the specific issue that I think confuses some is that we can’t use the 20th century changes to usefully constrain this, because of the uncertainties you allude to. Thus empirically it must be derived from other sources. Within a model, of course, it is absolutely independent. -gavin]
The point to realise is that if the climate sensitivity is 3C for 2xCO2, then the fact that the sun has added a further 0.5C will not make much difference. I think Gavin is arguing that because 1.3xCO2 produces a much smaller effect, it will currently be masked by aerosols, the ABC, land use, solar, and changes in methane levels etc., so we can’t use the current temperature change to estimate the climate sensitivity of CO2.
I would add that we have already reached or even passed the tipping point, because the melting of the Greenland glaciers is accelerating and they will not stop melting unless the atmosphere cools. So long as we keep increasing atmospheric CO2, then the glaciers will melt faster. One could hope that the sun will get weaker, but that seems highly unlikely if it is indeed causing warming.
If the sun stays the same and CO2 stays the same (i.e. we stabilise concentrations at today’s level immediately), then the temperature will still rise because of the climate commitment from the oceans continuing to warm. That means that the Greenland ice is doomed, and the subsequent sea level rise means that Bangladesh, Florida and Flanders are doomed too!
I agree with you, Alastair: if [deltaT]2xCO2 is 3°C, then 0.5°C from solar irradiance variation doesn’t matter. But the problem is precisely the uncertainty of this climate sensitivity, whether empirically or theoretically defined. And my question was: in what way can we consider the direct and indirect effects of solar variations on climate (and temperature) to be totally independent of this reduction of uncertainty in climate sensitivity?
Thank you for the link, Gavin, raypierre article and comments are particularly interesting.
So, as far as I (slowly) understand, the 20th century (even the last 1000 years) is not useful for estimating climate sensitivity empirically, because the variations involved are too slight. We must compare more contrasting periods, like the LGM and the PI.
But there is still a point on this: when we compare the LGM and the PI, how are we sure that we include all pertinent factors (other than GHGs, ice albedo, dust and vegetation)? For example, let’s imagine that the latitudinal and seasonal variations (Milankovic) of insolation between the LGM and the PI imply a +/-10% change in cloudiness, for some ocean-atmosphere circulation reason. How is that implemented in models? Or, even more broadly, we know there is approx. 0.1% variation of solar irradiance between two cycle minima (Willson 2003); how do we exclude with reasonable confidence the hypothesis that there was a 1% variation between 21,000 y BP and 1750 AD?
Rasmus, do I understand you correctly: Scafetta and West in their paper do the simple analysis by comparing 17th and 18th Century, and by comparing 17th and 19th Century – but they do not discuss the comparison of 18th and 19th Century which would have shown their approach is ill-conditioned? Since this would have taken them a few minutes, given they have all the data for those three centuries, I wonder whether they really didn’t try this before publishing their paper?
[Response:The expression becomes ill-posed when the denominator is zero, which is not exactly what happens for the 18th and 19th centuries, but these centuries nevertheless illustrate how sensitive the results are. The total solar irradiance (TSI) for the 19th century is, according to SW, 1365.64 W/m2, 1364.68 W/m2, or 1365.52 W/m2, depending on whether you use the Lean (1995), Lean (2000) or Lean (2005) values. For the 18th century, the corresponding figures are 1365.48, 1364.44, and 1365.43 W/m2. For the temperature anomaly, they use -0.41 K for the 19th century and -0.49 K for the 18th (a difference of 0.08 K). Thus for the difference between the 18th and 19th centuries, this gives scaling values of 0.50, 0.33, and 0.89 K m2/W, depending on which data set you use. I get temperature changes of 1.08, 0.54, and 0.67 K respectively (I have put a simple R-script here). -rasmus]
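The arithmetic in the response above is easy to reproduce (this is not the R-script referred to; the 0.08 K temperature difference is the value implied by the quoted numbers):

```python
# Scaling factor Z = dT/dTSI between the 18th- and 19th-century means,
# using the century-mean TSI values quoted above for the three Lean
# reconstructions (1995, 2000, 2005).
tsi_19th = [1365.64, 1364.68, 1365.52]   # W/m2
tsi_18th = [1365.48, 1364.44, 1365.43]   # W/m2
dT = 0.08                                # K, 18th -> 19th century difference

Z = [round(dT / (a - b), 2) for a, b in zip(tsi_19th, tsi_18th)]
print(Z)   # [0.5, 0.33, 0.89] K m2/W -- the factor varies almost threefold with the data set
```

The same temperature difference divided by three nearly equal TSI differences (0.16, 0.24, and 0.09 W/m2) spreads the inferred scaling factor by almost a factor of three, which is the ill-conditioning being discussed.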
An excerpt related to this paper from an earlier post:
In this paper Scafetta and West reveal an obviously sceptical agenda. They discuss an even higher solar influence, listing well-known and antiquated sceptics’ arguments such as the overestimation of 20th-century warming due to heat-island effects, or the lower trends in satellite observations. And now they have found a new one: they suggest that the difference in the temperature increase over land and the oceans during the last decades might be due to contamination of the land temperature record – they call it “anomalous behaviour” – ignoring that it corresponds fully to what is physically expected. Maybe it would help them to have a look at the IPCC report.
The many problems in their analysis have been discussed above and before.
The world’s glaciers and ice caps are now in terminal decline because of global warming, scientists have discovered. A survey has revealed that the rate of melting across the world has sharply accelerated in recent years, placing even previously stable glaciers in jeopardy. The loss of glaciers in South America and Asia will threaten the water supplies of millions of people within a few decades, the experts warn.
Georg Kaser, a glaciologist at the University of Innsbruck, Austria, who led the research, said: “The glaciers are going to melt and melt until they are all gone. There are not any glaciers getting bigger any more.”
Dr Kaser said that “99.99% of all glaciers” were now shrinking. Increased winter snowfall meant that a few, most notably in New Zealand and Norway, got bigger during the 1990s, he said, but a succession of very warm summers since then had reversed the trend. His team combined different sets of measurements which used stakes and holes drilled into the ice to record the change in mass of more than 300 glaciers since the 1940s. They extrapolated these results to cover thousands of smaller and remote glaciers not directly surveyed.
The results revealed that the world’s glaciers and ice caps – defined as all land-based ice except the mighty Greenland and West Antarctic ice sheets – began to shrink far more quickly in 2001. On average, the world’s glaciers and ice caps lost enough water between 1961 and 1990 to raise global sea levels by 0.35-0.4 mm each year. For 2001-2004, the figure rose to 0.8-1mm each year.
Writing in the journal Geophysical Research Letters, the scientists say: “Late 20th century glacier wastage is essentially a response to post-1970 global warming.” Dr Kaser said: “There is very, very strong evidence that this is down to human-caused changes in the atmosphere.”
1) They appear to have looked at things globally – perhaps looking at regions’ responses would have been better
2) By assuming a direct transfer function between irradiance and temperature (instead of second-order effects of irradiance and temperature, or other non-irradiance measures related to the impinging full-band EM energy of solar or other extraterrestrial origin), they may have made a bit too much of a leap.
3) They appear not to postulate mechanisms specifically enough – for example, a specific role for a specific energy band’s impingement accentuating or impairing cumuloform cloud formation in the tropics.
It seems unconscionable to me for SW to exclude all contributing factors and claim their results have any merit. In most professional circles I’m familiar with (particularly in aerospace engineering) omitting such obvious factors could bring serious professional consequences. In this field, with lives even more at stake than with those flying an engineered aircraft, it is irresponsible to present such an erroneous result as they did.
Their work builds upon the expanding volume of observed and measured data (including the vital work of Dr. Lonnie Thompson at Ohio State Univ) confirming what reasonable people agree – a warmer world melts glaciers; and perhaps more rapidly than earlier imagined.
RealClimate posted a very informative thread on tropical glacier retreat, which is worth revisiting.
I believe India, Pakistan, Kashmir, Nepal, China will feel the full effect of lost glacier melt runoff that feeds major rivers in their part of the world and provide irrigation and drinking water for tens of millions of people.
Sometimes I wonder if we are all in a collective dream about glacial melt back because it does not appear to be of any real interest to the CIA, National Security Council or the Council on Foreign Relations.
Why am I concerned while they appear to be ignorant of the consequences of losing Himalayan glaciers? Maybe it is simply that I accept AGW, and for them to poke into the glacier melt-back studies is an admission that there is a looming problem that will dwarf WMD and democratization of the Middle East. Mass migration of hungry and desperate farmers and villagers along the northern coast of the Indian Ocean is a frightening prospect. Guess the large-brained national security types have their hands full trying to secure our future oil supply. Silly me.
Comment by John L. McCormick — 11 Oct 2006 @ 2:47 PM
I am surprised that the left hand side of your modified graph is annotated with ‘Trend in reconstr but not in T’
-In the period you’ve shown, if one looks at the low frequency component of the Moberg reconstruction, there is a clear max at around 1630 and a minimum at around 1675.
Smoothing the full Moberg reconstruction yields maxima and minima that are slightly further apart ~1620 and ~1690
The trend over this period is only about a degree, which looks to be about a third the size of S&W’s, but they are trends nonetheless.
Have you cut out the middle of Fig. 2 so as to stop readers developing unsavoury thoughts of a solar nature? ;-)
I noticed an interesting sounding review entitled “The Global warming Debate: A Review of the State of the Science” in PURE AND APPLIED GEOPHYSICS 162, 1557-1586 (2005). It turned out to be “interesting” but for all the wrong reasons.
I’m a scientist but not a climate scientist (molecular biology actually!). However, it is blatantly obvious that this so-called “review” (by ML Khandekar, TS Murty and P Chittibabu) is a disgraceful and (I would have thought) embarrassing piece of propaganda full of downright falsehoods about the “state of science”.
A number of questions come to mind about this. Clearly the editors of this “journal” (which I assumed was “respectable” but I now wonder) are aware that they have allowed a piece of misrepresentational garbage to appear in their journal. So are they part of, and party to, this process of misrepresentation? In the normal course of events the “scientific process” would pertain. People publishing blatant rubbish would lose their scientific credibility. Authors might think twice before submitting their work to this journal. The paper would be ignored and sink into oblivion. But in “normal” fields of scientific endeavour (like molecular biology for instance) this sort of rubbish just doesn’t happen.
I thought about doing several things, none of which I did in the end: writing to the editors to express my distaste, along with a synopsis of the errors and misrepresentations that are obvious even to a molecular biologist; or proposing that our university cancel its subscription to the journal…
Is there a useful way of addressing this problem? Or do we just accept that the normal scientific publication process has been “infiltrated” at several levels (presumably up to the editorial level in this case), and just allow time to deal with the (still very small amount of) misrepresentation and propaganda-dressed-as-science that, surprisingly (for me), has been allowed to ooze into the mainstream scientific literature? Or maybe Pure and Applied Geophysics isn’t mainstream scientific literature…
>Mass migration of hungry and desperate farmers and villagers along the northern coast of the Indian Ocean is a frightening prospect.
If changing to energy intensive agriculture allows more food to be produced by just 2% of Indian farmers on half of the previous land, that should be no more frightening than when the other 98% of US and Canadian farmers migrated to cities in the late 19th and early 20th centuries.
I have purchased (again, AGU must get rich from all those single contributions…) the article. And have some problems with your comment.
– They used the difference between the 17th and the 18th/19th centuries, as those are the largest differences. Of course, if one uses a smaller difference, such as between the 18th and 19th centuries (about 3:1 to 4:1 smaller), the error margins increase (the error margin is up to 50% for the 18th/19th-century difference, 5-20% for the others). But that doesn’t show that the method itself is wrong, or gives unacceptable answers. Thus your Fig. 1 in the end only shows that the error in the scaling factor blows up with smaller differences, which they of course didn’t use to calculate their factors.
– The transfer functions range from 0.20 to 0.57. This is for different solar reconstructions (all by Lean et al.). As the different solar reconstructions differ only in amplitude, not in shape, with the same temperature reconstruction the end result is (nearly) the same. That doesn’t show any inconsistency in the method used, only that smaller differences in the solar reconstruction need larger factors to explain the reconstructed temperature.
– As already said, land-use changes probably had (and have) a limited effect on global temperatures, much like volcanoes (less than 0.1 K). Thus the remaining pre-industrial variability is mostly solar (and internal) variability.
– According to several Antarctic ice cores, there was a ~10 ppmv drop in CO2 between the MWP and the LIA, which points to a ~1 K drop in global temperature; that is more in line with the more variable reconstructions.
– One shouldn’t highlight short discrepancies in the trends, what is of interest is the overall trend.
– The 1900-2000 NH temperature trend is 0.8 K (GISS NH, 5-year smoothed). An extra 0.1 K is added after the year 2000…
The Moberg reconstruction ends in 1950… thus the comparison for the 1900-2000 period probably uses part of the Moberg reconstruction and part of the instrumental record (for the period 1950-2000). There is a known problem with tree-ring based reconstructions for the period after ~1950, the “divergence problem”, which still is not resolved. Moberg’s reconstruction also uses other proxies, but these are not extended after 1950. This makes any comparison after 1950 (or 1980 for solely tree-ring based proxies) rather problematic.
About the attribution of climate responses to different forcings in current GCMs, as used in Fig. 1 of this page:
– volcanic forcing is overestimated. The Pinatubo eruption (of whose strength there were only 9 in the past 600 years, see Fig. 6 from Briffa) had an influence of -0.6 K in its peak year, much less in the following years. The decadal average thus is less than 0.1 K. At the beginning of the century there were 2 consecutive eruptions of the same strength (10 years apart); over the rest of the century, much less. Thus the smoothed variation over the century should be smaller than what is shown, and in any case without positive values (I have never heard of an inverse volcanic effect).
– (sulfate) aerosols are largely overestimated. Because of their very short lifetime (a few days) vs. volcanic aerosols (a few years), for identical physico-chemical reactions, their (primary) effect is smaller than that of volcanoes, despite the higher emission rates (secondary and tertiary effects are even far more uncertain). For a comprehensive overview of all my doubts on aerosol influence, see my comment on RC here.
– But if the (negative) influence of aerosols (and volcanoes) is overestimated, the modeled trend would be far too high. That means that the (positive) response to GHGs must be lower than currently implemented in the models (the direct effect – without feedbacks – of anthropogenic GHG forcing is currently ~0.3 K). Thus the idea that 50% of the past century’s warming may be attributable to solar forcing comes into sight (even in current models with more realistic estimates)…
Last but not least: do you know of trends from any climate model that covers the full 400 years, as S&W did, compared to the Moberg reconstruction, and does it perform better? I have found Fig. 1 in Cubasch et al. and a strange graph as Fig. 1 in Widmann and Tett (with natural-influences-only runs giving higher 1900-2000 temperatures than runs with natural + anthro!). Compare these to the performance of the S&W trends…
I am the author of the paper under discussion; I will try to reply to Dr. Rasmus.
(When I finished writing this comment I saw that #21 had already replied to Dr. Rasmus; very well, but I add my comments anyway. BTW, papers may be obtained for free by writing an email to the authors, for example.)
1) The greatest flaw.
Dr. Rasmus uses the differences between the averages for the 18th and 19th centuries, while I started from the 17th century, and he found unrealistic results. So why didn’t I do the same calculations as Rasmus?
Well, Rasmus adds these comments: “In my physics undergraduate course, we learned that one should stay away from analyses based on the difference between two large but almost equal numbers, especially when their accuracy is not exceptional;” and “hence neglect factors (natural forcings) such as landscape changes (that the North America and Europe underwent large-scale de-forestation);” and “It is, however, possible to select two intervals over which the average total solar irradiance is the same but not so for the temperature. When the difference in the denominator of their equation is small (the changes in the total solar irradiance are small), then the model blows up.”
Well, I think Rasmus has also given the answer! It is unsafe to do the calculations comparing the 18th and 19th centuries, because the errors will be much larger due to the fact that the averages are very close, and because the 19th century would already be partially affected by some anthropogenic factors due to the beginning of industrialization and some deforestation (which is an anthropogenic component too!). So, I used an algorithm that stresses the values during the earlier 17th and 18th centuries, when anthropogenic contamination is practically zero!
2) A charm: cutting a picture in two.
By cutting my picture in two, Dr. Rasmus gives the impression that his calculations are fine. The truth is that with my calculation a reader can see an evident correspondence (that is, a good fit) between the temperature and the solar signal during the pre-industrial (before 1800-1900) era, which suggests my results are reasonable. Using Rasmus’ numbers, the correspondence would be visibly lost, because he gets double the value for the sensitivity, which would imply double the amplitude of the reconstructed solar-induced temperature signal, which would no longer fit the data for any of the 4 centuries.
3) Ambiguities on numbers.
Dr Rasmus writes: “the results gave a wide range of different values for the transfer functions: from 0.20 to 0.57!”
Well, Dr. Rasmus did not realize that I am using three different solar records. He should read my paper more carefully. Look at Figure 1!
Dr. Rasmus writes: “But the figure in the SW paper would suggest at the most 40%!”
Well, a careful look shows the solar signal starting at a value of -0.45 K (1900) and ending at 0.0 K (2000), so the difference is 0.45 K. The temperature goes from -0.40 K (1900) to 0.50 K (2000) (look at the average value for 1950-2000), so the difference is 0.9 K. Well, the ratio is 50%, correct?
4) Galileo, and the inquisition !
Dr. Rasmus writes: “Looking at the SW curves in more detail (their Fig. 2), one of the most pronounced changes in their solar-based temperature predictions is a cooling at the beginning of the record (before 1650), but a corresponding drop is not seen in the temperature curve before 1650.”
Well, let us look at the history. Galileo only started to observe sunspots in 1611. The sunspot measurements from 1611 to 1650 are probably quite poor, so the TSI reconstructions for that time are poor, because the data are poor. And then there were the religious wars in Europe and the Inquisition… I guess that perhaps the 17th-century fellows involved in these matters were too busy to think of fixing the measurements so as to avoid the comments by Dr. Rasmus.
5) A good point.
Dr. Rasmus writes “The proper way to address this question, I think, would be to identify all the physical relationships.”
Well, I agree, but today we do not know all the physical relationships; that is why I wrote “To circumvent the lack of knowledge in climate physics, we adopt an alternative approach…”
I hope this is helpful.
[Response:Thanks for your response, Dr. Nicola Scafetta. However, I’m still not convinced, as the temperature is not only affected by greenhouse gases and solar activity (take the ice ages, for example…). I think your choice of method is not robust and is likely to provide wrong answers, and one can illustrate that by applying the same method to the difference between the 19th and 18th centuries (for which you say factors other than solar may play a role – just my point!). You ask me a question: “A careful look shows the solar signal starting at -0.45 K (1900) and ending at 0.0 K (2000), so the difference is 0.45 K. The temperature goes from -0.40 K (1900) to 0.50 K (2000) (look at the average value for 1950-2000), so the difference is 0.9 K. Well, the value is 50%, correct?” I may be misreading the figure, but the solar signal seems to be lower than 0.0 K in the year 2000. And the temperature seems to be greater than 0.50 K, but my reading may have been ‘disturbed’ by the fact that not many years later, the temperature exceeds 0.6 K. -rasmus]
Comment by Nicola Scafetta, PhD — 11 Oct 2006 @ 9:00 PM
I’ve been lurking here for years, gathering such understanding as I’m capable of, which is only partial. I just wanted to say how grateful I am and admiring of your efforts, integrity, and dedication. Your science is beautiful and crucial. We need you guys.
Perhaps Steve Reynolds (#18) missed the stories in the NY Times regarding groundwater depletion and problems with surface water supplies in India. Or has he never heard of the fate of the agricultural production dependent on the Ogallala Aquifer?
Sorry, but your explanations did not help very much yet. There remain several questions, e.g.
– You state that sunspot observations before 1650 are poor. However, that is half of the data for the 17th-century value. This would mean your value for the 17th century is poor, and in fact you shouldn’t use it.
– You state that the measurements in the 19th century are already anthropogenically influenced. This means that your value in the 19th century has an anthropogenic influence and is not only solar induced.
– According to your Zs values, the error bars and the TSI values, the temperature should have increased from the 18th to the 19th century by between 0.03 and at most 0.06 K. In fact, the increase was 0.08 K, which is between 30% and 160% more. You seem to explain this by a possible anthropogenic influence. However, if the anthropogenic influence in the 19th century is possibly more than 50%, how can you use the value of the 19th century in a calculation which assumes nearly 100% solar influence?
Thus, if the data are poor before 1650 and possibly anthropogenically influenced after 1850, it might be more appropriate to compare 1650-1750 and 1750-1850.
A rough estimate from the graphs shows that you would get Zs values between about 0.10 (or even below) and about 0.25 for the three TSI reconstructions, which is about half of what you get.
And a last question:
You compare different TSI reconstructions. Why don’t you also compare different temperature reconstructions?
[Response:I think that the sunspot data for the 18th and 19th centuries are questionable as well, because there is something strange happening with the solar cycle length. -rasmus]
You write: “that by applying the same method to the difference between the 19th and 18th centuries (for which you say factors other than solar may play a role – just my point!).”
Yes, Rasmus, by doing the calculations as you want to do them, the result would not be robust. That is why I chose to do the calculations in a different way.
You are criticizing your own calculations and your own methodology, not mine!!!
About the numbers: in the paper I wrote “approximately 50%”; the error is about 15-20% of that value. So, it is fine.
A short reply to #27
1) I use the reconstructions that are available. All reconstructions might have problems.
2) I did not use the 19th-century value at 100% weight, but an algorithm that stresses the 17th- and 18th-century values, and then I checked that the results look OK.
3) In the paper I used the Moberg data because they are the latest, and because in GRL we have a limited number of pages.
[Response: Rasmus’ point is I think more general. In any period there will likely be more than one thing going on. This is as true for the 17th Century as for the 19th Century. These ‘other things’ (principally volcanoes, but also land use etc.) are presumably uncorrelated with solar forcing over the long term, but in the absence of enough centennial cycles in the observational record, one can’t assume that you can average over them by taking enough examples. In the late 17th Century for instance, our work has suggested about a 50-50 split between volcanic and solar effects (compared to the late 18th Century) which enhances the global cooling. Other studies have come up with other splits – including some which find a dominant role for volcanic forcing. It’s not easy to distinguish between the two (given the uncertainties in the forcing data), but it might preclude one from simply assuming that it was all solar. -gavin]
[Response:I think we are not quite on the same frequency (to give you a clue – I’m not criticising myself, but only demonstrating that your method is not reliable), and I hope that Gavin’s explanation helps. On a more specific issue, even counting in error bars, the usual way to do this is to center the range around the best estimate, so you should have written 40% +- error. How did you estimate the error: is it one standard deviation, or the 2.5-97.5 quantiles? (Giving a range of 15-20% strikes me as a bit hand-wavy; and 20% of 40 is 8, right? And that upper limit is not the most likely estimate…) But the figure is reproduced above, and the readers can make up their own minds. -rasmus]
Comment by Nicola Scafetta, PhD — 12 Oct 2006 @ 11:54 AM
Study Links Extinction Cycles to Changes in Earth’s Orbit and Tilt
By JOHN NOBLE WILFORD
If rodents in Spain are any guide, periodic changes in Earth’s orbit may account for the apparent regularity with which new species of mammals emerge and then go extinct, scientists are reporting today.
It so happens, the paleontologists say, that variations in the course Earth travels around the Sun and in the tilt of its axis are associated with episodes of global cooling. Their new research on the fossil record shows that the cyclical pattern of these phenomena corresponds to species turnover in rodents and probably other mammal groups as well.
In a report appearing today in the journal Nature, Dutch and Spanish scientists led by Jan A. van Dam of Utrecht University in the Netherlands say the “astronomical hypothesis for species turnover provides a crucial missing piece in the puzzle of mammal species- and genus-level evolution.”
In addition, the researchers write, the hypothesis “offers a plausible explanation for the characteristic duration of more or less 2.5 million years of the mean species life span in mammals.”
Dr. van Dam and his colleagues studied the fossil record of rats, mice and other rodents over the last 22 million years in central Spain. The fossils are numerous and show a largely uninterrupted record of the rise and fall of individual species. Other scientists say rodents, thanks to their large numbers, are commonly used in studies of such evolutionary transitions.
As the scientists pored over some 80,000 isolated molars, the most distinct markers of different species, the patterns of turnovers emerged. They seemed often to occur in clusters, which seemed unrelated to biology. And they occurred in cycles of about 2.5 million and 1 million years.
>It seems unconscionable to me for SW to exclude all contributing factors and claim their results have any merit. In most professional circles I’m familiar with (particularly in aerospace engineering) omitting such obvious factors could bring serious professional consequences. In this field, with lives even more at stake than with those flying an engineered aircraft, it is irresponsible to present such an erroneous result as they did.
I think they learned it from watching so-called climate scientists do the same thing when they focus all the “blame” on CO2.
[Response: Why not try reading what we ‘so-called’ climate scientists actually say?
Sorry to be snippy, but it can get a little tiresome to always be dealing with strawman arguments…. – gavin]
In the 20 or so AOGCM models used by the IPCC for simulations of the 20th century and projections of the 21st, do you know how many use a varying value for solar radiative forcing and how many treat it as constant (or ignore it as insignificant)?
[Response: In the 19 models studied in Santer et al (2005) (Table 1), 11 models have historical variations in solar irradiance, 7 don’t, and one was uncertain. I’m sure there is a better description of the specific forcings for each model somewhere, but I don’t know where (anyone?). – gavin]
It seems to me that if you can only get the given result by using one particular pair of 100-year intervals, you are in deep doo doo. For example, if you start using 1600 to 1700, how does the result change if you use 1610-1710, 1620-1720, etc.? As Gavin points out, what is being done is to pick one particular set of differences from a set of measurements which are both noisy and not particularly well known.
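The robustness point above can be illustrated with a toy experiment. The series here is entirely synthetic (a weak trend plus noise, standing in for a proxy record, not any real reconstruction): slide the pair of 100-year windows along it and watch how much the estimated century-to-century difference moves around.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1600, 1901)
# Synthetic stand-in for a proxy record: a weak trend plus noise.
temp = 0.001 * (years - 1600) + rng.normal(0.0, 0.1, years.size)

def window_mean(start):
    """Mean over the 100-year window [start, start+100)."""
    mask = (years >= start) & (years < start + 100)
    return temp[mask].mean()

# Difference between consecutive 100-year windows, for shifted starting points.
diffs = {s: window_mean(s + 100) - window_mean(s) for s in (1600, 1610, 1620, 1650)}
for s, d in diffs.items():
    print(f"{s}-{s + 100} vs {s + 100}-{s + 200}: diff = {d:+.3f} K")
```

Even though the underlying series is identical in every case, the estimated difference shifts noticeably with the choice of windows, which is exactly the fragility being raised.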
Also wrt #18, the Enclosure Acts did not lead to a life of milk and honey for the displaced farmers and in the US, for example, the life of the Okies displaced from their farms by an ecological disaster was not celebrated for the ease and luxury they found in the cities.
Re #29: We are a bit off topic here, but I saw this article and would like to know which 2.5-million and 1-million-year orbital cycles they are talking about. The eccentricity and precession cycles have periods of about 100,000 and 22,000 years respectively, and the obliquity (axial tilt) cycle about 41,000 years.
It’s always interesting to read a paper like the one you reference and then take a look at who the authors are and what their background is.
Here’s a quote from the paper: “During the long geological history of the earth, there was no correlation between global temperature and atmospheric CO2 levels. Earth has been warming and cooling at highly irregular intervals and the amplitudes of temperature change were also irregular. The warming of about 0.3 C in recent years has prompted suggestions about anthropogenic influence on the earth’s climate due to increasing human activity worldwide. However, a close examination of the earth’s temperature change suggests that the recent warming may be primarily due to urbanization and land-use change impact and not due to increased levels of CO2 and other greenhouse gases.” This is a nonsensical view.
“INDIA’S ECONOMIC PROGRESS IN A CHANGING CLIMATE: BENEFITS OF GLOBAL WARMING
As a weather & climate scientist, what impressed me was the fact that India’s strong economic progress has come about in an increasingly warmer world of the last forty years or so, completely defying the projections of deleterious impact of Global Warming by IPCC (Intergovernmental Panel for Climate Change, a United Nations Group of Scientists) and its supporters.”
What about the other authors? Tad Mundy has published a fair number of papers on tsunamis, but nothing on climate – last time I checked there was no relation between earthquakes and global warming. This paper is his first on climate change – generally ‘reviews’ are written by leaders in their field. P Chittibabu has published a few papers with Tad Mundy on tsunamis, but that’s it. Of more interest is his employment with W.F. Baird and Associates, a coastal engineering firm that also sells 3D imaging software – a little research into this firm reveals that one of their main clients is likely the Canadian petroleum industry. It’s highly likely that restrictions on Canadian CO2 emissions would hurt their bottom line. See http://www.esricanada.com/english/solutions/wfbaird.asp and also http://www.esricanada.com/english/nresources/default.asp#petroleum for more.
As #22 points out, this paper has been ignored by the climate science community. That’s not the problem – the paper is available at http://www.friendsofscience.org/documents/debate.pdf . Who is “friends of science?” Here is the lead statement from their website: “The Kyoto Protocol is a political solution to a non-existent problem without scientific justification”. This paper is used to promote that notion outside of climate science circles.
So why go to all this trouble? In a nutshell – Canadian tar sands in Alberta. See http://www.ualberta.ca/~parkland/research/perspectives/GassyElephant06OpEd.htm – as they point out, “The tar sands are the single largest contributor to the growth of greenhouse-gas emissions in Canada, because it takes so much of Canada’s diminishing supply of natural gas to make tar sands oil. Greenhouse-gas intensity in the tar sands is almost triple that of conventional oil. As Jim Dinning, Alberta’s former treasurer and front-runner to replace Ralph Klein as Alberta’s premier, recently quipped, ‘Injecting natural gas into the oil sands to produce oil is like turning gold into lead.’”
This really represents a serious abuse of the scientific process, and the journal’s editors should know better. Is it appropriate to email the journal editor and ask what happened to the review process? Well – that’s what I did, so we’ll see.
I imagine that these papers on solar influence on climate will also be widely posted on sites like ‘friends of science’.
Re #33. The 100,000 and 22,000 year cycles coincide periodically. Technically they should coincide every 1,100,000 years (the least common multiple of 100,000 and 22,000), but the cycles are not exactly 100,000 and 22,000 years, and the timing actually varies slightly depending on extraterrestrial forces (the other planets) and internal Earth dynamics (position of tectonic plates, cryosphere, core flows?). So the cycles are theoretically estimated to coincide at the times specified in the past [those calculations have inherent approximations due to limitations of the underlying estimates required to make them].
Rodent extinctions in Spain correlating with this cycle is quite interesting, but obviously hardly decisive evidence. The crux of the hypothesis is that mammals would have a survival advantage over (reptiles?) during periods of high seasonality and vice versa during periods of low seasonality, primarily I suppose due to changes in the food mix, temperatures and disease-carrying insects.
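The 1,100,000-year figure above is just the least common multiple of the two idealised periods, which is quick to check:

```python
from math import lcm  # available in the standard library since Python 3.9

# Idealised (nominal) cycle periods in years; the real periods drift, as noted above.
cycle_a_yr = 100_000  # ~eccentricity
cycle_b_yr = 22_000   # ~precession

# Repeat time for the two idealised cycles to realign:
print(lcm(cycle_a_yr, cycle_b_yr))  # 1100000
```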
A cousin of mine, a physicist who worked in solid-state manufacturing, is a total climate change denier. He reasons that carbon dioxide has insignificant interaction with infrared radiation (compared with H2O), and therefore that increased levels of carbon dioxide cannot be influencing climate.
I believe it is the case that carbon dioxide has little interaction with the higher frequencies of infrared; however, I suspect that it is the lower frequencies of infrared that interact more with carbon dioxide.
I have tried Googling the issue and end up with too many hits that all seem to be about unrelated matters.
Can anyone please tell me the details of infrared-carbon dioxide interactions, or direct me to a reference in which they are described? I doubt that anything would change my cousin's mind; however, I would like to give others (to whom my cousin has preached) the details in order to put them straight on the issue of whether or not carbon dioxide is a 'greenhouse' gas.
Comment by Lawrence McLean — 13 Oct 2006 @ 1:07 AM
Thank you Gavin, however, it is not the actual detail I require. I am actually looking for the infrared properties of CO2, that is, the frequencies that it absorbs and the frequencies that it is transparent to. Perhaps the emissivity properties of CO2 may be important as well. Can anyone help?
Comment by Lawrence McLean — 13 Oct 2006 @ 3:42 AM
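For Lawrence's question: CO2's main infrared absorption sits in a strong bending-mode band centred near 15 µm (the low-frequency infrared, which overlaps the peak of Earth's thermal emission), with stretch/combination bands near 4.3 µm and 2.7 µm; between these bands it is largely transparent. The band centres below are standard spectroscopy values; the sketch just converts them to the wavenumbers usually quoted in the spectroscopic literature.

```python
# Convert a wavelength in micrometres to a wavenumber in cm^-1.
def wavenumber_cm1(wavelength_um: float) -> float:
    return 1e4 / wavelength_um

# Approximate band centres of CO2's main infrared absorption bands.
co2_bands_um = {
    "nu2 bending (greenhouse-relevant)": 15.0,
    "nu3 asymmetric stretch": 4.3,
    "combination band": 2.7,
}

for name, lam in co2_bands_um.items():
    print(f"{name}: {lam} um ~ {wavenumber_cm1(lam):.0f} cm^-1")
```

The 15 µm band (about 667 cm^-1) matters most for the greenhouse effect because it lies near the maximum of the Planck emission from the Earth's surface and lower atmosphere.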
That is exactly the point. The results of SW (the Zs values) for a specific TSI reconstruction vary by a factor of about 4 to 5 (e.g. at least between 0.10 and 0.50 for the Lean 1995 reconstruction) depending on the pair of 100-year intervals out of the 1600 to 1900 period you choose. With the SW method in that paper you can get almost any result you want, just by choosing the intervals. The reasons for that were well explained by Gavin.
I’m still waiting for a comment from Nicola Scafetta on that point.
Re #36 Lawrence, your cousin is correct that the greenhouse effect of water vapour, and even more so of clouds, is much larger (roughly 2-4 times) than that from CO2. However, the greenhouse effect from water vapour is due to a (positive) feedback from temperature, so any warming caused by CO2 is amplified by water vapour. In other words, increasing CO2 has three to four times the effect one gets by just calculating the changes due to its direct radiative effects.
A second point to note, which I do not recall having seen elsewhere, is that CO2 has more effect in cold regions than in hot ones. In cold regions where the ground is covered by ice or snow, the vapour pressure of water is low, and CO2 is the dominant greenhouse gas. In the tropics, where humidity is high, the effect from CO2 is much less significant.
This means that while the effect of an increase in CO2 on global temperatures is small, it does cause the snow line to rise in altitude and latitude. This reduces global albedo, and the additional heat absorbed from the sun consequently acts as another positive feedback raising global temperatures. But, more importantly, it does not alter temperature uniformly over the globe. There is a polar amplification which will disrupt the current climate system and create havoc with agriculture, which is now finely tuned to the current climate. Chaos in agriculture inevitably leads to famine.
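The amplification argument above is a standard feedback-gain calculation. A minimal sketch, with an illustrative feedback fraction chosen to give the quoted ~3x (not a measured value):

```python
def total_warming(direct_k: float, feedback_fraction: float) -> float:
    """Sum of the geometric feedback series dT * (1 + f + f^2 + ...) = dT / (1 - f)."""
    if not -1.0 < feedback_fraction < 1.0:
        raise ValueError("series only converges for |f| < 1")
    return direct_k / (1.0 - feedback_fraction)

# Illustrative only: 1 K of direct CO2 warming with a net feedback fraction of 2/3
# is amplified to about 3 K.
print(round(total_warming(1.0, 2.0 / 3.0), 2))  # 3.0
```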
Re #36: Short answer – the error your friend is making is to look at the relative concentration of water vapor and carbon dioxide near the Earth’s surface. Most of the greenhouse effect takes place in the upper atmosphere where water vapor levels are relatively much lower. The issue of which infrared bands are absorbed is not very relevant.
Sometime in the 1990s I read an article in Scientific American that predicted “mining oil sand would become economically feasible by the year 2010”; the reason was diminishing oil supplies and increased consumption (i.e. peak oil and all its associated mayhem).
Another thing I have seen predicted is that climate change and monoculture could bring about a global food crisis. Here in Australia our (grain) crop estimates have been cut in half (it’s drier and rains at the wrong times) and our last banana crop was wiped off the map by a cyclone. My own morbid prediction is that we will see farmers bulldozing piles of dead sheep around January-February.
I believe (but am not sure) that the Midwest also has similar but less severe problems, and Europe has suffered a “mixed blessing”. I would like to think that I am wrong and we will “somehow” avoid a sudden and painful population “implosion” within the next generation or two. Fifty years of experience assisting humanity to screw up, and a high school experiment with fast-breeding bugs, tell me otherwise. :(
The Milankovitch cycles are not the same every 100,000, 41,000 and 22,000 years but oscillate around cycles of millions of years, whereby at some points during these cycles (1 in 20, say) the configuration is very bad for warmth and good for ice and snow. All Milankovitch cycles are not created equal.
Lawrence, look at the first link in the sidebar under Science http://www.aip.org/history/climate/index.html
You could get your cousin the book and refer to the website, which is an extended version of the book. Having been a solid state physicist he will probably understand it from there.
Your calculation of the relative impact of volcanic vs. solar is based on the MBH99 (and earlier) proxy variability in the pre-industrial period, which are the proxies with the lowest variability. If more recent proxies are used (Esper, Moberg, borehole reconstructions), the volcanic share of the total variability reduces to ~14%, instead of 50%.
[Response: No. The split was from the individual GCM experiments using best guess forcings. – gavin]
Further, the (every 12 years) repeated simulation of the Pinatubo eruption in Shindell, Schmidt, Miller and Mann gives a value of -0.35 K over the full period. The observed values are -0.3 K for the first and second years after the eruption and essentially zero in the third year. Thus the 12-year average global cooling for this type of eruption is 0.05 K. Maybe I have underestimated the long-term effects of the repeated series, but the difference between observation and simulation seems much too large to me…
[Response: There is a cumulative effect which explains the difference, but with a mixed layer ocean I wouldn’t stand by that exact number. The issue we were really looking for was whether there was a different spatial signature in the volcanic-induced cooling which could be used to distinguish it from solar cooling – and there was: volcanic long term cooling was much more homogenous than solar-induced cooling. That remains, I think, the best hope of trying to distinguish the two effects. – gavin]
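The averaging behind the 0.05 K figure above is simply:

```python
# Observed global cooling after a Pinatubo-class eruption, year by year (K),
# using the round numbers quoted above: ~0.3 K in years 1-2, ~0 afterwards.
cooling_by_year = [0.3, 0.3] + [0.0] * 10  # one 12-year repeat interval

mean_cooling = sum(cooling_by_year) / len(cooling_by_year)
print(f"mean cooling over the 12-year cycle: {mean_cooling:.2f} K")  # 0.05 K
```

As the response notes, a cumulative ocean effect can make the long-run response larger than this naive per-cycle average suggests.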
In physics, calculations are not done by blindly applying a methodology to a set of data. That is what computers do, not humans, who are supposed to be smart!
One should think first, and think for long enough to explore a lot of different options.
One first has to look at the data, then decide what is the most reasonable way to do the calculations, and finally compare the results with the data to see if everything looks good. If something goes wrong, one has to start again and think deeply.
My choice was made in such a way as to yield a low relative error and a good correspondence between the curves during the pre-industrial era. Rasmus’ calculation, on the contrary, yields the highest relative error (double mine) and would produce the worst correspondence with the data. Moreover, the 19th century is already a little bit contaminated, so it should not be used at 100%.
That is why Rasmus’ way of applying my methodology is scientifically unwise and incorrect.
By looking at my full picture, instead of cutting it in two as Rasmus did, you would realize that with Rasmus’ calculations the good correspondence seen in the data would be completely lost for all 4 centuries.
I hope that this helps.
[Response:Sorry, but no Nicola. You say that you look at the data and then choose the method. And besides, what information do you really have about the data and their quality if you do not take into account other factors? (How can you judge the data when you forget that factors other than GHG and solar may play a role?) How do you justify using Moberg et al’s temperature reconstruction and not the others? I demonstrated that your method is unsuitable. I think you do it the wrong way round: first you need to design a method that will give you an objective answer and that is not unduly influenced by noise or external factors, and then apply this to your data. This is the reason why your method is – as far as I know – never used in e.g. the statistics community. -rasmus]
Comment by Nicola Scafetta, PhD — 13 Oct 2006 @ 2:04 PM
The 50% +/- 20% found by the method which S&W used is not that different from what Stott et al. found with their optimal detection HadCM3 experiments, where they increased solar and volcanic by factors of 10 and 5 respectively. The best estimates for the 1900-1950 period were 40% solar for the Lean et al. (1995) and 60% for the Hoyt & Schatten (1993) solar reconstruction. This reduces to 16% and 36% respectively for the two solar reconstructions in the 1950-2000 period.
These findings are within the constraints of the HadCM3 model, where they didn’t vary the influence of aerosols. With a lower influence of aerosols, optimal detection might be more towards solar…
To be fair to the RealClimate moderators, they also post critiques of articles in the literature which would tend to increase concern about climate change. An example would be concerns about a shutdown of the thermohaline circulation. Again, the argument isn’t with the conclusion, but with the scientific arguments leading to it.
Unfortunately these general explanations do not answer my questions. Your comment is once more on Rasmus’ article but not on my questions. Maybe I can be more specific, following your response:
1. Does the comparison of the periods 1650-1750 and 1750-1850 result in a higher relative error than 1600-1700 to 1700-1800 or 1600-1700 to 1800-1900, respectively?
How much is the difference between the relative errors?
2. If the difference of the relative errors is similar, wouldn’t it be better to avoid the periods 1600-1650 with poor data and the end of the 19th century which is contaminated? Wouldn’t it be smart thinking to avoid bad data, if better data is available?
3. How do you explain, that choosing 1650-1750 and 1750-1850 instead of 1600-1700 and 1700-1800 gives such a different result?
And another question arising from your answer:
Did you really choose your data periods in a way that there was the best fit of your calculated curves to the temperature curve? Do I get that right?
If you have doubts, the best thing is to repeat the calculations using the following rules:
1) use the data as they are; you cannot a priori assume some data are wrong.
2) the climate sensitivity to solar changes should be calculated by going as far as possible into the past, using as much data as possible.
3) it is better to use an algorithm that covers the three centuries before industrialization (before 1900), but in a way that the 19th century is only partially covered.
I would also suggest that everybody reads a paper very carefully before criticizing it. Many doubts can be easily resolved by simply reading carefully and trying to understand what an author has written and how he is reasoning. Sometimes it is also necessary to read the references carefully to understand why an author writes what he writes. In general, it is better not to be arrogant in science.
Is it possible, for example, that nobody here has noticed the long comment from #21?
Ferdinand Engelbeen has read my paper very carefully and he has immediately understood that Rasmus’ comments are wrong.
Why doesn’t Rasmus want to answer Dr. Engelbeen’s critique of his own criticism?
Dr. Engelbeen has also correctly noticed that the volcano signals predicted by the models are too large when compared with the surface temperature data. If you look at Figure 5B in the recent paper by Foukal et al. in Nature, where they claim the Sun does not have any effect on climate, you will see huge volcano spikes that really do not have anything to do with the temperature data. This suggests that the IPCC models these authors have used might be badly wrong, because the other forcings might be mistakenly modelled as well.
Why doesn’t Rasmus write a nice criticism on this point?
Urs Neu, any comment on this?
[Response: I would suggest that you heed your own advice and read the referenced papers carefully. Foukal et al did not claim to show that solar forcing has no effect on climate, merely that there is no positive evidence for a longer-term forcing larger than that seen over the 11-year cycle. And just to show that we are fair about criticising work, the averaging of different paleo-reconstructions in their Fig 5A is a very odd procedure and not one I would approve of. You should also be aware that there are different volcanic reconstructions around (Crowley, Amman) and it makes some difference which one you use. In particular, estimates for the really large eruptions (i.e. Tambora) are very uncertain. For the period from the Maunder minimum to a century later, though (which is the period we looked at), there are no obvious discrepancies between the solar+volcanic forced changes and the reconstructions. Assuming that the change is all solar would lead to a doubling of the solar impact, so including long-term volcanic effects is important. They may well be uncertain – but that is not a reason to ignore them. -gavin]
Comment by Nicola Scafetta, PhD — 16 Oct 2006 @ 5:43 PM
About Foukal et al., they have simply ignored a lot of literature that says otherwise. It is true that the amplitudes of the slow variation of solar activity are uncertain; that is why I used 3 different TSI reconstructions in my paper (even if Rasmus has missed this fact). However, this uncertainty does not imply that the slow solar component is missing, only that its amplitude is uncertain, that is all.
They have not even cited the ACRIM satellite composite, which shows a significant slow variation in total solar irradiance since 1978 (they have shown only the PMOD composite, and I can assure you those people know quite well about the existence of the ACRIM composite).
You write:”For the period from the Maunder minimum to a century later though (which is the period we looked at), there are no obvious discrepencies between the solar+volcanic forced changes and the reconstructions.”
I am sorry, but you should look more carefully at Figure 5B in Foukal et al. You will see that around 1810 the two models predict a huge cold spike with a cooling of -0.4K. The average temperature is around 0.0K, and no temperature reconstruction shown in Figure 5A shows a cooling of -0.4K; some of them go to -0.1K. So there is at least a -0.3K error, which is big enough to say the IPCC model used by Foukal et al. is wrong.
Also, the other volcano spikes at 1175, 1250, 1450 and 1960 are not recovered at all by any temperature reconstruction. In 1250 the difference between the temperature and model reconstructions is as large as -0.6K!
I just wanted to show that it is very easy to criticize traditional climate model studies, that is all.
About my calculations, I had to compromise among many things.
One thing is that from 1600 to 1900 there might be some slight cooling because of volcanoes and deforestation, but on the other side there was a slight warming because of added anthropogenic GHGs. Because climate models are not perfect yet, I cannot use them to predict these minor effects, so I supposed the two things compensate each other from 1600 to 1900, and just with this hypothesis I found a good correspondence between the solar signal and the temperature data. I do not claim my calculations are perfect, but my findings seem to reproduce temperature patterns much better than model simulations.
[Response: Thanks. But read again. I said Maunder minimum to a century later, and in our papers we used the periods 1680-1700 and 1780-1800 precisely to avoid the Tambora uncertainty (which you highlight). I am a little bothered by the description of the Foukal analysis as being with an ‘IPCC’ model. What does that mean? IPCC doesn’t develop models nor does it do original research. In any case, the ‘model’ being used in Foukal is simply an energy balance model of the most basic type. Certainly nothing like as sophisticated as the GCMs being used in the detection and attribution sections of IPCC AR4, for instance. However, the response to Tambora (1815) is completely a function of the specified forcing, which is indeed uncertain. It doesn’t prove the model wrong – though it certainly highlights an inconsistency, most likely with the forcing. As to whether your results fit better than ‘traditional’ analysis, I’m afraid I must disagree. Crowley (2000) does a very good job (again with an EBM though), as does Gerber et al (2003). Full GCMs are only now being run over these periods, but the biggest problems are in specifying the forcings prior to the 20th Century. During the 20th Century, of course, the GCMs do a much better job (as seen in the figure above), or in http://pubs.giss.nasa.gov/abstracts/submitted/Hansen_etal_1.html for instance, and they have attributions to solar of much, much less than you suggest (but see my previous post on attribution for details on that). – gavin]
[Response:Just for the record, Nicola, I didn’t miss the fact that you used three different TSI reconstructions. It’s pretty clear from your Figure 1b. This aspect is not so central to my criticism, although the fact that these different estimates gave such a large spread in the coefficient estimates is a bit concerning (don’t you think? So wouldn’t that be a good reason for exploring different temperature proxies too? I didn’t miss that you relied on just one temperature reconstruction either, which perhaps would give you such crazy results (?) that you would realise your method is not sound). I think you are trying to jump to conclusions before you have all the facts. -rasmus]
Comment by Nicola Scafetta, PhD — 16 Oct 2006 @ 8:04 PM
As the biologist and journalist in the fray here, I’ve found it’s all in the forcings. Trying to deny the greater effect of increasing CO2 is a fool’s errand, but there are many complex attempts; I’ll give them that much.
One thing puzzles me though, and that’s how one can get a Ph.D. in physics and apply this level of faulty logic? I don’t mean to be abusive, which frankly happens to me all too often for my liking. I just don’t get the logic path here outside of blatant denialism such as Milloy uses.
Neither you nor the comments of F. Engelbeen (Nr. 21) answer my questions. I’m sorry, my doubts are not resolved by reading your paper very carefully. Just pointing to your paper does not help to answer a question which is not covered in the paper.
It was you, not me, who said in comment 23: the sun-spot measurements from 1611 to 1650 are probably quite poor, so the TSI reconstructions during that time are poor because the data are poor.
By this argument you tried to explain why TSI does not match the temperature data before 1650. If it is o.k. to use this data in your study, why isn’t it o.k. to compare the TSI and the temperature data?
You say that the 19th century should only partially be covered. However, you calculate the difference between the 17th and the full 19th century and you take the unweighted mean between the two periods, thus you fully use the 19th century. But if you really want to use the 19th century only partially you should use the period 1750-1850.
Even reading very carefully, your paper does not explain what is wrong with using the periods 1650-1750 and 1750-1850, nor do the comments of F. Engelbeen. You could also use 1600-1700 instead of 1650-1750; according to your rules, the result is the same. Thus with the periods 1600-1700 and 1750-1850 all your rules are fulfilled:
– Use of the data as they are
– Going as far back as possible
– Use the 19th century only partially
– The relative error is similar to your periods, so this is no reason not to choose it.
The only reason not to choose this period is that the results are different, meaning the solar signal explains only half of the long-term trend during the preindustrial period (which astonishingly well matches Gavin’s results).
However, if the match of the solar signal to the temperature data is used as a selection criterion in your study (as you stated), you cannot, of course, present this match as a result of your study. If you have to cherry-pick your data period to get to your result, your method is not robust and the result is not meaningful at all.
And a last comment on the TSI satellite data after 1978:
You are ignoring two things:
1. There is a third independent composite by Dewitte et al. (Dewitte et al., 2005: Measurement and uncertainty of the long-term total solar irradiance trend. Solar Physics, 224, 209-216) which shows a similar result as the PMOD reconstruction (no significant trend of TSI since 1978).
2. there are two reasons why the trend in the ACRIM composite (by Willson and Mordinov) likely is an artifact:
a) the positive trend is not due to a long-term increase, but is the result of a short episode of increase (1989-1992) found in the data of one satellite (Nimbus 7). This increase was not measured by the other satellite operating in this period (ERBS);
b) other indicators of solar activity, which are closely correlated to TSI (sunspot number, faculae, geomagnetic activity) show no trend in that period, either.
If there are two composites with a similar result (no trend) and a third with only a probably artificial trend, it is much more likely that there is no trend in TSI since 1978.
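Point 2a above can be illustrated with synthetic numbers: a series that is flat both before and after a one-time upward shift (like an uncorrected instrument offset during a gap between satellites) still yields a positive least-squares trend, even though nothing is steadily increasing.

```python
import numpy as np

years = np.arange(1978, 2004, dtype=float)
flat = np.zeros(years.size)                  # composite with no change at all
shifted = np.where(years >= 1992, 0.5, 0.0)  # same, but offset upward after a "gap"

for name, series in (("flat", flat), ("one-time shift", shifted)):
    slope_per_decade = np.polyfit(years, series, 1)[0] * 10
    print(f"{name:>14}: {slope_per_decade:+.3f} per decade")
```

A fitted trend alone cannot distinguish a genuine secular increase from a single calibration step, which is why the cross-checks against sunspots, faculae and geomagnetic activity matter.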
Getting a PhD does not change you from an ordinary person into a demi-god, who can spout the truth effortlessly. You still retain the same old prejudices.
One of those, which is common to all people including myself, is that disaster is not just around the corner. We all believe that people who predict it are just Chicken Littles. That is why there is no progress being made in curbing greenhouse gases. And that is why the skeptics have been so successful. Not because they are telling people what they want to hear, but because they are telling people what they already believe!
Most of what appears in scientific journals is produced and peer reviewed by PhDs, but that is not science. Science, which has been so successful in replacing religion as the standard paradigm, is the distillation of those papers in journals which have been proved true. Science is true by definition, but papers in journals are just the thoughts of people. The history of science is littered with cases of mistaken beliefs, but these play no part in the science that is now accepted. It is also littered with breakthroughs that were not believed at the time and are now standard science.
Mankind and scientists have not changed, and mistaken beliefs are still widely held by scientists today. The Younger Dryas being caused by the outflow from Lake Agassiz stopping the THC is probably the best example of that, since it is now being recognised as false but most earth scientists still adhere to it.
(Rasmus continues not to understand that the reason I used the Moberg temperature signal is that the Moberg data are the novelty, and that in a letter I cannot write too many things! Of course by using Mann and Jones’ hockey-stick record I would get different results, but recently this record has been seriously criticized.)
Gavin, your comments just confirm that the models have serious problems that are in part due to uncertainty in the data. This is a huge problem, because a model cannot really be tested if the data are uncertain! Moreover, a model should be tested over a long period of time, several centuries, to be really significant; when this is done the models give results that are at odds with the data, as Foukal’s model shows. Finally, there are different secular temperature reconstructions; Mann and Jones’ reconstruction is very different from Moberg’s. If a model fits Mann’s reconstruction, it will not fit Moberg’s. So if your model looks OK against Mann’s data, and Moberg’s data are correct, your model is badly wrong (and vice versa)!
About the IPCC, Foukal et al. write: “We use the upwelling-diffusion energy balance climate model used by IPCC (Refs 63-67) for these model simulations”. So I was referring to this.
Moreover, paradoxically, Foukal et al. show that using a low-frequency component in their model gives better results. So the logical conclusion is that the literature they reference, claiming that there is no variation in solar output and/or that the models predict a low sensitivity to solar change, is likely wrong!
About the paper by Hansen that you reference, you have to look carefully at their pictures. In Fig 6 you will find that their simulations simply cross the data. For example, they do not recover the warming around 1940, and the volcano signals such as those of 1961, 1982 and 1992 are clearly overestimated; the temperature record does not present those deep spikes. Moreover, they are partitioning the forcings in a way that would underestimate the solar impact on climate. For example, they are using the measured CO2 as an external forcing, without considering that the CO2 concentration is also the result of several natural carbon-cycle mechanisms that might in part respond to solar variation too. So part of the solar signal might be embedded in what Hansen considers “other forcings” such as GHG forcing. These things are explained in my paper. Just read it carefully.
[Response: Nicola, with all due respect, take a step back here. Your arguments will be more persuasive if you focus on one aspect at a time. However, are you seriously arguing that the increase in CO2 over the 20th Century is a feedback to solar forcing? If so, I would suggest you work out what the rough temperature-CO2 relationship is over glacial to interglacial time scales (Petit et al etc) (or even over the last 1000 years – Gerber et al, 2003) and estimate how big an effect a ~1 deg C rise would have on CO2. Then compare that with the known emissions, carbon isotope data and increases in CO2 in the ocean and biosphere. So putting that aside, your criticism of the Hansen paper (which I am a co-author on) is that the ensemble mean results show clearer volcanic peaks than observed. That is actually to be expected since the ensembles average over the ‘weather’ noise, and are therefore closer to the forcings. The real world is of course only one realisation, and if you look at individual realisations, say around 1963 (Mt. Agung), you don’t see any obvious disconnect – the cooling from Pinatubo (1991) is also well matched. ENSO is a confounding factor of course because the tropical variability in the model is not correlated with the observed record (and is too weak in any case), but in the real world was coincident with both El Chichon and Pinatubo. In any case, I will guarantee that the GISS model match to the observed out-performs any statistical or physical model that relies purely on natural forcings.
Going back to Foukal, you misrepresent their position completely. They do not argue that 'there is no variation in solar output', nor that 'models predict a small sensitivity to solar'. Our model is pretty much as sensitive to solar as it is to GHGs, and that is seen in most other cases too. Foukal et al. point out correctly that the arguments used for supporting a long term solar component – other than that related to sunspot/faculae – have fallen down. In particular, the cycling/non-cycling sun-like stars argument used in Lean 1994 has not survived more comprehensive sampling of sun-like stars. That is not to say that there is no long-term component, just that there isn't any evidence of its magnitude. It's therefore a tricky thing to have to rely on. And now going back to your paper, the basic point is that single forcing attribution studies cannot distinguish between different forcings that have correlated behaviour. Thus when there is 'constructive' interference (like at the Maunder Minimum, or the 20th Century), any such method will over-attribute the response. Surely this is something we can all agree on? – gavin]
About the comments from Urs,
The climate sensitivity to solar variations cannot be half (as Urs claims) or twice (as Rasmus claims) what I estimated, for the simple reason that there would otherwise be no match between the data patterns.
About the ACRIM data, Urs should read the relevant papers carefully. He would realize that the quality of the Nimbus 7 data is much higher than that of the ERBS data, and that the Nimbus patterns during the ACRIM gap (1988-1992) do not fit the sunspot numbers, but they do fit the magnetic record. This is why there is a controversy between the two groups.
Moreover, ACRIM composite uses the data as they are published, while the other two composites alter the data with hypothetical models that might have large errors and might be wrong.
I am sorry, but I think that Foukal et al.'s paper is very biased in the references they decided to use; they dismiss all debate and, paradoxically, arrive at the conclusion that the side of the debate they represent is likely wrong!
Comment by Nicola Scafetta, PhD — 17 Oct 2006 @ 12:17 PM
Things look more complex to me than what you say.
1) Hansen’s model does not reproduce the warming around 1940.
The warming from 1900 to 1940 occurred at a higher rate than Hansen's model predicts. If you adjust the model's sensitivity to CO2 forcing a little to better match the warming from 1900 to 1940, the model will no longer match the data from 1950 to 2006.
Just try to fit the temperature data and the simulations from 1900 to 1940 and compare the two slopes in your Figure 6; you will see that there is a significant difference.
[Response: Climate sensitivity is not an adjustable parameter (though it does vary over models). You could get a slightly better fit by playing with the forcings, solar or land use maybe, or it could be due to internal variability. How could you tell? – gavin]
2) the model should include carbon-cycle and methane-cycle mechanisms to be realistic, and it does not. Bacteria produce a lot of CO2 and CH4, and the ocean exchanges a lot of gases with the atmosphere. These mechanisms are temperature dependent. Moreover, the water vapor feedbacks and cloud cover are not very well modelled yet.
[Response: Over the 20th Century we know what the CO2 and CH4 changes were, so you don't need to. Understanding the feedbacks is of course useful and should also be done (and it is underway), but a simple back of the envelope calculation demonstrates clearly that they cannot have had a significant effect over the 20th C compared to the rise due to industrial emissions. Water vapour feedbacks are very robust across different models – cloud feedbacks less so, of course. See Brian Soden's posting and paper for more details on that. ]
3) I am not claiming that "all" of the CO2 increase comes from the solar increase, but if only 5% of that increase is due to the sun, this component should be included in the solar contributions, not in the anthropogenic ones. The same should be done with the other forcings. The models are not capable of disentangling these things.
[Response: Give me a quantitative estimate based on more than your feelings! That would be a good contribution, but just saying it's plausible doesn't make it significant (and even if it's as large as 5% of the CO2 rise (~5 ppm), it's completely trivial in terms of the forcing – < 0.1 W/m2). ]
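Gavin's "< 0.1 W/m2" figure is easy to check with the standard simplified expression for CO2 radiative forcing, dF = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998). A minimal sketch; the 375 ppm baseline is an illustrative assumption (for a logarithm, the exact baseline barely matters):

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
# Hypothetical 5 ppm slice of the post-industrial rise, e.g. 375 -> 380 ppm.
C0, C = 375.0, 380.0
dF = 5.35 * math.log(C / C0)
print(f"Forcing from a 5 ppm CO2 increase: {dF:.3f} W/m^2")
```

This gives roughly 0.07 W/m2, comfortably below the 0.1 W/m2 Gavin cites.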
4) the millennial records of CO2 should be interpreted correctly. These data (Petit et al.) are moving averages over several centuries, and larger, faster fluctuations would be cut off. So it is wrong to compare Petit's CO2 data with the measured atmospheric CO2 data of the last century.
[Response: Then look at the higher resolution data from Law Dome or Siple… ]
5) About Foukal's paper, you are correct. However, in my paper I have argued that if the long-term component of solar variability falls away and the Moberg temperature data are correct, the actual models are very wrong, because they will never be able to reproduce the millennial cycle present in the Moberg data without a strong climate sensitivity to the solar cycle. Foukal et al. too argue that the models might be missing a lot of sun-climate interactions.
[Response: But this depends on a big ‘if’ – (if Moberg is correct). You cannot use that to argue that solar must be stronger – it is simply underdetermined (i.e. a dozen things could be tweaked to get a better agreement). But if we look at the 20th Century, we have much better data, and your claim of a 50% attribution to solar just doesn’t fit once you take into account all the other forcings. ]
To respond to Rasmus' criticism, I am simply showing that it is easy to criticize traditional model studies too.
I am not accusing anybody. I am just saying that these things are complex, and that nobody can claim that everything is clear and already understood. So everybody (me first of all, but also you, Rasmus, Hansen, etc.) should be humble about these things.
Don’t you agree?
[Response: Of course. All the more reason to be clear about errors and assumptions when giving the ‘headline’ numbers…. ]
Moreover, the climate modelling guys, when they write new proposals, always argue that their models are imperfect and that they need more money to update them. Is it true, or isn't it? :)
Comment by Nicola Scafetta, PhD — 17 Oct 2006 @ 2:41 PM
thanks for your reply.
However, I think that there are problems in your model.
If we look at the 1900-1940 period, the temperature increase rate is almost 0.011 K/y, while your simulations seem to me to give a rate not higher than 0.0035 K/y, which is only about a third of the observed rate. This is a significant difference, sufficient to claim that the model is wrong and/or severely incomplete.
Your model also shows an increase of the temperature from 1940 to 1960, while the temperature is decreasing during that period. Also, around 1890 there is again a large volcanic cooling not seen in the data. So for 80 years (out of 125) your model does not seem to reproduce the data well.
Your model also uses the TSI data of Lean 2000. The most recent data (referenced by Foukal et al.) show a lower TSI variability. Thus, by using the latest solar data your model will further lose the match with the temperature data, in particular during 1900-1950 when solar activity increases.
You claim that you can fix things by adjusting solar and land-use change. The problem is that if you adjust the forcing, in particular the solar one, during this period, you might not get a good fit during the second half of the century. So you might need to make a large correction to the model and adjust the CO2 forcing as well.
Using the argument "there might be some additional internal variability" is convincing but does not help you, because you are just stating that the actual model is wrong and/or incomplete. GCMs are supposed to reproduce internal variability, right?
It is true that you can simply add the measured CO2 and CH4 values as a forcing, but if you do so, you cannot interpret them as anthropogenic forcing, because their concentration is modulated by CO2 and CH4 cycle mechanisms that might in part be modulated by solar variability. The CO2 and CH4 cycle mechanisms are not a small thing: IPCC 2001 chap. 3 states that the human GHG emission rate is approximately 2 times larger than the observed rate of increase during the last decades, so if the Sun drives the CO2 and CH4 cycle mechanisms a little (which in this case are absorbing large amounts of CO2 and CH4 from the atmosphere), it might leave a signal in the CO2 and CH4 record as well.
(I cannot estimate this amount; it is the model guys' job to write a model with "all" mechanisms included, do the calculations, and check that everything looks good in simulations covering several centuries.)
I just suspect that the models are overestimating the anthropogenic contribution and that there is a larger solar effect. The reason is that the solar pattern seems to mimic quite well the pattern in the temperature for all 4 centuries I have analyzed. My simple reconstruction does not cross the data as your simulation seems to do from 1880 to 1960, but reproduces quite well the large patterns of the Moberg temperature for several centuries. This might be a coincidence, I agree, but it might also suggest that Moberg's reconstruction is a reasonable one and that the Sun is an important contributor to climate change.
It is true that my findings are based on the Moberg temperature reconstruction, but Mann's reconstruction has been seriously discredited. The warm Medieval period and the cool periods during the 16th and 17th centuries seem to be well established historical facts (Vikings could not have lived in Greenland and navigated the northern Atlantic if the temperature in the Middle Ages was low; even today Greenland's temperature is too cold). This historical record is strong evidence that would support Moberg's reconstruction with a large millennial cycle. Solar data seem to have such a millennial cycle, which could drive a large internal variability in the ocean circulation, for example. If so, the actual models are in serious trouble, because none of them is able to reproduce this temperature pattern, and they might need a much stronger climate sensitivity to the solar cycle that might include a lot of things in addition to the simple TSI forcing.
This is just a paragraph taken from Wikipedia about the Medieval Maximum and the Little Ice Age:
The Medieval Warm Period partially coincides in time with the peak in solar activity named the Medieval Maximum (AD 1100-1250).
In Chesapeake Bay, Maryland, researchers found large temperature excursions during the Little Ice Age (~AD 1400-1850) and the Medieval Warm Period (~AD 800-1300) possibly related to changes in the strength of North Atlantic thermohaline circulation. Sediments in Piermont Marsh of the lower Hudson Valley show a dry Medieval Warm period from AD 800-1300.
Prolonged droughts affected many parts of the western United States and especially eastern California and the western Great Basin. Alaska experienced three time intervals of comparable warmth: 1-300, 850-1200, and post-1800 AD. 
A radiocarbon-dated box core in the Sargasso Sea shows that sea surface temperature was approximately 1°C cooler than today approximately 400 years ago (the Little Ice Age) and 1700 years ago, and approximately 1°C warmer than today 1000 years ago (the Medieval Warm Period).
The climate in equatorial east Africa has alternated between drier than today, and relatively wet. The drier climate took place during the Medieval Warm Period (~AD 1000-1270).
An ice core from the eastern Bransfield Basin, Antarctic Peninsula, clearly identifies events of the Little Ice Age and Medieval Warm Period. The core clearly shows a distinctly cold period about AD 1000-1100, nicely illustrating the fact that “MWP” is a moveable term, and that during the “warm” period there were, regionally, periods of both warmth and cold.
Corals in the tropical Pacific ocean suggest that relatively cool, dry conditions may have persisted early in the millennium, consistent with a La Niña-like configuration of the ENSO patterns. Although there is an extreme scarcity of data from Australia (for both the Medieval Warm Period and Little Ice Age) evidence from wave built shingle terraces for a permanently full Lake Eyre during the ninth and tenth centuries is consistent with this La Niña-like configuration, though of itself inadequate to show how lake levels varied from year to year or what climatic conditions elsewhere in Australia were like.
Adhikari and Kumon (2001) in investigating sediments in Lake Nakatsuna in central Japan have verified there the existence of both the Medieval Warm period and the Little Ice Age.
Of course, it is an "if" argument. But everything right now suggests that it might not be unreasonable. So, we should look at the data and resolve their ambiguities first. The data will solve many of these problems. But as long as the secular data are ambiguous, I do not think that it is safe to claim that the debate is over. Don't you agree?
[Response: Now you are confusing me. We were talking about recent centuries, and you bring in a whole lot of random references to the Medieval Warm Period? Since neither your analysis, nor my model have directly assessed that, I will put that aside (though I would recommend reading Bradley et al (2003).). With respect to your assessment of our model simulations, you are advised to go directly to the source (http://data.giss.nasa.gov/modelE/transient/) to calculate the trends of any particular period. For 1900-1940 the mean trend is 0.19 deg C/40 years, compared to 0.33 deg C/40 years in the observed data (land/ocean index). It’s definitely not an exact match, and examination of the regional patterns show distinct differences. Some part of the difference is undoubtedly due to the intrinsic variability which cannot be coherently captured in a climate model, while uncertainties in forcings, and model inadequacies probably also play an unquantified role. Your comments regarding the response to Krakatoa are less well supported – look at figure 7 and the associated discussion. Met stations clearly record a cooling over land associated with Krakatoa of the same magnitude as we predict – however the SST data do not show this. I would be more inclined to question the SST data in this case (since it is a reconstruction rather than pure observations). Look, I do not claim that climate models are perfect – far from it – but their matches to observed data at the large scale are impressive – Pinatubo, last 30 years, response to ENSO, NAO response, sea ice response, ozone hole response etc. They are, and will remain, the only way to quantitatively compare the myriad different influences and pathways in the climate system in physically consistent ways. Some of the forcings used are more certain than others. 
For instance, CO2 forcing can be calculated from highly accurate line-by-line codes and is implemented in GCMs with less than 10% error (it can't be 'adjusted' in any significant way). However, aerosol forcings are highly uncertain – our best guesses for those could well be significantly off – the same for solar. But when we take the best guesses (independently derived) for each of these items, we do get a reasonable, though not perfect, match to the obs. We don't go back and adjust the forcings to 'improve' the match, since that would assume that the model was perfect (which we know it isn't), so the comparison to the observed data is a valid test. And in that comparison, the model does a better job than any statistical model based on solar alone (or CO2 alone for that matter). Why don't you suggest a test, and we'll put your statistical model up against the GCM output and we'll see who has the best match against the 20th C data. -gavin]
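The trend comparison Gavin quotes (0.19 vs 0.33 deg C per 40 years over 1900-1940) is an ordinary least-squares fit that anyone can reproduce from the linked model output. A sketch; the two series below are synthetic stand-ins with an assumed noise level and seed, not real GISS output or observations:

```python
import numpy as np

years = np.arange(1900, 1941)
rng = np.random.default_rng(0)

# Hypothetical annual anomalies: the quoted linear trends plus 'weather' noise.
model_run = 0.19 / 40 * (years - 1900) + rng.normal(0, 0.08, years.size)
observed  = 0.33 / 40 * (years - 1900) + rng.normal(0, 0.08, years.size)

def trend_per_40yr(t, x):
    """Least-squares slope, expressed as deg C per 40 years."""
    slope = np.polyfit(t, x, 1)[0]
    return slope * 40

print(f"model trend:    {trend_per_40yr(years, model_run):.2f} K / 40 yr")
print(f"observed trend: {trend_per_40yr(years, observed):.2f} K / 40 yr")
```

With noise of this size the fitted trends scatter by roughly ±0.05 K/40 yr around the underlying values, which is part of why a single 40-year window is a weak constraint.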
Comment by Nicola Scafetta, PhD — 17 Oct 2006 @ 8:09 PM
I notice the ozone holes are at the maximum — more UV light. Does that change climate sensitivity, by boosting plankton primary productivity and the plankton boosting ocean circulation and so gas exchange into the ocean, compared to any other time in the past when there was no ozone hole?
Serious question, I’m still as an amateur wrestling with understanding what ‘climate sensitivity’ takes into account. I guess the question is, if all else was held the same — if we had our fossil fuel industry but had not invented the chlorofluorocarbons and equivalents so hadn’t lost so much of the ozone layer for so long — would that change climate sensitivity?
The bottom line is: the method used in the paper is unsound because it neglects noise (influence from factors other than solar and GHG), and it may blow up if there are changes in the temperature unrelated to the solar forcing. -rasmus
(About the comment by Hank: yes, there are also plankton and vegetation effects that are temperature and light sensitive and that the models do not contain.)
Now a short reply to Gavin.
I am talking about several centuries because that is what my paper addresses. If you read my paper carefully, I also discussed the 1000-year case there, even if very briefly.
The warming that I associate with solar activity since 1600 is approximately equal to the cooling from the Medieval Maximum to the 17th century. This cooling, according to Moberg, is about 0.6 K, and it could not have been caused by humans. So it must be natural, and the sun is probably the major cause. Because there is some evidence that the sun in the Middle Ages was as active as today, this simply implies that the sun could have induced another 0.6 K of warming since the Little Ice Age in the 17th century.
And this would confirm my calculations.
[Response: False premise. You already assume that a) Moberg is correct, and b) no other factors are involved. We have already discussed evidence that volcanic activity could play a significant role, and absent better constraints on the forcings it is difficult to attribute this effect precisely – your assumption that it must all be solar, is an assumption, not a result. – gavin]
The problem with the models is that all of them have an energy balance model inside, including the GISS one. The energy balance core is what drives the slow climate variability. The problem is that when modern energy balance models, such as Foukal's, are run over 1000 years, they give a result that is compatible with the hockey-stick reconstruction of Mann and Jones: they predict that the cooling from the Medieval warming to the Little Ice Age was approximately 0.2 K. The Moberg reconstruction suggests that this cooling was 0.6-0.7 K, three times larger: this is a lot. So, if Moberg is correct, the very core of all present climate models (their energy balance), including the GISS one, is incorrect.
[Response: The results from EBMs, and indeed GCMs, depend a great deal on the forcings. A mismatch between the paleo reconstruction and a model result can be due to a) an incorrect forcing, b) an incorrect reconstruction, or c) an incorrect climate sensitivity (or of course all three). There is independent evidence that the climate sensitivities are in the right ballpark, and that leaves the forcings or the reconstructions. Both have significant uncertainties and so absent other information you cannot conclude what the error is. Foukal may well be wrong about long-term solar impacts, Moberg may have over-estimated low frequency variability – my point is not that either one of these things is correct, but that neither you nor I can tell. Therefore conclusions that there is something wrong with energy balances models (which are just conservation of energy, which I doubt you want to challenge) are highly premature. ]
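For readers unfamiliar with the terminology: an energy balance model of the kind under discussion is just conservation of energy with a feedback term. A minimal zero-dimensional sketch, with parameter values that are illustrative assumptions, not from any cited paper:

```python
# Minimal zero-dimensional energy balance model (EBM):
#   C dT/dt = F(t) - lam * T
# lam is the climate feedback parameter, C the effective heat capacity.
def ebm(forcing, lam=1.0, c=9.0, dt=1.0):
    """Forward-Euler integration; T in K, F in W/m^2,
    c in W yr m^-2 K^-1 (~70 m ocean mixed layer), dt in years."""
    T, out = 0.0, []
    for F in forcing:
        T += dt / c * (F - lam * T)
        out.append(T)
    return out

# Step forcing of 1 W/m^2: T relaxes toward the equilibrium F/lam = 1 K
# with an e-folding time of roughly c/lam = 9 years.
resp = ebm([1.0] * 200)
print(f"T after 200 yr: {resp[-1]:.2f} K")  # approaches 1.00
```

The sensitivity of such a model to any forcing is set entirely by lam, which is why, as Gavin notes, a mismatch with a reconstruction can come from the forcing, the reconstruction, or the sensitivity, and the model structure itself cannot tell you which.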
Now, the problem is determining whether Mann and Jones are right or Moberg is right. In my view, there is overwhelming evidence from several independent studies that the Middle Ages were quite warm and that the 16th/17th centuries were quite cold; this would strongly support the Moberg reconstruction and therefore the conclusion above (and my estimates).
[Response: You are grasping at straws. Medieval climate was much more complicated than simply ‘it was quite warm’ and it remains entirely unclear to what extent it was globally warm. It is completely possible that neither Mann and Jones nor Moberg are right – better data will help, but you must acknowledge the uncertainty there. And given that uncertainty, and the uncertainty in the forcings, the sensitivity cannot be usefully constrained from Medieval data. ]
Now, let us go to your model.
I thank you for having kindly acknowledged that the model is not "perfect" and that right now there is a serious discrepancy between the model prediction and the data from 1900 to 1960. This suggests that the model has a serious problem. Be careful! Your model must match the data to be credible, while my simple analysis of the solar effect alone does not need to match the data perfectly: in my paper I say that only approximately 50% of the warming since 1900 is related to the sun, and from 1600 to 1900 the match is quite good indeed!
[Response: That is curious logic. An independent estimate matches the observed data better than your statistical model. That estimate has solar contributing less than 10% of the overall warming factors (0.3 W/m2 out of 3 or so). That is therefore evidence that your attribution of 50% is a better fit? How does that work? I should point out that your attribution makes the error I alluded to recently in not accounting for cooling effects – thus the warming forcings will add up to more than 100%. A better metric would be the ratio of GHG influences to solar, and our modelling (as well as many others') demonstrates that this ratio is much greater than 1. ]
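Gavin's accounting point can be illustrated numerically. The forcing values below are illustrative round numbers chosen to be consistent with the magnitudes he quotes (0.3 W/m2 solar out of ~3 W/m2 of warming forcings); they are not model output:

```python
# Once cooling forcings (e.g. aerosols) are neglected, the individual
# warming contributions sum to more than 100% of the net forcing.
warming = {"GHG": 2.7, "solar": 0.3}   # W/m^2, illustrative
cooling = {"aerosols": -1.0}           # W/m^2, illustrative

net = sum(warming.values()) + sum(cooling.values())   # 2.0 W/m^2
shares = {k: f / net for k, f in warming.items()}
for name, s in shares.items():
    print(f"{name}: {s:.0%} of the net forcing")
print(f"sum of warming shares: {sum(shares.values()):.0%}")  # 150% > 100%
```

This is why attributing "50% of the warming" to one warming forcing, without the cooling terms in the budget, is not a well-posed fraction.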
In any case, let us now “assume” that your model is perfect. My argument is that Hansen’s “interpretation” of his own model is badly wrong.
The reason is that Hansen and many other modellers interpret the "measured" CO2 and CH4 concentration increase as "anthropogenic" forcing. This is in principle wrong, because there are natural CO2 and CH4 cycle mechanisms that determine the concentration of these gases in the atmosphere, and these mechanisms might be driven by the sun. The models do not contain these mechanisms, so the model has no way of determining how much of the observed CO2 and CH4 concentration comes from humans and how much is a natural response to the solar increase.
Now you ask: can you give any estimate of the solar signal in the CH4 and CO2 records? I reply that I cannot, because I would need models of the natural CO2 and CH4 cycle mechanisms that neither I nor you have. So the problem remains unsolved from a model point of view, and you cannot use the findings of your model to criticize my findings, which would correctly take into account the possible solar signal inside the CO2 and CH4 concentration records, which might be quite large.
[Response: On the contrary, it is easy to constrain. Over very long time periods such that the carbon cycle is in equilibrium with the climate, one gets a sensitivity to global temperature of about 20 ppm CO2/deg C, or 75 ppb CH4/deg C. On shorter timescales, the sensitivity for CO2 must be less (since there is no time for the deep ocean to come into balance), and variations over the last 1000 years or so (which are less than 10 ppm) indicate that even if Moberg is correct, the maximum sensitivity is around 15 ppm CO2/deg C. CH4 reacts faster, but even for short term excursions (such as the 8.2 kyr event) has a similar sensitivity. Now compare that to the unprecedented post-industrial rise: 100 ppm for CO2, over 1000 ppb for CH4. In both cases the potential for a temperature lead contribution is around 10% – way too small to affect the forcings substantially. The evidence that the current rises of both CO2 and CH4 are anthropogenic is overwhelming (from isotope data, O2 data, ocean data, emission inventories etc.). If your result relies on reducing the impact of CO2 and CH4 as forcings, then we might as well stop here. ]
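The back-of-envelope constraint in Gavin's reply can be written out explicitly, using the numbers he quotes (the ~1 deg C warming figure is a rough assumption for the relevant temperature excursion):

```python
# Upper bounds on the temperature-driven ('feedback') share of the
# post-industrial CO2 and CH4 rises, using the sensitivities from the reply.
sens_co2 = 15.0    # ppm CO2 per deg C (short-timescale upper bound)
sens_ch4 = 75.0    # ppb CH4 per deg C
warming  = 1.0     # deg C, rough assumed temperature excursion

rise_co2 = 100.0   # ppm, observed post-industrial CO2 rise
rise_ch4 = 1000.0  # ppb, observed post-industrial CH4 rise

frac_co2 = sens_co2 * warming / rise_co2
frac_ch4 = sens_ch4 * warming / rise_ch4
print(f"max temperature-driven share of CO2 rise: {frac_co2:.0%}")
print(f"max temperature-driven share of CH4 rise: {frac_ch4:.1%}")
```

Both fractions come out around 10%, which is the basis for the claim that a solar-driven carbon-cycle feedback cannot substantially change the forcings.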
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 11:52 AM
Your argumentation gets more and more contradictory.
It is not me who claims that the sensitivity to solar variations is only half of what you say. It is your method with your rules that produces that result.
And you reject this result only because it does not match your claim that solar variations are the only relevant forcing factor. This is not convincing at all. If you want to show that there is a match, you cannot use this as a precondition. This is basic logic. You persistently ignore that point.
You write to Gavin: … because the solar pattern seems to mimic quite well the pattern in the temperature for all 4 centuries I have analyzed.
The solar pattern only mimics the temperature pattern so well because you reject all the other results which do not show this.
However, I agree that on the multidecadal scale (Dalton Minimum etc.) both patterns are similar. But there isn’t any evidence at all that the long-term trend from the 17th to the 19th century can be solely explained by solar forcing. There are such large differences in the trends over these centuries both for temperature and for TSI (the only consistency is that the trends are all positive) that any attempt to analyse the influence from these data is hopeless. If you use only half the Zs value, the multidecadal pattern will still match, but not the long-term trend.
You claim that the model Gavin used does not match the temperature curve exactly. That is only because Gavin included all the different results of the model (the ensemble) in his result. If he did the same as you did in your study and excluded all the results that do not match, the model's match would look much better (as he has explained). If you want to compare the model to your study, please include all the results and not just the results that match your hypothesis.
As an exercise and self-test, I'll take an amateur's stab at your question. The phrase "climate sensitivity", absent any particular context, seems generally to mean the temperature change the climate will undergo in the short term given a doubling of atmospheric CO2 concentrations with all other forcings held constant; I will presume that is what you also mean.
Calculations of the climate's sensitivity to 2xCO2 include the radiative effect of CO2, the radiative effect of the H2O vapour that increases as a feedback, and the albedo changes that result from loss of sea ice. They do not include potential long-term feedbacks such as ice sheet melting or carbon cycle feedbacks.
As such it is a theoretical property of the climate as a whole. Any change in the climate as a whole will therefore potentially impact the climate sensitivity value. For example, in a climate where there is no sea ice, this feedback would be absent and climate sensitivity would be less. An earth with continents in different locations or more or less land would also respond differently and have a different sensitivity value. Thus it is possible that any alteration of the atmosphere such as ozone depletion, will alter the climate’s sensitivity to CO2 doubling.
That said, I don't know if the climate sensitivity studies try to isolate and remove the other currently understood anthropogenic forcings to get a more natural or theoretically pure value, i.e. the value of 2xCO2 in an otherwise natural world.
I rely on any of the forum’s experts to correct me where required.
I think that there is some problem of communication here.
I am not starting from a false premise.
I am explaining the meaning of my paper.
In my paper I start with a hypothesis and I have a goal.
This is the hypothesis and goal:
“Let us use the Moberg’s data and let us deduce its main consequence.”
So you cannot argue that I am starting from a false premise. I do not care if the premise is false or true: I simply make that hypothesis, that is all. And I claim that, in accordance with this hypothesis, we have to conclude that the climate sensitivity to solar change is likely much stronger than what present models claim.
This is all.
I also have no problem acknowledging that if the Medieval warming did not exist, and if the Mann and Jones data are closer to the truth, then the sun might have had little impact on climate, and present models reproduce well the hockey-stick pattern these data show. So, in this case, the actual models do not need great corrections and are already quite good.
This is all.
However, I note that the hypothesis I made is not totally unrealistic, because today we have several pieces of evidence that the Medieval period was quite warm and the Little Ice Age was quite cold. And other temperature reconstructions suggest a large millennial cycle more or less compatible with Moberg's reconstruction. Perhaps all this is wrong, or perhaps not. We do not know yet.
This is all.
Finally, I suggest that perhaps one reason the solar impact on climate is underestimated by models might be that EBMs and GCMs, like GISS's, do not contain CO2 and CH4 cycle mechanisms that might be partially affected by the Sun, and other mechanisms are missing or uncertain (water vapor, cloud cover, vegetation, bacterial respiration, UV radiation, cosmic ray effects, etc.). When all these mechanisms are included in the models, perhaps we will understand climate and the causes of climate change better.
This is all.
So, what is wrong with this?
Are you saying that I should not do a study by making an innocent hypothesis and deducing its consequences?
[Response:My greatest objection is your choice of methods – they are likely to give you spurious results, and therefore I do not trust the conclusions. The merit of any analysis or experiment depends on the objectivity of their setup, even if you were to start with a valid hypothesis. -rasmus]
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 3:13 PM
I have a final comment and I ask you to give a fair response.
I have claimed above that a solar signature may well be hidden in the GHG signature and in other climate forcings as well.
Rasmus has done a nice thing above.
Look carefully at the data reported by Rasmus in his figure above.
In particular look at the ozone and GHG signature. I do not know where Rasmus took these data from. But let us assume they are correct.
The Ozone signature seem to have great oscillations with 5 maxima in 100 years. This well fit the 22-year solar magnetic cycle.
The GHG signature is more complex but if you see before 1950 there might be four oscillations in 40 years. This well fit the 11-year solar cycle. (Note that there is a time lag of 5 years, this might be OK, because the climate does not respond instantaneously to a solar change).
Perhaps, this is only an illusion, but if not and if Rasmus data are correct these pattern might suggest that there might be a serious possibility that there is a solar signal hidden in the GHG and Ozone record. As I think.
Do you have any comment?
Do these oscillations have some other reasonable explanation?
[Response: The figure is from Wikipedia, based on results from Meehl et al (2004). However, your question illustrates exactly why one has to be very careful when looking at noisy data. These curves are the smoothed model output, and as such contain ‘weather’ noise that is uncorrelated to the forcings. While you might think you can see a 22-year cycle in the GHG and ozone runs, I am 99% sure that there was no 22-year cycle in the forcings. There certainly isn’t such a cycle in the GHG concentrations, and while it’s conceivable that the ozone runs used an ozone field that was modulated by solar activity, I doubt that this was the case (that kind of simulation is only just beginning to be made). The ozone forcing most probably only has a trend due to industrial activity affecting tropospheric ozone (a warming), and the depletion of stratospheric ozone due to CFCs from the 1980s onwards (a slight cooling). Therefore, what you perceive as a cycle is simply decadal ‘noise’ and would be different in another set of realisations. But please be sure to understand me – I am not claiming that there is no possibility of solar feedbacks affecting the other ‘forcings’ (in fact we are working on ozone feedbacks quite extensively – though they are affected by the irradiance changes on 11-year timescales, not on 22-year timescales). However, these potential feedbacks can be shown, by looking at the pre-industrial period for instance, to be much smaller than the anthropogenic contributions over the 20th C. Prior to that they would have been relatively more important and potentially detectable in the record, but one has to be very careful about extrapolating from the much more uncertain forcings/responses in the paleo-climate record to the much better characterised changes in recent decades. – gavin]
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 4:12 PM
everybody here has understood what your objection is.
I accept your point, but I respectfully disagree with your interpretation of my paper.
I told you that you should look at the big picture, and not focus on some details that you might misunderstand.
Moreover, I note that I am not alone. There are people here who had problems with the way you read and criticized my paper.
Again, I tell you to read comment #21.
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 6:53 PM
thanks for your explanation.
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 6:56 PM
just a last comment
I still note that the Meehl model, whose result is shown in the figure above, has some problems from 1900 to 1950.
The rate of increase in the model simulation is significantly lower (~1/2) than the rate of increase in the data during the same period.
Yes, I agree that the ozone should have an 11-year cycle.
Well, I think we have discussed enough. Good work with the ozone feedback (and do not forget the second-generation water feedbacks to it).
Thanks for the discussion.
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 7:12 PM
Just a question, before Nicola and Gavin make mincemeat of each other :D. GHG forcing is global (“well-mixed” in the atmosphere). Could we imagine a different, regional modulation of solar forcing over past decades and centuries? For example, the global Milankovitch solar forcing is quite small, but its latitudinal and seasonal variations imply strong feedbacks. On shorter time scales (decadal, multidecadal), could something similar occur with modest variations in TSI, mediated by atmosphere-ocean circulation? Or is that nonsense?
Any method to detect the real climate sensitivity for the four main climate drivers (GHGs, aerosols, solar, volcanic) depends on several assumptions and constraints.
The first constraint is the temperature profile of the past century. Any model, regardless of its sensitivity, must fit the past century’s profile. But the problem is that many sets of sensitivities (different sensitivities for different forcings) can fit the past century’s profile, because if one forcing or sensitivity (like solar) is underestimated, another one (like CO2) may be overestimated and need to be adjusted downward. Because there is an overlap between the warming induced by solar and by GHGs, it is impossible to know which one is responsible for what part of the warming. Btw, the sensitivity to CO2 in current models (~3 K/2xCO2) is much higher than what can be deduced from IR absorption alone (~0.85 K/2xCO2); the difference comes from feedbacks, some of which (like clouds) are very uncertain…
The second constraint is for the pre-industrial part: we have only reconstructions, which differ considerably in amplitude for the same time frame. Some show small variations (~0.2 K); the newer ones show larger variations (~0.8 K). In the case of smaller variations, the attribution between solar and volcanic is about 50:50 for the pre-industrial period, as human-made GHGs and aerosols had little influence. If the newer reconstructions are taken into account, then the ratio between solar and volcanic influences increases to 7:1. Not taking into account that volcanic may be overestimated…
In general, the larger-amplitude reconstructions match independent borehole and ice core temperature reconstructions better. The latter show a ~10 ppmv CO2/K relationship over all glacials/interglacials, including a change of ~10 ppmv between the MWP and LIA, thus caused by ~1 K cooling. A similar temperature change may be expected between the LIA and the present, except that the expected ~10 ppmv CO2 change is overwhelmed by human emissions.
This is what Moberg, Esper, Luterbacher and others have concluded: if the variability in the pre-industrial past was larger, then the sensitivity of the climate to man-made emissions must be lower than currently implemented in the models.
[Response: This is nonsense. The sensitivity implied by a particular temperature history is not determined by the amplitude of that history. It is determined by the covariance between the temperature history and the estimated radiative forcing. A large amplitude that is uncorrelated with the forcing would indicate a sensitivity of zero. You might learn something from the appendix of this paper by Waple et al describing how to estimate sensitivity from forcings and their estimated responses. It is somewhat ironic that you cite both the Moberg et al and Esper et al temperature reconstructions in your efforts to argue for a greater role of solar forcing in past temperature variations. Indeed, they both have larger amplitudes than most other reconstructions as you say. But they also both happen to be essentially uncorrelated with each other at centennial timescales (see the Wikipedia plot), and Esper et al is actually negatively correlated with most reconstructions of Solar Irradiance. But perhaps you are advocating a negative sensitivity to solar forcing? Of course, this behavior has nothing to do with solar irradiance. It has to do with the fact that there is an enhanced sensitivity to volcanic forcing in the Esper et al series, owing to its bias towards the summer seasonal and extratropical continental centers. And it happens that the centennial timescale changes in the amplitude of explosive volcanic forcing happen to be negatively correlated with estimated solar irradiance variations over past centuries. This simply arises from chance, and the fact that there are very few realizations of the century-scale variability present in the two short forcing series. But it does serve to underscore how naive your interpretations are. You might benefit from a more thorough and careful reading of the literature. The GISS modeling work referred to elsewhere in this thread would be a good place to start. -mike]
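As a side note on the numbers quoted above: the ~0.85 K "IR absorption alone" figure is close to the standard no-feedback (Planck-only) estimate of roughly 1 K. A minimal sketch under common textbook assumptions (the Myhre et al. 1998 simplified forcing fit and an effective radiating temperature of 255 K — both standard values, not taken from this thread):

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # Earth's effective radiating temperature, K

# Simplified expression for CO2 radiative forcing: dF = 5.35 * ln(C/C0)
forcing_2xco2 = 5.35 * math.log(2.0)        # ~3.7 W/m^2 for a doubling

# Planck (no-feedback) response parameter: lambda = 4 * sigma * T^3
planck_response = 4 * SIGMA * T_EFF**3      # ~3.8 W/m^2 per K

dT_no_feedback = forcing_2xco2 / planck_response
print(f"no-feedback 2xCO2 warming: {dT_no_feedback:.2f} K")  # -> 0.99 K
```

The gap between this ~1 K Planck-only number and the ~3 K model sensitivity is carried entirely by feedbacks (water vapor, ice-albedo, clouds), which is the point of contention in the comment.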
In your comment you allude to a 20 ppmv/K sensitivity of climate to changes in CO2 in equilibrium.
There is a ~10 ppmv/K change in CO2 as a result of temperature changes in ice cores, but that is one-way, as the temperature changes nearly always preceded the CO2 changes. As there is in general a huge overlap between temperature change and CO2 change during the ice age–interglacial transitions (and vice versa), one can imagine a large influence of CO2 on temperature, as a positive feedback.
But in one particular case there was no overlap: the end of the Eemian, where CO2 levels stayed high while temperature (and CH4 levels) were already near their minimum. If the 20 ppmv/K held, then the subsequent decrease of CO2 levels by ~40 ppmv would have induced a ~2 K temperature drop, which is not visible in the temperature proxy of the ice core. See here
[Response: Wrong way around. ~20 ppmv/K sensitivity of CO2 to global temperature change (assuming ~5 deg C global cooling at the LGM, and roughly 100 ppm decrease in CO2). You get pretty much the same thing with a regression over the whole Vostok timeseries if you make some reasonable assumption about the relationship of Vostok T to global T. The relationship the other way is related to the climate sensitivity and is complicated by the other forcings (ice sheets, other GHGs etc.). You can’t derive that purely from the Vostok curves. – gavin]
Gavin and I are not making mincemeat of each other :)
We are just discussing from two different perspectives, the theoretical model one and the empirical one.
These problems are very complex.
To answer your question, as far as I know,
there are regions of the Earth where the 11-year solar cycle is quite evident, and other regions where it is weak or seems not to be present. Several studies claim that, averaged over the Earth, the solar cycle has an amplitude of approximately 0.1 K (three times larger than what present models predict).
The regional pattern might depend on cloud-pattern formation, or on something else; I do not know.
If you are interested in this topic you might read the book by Hoyt and Schatten, “The Role of the Sun in Climate Change”.
[Response:I’d of course recommend my own book: Benestad, R.E. (2002) Solar Activity and Earth’s Climate, Praxis-Springer, Berlin and Heidelberg, 287pp, ISBN: 3-540-43302-3 (available at amazon.com). My book takes a more critical stance… -rasmus]
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 8:06 PM
Just a final response to Urs Neu #63,
As I tried to explain above, the calculations are done by taking into consideration a number of additional constraints that you are not considering.
It is true that by doing them as you want, you might get a lower value than mine (1/2, you claim), and by doing the calculations as Rasmus wants, you might get a larger value (2 times, perhaps).
The problem is that in both cases the match with the data from 1600 to 1900 is lost. If I plot the data so that the signal and the temperature have the same value during the 17th century and I do the calculations as you want, the signal will be too low during the 19th century (-0.2 K in 1900). If I do the calculations as Rasmus wants, the signal will be too high (+0.4 K in 1900).
The hypothesis I made was that the warming during the pre-industrial centuries, before 1900, is due to the increase in solar activity. So I need to find values that match the data during all three centuries. Moreover, to minimize the error I need to compare a period with a minimum value and a period with a maximum value. With your calculation the relative error is 2-3 times larger than mine. With Rasmus's method the relative error is 3-4 times larger than mine. My choice best fits all these properties.
But there is another deeper problem with your calculations. Your value might be seriously in conflict with other findings and expectations.
You have to understand that I am looking for the climate sensitivity to slow secular change; with your method, you say that you find one half of what I found. Well, let us look at the data from Lean1995: I found a sensitivity of somewhere between 0.17 and 0.23, while you would find something between 0.09 and 0.12.
The problem is that there are several independent studies, not only mine, that claim that the climate sensitivity to the 11-year solar cycle is 0.11. And the sensitivity to a 22-year cycle might be as large as 0.16. A theoretical estimate (Wigley) suggests that the lower-frequency sensitivity might be much larger (even 3-5 times, for a smooth secular component). This is due to the thermal inertia of the ocean. I have argued in a previous paper that the climate sensitivity to a slow solar variation component might be as large as 0.21, and I cannot go much below this value. (I wrote this in the paper, if you look carefully.)
You understand that your numbers are too small and do not match these further conditions.
[Response:In this estimation, you divided a small amplitude by an even smaller one (the 22-year Hale cycle is not very strong, and not even discernible in the sunspot record, even though we have reasons to believe it exists since the magnetic fields flip), thus not a very reliable method. Furthermore, the phase information is ignored (the phases do not match), and the contention that the ~11 and ~22 year variability is linked to solar activity can only be built on faith… In other words, it’s not very scientific. -rasmus]
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 9:54 PM
thanks, it seems you are the only one who understands what I am trying to say.
So, what I say should not look so crazy, after all :)
Comment by Nicola Scafetta, PhD — 18 Oct 2006 @ 10:19 PM
Re #71 Mike answer
It’s quite difficult to judge from the Wikipedia plot, but the Esper and Moberg reconstructions seem to have the same trends at the centennial scale (even if Esper’s has, overall, more amplitude than all the others). Look at 1000-1500, the period where the graph is clearest: the upward and downward trends are quite similar. In fact, they diverge mainly over 1500-1600.
And for 1600-1900, the period of interest for Nicola’s paper, the Moberg, Esper, Huang (and even Briffa) reconstructions seem to have a comparable slope. So, I think the main question of the debate is volcanic vs solar forcing. Since the beginning, Nicola has explained the same thing: he chooses the solar-forcing attribution that best fits the temperature data (Moberg) for the period. If the volcanic-to-solar forcing ratio is 10:90 (negligible), there’s no problem. If we are at 50:50, there’s a problem.
More generally, the higher the temperature variability a reconstruction shows, the higher the sensitivity to natural forcings and/or the higher the natural, chaotic climate variability we should expect. I think Ferdinand was right on this point.
[Response: You’re simply wrong on both counts. It doesn’t take a calculation to see that the two curves are nearly orthogonal over the first 600 years of overlap, at least. Tom Crowley (AGU Transactions, 2003) has shown that the Esper et al reconstruction yields a negative sensitivity to solar forcing using his estimated solar irradiance reconstruction, despite its very large amplitude. These issues just aren’t as simple as non-experts often like to think they are. -mike]
Gavin, I misinterpreted your remark as the sensitivity of temperature to CO2. Indeed the ice cores show a remarkably (near) linear response of CO2 to temperature changes, overall ~8 ppmv/K for the 420,000-year Vostok ice core, where K more or less reflects the SH ocean temperature. As I suppose that the NH temperature variations are larger (more land, ice-sheet building), this should reduce the ppmv/K ratio, unless there are other influences to be taken into consideration. But no matter what the real ratio is, there is quite a good relationship between CO2 and temperature (at least in one direction)…
Anyway, the change of ~12 ppmv during the MWP-LIA transition (seen in several Antarctic ice cores) points to an about 1 K cooling in that period. If the same co-variance still applies to the LIA-to-present temperature increase of ~1 K, then this increase in temperature cannot be responsible for more than a 12 ppmv rise in CO2 levels. The rest of the observed increase (~70 ppmv in the Law Dome ice core) is certainly caused by burning fossil fuels.
Thus even if solar were solely or mainly responsible for the recent temperature increase, the total increase of CO2 (and CH4) is only marginally attributable to that change. Here I differ with Dr. Scafetta, who supposes that solar variations may be more responsible for changes in CO2 and CH4 than can be deduced from historical data. For other responses (especially clouds), there is far more uncertainty, which needs to be resolved…
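Ferdinand's partition of the observed CO2 rise can be made explicit. Every number below is an estimate taken from the comment above (the ~12 ppmv/K MWP-LIA co-variance, the ~1 K LIA-to-present warming, the ~70 ppmv Law Dome rise), not a measured inventory:

```python
# Back-of-envelope partition of the observed CO2 rise, using the
# commenter's own estimates (all three inputs are assumptions).
ppmv_per_K = 12.0        # MWP-LIA CO2/temperature co-variance, ice cores
warming_since_LIA = 1.0  # K, assumed LIA-to-present warming
observed_rise = 70.0     # ppmv, Law Dome ice core

temp_driven = ppmv_per_K * warming_since_LIA   # at most ~12 ppmv
fossil_fuel_part = observed_rise - temp_driven # the remainder, ~58 ppmv
print(f"temperature-driven share at most: {temp_driven / observed_rise:.0%}")
```

On these assumptions, even a fully solar-driven ~1 K warming could explain at most roughly a sixth of the observed CO2 rise, which is the comment's point.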
Re #43, Thanks for that link, however, I would still like to see the infra-red absorption data for CO2.
On an unrelated subject, at work today, a colleague sent me an email and pronounced that man cannot be causing global warming and, that the whole issue is a load of bunk! His proof was the link: http://www.atimes.com/atimes/Front_Page/GB25Aa02.html . He said I should get my “facts straight”. I have told him of this site (www.realclimate.org), however, he refuses to even look. What can you do with folks like that?
Comment by Lawrence McLean — 19 Oct 2006 @ 9:19 AM
I am afraid you will not get the point (or you do not want to).
I perfectly understand what you did. You exclude any result which does not match what you predetermine at the beginning. You predetermine that there has to be a match and you predetermine that the sensitivity has to be around 0.20. It is no wonder that at the end there is a match and the sensitivity is 0.20.
The problem is not what you did, but your conclusions:
You write in the abstract:
1. We find good correspondence between global temperature and solar induced temperature curves during the pre-industrial period.
– This is in no way a finding but your presumption (if you only look for data which lead to this match, the finding is predetermined).
2. The approach we propose (..) yields results proven to be almost independent on the secular TSI proxy reconstruction used.
– This finding is also predetermined by your method because you look for a good match for each TSI reconstruction independently. If you adjust three curves to the same other curve, you will end with three similar curves in any case, this is no surprise.
3. The sun might have contributed approximately 50% of the observed global warming since 1900.
– This is a result you did not calculate but estimated from a graph with curves that show very questionable behavior at their ends: the smooth Lean2005 curve (data from Wang et al. 2005, according to your text) extends until about 2010, although Wang et al. 2005 only contains data until 1996. This is puzzling, even considering your 5-year time lag. And smoothed data are rather uncertain at the end if extended to the very end of the original data, as you did.
I don’t expect an answer again to my points above, but I really would be interested to know why your Lean2005 curve (Wang et al. 2005) contains data in the 17th century and clearly beyond the year 2000 although the Wang et al. data is from 1713-1996. And how you smoothed your curves until the end of the data period.
Of course this explanation would have blown up the length of your paper, but it is relevant since your 50% approximation depends on the last years of your reconstructions.
# Some minor points
#57 Nicola Scafetta first writes that
“Rasmus continues to not understand that the reason why I used the Moberg temperature signal is that Moberg data are the novelty and that in a letter I cannot write too many things!”
meaning that he chose Moberg for being the latest millennial reconstruction available. That is, however, not the case; there is a more recent one: Hegerl, G. C., Crowley, T. J., Hyde, W. T. & Frame, D. J., Climate sensitivity constrained by temperature reconstructions over the past seven centuries, Nature 440, 1029-1032 (2006).
#72 The millennial CO2 variations are quite difficult to estimate. Between different cores one finds differences of more than 10 ppm for a specific century (most probably due to post-depositional effects and/or the measuring process). If one averaged over all records (disregarding the different quality of the data), one would probably get a millennial amplitude of less than 10 ppm. The 12 ppm/K is the modelled value from Gerber et al.
To Rasmus: it is not only me who claims that there is an 11-year, a 22-year, and a long-range solar signature in the climate. There is a lot of literature, from 1800 to today, that says the same thing, even if matters are very complex. Please read a book such as the one by Hoyt and Schatten. In the new IPCC 2007 report there will be a paragraph by Lean in which she also discusses an 11-year solar signature on climate of the size of 0.1 K on average, referencing many authors, as I too found. (Science too starts with an act of faith, Rasmus!)
[Response:You are right, the literature is littered with examples ‘demonstrating’ that there are ‘solar signals’ in the climate, many of which were based on unconvincing analysis. The fact that we are still debating this centuries later suggests that any effect is weak/obscure at best. Look at the northern lights in comparison: these phenomena are remote to most people, yet there is little doubt that the frequency of the northern lights is closely related to the level of solar activity; our climate, on the other hand, is intimately around us… The existence of variations with near 11-year periodicity is not by itself proof of a solar influence – you may even find such ‘signals’ in computer simulations of Earth’s climate where changes in solar activity are not prescribed, as the dynamics of Earth’s climate give rise to intrinsic (chaotic) variations. -rasmus]
To Urs: You have to see the good correspondence of the patterns between the curves. They correspond quite well, and this suggests that the hypothesis might be correct. This might be a coincidence, I agree. But the probability that it is a simple coincidence is quite small. Try to randomize the solar data, then put them on a linear slope equal to the actual linear slope found in the solar data and redo your calculation; then let me know if you find the same good pattern correspondence I found.
About your question on the Wang and Lean data, you should read my paper more carefully; at the very end you will see this sentence: “Acknowledgment. We thank J. Lean for having sent us the LATEST UPDATES of her TSI reconstructions.” Does this answer your question?
To Ferdinand: I do not claim that “all” or even “most” of the observed CO2 increase during the last century is induced by the Sun. But let us do a very naive calculation.
The Law Dome ice core shows a natural variability of 10 ppmv from the Middle Age maximum to the Little Ice Age of the 17th century. Let us suppose that this variation was induced by the decrease in solar activity during that period. Let us suppose that the increase in solar activity from 1880 to 1975 was at most the same size as the previous decrease, or half of it. This means that the solar increase from 1880 to 1975 might be responsible for 5-10 ppmv.
The Law Dome ice core shows a CO2 increase of 38.6 ppmv from 1880 to 1975.
This means the Sun might have contributed 13-26% of it. This might have a large effect on climate, and this would be only the solar contribution via CO2; then there are other contributions from other sources.
[Response: One last word. We have put more CO2 into the air over the industrial period than now remains there. Therefore the flux of CO2 must be into the other reservoirs (principally the biosphere and the deep ocean). There is no possibility of more CO2 coming back out because of highly speculative solar feedbacks. This is shown by actually measuring the increase of CO2 in the ocean (and it is increasing) and matching the isotopic data (both 14C and 13C) with the fossil fuel source. At most, changes in climate may be making it harder for the anthropogenic CO2 to get sequestered. But that is not good news! -gavin]
Comment by Nicola Scafetta, PhD — 19 Oct 2006 @ 12:35 PM
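The "very naive calculation" in the comment above reduces to a pair of ratios. Writing it out only makes the arithmetic explicit; all inputs are the commenter's own assumptions, which the response disputes:

```python
# The commenter's back-of-envelope attribution, made explicit.
# Inputs are his assumptions, not established values.
natural_variability = 10.0      # ppmv, assumed MWP -> LIA CO2 decrease (Law Dome)
assumed_solar_rise = (natural_variability / 2, natural_variability)  # 5-10 ppmv
observed_rise_1880_1975 = 38.6  # ppmv, Law Dome ice core

low = assumed_solar_rise[0] / observed_rise_1880_1975
high = assumed_solar_rise[1] / observed_rise_1880_1975
print(f"claimed solar share of CO2 rise: {low:.0%}-{high:.0%}")  # ~13%-26%
```

Note that the entire result is carried by the assumed 5-10 ppmv solar-driven rise; the observed 38.6 ppmv is the only measured quantity in the calculation.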
thanks, but you should understand that a reconstruction should be “recent” but not “too recent”, such as Hegerl’s. Scientific papers are not written a few hours before they are published, like newspaper articles!
Note that Hegerl did not use Moberg’s reconstruction in the figure either, just referenced it!
In any case I did not use it because I wanted to show the Medieval maximum, which Hegerl’s reconstruction does not show (it starts in 1280).
Moreover, from 1600 to 2000, Moberg is a compromise between Hegerl and Esper.
In any case, the Hegerl climate model simulation (blue line in fig 1) has the same problem as Foukal’s simulation: low secular variability of 0.2-0.3 K from 1000 to 1900, and other problems. It mirrors the Mann and Jones data. Again, if the secular temperature variability is larger, as Moberg claims, the model is in serious trouble and the climate sensitivity to solar variations must be significantly increased.
Comment by Nicola Scafetta, PhD — 19 Oct 2006 @ 1:07 PM
“We have put more CO2 into the air over the industrial period than now remains there. Therefore the flux of CO2 must be into the other reservoirs (principally the biosphere and the deep ocean). ”
yes, this is what I was trying to tell you. There are carbon-cycle mechanisms that are absorbing CO2 from the atmosphere into the ocean, and others that are producing it. The net flux is negative.
Your model does not contain any of these mechanisms. These mechanisms might be conditioned by the Sun.
What I am saying is that without the increase in solar activity during that period, the flux from the atmosphere into the ocean could have been more negative than it has been.
Thus, you and your group cannot interpret the “measured” increase of GHGs as an “anthropogenic” contribution. Above I calculated that 13-26% of it might be from the Sun (at least from 1880 to 1975).
Do you understand now what the problem is with Hansen’s interpretation of his model?
A significant part of the observed GHG increase might be due to the solar increase.
[Response: No. All of the extra CO2 in the atmosphere is anthropogenic. Possible feedbacks from the climate to CO2 absorption in the deep reservoirs may have made a few ppmv difference to current CO2 concentrations (so far). Your calculation of the potential for the solar input is woefully inadequate (you assume the result you want to prove, ignore timescales of response and exaggerate the importance of your preferred mechanism by ignoring all other potential effects). Basically you show that if no other effects occur then solar cannot explain the CO2 rise (which is obvious anyway), but if other effects are important (volcanos, GHGs) then your attribution of all the warming to solar cannot be valid and thus the CO2 ‘portion’ you come up with cannot possibly be that large. The only fair way to proceed is to take account of ALL effects (with their uncertainties), and when you do, you find that solar is a relatively small contributor and a completely trivial contributor to any climate-CO2 feedbacks. – gavin]
Comment by Nicola Scafetta, PhD — 19 Oct 2006 @ 1:59 PM
to prove that “All of the extra CO2 in the atmosphere is anthropogenic” you first need the carbon-cycle mechanisms, and you do not have them yet.
You are jumping to the conclusion you like without doing any calculation, and ignoring the phenomenological evidence.
I did a simple calculation on a time scale of several centuries, and only the Sun has such long-range variability. Volcanoes do not have such long-lasting effects, and in any case I have already shown you that the temperature reconstructions show almost no volcanic signals; have another look at Foukal’s figure, if you have forgotten it.
Do you think that the decrease of CO2 from the medieval maximum to the Little Ice Age was “anthropogenic”?
If it was not the solar variability, what was its cause? (Do not be vague in your reply.)
[Response: Nicola, think of a bathtub. I pour 2 buckets of water in, some of it spills over so that we end up with only 1 bucket’s worth of new water in the bath. All of the new water is from the 2 buckets I put in, right? Now let’s imagine that the weight of the water or some other agency distorts the bath so that it stretches a little and less spills out. Now I have 1.5 buckets of new water in the bath. All of the new water is still from the two buckets I put in. It cannot be otherwise. Plus, in the CO2 case we have the isotope results and ocean measurements and O2 declines etc. which demonstrate conclusively that the extra CO2 is anthropogenic. Next point, changes in volcanic activity can affect decadal and century-scale temperatures due to the random occurrence of eruptions of the right sort (though I don’t think you dispute that). Solar activity too can affect temperatures, as can internal variability or potentially, land use changes etc. All of these changes will lead to a change in climate that could affect CO2 sources/sinks. Right now, since we do not have accurate forcing functions for any of these factors going back to the medieval period, it is difficult to say with any precision which one (or combination) caused the climate change and what effects that had on CO2. It could be all solar (though volcanoes certainly play a role in the late Maunder Minimum period), but even if it were, you cannot use that to prove that sensitivity to solar forcing is underestimated, for the very good reason that you don’t know what the solar forcing was in the first place. -gavin]
Comment by Nicola Scafetta, PhD — 19 Oct 2006 @ 4:06 PM
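Gavin's bathtub analogy is a mass-balance argument, and it can be stated in three lines. The figures below are the round "buckets" from his analogy, not precise carbon inventories:

```python
# The bathtub argument as a mass balance (illustrative round numbers).
emitted = 2.0    # 'buckets' of CO2 added by humans over the industrial era
remaining = 1.0  # 'buckets' of increase actually observed in the atmosphere

# Conservation of mass: whatever did not stay in the air went somewhere.
net_natural_flux = remaining - emitted  # negative => nature is a net SINK
print(f"net natural flux: {net_natural_flux} buckets")
print("natural reservoirs absorbed CO2 on net; they did not add any")
```

Because the observed atmospheric increase is smaller than the human emissions, the natural reservoirs must on net have absorbed CO2, so they cannot be the source of the rise regardless of any solar modulation of individual sinks.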
I understand that CO2 can be changed by multiple causes.
However, in science one interprets the data with what is known, not with what is not known.
Right now, we know that there is a solar cycle, that this cycle parallels the CO2 cycle, and that the volcanic signals (as interpreted by the models) seem to be almost invisible in the temperature data (Foukal's figure).
This leaves the solar cycle as the only known cause of large-scale climate change, up to modern times, where there is a documented anthropogenic contribution.
Until you identify other credible causes, you might suggest that other causes may exist and look for them, but you cannot yet prove that they really exist. Therefore you cannot dismiss an interpretation of the data based on what is known by appealing to what you do not yet know.
By the way, have you ever considered the possibility that volcanic activity might somehow be influenced by solar variation too?
I am not talking about single eruptions, but clusters of eruptions, and I see a correspondence between the Maunder minimum, the Dalton minimum, and an increase in volcanic activity.
If so, volcanic activity too might be a solar feedback (at least in part); who can tell? :)
(Have you noticed that I talk about solar “variation” and not solar “forcing”?)
[Response: Solar ‘variation’ impacts on volcanoes? Please. Imagination is a good quality in a scientist but occasionally it needs to be exposed to the real world. -gavin]
Comment by Nicola Scafetta, PhD — 19 Oct 2006 @ 5:30 PM
no one is denying that the climate is changing for the simple reason that it has always done so and always will continue to do so until this planet’s lifespan expires
climate change is a natural phenomenon
no amount of scaremongering can ever make it a man-made phenomenon
the mere fact that people advocate shoveling money to other countries to solve it proves it is about wealth redistribution
we should focus on new technology and fighting pollution
re: 85. No, scientific inquiry and results published in peer-reviewed scientific literature have shown the recent global warming is primarily due to anthropogenic GHG emissions. Any other conclusion is simply denying science. No amount of denial can ever make the scientific evidence go away just because someone does not like the results. Read the information for yourself. Don’t let others tell you what to think or say.
Thanks for the information. This is the first paper where I have had to search for the origin of the data in the acknowledgements; that's why I missed it.
As I have said before, I completely agree that there is some evidence for a corresponding pattern on the timescale of several decades up to one century (Maunder and Dalton Minimum). However, there is absolutely no evidence for the link in the long-term trend over the three centuries (17th to 19th) since the increase is nearly linear. The only correspondence is that both trends are positive.
The correspondence of the medium-range pattern is even better with half of your sensitivity. The Dalton Minimum in the smoothed temperature curve (your Figure 1) has an amplitude of about 0.15 K, while your reconstruction has an amplitude of about 0.3 K. With half Zs this match would be much better. If you had plotted the smoothed temperature curve instead of the unsmoothed one in Figure 2 (which would have been much more adequate for comparison), this would be obvious.
Thus with half Zs the match would be better on the time scale, where a corresponding pattern is obvious, and would only be worse for the long-term trend, where the correspondence is only based on your assumption.
Besides: it is not the first time that you do not follow your own arguments. On the one hand you tell Georg that he should understand that a reconstruction should be "recent" but not "too recent"; on the other hand you use very recent data that is not even published yet (the Lean data of the acknowledgements).
Please be consistent.
In #82 Scafetta writes:
“you should understand that a reconstruction should be “recent” but not “too recent” such as the Hegerl’s one”
My point was mainly: if you prefer Moberg for your analysis, then please give rational arguments for it (and not "novelty"). Would you have taken Hegerl et al. if your submission had been a month later? No, you prefer Moberg since it gives you a large signal (that's not a crime as such, but it should be honestly reported). It's beyond me why none of your reviewers asked you to check the sensitivity to the selected reconstruction.
Also, some of the available reconstructions estimate the uncertainties in the reconstruction. Why did nobody ask you to do a proper error analysis in your paper? What I am missing is 1) sensitivity to different reconstructions, and 2) an uncertainty/error analysis based on both the uncertainties in the temperature reconstruction and in the TSI (already mentioned here by rasmus and urs). This is hardly any work, given the simplicity and partial linearity (Z function) of your empirical approach.
By the way, the reconstruction of Hegerl et al. is presented in more detail in another paper (check her website: http://www.nicholas.duke.edu/people/faculty/hegerl2.html#pubs), which in fact includes the MWP.
Last point, about hidden solar GHG feedbacks. On longer timescales one could compare 14C-reconstructed sunspot numbers (Solanki) with the Taylor Dome CO2 record. Whatever the smoothing in the ice exactly is, and whatever inertia you assume between this proxy of solar activity and hypothesized CO2 feedbacks, you will not find any similarity. Basically, CO2 shows a maximum of 270 ppm at about 10k, a slight decrease to 8k (260 ppm) and then a slow increase to 280 ppm preindustrial. The reconstructed sunspot number shows in particular a minimum at about 7k, a slow rise to a maximum at 4.5k and then a decrease again. Of course you might argue that there are also changes on Milankovitch timescales involved, but at least one can state that there is no hint on longer scales that your ideas about CO2/solar activity feedbacks are confirmed.
I think your comments here deserve a comment to GRL.
Urs: I have already explained to you why I do not believe that the long-term sensitivity can be 1/2 of what I found. It would simply not match the data from 1600 to 1900 and would be incompatible with other measures. You should not look at only one pattern, but at several of them. The data are updated continuously and are available on request; normally a new paper with updated data is published every few years, not every month.
Georg: I used Moberg because of the message it gives. That is, that there is a wide 1000-year cycle in the temperature, in contrast to the Mann and Jones record. This is the "novelty," not the fact that it is the most recent in time.
A new reconstruction does not automatically prove that the previous one is wrong, in any case. The message of my paper is that if the temperature has a large 1000-year cycle, the climate models are likely seriously underestimating the solar effect on climate. That is all.
If you change the temperature reconstruction you will get a greater or lower solar impact on climate according to the amplitude of the 1000-year temperature signal, of course. So, if you use Mann and Jones you will find a much lower value than mine, which is more or less compatible with what the current models have; if you use Hegerl you will find a value lower than mine; then there is mine; and finally, if you use Esper you will find a larger value than mine.
About the CO2 effect, I did not do any real calculations. I simply suggest that the model should contain the CO2 and CH4 cycle mechanisms, and many other mechanisms, for the data to be correctly interpreted. Your comments on this are based on the assumption that the CO2 and CH4 cycle mechanisms are linear, while they might be strongly nonlinear. Until these mechanisms are understood, the data cannot be interpreted correctly. That is all.
I strongly believe that in 25 years the climate models will be much much better than today, and I strongly believe that Rasmus and Gavin are 100% in agreement with me on this (is it correct?).
Comment by Nicola Scafetta, PhD — 20 Oct 2006 @ 11:55 AM
Re #71 (comment):
Mike, if there is a larger variation in the pre-industrial past, no matter what the cause, that implies that the climate is more variable than is implemented in most GCMs, regardless of whether the variation was caused by solar, volcanic or other natural (internal) changes.
In that way, I am in good company, see the discussion about a similar point of view by Esper, Wilson, Moberg, Luterbacher ea. at RC
Most climate models use a climate sensitivity around 3 K/2xCO2. The Echo-G model uses a 2.1 K/2xCO2 sensitivity and is able to reproduce the Moberg reconstruction + instrumental trend. It also reproduces the Eemian-last glacial transition where CO2 levels remained high, while temperatures were near minimum (and ice sheet formation was near maximum).
That means that models with high 2xCO2 sensitivity will show an overshoot for current temperatures, if the real variation in the past was larger. Or that models with low 2xCO2 sensitivity will show an underestimate for current temperatures, if the real variation in the past was lower.
About the reconstructions: there are large variations between them, partly because proxy data before 1600 are increasingly unreliable, according to the NAS panel. Further, even the calibration period and the scaling/regression methods play a huge role and can give a difference of 0.5 K in amplitude for the same reconstruction. See Esper, Frank, Wilson & Briffa.
Further, reconstructions based only on tree rings may overestimate the influence of volcanic eruptions, as not only the temperature is reduced, but there is also a change in direct and diffuse incoming sunlight. If this is corrected, then there is no influence longer than a few years (up to ten years after Tambora) for the strongest eruptions, according to Alan Robock (warning: 36 Mb .ppt file!):
When proxy records of Northern Hemisphere climate change are corrected for the diffuse effect, there is no impact on climate change for time scales longer than 20 years. However, it appears that there was a hemispheric cooling of about 0.6°C for a decade following the unknown volcanic eruption of 1809 and Tambora in 1815, and a cooling of 0.3°C for several years following the Krakatau eruption of 1883.
Thus if volcanic eruptions had only a temporary and limited influence on climate, most of the temperature variations are caused by solar changes (and maybe some internal variations).
The reconstruction by Esper is only based on tree rings. If not corrected for the above influence, it will show too much cooling after major eruptions, while the reconstruction of Moberg has a reduced impact of tree rings vs. other proxies, thus is less influenced by them.
To take up the bathtub analogy, Gavin’s version misses Nicola’s point:
The tub has to have a tap that is filling it and a drain that is draining it, roughly balanced. Now we dump in the buckets. Nicola's argument is that at the same time the tap flow could be increasing, so that part of the 1 1/2 extra bucketfuls in the tub is indeed "natural" tap water. This would be the solar feedback vs. the anthropogenic CO2.
However this point is defeated by the isotope analysis of the atmospheric CO2. I don’t know the details, like the uncertainties, so there may indeed be room for a small increase in natural CO2, but it seems that it can’t be a significant amount (a point I believe Gavin already acknowledged up thread).
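For what it's worth, the tap-and-drain version of the analogy reduces to one toy relation: the level relaxes toward (total inflow)/(drain coefficient). A minimal sketch (illustrative numbers only, nothing here is a carbon-cycle model; `simulate` and all its parameters are invented for this comment):

```python
# Toy bathtub: tap inflow, drain proportional to level, plus extra
# "bucket" input each step. Illustrative only, not a carbon-cycle model.

def simulate(tap, drain_coeff, buckets_per_step, steps=200, level=None):
    """Evolve the water level toward its inflow/outflow balance."""
    if level is None:
        level = tap / drain_coeff        # start at natural equilibrium
    for _ in range(steps):
        level += tap + buckets_per_step - drain_coeff * level
    return level

natural    = simulate(tap=1.0, drain_coeff=0.1, buckets_per_step=0.0)
anthro     = simulate(tap=1.0, drain_coeff=0.1, buckets_per_step=0.2)
warmer_tap = simulate(tap=1.1, drain_coeff=0.1, buckets_per_step=0.2)

print(natural)       # ~10.0: equilibrium with no buckets
print(anthro)        # ~12.0: the buckets raise the level
print(warmer_tap)    # ~13.0: a stronger tap adds a further rise
```

Even in the stronger-tap case, the bucket contribution is still fully there, which is Gavin's point; the isotope and O2 data then bound how much of the observed rise the "tap" could explain.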
that is another reason I like Moberg’s reconstruction more than the others. He uses a wavelet filter to separate the tree ring proxies from the other proxies. There are a lot of mathematical/physical reasons to prove that this is the right approach.
Moreover, I believe that Gavin is missing an important point I made above. That is, his GISS model does not reproduce the temperature increase from 1880 to 1960; it is reasonable only from 1960 to 2003.
I did the calculations exactly, using the GISS simulation and the HadCRUT3 global temperature data; this is the result.
If I look at the period [1880:1910] I get:
So, the temperature DECREASES and the GISS model INCREASES!!!!
If I look at the period [1900:1950] I get:
This means that in [1900:1950] the GISS rate increase is from 2 to 4 times LOWER than the observed temperature rate increase!!!!!!!!!
Moreover if I look at the period [1940:1960] I get:
Finally, the temperature DECREASES and the GISS model INCREASES!!!!
Conclusion: from 1880 to 1960 the data have a pattern of the type
“Down-Up-Down” while GISS has a pattern “Up-Up-Up”. So, GISS simply crosses the data.
Question for all of you, which forcing has a pattern “Down-Up-Down” in 1880-1960?
Guess the answer:
[Response: [Can we reduce the number of exclamation points please? It’s the equivalent of you jumping on the table at a meeting….]. None of your analyses take account of the internal variability which is an important factor in the earlier years and would show that most of the differences in trends over short periods are not significant. Of course, if you have a scheme to detect the difference between intrinsic decadal variability and forced variability, we’d love to see it. Secondly, your comment about Moberg being ‘proven’ right because they use a wavelet filter demonstrates conclusively that you haven’t really thought about this. The big problem with Moberg (or with any scheme that separates out the low frequency component) is that it is very very difficult to calibrate that component against observed instrumental data while still holding enough data back to allow for validation of that low frequency component. Thus the component you are most interested in, is the one with the loosest calibration – that is just a fact of life. -gavin]
[Response: Following up Gavin’s comment, it has indeed already been shown–based on experiments with synthetic proxy data derived from a long climate model simulation (see Figure 5 herein)–that the calibration method used by Moberg et al is prone to artificially inflating low-frequency variability. -mike]
Comment by Nicola Scafetta, PhD — 20 Oct 2006 @ 6:32 PM
The GISS model has simulated 123 years (from 1880 to 2003), and for the first 60 there is an evident problem that can be solved if the solar contribution is stressed relative to the anthropogenic one.
GCMs are supposed to reproduce the "internal variability"!
Or not? What kind of "circulation" do they contain?
About Moberg, I did not state that his approach is the "easiest" one!
[Response: ??? Internal variability in each GCM simulation is uncorrelated to internal variability in the real world and so for short periods (a couple of decades), that variability makes any comparison of short term trends problematic. The forcing fields for the model are the best available to date. Should anyone come up with forcings that are shown to be more accurate, we will use them – however, the tendency with solar studies has been to reduce the magnitude, not increase it (and if you don’t like that, take it up with them, not us). -gavin]
Comment by Nicola Scafetta, PhD — 20 Oct 2006 @ 7:11 PM
An interesting discussion; there seems to be an over-reliance on the accuracy of the interpretation of the volcanic signature vis-à-vis the solar signature.
Looking at the Stenchikov et al. review of the IPCC AR4 climate models, there seems to be some question about the accuracy of the models you use, Gavin.
Whilst the differentials and ranges in hindcast can be expected, the observations vs. GISS were overestimated 115 times. No wonder you cannot see the inverse solar signal.
As noted above, most earlier hindcasts of 20th century climate as well as current IPCC AR4 runs [Miller et al., 2006; Knutson et al., 2006] do not reproduce the observed trends over recent decades in the AO component of the circulation, and thus do not capture the intensification of warming trends that has been observed over Northern Europe and Asia. There are various possible explanations for this discrepancy, but it is interesting to speculate that it could indicate that the models employed may have a basic inadequacy that does not allow a sufficiently strong AO response to large-scale forcing, and that this inadequacy could also be reflected in the simulated response to volcanic aerosol loading.
I would be really interested in a comment about the double amplitude of the Dalton Minimum in your reconstruction compared to the Moberg temperature. Comparing the amplitudes of two signals is much more meaningful than comparing trends.
Why did you not show the smoothed curve of the Moberg temperature in Figure 2 (to compare apples with apples), if not to make comparison more difficult and hide otherwise obvious differences?
Re #94 (comment):
Mike, the (possible) overestimate of Moberg for the low-frequency signal is based on a model, which uses a small solar variability of 0.15 W/m2 over the 1650-2000 period. Von Storch ea. used a ~1 W/m2 solar variance for their model runs. As the Lean ea. 2000/2005 TSI estimates at the TOA are 2.5/1.25 W/m2 or at the surface 0.43/0.21 W/m2 resp., it seems to me that your model decreases solar variability, while VS inflates it.
Thus are we not looking at a chicken-and-egg problem here? If solar is enhanced (by some factor like stratospheric and/or cloud responses), then the model fits the Moberg reconstruction, while with lower solar variability, the low-variability reconstructions are within the model margins…
Re #95 (comment):
Gavin, solar reconstructions have a lot of problems, just as temperature reconstructions do. But even if the secular trend of solar is reduced in more recent estimates, that only means that fitting the pre-industrial temperature reconstructions requires a larger factor for solar…
I’m just a humble observer but it seems to me that the main points Nicola and Ferdinand are making are quite reasonable.
[Response:??? I must say I’m a bit surprised, as I think it’s fairly clear that the methods employed in the paper – the main criticism of this post – are unsuitable. Not only are the TSI and temperature reconstructions very uncertain (hence, probably quite noisy and containing large errors, as the choice of different curves give such large variations) , but also the fact that the temperature is affected by a multitude of factors. In addition, there are points which have been stretched beyond justification… No? – rasmus]
Let's put it this way: it is assumed that the sensitivity to 2xCO2 is around +3 C. And we know that, since the late 19th century, CO2 has risen by a factor of 1.35 and global temperature by approximately +0.7 C. Well, if we come to the conclusion that half of that T rise was solar-induced, or that the negative effect on T of sulphate aerosols over this period was much lower than previously estimated, we'll have to revise our 2xCO2 sensitivity estimate downwards.
(As a matter of fact, we might well have already experienced the equivalent of a 1.5xCO2 increase when we take into account the rest of AGHGs for a total T increase of only +0.7C, but let’s not complicate matters).
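The arithmetic behind this can be sketched with the standard logarithmic forcing rule, ΔT = S · ln(C/C0)/ln(2). This is a back-of-envelope only: it treats the observed warming as an equilibrium response and ignores ocean thermal inertia and aerosols, which is precisely what the responses in this thread dispute; `implied_sensitivity` is an invented helper name:

```python
import math

# Back-of-envelope: 2xCO2 sensitivity implied by attributing some part
# of the observed warming to a CO2 rise of a given factor. Treats the
# warming as an equilibrium response (no ocean lag, no aerosols).

def implied_sensitivity(dT_from_co2, co2_ratio):
    """Equilibrium K per CO2 doubling implied by dT_from_co2 kelvin
    of warming attributed to a CO2 increase of co2_ratio."""
    return dT_from_co2 * math.log(2) / math.log(co2_ratio)

print(implied_sensitivity(0.7, 1.35))   # ~1.6 K/2xCO2 if all 0.7 K is CO2
print(implied_sensitivity(0.35, 1.35))  # ~0.8 K/2xCO2 if half is solar
```

The gap between even the first number and the assumed ~3 K/2xCO2 is exactly why the no-lag, no-aerosol assumption matters: the unrealized ocean warming and the aerosol offset are what the full attribution studies account for.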
As Mike told me in his answer, it's sometimes difficult for a non-expert to form his own opinion on a climatic issue when there is a conflict. So, I have a very basic and non-expert question: is there any theoretical (non-empirical, for the moment) reason to expect that the climate sensitivity to 1 W/m2 of solar forcing is the same as the climate sensitivity to 1 W/m2 of GHG forcing? Or, to ask the same question another way: should we expect identical (in amplitude) feedbacks for the same (radiative) variation in TSI and well-mixed GHGs? Thanks in advance.
[Response: You don’t expect it to be completely the same since there are differences: GHGs cause stratospheric cooling, solar irradiance increases cause warming there – GHGs have a very even effect across latitudes, solar is stronger in the tropics. GHGs are stronger at night, solar obviously isn’t. However, in all modelling experiments so far done (and with many different models), the effects of equivalent GHG and solar forcings (defined as the forcing at the tropopause) are very similar (within about 10%). – gavin]
Comment by muller.charles — 21 Oct 2006 @ 11:18 AM
A question, only slightly off topic:
I’ve been looking for the solar cycle response in the global temperature time series. Haven’t found it yet.
I know of two papers giving an estimate of response to the solar cycle: 1. Scafetta & West 2005, GRL 32, L18713, and 2. Douglass & Clader 2002, GRL 29, 1786. Both give a sensitivity to solar-cycle forcing of about 0.1 K/(W/m^2) (about twice what would be expected from radiative calculations alone). The Scafetta & West paper is just plain rubbish; they take the total signal power in an entire *octave* of the spectrum and attribute it entirely to 11-yr solar cycle variation, with no account for other forcings or for noise.
The Douglass & Clader article seems, on the face of it, to be basically sound, but I have three misgivings: 1. the total time span of the temperature time series is a scant 23 yr or so, very short for study of an 11-yr periodicity; 2. the error bars they give for their coefficients seem unrealistically small; 3. their analysis is based on the MSU temperature time series, which I know has some serious problems.
So my question is: are there any other papers I should look at for a realistic estimate of the sensitivity of global temperature to the solar cycle? Urs, you’ve mentioned that the solar cycle is detectable if one detrends, and removes the effects of volcanos and ENSO. Is your statement based on Douglass & Clader, or some other paper, or your own investigation? Is the temperature data on which this is based MSU, HADCRU, GISS, or other?
Re #92: . . . reconstructions based only on tree rings may overestimate the influence of volcanic eruptions, as not only the temperature is reduced, but there is also a change in direct and diffuse incoming sunlight . . .
I'm not sure this is well understood. The Gu et al. 2003 Science paper found increased photosynthetic activity after Pinatubo as a response to increases in diffuse sunlight in a deciduous forest, typically not the type of tree used in temperature reconstructions. More importantly, a post-eruption increase in photosynthesis would lead to positive growth anomalies and, all other things being equal, an underestimation of the influence of eruptions on temperature.
[Response: Yes, as usual Ferdinand doesn’t quite understand what he’s talking about. In this case, he has it 180 degrees wrong, as you suggest. See the paper by Robock (2004) on this. – mike]
Gavin, you forget the HadCM3 model tests with 10 x solar and 5 x volcanic, which found that the model probably underestimates solar variations by a factor of 2…
Btw, the largest changes in the ocean heat content are found in the (sub)tropics, where insolation differences are at their maximum. See Levitus, Fig. 2.
[Response: Completely different thing. The question was whether, for the same forcing, there was much evidence for different responses in temperature. As far as I can tell, the answer is no (i.e. the efficacy of solar is near 1). The Hadley Centre experiments were related to pattern matching with observations, where they deduce a stronger pattern for solar in the obs than they produce in the model given a prescribed forcing. That says nothing about efficacy, since the real world forcing might have been bigger. However, there are other reasons to hold back from accepting that result wholeheartedly. First off, the patterns of solar response seem to be affected by the mechanisms incorporated. At least when you allow the strat ozone feedbacks to occur, you get a different spatial signature of warming due to interactions with the annular modes. Further new mechanisms might change that again (conceivably). And with different spatial patterns, the fit to the spatial pattern in the obs. might change. I don’t know how important that might be, but it’s something that needs consideration. -gavin]
I have not the slightest problem admitting that I am wrong (if the arguments are good…). Indeed, reduced tree growth from less direct sunlight can be (over)compensated by enhanced diffuse light.
In this case I did look at the slide (of Robock, in #92), in which he didn't find any influence on climate over 20-year periods, after correction for the diffuse-light effect on tree growth, except after quite strong eruptions (of which we had only 9 in the past 600 years, plus Tambora, which was an order of magnitude larger). This is confirmed in the Robock article (in comment #102): if you have a look at Fig. 4, dendro reconstructions in general show less pronounced variations (even with opposite sign in some non-volcanic periods) on short timescales. After huge eruptions, the cooling is more pronounced for non-dendro than dendro (there is even some warming in the latter).
If we may believe what Robock says in #92, then, after correction for the diffuse light in dendro, a back-of-the-envelope calculation of volcanic cooling over the past 600 years (9 eruptions of Pinatubo strength x 0.3 K x 3 yrs + unknown/Tambora x 0.6 K x 10 yrs + 24 smaller ones x 0.3 K x 1 yr) is some 0.036 K, based on dendro and non-dendro proxy reconstructions…
Some longer-term effects may remain after several consecutive eruptions, but even then, the 0.1 K cooling by volcanic eruptions over the past 600 years (0.3 K modeled over the past 100 years, see Fig. 1 on this page) seems rather high…
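Spelling out that back-of-the-envelope average, using the per-eruption coolings quoted from Robock earlier in the thread (0.6 K for about a decade after the 1809 eruption/Tambora, 0.3 K for the Krakatau/Pinatubo class); the grouping into three classes is the commenter's, and the code is just the arithmetic:

```python
# Average volcanic cooling over 600 years: for each class of eruption,
# (number of events) x (cooling depth in K) x (duration in years),
# summed and spread over the whole 600-year window.
episodes = [
    (9,  0.3, 3),   # Pinatubo-strength eruptions: ~0.3 K for ~3 yr each
    (1,  0.6, 10),  # unknown 1809 eruption + Tambora: ~0.6 K for ~10 yr
    (24, 0.3, 1),   # smaller eruptions: ~0.3 K for ~1 yr each
]
mean_cooling = sum(n * dT * yrs for n, dT, yrs in episodes) / 600.0
print(mean_cooling)  # ~0.036 K averaged over the 600 years
```

Note that this is a time-average of pulse-like coolings, which is why it comes out so much smaller than the ~0.1 K figure quoted for sustained volcanic cooling.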
Thank you Gavin, for answer #100. So:
1) There are no physical reasons (on the contrary) to expect the same kind of feedbacks, and climate sensitivity, for solar and GHG forcings.
2) But models give a similar sensitivity.
So my next (basic) question is: which models, with which parametrizations? (E.g.: empirical models from temperature reconstructions? Energy balance models? AOGCMs for 21st-century prediction and 20th-century retro-validation?) In each case, and in relation to 1), do we deduce from independent data that solar and GHG sensitivities are similar, or do we parametrize a similar sensitivity a priori? Or, in each case, what is the methodology to exclude an "ad hoc" attribution to each forcing/feedback, not so different from Nicola's method, which you seem to strongly criticize here? Thanks in advance.
Grant, I did some calculations myself. I looked at the period after 1970. I used the detrended CRU global temperature dataset and the MEI index for ENSO, with a one-year running mean for both. The ENSO signal is clearly visible in the temperature. I then compensated for the ENSO signal in the temperature data (empirically: CRU temp minus the MEI index with a scaling factor of 0.11 and a 5-month time lag). If you look at the residual you will see the volcano signals (El Chichon and Pinatubo) together with an oscillation which corresponds more or less to the 11-yr solar cycle (hardly any time lag), with 0.1 K per W/m2 of TSI change (about the same as Douglass and Clader). It is difficult to compensate mathematically for the volcano signal, since the global forcing signal is not so easy to establish (as the discussions above show…). But the picture seems rather evident. Of course, it's still less than 3 TSI cycles (I looked at the PMOD data). I can send you the graphs if you like.
However, I cannot find any 22-year signal, neither in TSI nor in temperature. At least it will be much weaker than the 11-year signal.
It is obvious that there are strong ENSO and volcanic signals in the frequency bands Scafetta and West 2005 are using. That is the reason why they find a 9-year instead of an 11-year signal in the temperature. Thus, as you stated, their results are meaningless. For the 11-year cycle they probably get the right result by chance, but for the 22-year cycle their signal is much too strong.
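For anyone who wants to try this, the subtraction described above is easy to sketch. This is not Urs's actual code: the arrays below are synthetic stand-ins for the CRU and MEI series, `remove_enso` is an invented helper, and only the 0.11 scaling and 5-month lag are taken from the comment:

```python
import numpy as np

# Subtract a lagged, scaled ENSO index from a monthly temperature series.
def remove_enso(temp, mei, scale=0.11, lag_months=5):
    lagged = np.roll(mei, lag_months)
    lagged[:lag_months] = mei[0]        # pad the start after shifting
    return temp - scale * lagged

# Synthetic demo: a weak solar-cycle-like oscillation plus an ENSO term.
t = np.arange(360) / 12.0               # 30 years, monthly, in years
mei = np.sin(2 * np.pi * t / 3.7)       # stand-in ENSO index
solar = 0.05 * np.sin(2 * np.pi * t / 11.0)
temp = solar + 0.11 * np.roll(mei, 5)

residual = remove_enso(temp, mei)
# The residual tracks the solar-like oscillation far better than the
# raw series does, which is the point of the compensation step.
print(np.std(residual - solar) < np.std(temp - solar))
```

With real data one would also have to detrend first and then deal with the volcanic dips by hand, as the comment notes.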
Re #100: Gavin, when you say “the effects of equivalent GHG and solar forcings are…within about 10%”, do you mean global average or within any region (eg. the Arctic)?
The IPCC 2001 report states “Several recent reconstructions estimate that variations in solar irradiance give rise to a forcing at the Earth’s surface of about 0.6 to 0.7 Wm-2 since the Maunder Minimum and about half this over the 20th century… All reconstructions indicate that the direct effect of variations in solar forcing over the 20th century was about 20 to 25% of the change in forcing due to increases in the well-mixed greenhouse gases.”
Depending on where you measure from, the warming in the first half of the 20th century is about one third to one half of the total, which suggests that 40 to 50% of the warming in the first half of the century was caused by solar (a lot less than Scafetta is claiming).
But according to the data in this 2000 Delworth and Knutson paper in Science, the warming in the first part of the 20th century was strongly concentrated in the Arctic region. That does not fit well with the expectation that solar warming would be relatively stronger in the tropics. The paper seems to discount solar warming, and attributes the warming to a combination of greenhouse gases and internal variability. I notice they use a climate sensitivity of 3.4 K, much higher than the 2.7 K James Hansen is currently using, so they may be overstating the GHG contribution.
Is the climate science community backing away from earlier views of solar forcing in the early 20th century? If not, how do you account for the fact that most of the warming occurred in the Arctic, and what value would you assign for the solar contribution to 20th-century warming? I am sure that it is rather less than 50%.
Thanks! It was easy to find the MEI ENSO data on the web.
So, I’ve just done a *preliminary* (only!) analysis. I fit the CRU temperature time series to volcanic and ENSO signals, a la Douglass & Clader. For ENSO I used the MEI index, for volcanic forcing I used data from Ammann et al. 2003, GRL 30, 1657. No smoothing was applied to any of the data. The time period is from 1950 (start of MEI index) to 2000 (end of volcanic data from Ammann et al.). I determined coefficients by least-squares fit (but with no time lag!). The results are a bit puzzling.
The residuals don’t seem to show the solar cycle. There is a strong response at P=9.3 yr, but this doesn’t really match. During the period 1950-2000, the average length of the solar cycle is 10.7 yr (using either the sunspot numbers, or the TSI reconstruction of Lean 2000, GRL 27, 2425). During the 50-yr interval, this leads to a phase differential of 0.7 cycles.
Some of the difference may be due to the different time interval, starting at 1950 rather than 1970. But most of it is probably due to my not applying a time lag to the ENSO and volcanic signals. I'll do that next (by nonlinear least squares; it'll take me a few days). For one thing, the need for a lag is evident from visual inspection of the data-vs-fit graph. For another, the coefficient I get for ENSO is only 0.07, well below your value of 0.11 (almost certainly due to mismatch from the lack of time lag). So, I'll redo the fit with time lags included, and report the results some time this week.
Well, I couldn’t help myself; I stayed up late to do some more analysis. I applied nonlinear least-squares to fit the MEI index for ENSO, and volcanic signal, to the CRU temperature time series from 1950 to 2000. The best fit is for a lag of 9 mon. for the volcanic signal, 5 mon. for the ENSO signal.
The residuals don’t show the solar cycle. There is a periodicity at 9.54 yr, of amplitude 0.07 K — but a 9.54-yr period doesn’t match the solar cycle period 10.7 yr (over the same time interval); the difference in periods leads to a phase differential over the 50-yr span of 0.57 cycles.
Even if I assume the 9.54-yr period *is* a solar cycle response, its amplitude is smaller than expected from the computations of Scafetta & West or Douglass & Clader. They quote a response (to the 11-year cycle) of 0.11 K/(W/m^2), I get 0.08 K/(W/m^2). And this is a more generous response level than is realistic; the 9.54-yr period really doesn’t fit the solar cycle.
So, I’m beginning to think that the response of global average surface temperature to solar variations for the 11-yr solar cycle is *not* amplified (by feedbacks). I suspect that it’s very near the 0.05 K/(W/m^2) from radiative calculations alone. It could even be less, due to the inertia of the climate system. But, the noise level of the data and the brevity of the time span prevent a more accurate determination.
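The lag fit described above can be sketched as a grid search over integer lags with an ordinary least-squares regression at each lag pair. This is a sketch only, not the analysis actually performed: the series below are synthetic stand-ins for the CRU/MEI/Ammann data, and `fit_with_lags` is an invented helper:

```python
import numpy as np

# For each (ENSO lag, volcanic lag) pair, regress temperature on the
# lagged predictors plus a constant, and keep the pair with minimal RSS.
def fit_with_lags(temp, enso, volc, max_lag=12):
    best = None
    n = len(temp)
    for le in range(max_lag + 1):
        for lv in range(max_lag + 1):
            X = np.column_stack([np.roll(enso, le), np.roll(volc, lv),
                                 np.ones(n)])
            # Drop the first max_lag points to avoid wrap-around artifacts.
            coef, *_ = np.linalg.lstsq(X[max_lag:], temp[max_lag:],
                                       rcond=None)
            resid = temp[max_lag:] - X[max_lag:] @ coef
            rss = float(resid @ resid)
            if best is None or rss < best[0]:
                best = (rss, le, lv, coef)
    return best

# Synthetic demo: known lags of 5 (ENSO) and 9 (volcanic) months.
rng = np.random.default_rng(0)
t = np.arange(600)                        # 50 years, monthly
enso = np.sin(2 * np.pi * t / 45)         # stand-in ENSO index
volc = np.exp(-((t - 300) % 600) / 30.0)  # one decaying aerosol pulse
temp = (0.11 * np.roll(enso, 5) - 0.3 * np.roll(volc, 9)
        + 0.01 * rng.standard_normal(600))

rss, lag_e, lag_v, coef = fit_with_lags(temp, enso, volc)
print(lag_e, lag_v)   # should recover the generating lags (5 and 9)
```

A full nonlinear least-squares fit would treat the lags as continuous parameters, but the integer grid search is enough to show the structure of the problem and runs in a fraction of a second at this size.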
Blair, the influence of direct insolation changes over the solar cycle is mainly in the tropics (~0.2 K amplitude in ocean surface temperature, on average 0.1 K globally, see White ea.), while the indirect effect is via relatively larger changes in the stratosphere. This is caused by changes of up to 10% in UV radiation, which influences the ozone concentration and stratospheric temperatures (up to 1 K), mainly in the tropics. This causes changes in the jetstream position, rain patterns and the Arctic Oscillation (AO).
The difference in climate reaction between GHGs and solar variation in the troposphere and even more in the stratosphere may be that the GHG warming(troposphere)/cooling(stratosphere) is more evenly distributed over the latitudes, while the solar heating/cooling is mainly in the tropics, which changes temperature(/pressure) differences between the tropics and higher latitudes, thus increasing/decreasing wind speeds and patterns…
Further, current circumpolar temperatures are not higher than in the 1930-1940s. A few years ago I checked all stations above 67N: some 30% were higher than in the 1940s (mainly West Canada/Alaska/East Siberia), while 70% did not reach or only just reached the 1940s temperatures, though this may have changed in the past few years. The same goes for Greenland: updated (2005) summer (melting) temperatures still seem below the 1930-1945 period, while yearly temperatures are about equal.
There are two possible reasons for the difference: I’ve only looked at the data since 1975. It is possible, on the one hand, that the period before 1975 shows more of a 9.5-yr period in the temperature (the raw temperature data does not, apart from the small oscillation in the early 1950s). The second possibility is that you did not eliminate all of the volcano signal, since the forcing is regionally different and the global response is not easy to determine. The volcanoes have a strong 9-yr signal which might influence the period length. I would use the data until 2005, since there is no volcanic signal after 2000; this might also alter the result somewhat.
What is the period length before 1980? Or maybe it’s possible to do the analysis while somehow excluding the years 1983-1985 and 1992-1995, which have the volcano disturbance, because I fear that it’s very difficult to correct realistically for the volcano signal.
Proposition: use the data until 2005, then look at the temperature curve to see whether the mismatch might be due only to the small peak in 1952 (or test the sensitivity to that peak by starting in 1955), and whether there might still be a volcano signal (e.g. negative peaks around 1983/84 and 1992/94).
Anyway, it’s really not easy to find a solar signal, but I try hard… (besides: finding the 22-yr cycle seems hopeless).
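The masking test proposed above can be sketched in code. The series below is synthetic, standing in for the real temperature data, and the amplitudes, noise level, and period grid are all illustrative assumptions; the point is only that a least-squares sinusoid fit still works on a series with gaps:

```python
# Sketch: estimate the dominant period in an annual temperature series after
# masking the volcano-affected years (1983-85, 1992-95). Synthetic data stand
# in for the real series; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 2006)
temps = 0.1 * np.sin(2 * np.pi * years / 9.5) + 0.01 * rng.standard_normal(years.size)

# Drop the volcano-disturbed years instead of trying to correct for them:
mask = ~(((years >= 1983) & (years <= 1985)) | ((years >= 1992) & (years <= 1995)))
y, t = temps[mask], years[mask].astype(float)

# Least-squares fit of a sinusoid over a grid of trial periods (handles gaps):
periods = np.arange(6.0, 14.01, 0.05)
power = []
for P in periods:
    X = np.column_stack([np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    power.append(1.0 - resid.var() / y.var())  # fraction of variance explained
best = periods[int(np.argmax(power))]
print(round(best, 2))  # should recover a period near 9.5 for these synthetic data
```

With real data the volcano and ENSO residue would of course broaden the peak; the masking only removes the largest disturbances.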
Sorry, I missed your comment in #91.
However, your argumentation goes in illogical circles.
I have argued that the match of the long-term trend is only a result of your assumption. You then answered that there is additionally a match of the patterns (Maunder and Dalton Minimum).
However, I have found that these patterns match better with half of the sensitivity (I cannot see any other distinct patterns besides the Maunder and Dalton minima and the long-term trend). So you are again left with only the match of the long-term trend… which is only based on your assumption…
You are arguing about a short pattern during the Dalton Minimum where the solar effect seems larger than the observed temperature pattern; that is consistent with a 30/40-year cycle scale.
In the paper, if you read it carefully, I clearly say (many times) that the climate sensitivity to solar changes is “frequency” dependent: the higher the frequency, the smaller the sensitivity.
In the paper I did the calculations with the secular sensitivity in mind, not a 30/40-year cycle sensitivity, which would be significantly smaller than the secular one.
Your finding is perfectly consistent with the hypothesis of my paper (if you read it carefully)!
On a secular scale your estimate is too small, but on a 30/40 year cycle scale it might be realistic.
In other words, you cannot criticize my finding by stating something like: “your estimate for the climate sensitivity to secular solar changes is too large because I find that on a 30-year scale the sensitivity is lower”!
By stating this you simply prove that you have not understood the principal hypothesis of my paper and of the previous ones: that the sensitivity is “frequency” dependent because of the thermal inertia of the ocean, a fact also confirmed by theoretical studies (Wigley, Foukal, etc.).
Comment by Nicola Scafetta, PhD — 23 Oct 2006 @ 10:38 AM
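The frequency-dependence invoked in the comment above can be illustrated with a one-box energy-balance sketch: with an ocean response time tau, a sinusoidal forcing of angular frequency omega is attenuated by a factor 1/sqrt(1 + (omega*tau)^2), so fast cycles (11-yr) see a much smaller effective sensitivity than slow (secular) changes. The equilibrium sensitivity and tau below are illustrative assumptions, not values from the paper:

```python
import math

def response_amplitude(sensitivity_eq, tau_years, period_years):
    """Amplitude of the temperature response of a one-box ocean model
    to a sinusoidal forcing of the given period.

    sensitivity_eq : equilibrium sensitivity, K/(W/m^2)  (assumed value)
    tau_years      : ocean thermal response time, years  (assumed value)
    """
    omega = 2 * math.pi / period_years
    return sensitivity_eq / math.sqrt(1 + (omega * tau_years) ** 2)

# Illustrative numbers: attenuation grows as the forcing period shortens.
S_eq, tau = 0.8, 10.0
for period in (11, 40, 200):
    print(period, round(response_amplitude(S_eq, tau, period), 3))
```

The qualitative behavior (11-yr response much smaller than the secular one) is independent of the particular numbers chosen.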
Just a quick question. Last year Dr. Scafetta presented his 9/2005 paper at our university. At the time I was curious that the data did not seem to match logic. He was indicating that the solar-to-terrestrial energy coupling was “capacitive” instead of “inductive”, meaning that it appeared as rapidly rising energy content until it reached a plateau peak and discharged. Whereas the satellite TOA measurements indicated that, over the solar cycle, the input from the Sun appeared clearly to demonstrate an inductive input of slowly rising energy until the load was saturated and the energy rose rapidly to finally discharge.
In looking at the solar cycle, neither the 11- nor the 22-year reversals appeared to play into the 20th-century temperature pattern. However, if I look at two 22-year periods, things begin to become interesting. If you consider a 22-year cycle and an inductive load, the slow build-up to saturation, discharge, and rebuilding towards saturation becomes apparent. Is this merely a coincidence of multiple forcings and feedbacks, or does this merit solar influence?
I intend to try all those suggestions (thanks!). But, I’m giving a paper at a conference this weekend (on an entirely different topic) so it’ll be a week or so before I can get around to it. But rest assured, I’m gonna get around to it.
Re: response to #101
Thanks, Rasmus, for the lead. At first glance it looks like a very thorough and rigorous analysis.
So you agree that only the secular trend matches. My original argument was that this is not a finding but your assumption. It was you who then argued that other patterns match. But they do not. So your finding consists only of your assumption (that the secular trend has to fit). That’s all. And it does not help to read your paper ten times more.
The problem is that it does not make sense to compare global temperature to the solar cycle without considering the other forcing factors. Volcanic eruptions and ENSO have a strong frequency component with a similar response amplitude in the same frequency bands, especially the 7-14-year band. In Figure 4 of Scafetta and West 2005 you can clearly see that the temperature signal cannot be a sole response of the solar cycle, since the frequencies of the signals are significantly different. Especially for the 22-year cycle, you see 16 cycles for the temperature while there are fewer than 14 solar cycles over the same period. So it’s impossible that one signal is the response of the other. Of course it is possible that there is a solar component in the temperature signal, but there are certainly other signals of comparable strength. Thus the temperature signal certainly shows features of other factors. Neither inductive nor capacitive imaginations can wipe that away.
If you look, e.g., at the temperature of the last few decades, the clearest signals are the responses to the El Chichon and Pinatubo eruptions in 1983/4 and 1992/94, respectively (cooling), and to the El Nino in 1998 (warming). These events are crucial for the decadal frequency of the temperature at that time and produce something like a 9-year oscillation. To claim that this signal might be of solar origin because the frequency is in a similar range is nonsense; it is mere coincidence. If you really want to see a solar signal you have to remove the other influences, which is difficult for the volcano signal (see my discussion with Grant). I think there is some signal of the solar cycle in global temperature if you account for the other factors, but it looks different from what was presented by Scafetta and West.
Besides, I have not been able to find any 22-year signal either in temperature or in TSI. There is a 22-year signal in cosmic rays, which is a result of the cycle of the magnetic field (you can see it, e.g., in the 11-year running-mean curve of the Climax CRF). If there is a response of the temperature to the 11-year TSI cycle, but not to the 22-year cosmic-ray cycle (which is not seen in TSI), that would suggest that the temperature is influenced by TSI rather than by cosmic rays.
I never pretended that the match was “perfect” in the minimal details!
If you extended the same methodology to catch shorter patterns on a 30/50-year time scale, the match would of course be better. But that was not the purpose of that work.
However, even in this simple form the match is much better than what you would get with a GCM like GISS or an EBM like the one adopted by Foukal, whose simulations are at odds with the secular data, even from 1880 to 1970 when there are instrumental data, as I showed above in #94.
Comment by Nicola Scafetta, PhD — 24 Oct 2006 @ 10:03 AM
Thanks for the clarification. The pattern I saw that seemed to point to the 22-year cycle was pretty much the incoming satellite TOA measurements. Granted, the less than 7 W/m^2 distributed would be eaten up as noise; still, it elevates the noise level, and I believe that is the point Dr. Scafetta may have been attempting to make.
Primarily what we saw was a unique approach of applying electrical-engineering waveform analysis to the issue and attempting to ascertain its impact. The approach is unique in that it provides insight in a format that helps folks like me better grasp the variables and the possibility of coincidental peaks of noise forcing a real effect. Not that I consider supporting the proposed conclusion; however, the approach, if applied to all the known periodic contributors in a given period, may make for some interesting model derivations. Now if we can only use an inverse Fourier transform analysis to try to extract the signals from the noise, like we did in cryptology…
Comment by L. David Cooke — 24 Oct 2006 @ 11:03 AM
I don’t understand how #94 proves what you claim, when read in the light of the inline comments about Moberg. #94 claims what you claim, but the basis seems unreliable. Why do you consider Moberg more reliable than those who comment? Is this a statistical argument?
I believe we have discussed this topic enough, at least for me. Of course the issue remains open to future research and debates.
I do not believe Rasmus’s criticism of my paper is correct, as #21 also easily discovered after a careful reading of my paper.
However, I would like to kindly thank Rasmus for giving us all the opportunity to discuss my research and this interesting and important topic.
I would also like to thank Gavin and Ferdinand for their interesting comments and arguments.
Well, perhaps I will see some of you at the AGU meeting in December. I am organizing a session there dedicated to solar variability and global change with Dr. Willson (that is, Mr. ACRIM).
You all are invited to attend, if you like.
I am really busy with a lot of different things; I cannot continue to write here, and probably there is no need any more.
This experience has been very interesting.
Comment by Nicola Scafetta, PhD — 24 Oct 2006 @ 7:26 PM
Re #110: Ferdinand, thanks for the response. The papers you linked to suggest the solar cycle feeds into the internal variability of the climate through wind and ocean current patterns. But I am having trouble understanding how an effect that is mostly felt in the tropics could be entirely concentrated in the Arctic. In case you could not access the Delworth and Knutson Science paper I referenced, here is the diagram I am referring to. It shows that the high Arctic was warmer in the 1940’s than today, as you state, but the rest of the world warmed very little. This is in contrast to the greenhouse warming since 1980, which is truly global in extent. The Medieval Warm Period has a similar pattern.
The information I have seen makes me think that solar changes, or the response to them, are less than commonly stated. Factors like the Atlantic Oscillation seem more important, with solar changes mainly acting as a trigger. This does not fit well with the Scafetta claim that solar is responsible for half of the warming in the 20th century. Rather, I am wondering if the IPCC 2001 report overstated solar effects.
Re #123: Blair, the effect is not only in the Arctic, but it is more pronounced there. It has to do with the amount of energy absorbed in the tropics, which gets into the high latitudes via evaporation and ocean and air flows.
Models don’t capture the real (regional) world that well. See the difference between observations and model estimates in Johanessen, Fig. 1. Real temperatures increased at all latitudes in the period 1930-1940, but more in the Arctic. The same happens now, but the increase is also pronounced in the mid-latitudes. The ECHAM4 model overestimates temperatures with GHGs alone, and gives too much cooling when aerosols are included. For the whole 1930-1940 period, the temperature is underestimated in both cases. The model described in Science does a better job for one of the simulations, but with an unrealistic “internal variation”…
Further, there are amplifying mechanisms for solar forcing, like cloud cover (inversely correlated with solar radiance at the TOA). And indeed, solar may trigger some of the internal variations, which may include the AMO/NAO/AO… Whether the rather uniform warming by GHGs triggers similar internal modes (and cloud cover) is another question.
Re #124: Hi, Ferdinand. I agree that while models may give a good picture of the overall trend, they are not ready yet to give accurate regional forecasts. As you say, this is apparent in the Johanessen paper. However, they also state:
…we strongly support Delworth and Knutson’s (2000) contention that this high-latitude warming event represents primarily natural variability within the climate system, rather than being caused primarily by external forcings, whether solar forcing alone (Thejll and Lassen, 2000) or a combination of increasing solar irradiance, increasing anthropogenic trace gases, and decreasing volcanic aerosols.
It seems to me that solar forcing is not as significant as I used to think. Indeed, as you state, its main effect may be as a trigger for internal variations.
Thank you Gavin & Nicola et al. for an interesting exchange. It sounded just like baseball fans arguing over World Series statistics. But now a question:
WHY isn’t there both a “convective and conductive feedback” to the internally generated RADIATIVE forcings (ie GHGs, volcanoes etc in Hansen et al 2005 – all except solar) such that the net result is that only external solar can raise or lower the temperature of the world?
Certainly any increase in air temperature from radiative forcings (apparently reasonably well modeled in the GCMs) is going to increase the temperature differential from ground to space, which will increase the vertical air velocity (i.e. increased hurricane strength) and DECREASE the residence time of energy in the air, in the same manner that GHGs increase the residence time. Just WHY won’t convection/conduction increase to compensate for the decreased (slowed) radiative transport? Is it even in the radiative GCMs?
IF conduction and convection increase, then we no longer have a problem with GHGs violating conservation of energy, Wien’s law, the ideal gas law, and the second law of thermodynamics (entropy). These all occur in the GCMs when GHGs etc. create a higher air temperature without an external energy source. How can Wien’s law require more energy out to be generated when the only source of energy for global warming (except the solar) is reducing the energy out, to create an energy imbalance that produces the radiative warming? There can’t be an Earth energy imbalance in the air, because the daily 10-20 degree warming/cooling cycle would very quickly re-establish the balance. How can the ideal gas law predict a trivial change in temperature (due to the change in air density from substituting CO2 for oxygen) when the GCMs predict global warming of 4 to 11 degrees? How can a GHG internally generate enough heat to cause global warming without an external source of energy? (That violates entropy.) IF there is a convective and conductive feedback, then all these problems go away.
Just think of convection, conduction and radiation being 3 parallel heat transport processes. If one decreases (eg GHG radiative heat trapping) then the others increase to compensate.
Solar is the only identified source of external energy added to the globe capable of causing global warming and cooling. In fact, since solar insolation changes add ~4 W/m2 out of 1364 W/m2 (since ~1700), or ~0.3% of 288 K (http://www.grida.no/climate/ipcc_tar/wg1/245.htm), which equals a rise of 0.84 K, it already accounts for ALL the observed global warming in the hockey stick. This means GHG-caused warming doesn’t exist. There is no greenhouse effect. There is no need for carbon taxes or exchanges or carbon sequestration or the Kyoto treaty or the IPCC. The only problems excess CO2 causes are less oxygen to breathe and a lower ocean pH, which could be solved by dumping chemicals there.
Unfortunately there is also the problem of unemployed climatologists, environmentalists, politicians and bureaucrats.
[Response: I think virtually nobody denies the existence of the greenhouse effect. There is a natural greenhouse effect, without which surface temperatures would be too low (~256K) for presently known life forms. I don’t think that is a ‘violation of conservation of energy’; rather, you have misunderstood the concept. -rasmus]
John Dodds, your simple view of ocean acidification either mocks your intelligence or is your ironic humor at work. If ocean acidification requires, as you say, [dumping chemicals there], why would that be necessary, since on your account increasing atmospheric CO2 concentrations are not an AGW problem but rather a consequence of increased ocean acidification?
Can you not find something more rewarding to do with your free time than to waste electricity spouting nonsense? Move on!
Comment by John L. McCormick — 26 Oct 2006 @ 10:30 AM
I’ve been discussing this global warming issue on a couple of other sites, but I have a problem: whenever I ask climate-change denialists for the empirical evidence that they base their views on, they change the subject or run away. I am beginning to think that maybe there is no mathematical evidence supporting the view that increases in greenhouse gases don’t result in a stronger greenhouse effect, and that, just maybe, denialism is simply an allergic reaction to Kyoto.
I guess what I’m trying to say is: why is it that the flat-Earthers are never asked to prove that the Earth is flat? If the pressure were on them to do so, maybe some of them would be forced to accept that their views are more a result of their politics than built on good science. Why are those that support the IPCC always on the defensive, rather than attacking to exploit what appear, to me at least, to be glaring weaknesses?
If we are approaching dangerous tipping points (I make
By analogy, you seem to be arguing that if the water level (heat) in a lake goes up because rain increases the inflow (insolation) everything is fine, but if the water level goes up because building a dam (GHGs) reduces the outflow the laws of the universe have been violated.
The feedback fallacy at work here is that somehow convection and conduction will increase without any attendant increase in temperature. But since it’s temperature gradients that drive conduction and convection, that can’t be. So really it’s the gain of the temperature-convection feedback that’s at stake, and if it were high enough to fully offset all radiative effects on temperature, there’d be some obvious symptoms – low natural variability and glacial cycles perfectly correlated with insolation perhaps.
And who ever said that GCMs don’t include conduction and convection?
I can’t tell from outside whether the John Dodds who asked the questions at 10:07 am is someone new, or the engineer named John Dodds whose previous questions have been asked and answered. But in general, it’s worth reading and using the search tool for the really basic physics questions. Reading that engineer’s questions and the inline answers, starting at the link below in the water-vapor thread, you’ll see a pretty good review of the basic science, I think. http://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/#comment-1816
#110 Ferdinand, I must disagree with your statement about polar stations north of 67N, although you hinted that the latest data may not be the same as during the 1940’s. As always, I use recent examples to disprove solar impact. A large swath of the Canadian Arctic has experienced above-average temperatures which are astounding: +10 to +15C above average during the last 10 or so consecutive days.
The sun is about to set for the long night; it is not a solar effect, especially since we are right in the middle of the solar minimum. The answers lie elsewhere: advection from the Pacific and Atlantic, widespread cloud coverage, shrinkage of Arctic Ocean ice, and above all a prevailing warmer atmosphere, which in this case has very little to do with the sun.
Wayne, one should never use what happens over a few days/weeks or even months as evidence of warming/cooling or of whatever may happen in the next years…
As far as I have seen on TV here, New York State had a lot of snow in only one day last week, because the polar front dipped very deeply southward. That is no sign of a continuous cooling trend at all.
Solar effects are mainly in the tropics, but their result is more pronounced in the Arctic (since temperature variations are much smaller in the tropics than at higher latitudes), due to air/sea flows (including in the stratosphere) going from warm(er) places to cold(er) places… Thus at higher latitudes, energy transport from the equator is the important item that governs much of the temperature variation, besides direct insolation, of course…
Re 130 & 131:
Same John Dodds, STILL a sceptic, but a better-educated one. Thanks for the reference to previous posts; you saved me from finding them. The previous posts suffer from one big deficiency: the responses usually do not address all the questions. E.g., see Rasmus above in #126, who says the greenhouse effect does exist, but fails to address my question of whether the GCMs take into account convective forcing/feedback, conservation of energy, etc.
I.e., I find the answers NOT very convincing, BUT, as you pointed out, sometimes a very good education.
As to the lake analogy: the lake has three sluice gates over a dam, one called convection, one conduction, and one radiation. If I continuously add rain (solar insolation), all three increase their output until the new inflow equals a new, higher outflow and the lake level is higher; if the rain stops, the lake level goes back to its original level and the old inflow equals the old outflow.
Now, if the rain stops to re-establish the old equilibrium AND I build the radiation gate higher (add GHGs), then the reduced radiation flow (some? all?) goes out the convection and conduction gates instead of the radiation gate. BUT the OLD IN still equals the OLD OUT; i.e. the higher GHGs did NOT raise the lake level/temperature, because no one added any water. The decrease in radiation flow (i.e. what the GHGs cause) is the driver of the increased convection, which is actually a convection feedback that will revert the entire system to its old equilibrium state. SO the question is: where is the convection feedback in the GCMs, and what size is it? As long as there is ANY increased temperature from the GHGs, there will be a driver causing increased convection, etc. And the other question: how did raising the GHG gate add any water to the system? That is conservation of energy.
Note also that if the rain/solar increases, then some of the “out” goes out the convection gate. Is this accounted for in the GCMs? I.e., the total solar input is partly a convective forcing. Where is this? As far as I can see, only radiative forcing is addressed.
If you look at the IPCC report, there is an entire chapter on radiative forcing, but convection is treated only as a fixed number. I want to know that convection forcing is addressed, and that convective feedback is addressed, just like radiative forcings are addressed. As for conduction, I’ve not seen any mention of conduction forcings/feedbacks in the GCMs, but a temperature increase (from solar or GHGs) will surely force an increase in conduction and lightning.
Because for the life of me I can’t figure out where the extra energy comes from that allows the GHGs to increase the temperature of the world without energy being added to do it. If adding GHGs reduces the output of radiated energy to space and raises the temperature per the GCMs, then Wien’s law, which dictates the energy out, doesn’t work. How can adding rain to the lake increase the lake level by 5 times (the ratio of GHG forcing to solar forcing) the amount of rain (solar in)? (See Gavin’s numbers above.) The GCMs violate conservation of energy and entropy, and cause Wien’s law and the ideal gas law not to work. So I am left with the conclusion that either the GCMs are right and physics is wrong, OR the GCMs are wrong (actually not wrong, just incomplete, by not addressing ALL the convection implications, according to my current theory).
Re #134: You are just confusing yourself with water analogies. Let me give you a simple example of how the Earth’s temperature can rise without any change in solar input: Albedo. If the Earth gets a little darker, it will reflect less solar energy, therefore absorb more of it, and get warmer.
Conservation of energy is not violated here. To maintain a constant temperature, the Earth must radiate back into space the same amount of energy it receives from the Sun. If the Earth absorbs more energy, its temperature rises, which causes it to radiate more energy back into space (Stefan-Boltzmann law) until it reaches equilibrium at a higher temperature.
You can think of a greenhouse gas as a form of albedo that operates on the infrared radiation emitted by the Earth. If less energy is radiated into space because of greenhouse gases, the Earth’s temperature must rise until the emission of infrared increases enough that the system returns to equilibrium. No physical law is violated by this model of greenhouse gases.
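The equilibrium argument in the comment above can be sketched with a zero-dimensional energy balance: absorbed solar, (1 - albedo)*S/4, must equal emitted longwave, epsilon*sigma*T^4, at equilibrium. The albedo and effective-emissivity values below are rough illustrative assumptions, chosen only to show that reducing the outgoing emission per kelvin (the greenhouse-gas analogue) raises the equilibrium temperature without any change in solar input:

```python
# Minimal zero-dimensional energy-balance sketch (illustrative numbers):
# equilibrium when absorbed solar = emitted longwave:
#   (1 - albedo) * S0 / 4 = emissivity * SIGMA * T**4
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1364.0       # solar constant, W/m^2

def equilibrium_T(albedo, emissivity):
    """Equilibrium temperature of a one-layer gray-atmosphere toy model."""
    absorbed = (1 - albedo) * S0 / 4
    return (absorbed / (emissivity * SIGMA)) ** 0.25

T1 = equilibrium_T(0.30, 0.61)  # rough present-day effective values (assumed)
T2 = equilibrium_T(0.30, 0.60)  # slightly lower effective emissivity (more GHGs)
# T2 > T1: less outgoing radiation per kelvin forces a warmer equilibrium,
# with no violation of energy conservation.
print(round(T1, 1), round(T2, 1))
```

No energy is created anywhere in this calculation; the balance is simply restored at a higher temperature, exactly as the comment describes.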
#133 Ferdinand, this is where I seriously part with the idea that causation applies only to climate concerns and not to weather. Nonsense; climate and weather causations are identical: both are driven by clouds, ocean temperatures, wind patterns, Hadley cells, and yes, solar effects. I distinguish climate from weather by its simplicity and its recognition of patterns and trends, but a climate prediction is usually much easier than a weather one. The sun warms your location: when it is at zenith, temperatures are usually warmer than at midnight on a clear night; that is a solar effect. The leap is then made: the sun is warmer, therefore GW is a solar effect, the same as a high sun at noon. Here again I completely part with solar proponents. There are so many instruments measuring the sun everywhere, from space, from the ocean, even underground (cosmic rays), and there is, as is often repeated, no solar variation to justify present GW conditions. Now an SGW (Solar Global Warming) proponent must be able to explain everything warmer as caused by the sun. I just described a present condition of significantly lesser solar input giving a warmer (more than ten days) climate. Where is your solar causation in this Arctic case, which is occurring right now during a solar minimum (-1 W/m2)? What is happening in New York does not answer my question. I am equally fascinated by the apparent disparity, where the location with less solar input has a much greater warming anomaly than the location with greater sunlight. The contradiction is delightful, but it points out that something else is warming the Arctic. I would suggest that the continent right now is certainly susceptible to cooling, as it always is, but the Northeast US usually gets its weather from the West, while the Arctic is heated by the Arctic Ocean, advection from greater oceans, a persistent cloud base, and little or no sun.
Okay? 100 units of that, coming in.
Now, before people started adding CO2 faster than nature consumes it again, the planet was at radiative equilibrium —- alla same in, alla same out, as far as the planet’s temperature measures it.
Yes, different wavelengths in and out, but the energy balanced. 100 = 100.
Now, add CO2. Mr. Arrhenius noticed that CO2 absorbs across some bands of the infrared.
Why? Wavelength happens to be the right length to be absorbed by the CO2 molecule — it’s absorbed, makes the molecule wiggle as it absorbs energy, then it re-radiates it in a random direction.
How much of the infrared part of the spectrum gets through the atmosphere without being absorbed and re-radiated in some random direction? Well, suppose you had a telescope and used infrared film. Could you take a picture of the stars in infrared?
So — most of the energy from the sun is blocked, of course. That 100 units is getting through the atmosphere, in the wavelengths shown above. 100 in. Earth proceeds to do some basic arithmetic.
Plus 100 — equals 100.
Ah, but that’s over time. So say that’s 100 units per year. Whatever you like.
Now, Earth’s temperature’s been fairly stable for ten thousand years or so, til these monkeys came along and discovered fire, and coal, and put the two together and made excess CO2. Cue Arrhenius, who noticed, what? That carbon dioxide absorbs infrared light.
So Earth has a math problem. It’s doing addition, and needs to do subtraction, to keep its energy balance — or it’s going to warm up or cool down.
Incoming, though you’ll get arguments on this elsewhere, we say is stable at 100 units.
Earth’s received all that solar energy — you get your conduction, and convection, and fraternization, and conglomeration, and all those other things, stirring around, because of the incoming energy. Doesn’t matter. 100 in, reaches the planet. All we count is the heat energy. Keep it simple, S.
Now, we boosted CO2. Earth’s sitting here, rotating, one side intercepting that 100 units of sunlight. Rotates (and all those other things going on that don’t matter, they just redistribute the heat energy, make waves, make hurricanes — negligible for energy balance calculations).
Warm Earth turns and is under the night sky. Sky’s dark in most wavelengths. Sky’s, however, become a little brighter — in the infrared. Why? That CO2 (and other greenhouse gases, mostly water vapor of course). There was a stable amount of it, for oh, ten thousand years, maybe more.
Incoming 100, outgoing 100, equilibrium.
Carbon dioxide increased with coal burning — increased because it’s showing up in the atmosphere, where it can absorb (in those bands of infrared).
What temperature is the planet? Oh, somewhere in the infrared. Infrared radiated from the planet goes where? Out, til it hits something. Like a greenhouse gas. Then it’s (aside from your conduction and convection, which are just stirring, no addition or subtraction there) going to be absorbed and re-radiated.
The sky’s gotten a little ‘brighter’ (or more opaque) — same thing, if you’re an astronomer interested in infrared objects. The heat isn’t being radiated away as fast as it was.
100 in, still. Same. 99.98 out. There’s your “extra” heat. It’s not addition — it’s failure to subtract.
What’s happened? Outgoing energy is now down to (begin arm waving here, if not sooner) say 99.98 instead of the previous 100.
What’s happened? Why, the planet’s failed its subtraction exam and has a bit of excess heat.
There you are. What’s the heat doing? Short term, conduction, convection, etcetera. Doesn’t matter.
Long term? Atmosphere is — say — stable at 2x the previous level of CO2. As they say, a miracle happens. Incoming energy from the sun is still at 100. Atmosphere’s gotten a bit warmer, and a bit bigger — it’s a slightly taller and wider radiating surface up there at the very top.
Why the top? Remember that CO2 molecule that intercepted an infrared photon and twanged and started wiggling, then emitted another infrared photon in any old random direction. Which way did it go? A few of them went directly into the outer dark, off into space — Earth did a subtraction problem successfully for that photon, and got rid of it.
The rest of the photons the CO2 molecules intercepted — which is most of them, in a fairly wide range of infrared energies, see Arrhenius again — went off in any other direction. What’d they do? Not subtraction. Not addition. They just hit another molecule somewhere in the atmosphere.
So — incoming we have 100 units of sunlight.
Outgoing, we have, oh, 99.98 units of infrared.
Where did that extra energy come from? You can do the math.
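The bookkeeping above can be sketched as a toy zero-dimensional energy balance. Every number below (the effective emissivity, the heat capacity, the 0.2% opacity bump) is illustrative only, chosen to make the arithmetic visible, not taken from any model:

```python
# Toy zero-dimensional energy-balance sketch of the "failure to subtract"
# picture above. All numbers are illustrative, not tuned to real forcings.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = 240.0          # absorbed solar, W m^-2 (the "100 in")
emissivity = 0.612        # effective emissivity; chosen so T_eq ~ 288 K

def outgoing(T):
    """Infrared leaving the top of the atmosphere."""
    return emissivity * SIGMA * T**4

# Start in equilibrium, then make the atmosphere slightly more opaque:
# a small drop in effective emissivity means less IR escapes at the same T.
T = (absorbed / (emissivity * SIGMA)) ** 0.25
emissivity *= 0.998       # the "99.98 out" step

# Crude time-stepping: surplus heat warms the system until balance returns.
heat_capacity = 1.0e8     # J m^-2 K^-1, roughly an ocean mixed layer
dt = 86400.0              # one day per step, in seconds
for _ in range(200000):
    imbalance = absorbed - outgoing(T)   # W m^-2; positive = warming
    T += imbalance * dt / heat_capacity

print(round(T, 2))  # new, slightly warmer equilibrium temperature
```

The warming stops by itself once outgoing radiation has climbed back up to match incoming: the "extra" heat is exactly the integral of the temporary imbalance.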
“Planets absorb light from the sun and heat up. They then re-radiate this heat as infrared light. This is different from the visible light that we see from the planets which is reflected sunlight. The planets in our solar system have temperatures ranging from about 53 to 573 degrees Kelvin. Objects in this temperature range emit most of their light in the mid-infrared. For example, the Earth itself radiates most strongly at about 10 microns…..”
Remember, CO2 is a greenhouse gas because the particular wavelengths that carbon dioxide absorbs and re-emits — the ones that are blocked for our infrared astronomers stuck on the ground — are in that 10 micron area, where planets heated by the sun are brightest.
Want to create, say, a heat ray, for your next planetary invasion? — what do you use? Why, carbon dioxide — in a laser:
“Unlike the other lasers producing visible or short near-IR light, the output of a CO2 laser is medium-IR radiation at 10.6 um. It is the classic heat ray of science fiction. I have no doubt that the Martians in H. G. Wells’ “The War of the Worlds” used CO2 lasers…..”
re: 134. It is nothing less than an astonishing height of arrogance that a layman who has apparently never published any climate-related research in peer-reviewed journals believes he knows something more than literally thousands of climate scientists engaged in climate modeling and research all over the world.
Re #134 Let’s assume the dam has three outlets, each three feet wide. The water pouring out of them is one foot high. If you block the outlet labelled radiation, then the water which was flowing through that outlet will flow through the other two, labelled convection and conduction. But the water there will now be 1 foot 6 inches high to compensate for the water no longer being radiated. Thus the lake level (temperature) will rise by 6″ (or 1 °F).
While the lake is filling up to reach the new height six inches higher, there will be a loss of outflow, and so the total outflow will be less than the rainfall, but only temporarily. It is important to realise that this delay also applies to the atmospheric temperature. Even if we stop adding more CO2 today, (raising the radiation outlet) the atmosphere will continue to warm until the surface is hot enough to produce the extra convection and conduction needed to return the system to balance. Most of the surface of the earth is water, and when it heats up it is soon cooled again by the wind mixing the upper layer. There is an awful lot of water to warm before the system stabilises at the new levels of CO2 we have now set up.
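The dam analogy above can be made numerical. The sketch below treats each outlet as a rectangular weir (flow proportional to width times head to the 3/2 power, a standard hydraulics approximation), partially blocks the "radiation" weir, and bisects for the new lake level; widths and the blocking fraction are made-up illustrative values:

```python
# A numerical version of the dam analogy above (illustrative only).
# Each outlet is treated as a rectangular weir: flow ~ width * head^1.5.

def total_outflow(head, widths, coeff=1.0):
    """Combined flow through all weirs at a given water head."""
    return sum(coeff * w * head ** 1.5 for w in widths)

def equilibrium_head(inflow, widths, lo=0.0, hi=10.0):
    """Bisect for the head at which outflow balances inflow."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_outflow(mid, widths) < inflow:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

widths = [3.0, 3.0, 3.0]             # radiation, convection, conduction
inflow = total_outflow(1.0, widths)  # choose inflow so the head starts at 1 ft

# Partially block the "radiation" weir and find the new lake level.
widths_blocked = [1.5, 3.0, 3.0]
new_head = equilibrium_head(inflow, widths_blocked)
print(round(new_head, 3))  # the lake settles somewhat higher
```

The point the analogy makes survives the arithmetic: the level rises only until the remaining outlets carry the same total flow, and because outflow grows faster than linearly with head, the rise is smaller than a naive proportional estimate.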
Here is the question that John Dodds brings up, and one which I have been trying to get addressed for months by Dan’s “experts” but have so far failed. If you increase the temperature in the bottom of a column of air, convection will increase. Anybody want to argue that point??? So, *how much* does convection compensate for a, say, 1 °C theoretical increase in temperature at the ground due to radiation outflow restriction? That’s the question.
(In the dam analogy, by increasing the height of the radiation weir, you will increase the height of the water behind the dam, but only as relative to the widths of the other weirs. In order for water to increase flow out of the other weirs the height must increase – but not by as much.)
Then we have to consider that the amount of water coming in will decrease because since convection is increasing cumulus cloudiness is increasing. So, for extra credit, how much does cloudiness increase as a result of the increase in convection, and how much does *that* additionally detract from the original 1 °C increase due to radiative imbalance?
Comment by Steve Hemphill — 28 Oct 2006 @ 11:46 AM
Re #134 and “fails to address my question of IF the GCMs take into account Convective forcing/feedback, and consevation of energy etc, etc”
I believe Manabe and Strickler were the first to parameterize convection in an atmosphere model, in 1964. They (Manabe and Wetherald) came up with a better model in 1967, and many subsequent models. Conservation of energy is treated by ensuring that the radiation leaving Earth at the top of the atmosphere is the same amount as that coming in. As far as I know, all existing GCMs have addressed both these issues for some 40 years now.
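The convective-adjustment idea credited to Manabe and Strickler above can be sketched crudely: wherever the lapse rate between adjacent layers exceeds a critical value, mix the pair back to that critical lapse rate while conserving the column mean. This is a toy version (equal-mass layers, simple sweeps), not the actual 1964 scheme:

```python
# A crude sketch of the convective-adjustment idea: wherever the lapse
# rate is steeper than a critical value, relax the layer pair back to
# that critical lapse rate. Layer weighting is simplified to equal-mass
# layers for illustration.

CRITICAL = 6.5   # critical lapse rate, K per km
DZ = 1.0         # layer thickness, km

def convective_adjust(T, n_sweeps=200):
    """Relax adjacent-layer lapse rates toward the critical value,
    conserving the (equal-mass-layer) column mean temperature."""
    T = list(T)
    for _ in range(n_sweeps):
        for i in range(len(T) - 1):
            lapse = (T[i] - T[i + 1]) / DZ
            if lapse > CRITICAL:
                excess = (lapse - CRITICAL) * DZ / 2.0
                T[i] -= excess          # cool the lower layer
                T[i + 1] += excess      # warm the upper layer
    return T

# A deliberately unstable profile: 10 K/km everywhere.
profile = [288.0 - 10.0 * k for k in range(6)]
adjusted = convective_adjust(profile)
print([round(t, 1) for t in adjusted])
```

After adjustment no layer pair exceeds the critical lapse rate and the column mean is unchanged, which is the sense in which convection is a redistribution rather than a source or sink of energy.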
You have raised a very important point. Increasing the temperature of the air near the surface by one degree will cause convection. But there is an inversion at the tropopause, i.e. below the stratosphere. Since the stratosphere is warmer than the troposphere, the convection will halt when the parcel of air reaches the tropopause, the top of the troposphere. In other words, the troposphere can warm until its potential temperature is greater than that of the stratosphere. When that happens the global climate will face a major hiatus.
Your second point is also important. The balance in outflow is not completely controlled by the weirs, as you point out. The convection weir feeds a water wheel which drives a weir on the river that feeds the lake. As the water in the convection weir increases, more of the feeder river is diverted into a bypass tunnel which feeds directly to below the reservoir dam. So the lake will not rise by the full six inches I predicted. As you say, a major control on the global temperature is the clouds, which provide a quick escape route for incoming solar radiation.
However, clouds only work in the tropics. In the subtropics and over polar regions there are very few clouds. Antarctica is one of the world’s great deserts! Global warming will only have a small effect in the tropics provided the cloud forests remain. On the other hand it will have a major effect on the poles, especially the Arctic, where the increased temperature will mean that there is continual cloud providing the greenhouse warming to ensure continual cloud.
The sceptics are right that the computer models are wrong, but the sceptics are wrong in arguing that the models are overestimating the effect of anthropogenic greenhouse gases. They are underestimating the effect in the Arctic and in the northern hemisphere continental land masses where we all live!
Alastair, In what sense are the models underestimating the effect in the Northern Hemisphere? Are you thinking only of temperature? Roesch found that most of the AR4 models overestimated precipitation in the temperate regions, leading to a delayed snow melt relative to observations and a positive surface albedo bias. Other models had a positive surface albedo bias due to snow cover for other reasons. So some things in the Northern regions are underestimated and some are overestimated. A positive albedo bias would indicate that the effect of the solar forcing is underestimated and possibly compensated for by an overestimate of GHG effects. Furthermore, since modelers tweak cloud parameters to match global albedo and achieve energy balance, and because the AR4 models achieve a good match to global average surface temperatures, there are at least partially compensating errors elsewhere in the models for both albedo and temperature. In a non-linear climate system, it would indeed be fortuitous if these compensating errors made the models more useful for projecting out 100 years. Can you convincingly demonstrate that the projections from such models would also be “underestimates”? So, what is being underestimated, and how can you be sure what the effects of the underestimate are on climate sensitivity to GHGs over where we live?
I submit, that if you can know these things, then we don’t need the models, we can just consult you. The whole point of the models is that we need them to understand and project such a non-linear complex system. The models just are not good enough yet, but they are well worth further investment. I can’t be sanguine about the errors.
Roesch A. (2006), Evaluation of surface albedo and snow cover in AR4 coupled climate models, J. Geophys. Res., 111, D15111, doi:10.1029/2005JD006473.
Just a few quick questions. I too had similar questions regarding tropopause temperature-inversion-layer convection related to saturated adiabatic processes. Upon exploring the available data I came across an interesting set of measurements. On the UCAR site, under the COSMIC program, there appears to be a set of experiments regarding the air temperature associated with the various front movements in range of the Colorado microwave stations.
When I review the tropopause temperature change in relation to approaching cyclonic fronts, it appears the -20 °C isotherm layer increases in altitude to around 11 km. By the same token, if the data I have recently reviewed are correct, the mesospheric “cooling” appears to intrude into the stratospheric range during anti-cyclonic events. I cannot recall them anymore; however, if I remember correctly there were a number of studies in the early 1990s that discussed the apparent wave characteristics of approaching fronts, regarding the compression of the stratospheric region and the effect it had on the transfer of tropospheric temperatures into space near terrestrial surface features.
Is it possible that just the atmospheric fronts themselves could result in waves or destabilization of the temperature inversion layer? By the same token, if I look at polar regions with the concentration of frontal changes, there seems to be a rapid rise of tropospheric water vapor invading the stratospheric range. This increased water vapor appears to be participating in the generation of PSCs, which also affect the stratospheric ozone layer with the introduction of denitrification (the formation of NAD and NAT), which reduces both the ozone content and the removal of chlorine in the polar regions.
The initial CloudSat and Calipso images appear to demonstrate that, in the presence of strong thunderstorms in the US Plains, there are stratospheric clouds forming above them. (We are not talking about “sprites”, or the expansion of the normal 5-7 km ice-to-precipitation transition zone.) This makes me wonder: is the thermal inversion as strong as you indicate? If, along with the reduction in stratospheric ozone, there was also a reduction in the stratospheric temperature, isn’t there the possibility of tropospheric temperatures and water-vapor-laden air intruding into the stratosphere?
If I extend the physics regarding an earlier post by the kind folks here on the skin effect of the temperature inversion layer on the calm sea as preventing the transfer of the heat content of the top of the ocean back into space; if I add in the NOAA 0 °C thermal barrier rise from about 2300 meters to about 1700 meters; if I consider that the 20 °C isothermal level in the Pacific appeared to rise from an average of 400 meters to about 100 meters recently; then I find myself wondering how it is that the ocean’s heat content is dropping, while the solar input appears to be consistent and one of the GEWEX committees appears to indicate that atmospheric water vapor seems to be decreasing. I see much data that is conflicting.
Is the heat simply going into melting the surface ice, or is it radiating into space? Granted, it would take 2500 joules of energy to convert each ccm of ice; however, there certainly is a much higher percentage of energy coming in than can be accounted for in the melt rate, and it is currently attributed to re-radiation. Even if the incoming 1364 watts/meter^2 (roughly ±7 watts) is indirect to the earth’s surface, it certainly would be transferable to the atmosphere if the atmosphere contained a high amount of aerosol particles, would it not?
My apologies to the kind folks here. This actually is not a rant, nor is it an attempt to deny the physics “everyone” has come to accept. This is simply an attempt to find answers, as it appears not all the experimental data support the multitude of theoretical hypotheses so far. Regarding the original subject, convection: is it possible that there are multiple laws regarding blackbody radiation in relation to multiple refraction indexes that are just not documented yet?
#144, Alastair, Calculating the rate of GW is a complicated affair. I don’t think that most who have come up with a trend have claimed absolute certainty. Maybe in the near future better methods will be used.
I agree that some estimates appear erroneous. I would suggest that the commonest of errors is to make a surface temperature trend. Surface temperatures around the world are not all taken at the same height, so I am a bit perplexed when I hear a surface temperature trend. GTs are useful, but again based on various temperature heights; sea surface temperature trends are for the most part the ones to analyze.
Going back to the atmosphere, there must be methods devised for calculating the total heat in the system planet-wide; that is the key. I see some efforts in finding upper-air trends, which is better, but again flawed: the upper atmosphere constantly changes temperature throughout a vertical profile hour by hour, so taking an average at 700 mb may miss a strong cooling just below, or a warming above. Taking the temperature of the entire atmosphere, however weird this may sound, is the thing to do.
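One way to make “the temperature of the entire atmosphere” concrete is a mass-weighted column mean: since pressure measures the mass of air above a level, weighting each layer’s temperature by its pressure thickness approximates a whole-column average. The profile below is a made-up illustration, not real sounding data:

```python
# Mass-weighted (pressure-weighted) column mean temperature.
# The levels and temperatures are illustrative, not observations.

def column_mean_temperature(levels_hpa, temps_k):
    """Pressure-weighted mean of per-layer temperatures.

    levels_hpa: layer boundary pressures, surface first (decreasing).
    temps_k:    one temperature per layer (len(levels_hpa) - 1 values).
    """
    weights = [levels_hpa[i] - levels_hpa[i + 1]
               for i in range(len(levels_hpa) - 1)]
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, temps_k)) / total

levels = [1000.0, 850.0, 700.0, 500.0, 300.0, 100.0]   # hPa boundaries
temps = [288.0, 278.0, 266.0, 246.0, 222.0]            # K, per layer

print(round(column_mean_temperature(levels, temps), 1))
```

A single-level average (say at 700 mb) weights one slice of the column; the pressure-weighted mean is closer to the “6 km thermometer” the comment imagines.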
Comment by brennan chamberlin — 30 Oct 2006 @ 2:23 AM
Re Martin’s #145
What is being underestimated is the greenhouse effect from water vapour. However, it is not a constant; it varies with temperature. This means that the water vapour greenhouse feedback depends on the surface specific heat, latitude and altitude, all of which affect temperature.
The models do give a reasonable description of the climate as it is today, but that is no guarantee that they are predicting the future correctly.
Most importantly they do not reproduce the rapid climate change events that have happened in the past. Therefore it seems self evident to me that they are unable to predict the rapid climate change events that will happen in the future. Since those are certain to cause more disruption to the economic system than the slow changes presently envisaged, then for me that is a major failing.
Rapid climate changes can be to both warmer and to cooler conditions. The only driver for those types of event that has not already been eliminated by geological evidence is the main greenhouse gas: water vapour. In fact, it is well known that during glacial periods the climate was much drier, and during warmer periods much wetter.
You were right when you wrote “I submit, that if you can know these things, then we don’t need the models, we can just consult you.” But I am afflicted by the Cassandra syndrome. No one wants to believe me :-(
Perhaps it is because I insist on quoting Private Frazer’s catchphrase, “Waur all doomed!”
You have raised some interesting points there which I am not competent to answer but I will give you my thoughts.
The stratosphere is warmed by the absorption of ultraviolet rays by ozone, and so forms a cap (inversion layer) on the troposphere. When the atmosphere became oxygenated by the evolution of photosynthetic life and the ozone layer formed, it seems that there may have been a Snowball Earth event, or at least severe climatic disturbance.
There are gravity waves in the atmosphere produced by solar heating and the gravitational effects of both moon and sun, but because the diurnal heating effect is so strong, and the surface of the earth is so uneven, the gravitational effects are difficult to identify.
These tides, the stratospheric quasi-biennial oscillation (QBO) and the Arctic Oscillation are all probably linked. This may result in biennial outflows of cold fresh water from the Arctic, which account for the ocean surface cooling you mention. Alternatively, stronger winds over the mid latitudes may have increased evaporation and so lowered temperatures.
Since most of the science is done in the USA there is a tendency to believe that conditions in North America apply throughout the world. The US Standard Atmosphere is treated as being correct globally, and IMHO this is one reason that the models do not work for the tropical lapse rate.
There is a huge amount of work going on by unsung scientists, and eventually the answer to all these questions will become clear. But will that happen before disaster strikes?
First, it appears no one has a good handle on how much convection is increasing, therefore realistically on how much albedo is increasing due to it.
Second, energy in vs. energy out of the system can hardly balance until we have a better handle on the circulation of the ocean (over 95% of the heat capacity of Earth) and its rate of heat uptake. E.g. El Nino. So no, energy conservation is not yet being realistically addressed either.
James Annan seems pretty sure that he is right with his 3K for x2 CO2. If he is right, then the models that claim 1.5K are out by a factor of 2, and those that claim 4.5K are out by 50%. You don’t seem to have been fooled by those scientists who claim infallibility :-)
Related to your arguments is an error in the models whereby the surface air temperature is taken to be the same as the surface temperature. If that were true, how come we get ground frosts and air frosts at different temperatures?
You are right that taking an average at 700 mb might miss a temperature change elsewhere, but the problem is not whether the atmosphere as a whole is warming. If the air in the boundary layer is warming and the stratosphere is cooling so that the total heat in the atmosphere stays the same, then that is no comfort for us who are living on the surface!
In fact, since solar radiation has not changed, there is no need for outgoing radiation to space from the atmosphere to change if it is to remain in balance. My theory is that increasing CO2 will raise the temperature within the boundary layer. That temperature change will be considerably amplified by the greenhouse effect of water vapour, due to the greater humidity at the higher temperature. The condensation level at the top of the boundary layer will prevent this increased humidity reaching further up into the atmosphere, because it will be rained out from the clouds at the condensation level. Although this is an oversimplified model, I believe it is closer to the truth than the current idea that a change in the height of a layer of atmosphere near the tropopause, around 100 mb, can affect the temperature of the planet at the 1000 mb level. Gavin, how do you achieve this x10 amplification of heat?
I am curious that you seem to indicate a stable boundary layer. I have been reviewing the latest lidar and microwave radar data and have not found a specific boundary altitude. It also appears that the transition zone for the condensation level varies greatly. I do not believe it can be as simple as you appear to want to portray. Hence my earlier questions.
Examples can be found under the following links. Note the warning associated with CloudSat; this apparently is a recent change to limit unauthorized distribution. In the meantime, the Calipso data are supposed to range between 0 and 30 km; there are recent examples in the SH temperate zone below the ITCZ of cloud/water vapor structures well above the 5-7 km normal condensation level, above the 15-17 km normal thunderstorm peak, and into the 20-26 km region of the upper stratosphere. The interesting thing in the latest images is that the stratospheric clouds do not appear to be related to thunderstorms; I wonder if this is due to uplift in the region of the Andes? Anyway, I thought this may be of interest to you in your further research as you work towards developing your theory.
Regarding the earlier comment about the colostate.edu link for CloudSat: disregard. I found that a NASA CloudSat link I had tried to execute earlier had apparently locked up my browser’s Java console. It appears that the colostate code requires you to visit the site home page first; if you then move from http://www.cloudsat.cira.colostate.edu to http://cloudsat.cira.colostate.edu/dpcstatusQL.php everything works fine. If you attempt to go directly to the dpcstatusQL.php page the code will hang.
“…During a break, Trenberth said the milder 2006 season was due largely to an unexpected Pacific El Niño that suppressed hurricanes in the Atlantic.
Natural climate variability – including the periodic swings between El Niño and La Niña conditions in the Pacific – will sometimes overshadow global warming’s influence on hurricanes, Trenberth said.
“Global warming is still there,” he said. “But this year, natural variability, especially El Niño, overwhelmed the contribution from global warming.”
Roger Pielke Jr. last spring made much of a draft circulated by Dr. Hansen with an El Niño prediction. Unexpected? Natural variability? Attribution? I dunno.
Thanks for those links Dave. They tend to confirm what you wrote – it is more complicated than I described, but I was aware of that.
In fact most of the diagrams there tend to show a line of clouds close to the ground. This line will have its base at the condensation level, which is the theoretical height at which relative humidity reaches 100%. In theory, above that height the humidity remains at 100%, and so the greenhouse effect from water vapour in the region above the condensation level will not change.
This is of course a simplification, but from ‘your’ diagrams it does not seem as if the condensation level is often breached by much, and the tropopause is breached even less. Of course the scheme I described ignores clouds, but then the Manabe and Wetherald (1967) scheme ignored clouds too.
I am surprised to hear that there are clouds in the upper stratosphere. As I understand it that region is very dry. But the air does rise in the polar summer and travel via the stratosphere to the other pole where it subsides. If the Arctic ice is melting and more water vapour is rising into the stratosphere from there, then that might account for the clouds you are seeing, although they should soon start travelling north again.
But I am a bit out of my depth here, or should I say it is going over my head!
I think by inadvertently choosing a bad analogy I have exposed the problem with your assumptions. The sluice gates at the top of a dam are highly nonlinear. If the lake level is 1″ below the sluice gate edge, outflow=zero. If the water level rises a foot, the outflow becomes very large. That’s a very strong negative feedback, hence the water level tends to remain very constant near the top.
The temperature of the earth doesn’t work that way. There is no magic nonlinearity that causes convection to take off the moment temperature exceeds 289 K. The local linearity of temperature can be described as deltaT = deltaR / lambda, where deltaR is the change in forcing and lambda is the radiative damping. Lambda is really a summary of a bunch of feedbacks, including radiation and convection but also clouds and water vapor. You can argue about lambda, which in a sense is what the whole debate is about, but the evidence just isn’t consistent with an overwhelming convection term.
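That linearization is easy to play with numerically. The sketch below sums illustrative feedback terms into lambda and divides the forcing by it; the individual numbers are rough values in the range discussed in the literature, labeled as assumptions, not outputs of any model:

```python
# Minimal sketch of deltaT = deltaR / lambda, with lambda the sum of
# feedback terms. All numbers below are illustrative assumptions.

def equilibrium_warming(delta_R, feedbacks):
    """Warming for a forcing delta_R (W/m^2) given feedback terms in
    W/m^2 per K; positive terms damp, negative terms amplify."""
    lam = sum(feedbacks.values())
    return delta_R / lam

feedbacks = {
    "planck_radiation": 3.2,       # basic T^4 damping
    "lapse_rate_convection": 0.6,  # convective redistribution, damping here
    "water_vapor": -1.6,           # amplifying
    "clouds": -0.5,                # most uncertain term
}

delta_R = 3.7  # commonly quoted forcing for doubled CO2, W/m^2
print(round(equilibrium_warming(delta_R, feedbacks), 2))
```

The structure makes the point of the comment: convection enters as one damping term among several, so for it to "negate" the forcing it would have to dominate lambda completely, which the observed lapse rates don’t support.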
A better water analogy is a sink. Turn on the taps, then vary the drain opening, and you can get a wide range of equilibrium water levels without violating conservation of matter.
Before you accuse GCMs of neglecting convection and conduction, you might try Google. They include both. A lot of convection is sub-grid-scale and thus must be parameterized, but there are also large-scale circulations that are directly simulated. Either way, convection is a feedback not a constant. I’m sure the models wouldn’t work without it. Conduction is mostly a surface phenomenon (i.e. at atmosphere interface with ocean, ice, etc.) and it is also accounted for.
#157, Alastair, I am not keen on the idea that tropopause heights cause a warming below; rather, tropopause heights are higher because it is warmer below. There are also tropopause inversions triggered by higher ozone concentrations: at the point where the tropopause starts you will invariably find the beginning of much higher ozone concentrations.
Sorry about my common usage of surface temperatures; I should have written SATs. GTs are second to none in meaning, but they do not correctly measure the warming of our atmosphere because they are taken at various heights all over the world. SAT GTs have meaning only when comparing with previous years. They demonstrate a glimpse of a more complex warming of the atmosphere as a whole. Consider a measurement of GTs as with a 6 km long thermometer bulb fixed vertically just above ground at a stationary geographic point. The temperature read from this thermometer over time will give a rate of global warming much more accurate than any other method. Since we don’t have 6 km long thermometers, other ways of measuring atmospheric temperatures have to be devised (perhaps there are a few already in place).
James Annan’s estimate seems too low only because the true rate, as you have written, has been underestimated, unless Annan’s numbers are a SAT value while the models project a greater warming above a Stevenson screen…
I am afraid the theory of saturation above the condensation level you refer to may not be the case. It appears that condensation has a tendency to remove water vapor from the air as it condenses on condensation nuclei; the result is that the air above the condensation layer is normally very dry.
That is one of the reasons that clouds get ignored, as they generally are not widespread or dense enough to be considered to have a significant impact on incoming solar radiant energy. Recent NASA programs such as SAGE, TRMM, COSMIC, Calipso and CloudSat have been engaged to research the effects they might have, both in the polar and the ITCZ regions. (Calipso seems very useful in demonstrating the distribution of water vapor and its reflectivity value, hence the association with temperature; CloudSat seems to be more valuable in defining the temperature gradients along with vapor densities.)
I did have one question though: you mention the apparent oceanic oscillations such as the NAO, ENSO or PDO. I am concerned that many seem to use these phenomena to define characteristics of patterns. It would seem that defining the root source of these oscillations would be a better move, as the description of the process may better describe the processes that make up the earth’s atmosphere. Have you seen anything yet in print that describes the source and the drivers of these patterns?
Comment by L. David Cooke — 30 Oct 2006 @ 11:57 PM
“There is no magic nonlinearity that causes convection to take off the moment temperature exceeds (whatever)”.
The point is that if there is *any* CO2 in the atmosphere there will be convection.
Your point of drains vs. weirs is okay, with similar restrictions. The original weir example had widths narrow enough and heights of flow such that all were flowing at the same time. Your drains need to be at differing elevations and of differing orifice sizes, but always submerged. More complex mathematically but perhaps closer to reality (perhaps – needs more thought).
Comment by Steve Hemphill — 31 Oct 2006 @ 12:02 AM
Re: Comment by Hank Roberts – 28 Oct 2006 @ 2:55 am (#137): in response to my 26 Oct 2006 @ 10:07 am (#126):
Hank (& Gavin)
I did the GCM math a year or so ago and I agree that your simplified explanation adequately describes the current models of global warming, with clarifications. One is that the mechanism for the GHG warming is that the radiated energy from the air is absorbed by the GHGs to heat the GHG molecule to 900+ degrees, then the energy is released within microseconds and a few centimeters back to the air by collisions with the air, to return the air & GHGs to equilibrium temperature. (ie the GHGs do NOT “TRAP” the energy like greenhouse glass does). But by adding extra GHGs, there are MORE absorptions, and the energy is resident in the GHGs for more microseconds, which raises the time that the energy is resident in the air during its transit to space, thus causing the GHG global warming.
I agree that the “decrease in energy-out ” is the source of the GHG global warming energy, AS DEFINED in the GCMs.
HOWEVER, when you apply the laws of physics to the new end state, ie globe warmed by a few degrees by GHGs, you get a situation where the new Wien’s law value (higher driving temperature gives hotter energy spectrum out) and the new Stefan-Boltzmann value (ie HIGHER energy out) disagree with the physical situation that the model REQUIRES – ie energy out = 99.98 units, which is LOWER. BUT the energy-in is still 100, or actually raised a tiny bit by the added solar. IT DOES NOT MAKE SENSE. The math does not add up. The GCMs’ global model FAILS to comply with the laws of physics. It is not modeling reality. Any discussion about not being at equilibrium yet (the usual response) fails to notice that on a daily basis the temperature varies by 10-15 degrees and that these changes will force the ground/atmosphere to get to equilibrium within a day or two. You also get a new ideal gas law, where the physical change in density, which is trivial (substitute 390 ppm of CO2 for O2), results in a temperature that has a trivial change, BUT by the model we are a few degrees hotter. AGAIN the GCMs are describing a model world that does not comply with the laws of physics. Either the GCMs are right or the laws of physics are right.
One possible way to fix this model failure is to account for the increased vertical flow of convection and conduction caused by the radiative GHG-induced global warming, which apparently was, I assume, not adequately included in the GCMs. I have never seen any numbers for convective feedback! ie The hotter air from GHG warming or forcing causes the total air (including the GHGs!) and electrons to rise faster. Hot air rises. Hotter air rises faster! The increased velocity of the air rising reduces the time that the energy is resident in the air, which reduces the temperature. As long as there is an increase in the GHG-induced air temp there will be an increase in convection/conduction as feedback, UNTIL they reach equilibrium, at the original temperature. This feedback will apply to any internally induced warming. eg albedo: less snow cover or clouds will absorb more energy into the ground, which heats up, which causes increased convection, conduction and radiation by the multiple molecular collisions, which will increase the energy out of the ground, which will balance the energy in from outside, for NO change in temperature, and conservation of energy.
But for an external solar forcing, the increase in incoming energy, causes increased convection, conduction and radiation, which results in increased energy out which results in a new higher energy-in equals energy-out. ie the warming we see every morning, & likewise a lower energy-in results in decreased temperature, every night.
The implication is that any process that does NOT add or subtract external energy to change the temperature, will by itself cause a conductive,convective or even a radiative feedback, that will result in no change in the temperature. Mother Nature loves equilibriums. Only external forcings can cause global temperature changes. (ie Solar at this time, but try looking at changes in the Earth’s magnetic field energy which has been decreasing (where did the energy go to?) for the last century- This is apparently caused by the Sun or Jupiter’s changes in gravity and mag field effects on earth as relative orbit positions change) This then complies with the 2nd Law of Thermodynamics – ie Entropy, whereas the GCMs fail to comply again. The 2nd law of thermodynamics, says (in one of its forms) that you can’t get an entity (the globe) to raise its own temperature without adding work or energy from outside. Another version is: You can’t get something for nothing. OR There is no free lunch. GHGs causing warming is a free lunch (ie no external energy required).
This is basically why I am a skeptic, and do not want to pay increased taxes to solve a problem (GHG warming) that doesn’t even exist. GHG induced global warming is an apparition created by a computer program. It is a GIGO fraud. There is no GHG caused global greenhouse effect, because convective etc. feedback negates it. There is no need for carbon taxes or exchanges or carbon sequestration or the Kyoto treaty or IPCC, or Al Gore, or worrying about methane burps. Global warming caused by the sun exists (a rise of 4 W/m2 out of 1364 since 1700 increases solar insolation by ~0.3%, which matches the observed 0.84 K rise on the current 288 K absolute temperature), but it’s very difficult to change that, so I’ll adapt to it.
PS I am following Hank’s suggestion and not commenting on analogies – they have a tendency to break down. But for the record, the radiative releases are more like going through a giant screen with holes that are getting smaller due to increases in GHGs, rather than going over a dam. Radiative transfer is like standing in front of a giant wave. No matter how much you try to stop it, it is going to get around you one way or another.
& I agree with Steve, any tiny little delta increase due to GHGs gets a response from convective feedback. Also it works in reverse, any cooling effect gets a slowdown in convective feedback – eg at night there tends to be a very quiet no wind time at 3 am – ie the convection feedback has slowed the winds down below normal.
Also, if there is *no* CO2 in the atmosphere there will be convection. CO2’s direct influence on convection (via gas properties) is almost certainly negligible; it’s the radiative influence on the temperature gradient that matters. I think there’s still some discussion about convective parameterizations in GCMs, but whatever you choose has to be consistent with a variety of data. The problem with postulating an arbitrarily strong convection feedback, as John Dodds seems to be doing, is that it’s hard to see how to reconcile that with observed lapse rates etc.
Whether you like drains or weirs, my point was that there’s no mysterious violation of conservation laws when the lake level or temperature goes up – the increase is just the area under the curve when the inflow exceeds the outflow for a time (as in 140 above). A single drain at the bottom is sufficient, as long as the outflow isn’t turbulent (i.e. as long as the outflow is linearly dependent on water depth).
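For concreteness, this lake analogy can be sketched as a tiny stock-and-flow simulation. Everything below is illustrative – the function name and the numbers are made up – but it shows the point: while inflow exceeds outflow the level rises, and with outflow proportional to depth the level settles at a new equilibrium, conserving water throughout.

```python
# Minimal stock-and-flow sketch of the lake analogy (illustrative only):
# depth rises while inflow exceeds outflow; with outflow proportional to
# depth, the level relaxes to a new equilibrium, conserving water throughout.

def simulate(inflow, k=0.1, depth=10.0, dt=0.1, steps=5000):
    """k = outflow coefficient; equilibrium depth is inflow / k."""
    for _ in range(steps):
        outflow = k * depth          # outflow linearly dependent on depth
        depth += (inflow - outflow) * dt
    return depth

# Start at the old equilibrium (inflow 1.0 -> depth 10), then step up inflow.
new_depth = simulate(inflow=1.2)
print(round(new_depth, 3))  # approaches the new equilibrium, 1.2 / 0.1 = 12
```

The temporary rise is just the accumulated inflow-minus-outflow; no conservation law is violated at any step.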
re: 162. “This is basically why I am a skeptic, and do not want to pay increased taxes to solve a problem (GHG warming) that doesn’t even exist. GHG induced global warming is an apparition created by a computer program. It is a GIGO fraud.”
And there we have it. Previously it was pure arrogance for a layman to assume they knew more than the literally thousands of climate scientist modelers. Now it moves to a “head in the sand, no more taxes” approach despite astonishing, overwhelming scientific data and evidence to the contrary. And GCM analyses that have been peer-reviewed extensively. This is far beyond being a skeptic. It is flat out denial of science and the scientific method, plain and simple.
The GIGO fraud here is your contention that the outflow of energy from a system is less relevant than the inflow. Perhaps you could write down a few equations describing how the atmosphere as you see it works, so that we’d have something concrete to reflect on, not a mumbo jumbo stew of physical principles misapplied.
Re 164. Tom, I believe in the Laws of Physics and most of what is in the GCMs – I think they are a marvel of modern computing (& I used to do computer modeling for a living) – just that they MAY have mistakes in them or MAY be missing pieces such as convective feedback(?) (e.g. Gavin has said that the GISS model does NOT believe in leap years – i.e. it runs on 365.0 days per year. Now does this mess up any of the data that is time-dependent? So far Gavin says no, but it sure seems to have the potential to put springtime back in Christmas if you are not aware of it.)
Please try to address my technical problems – e.g. the 2nd large paragraph starting “However” above. Where is the misapplication? How can the physics (Stefan-Boltzmann) require the energy-out to be larger for a hotter GHG-caused temperature (this I accept), while the GCMs require a number lower than the energy-in to provide the source of the energy for global warming? To require a lower number for outflow means that we are NEVER in equilibrium, but it seems that the daily temp fluctuations will force us into equilibrium every day – this implies that the computer program is wrong. IS IT? Where did I mess up? How do I resolve this apparent conflict?
IF convective feedback HAS been incorrectly applied or partly missed in the GCMs, then this MIGHT solve MY problem- but it will also eliminate GHG warming.
As for outflow being less relevant than inflow, I think that the inflow might dictate what the outflow MUST be (i.e. rules of entropy), and regardless of what barriers (GHGs, dams, etc. – I liked the dam analogy, I would just prefer to talk about the real problem) we put in the way, my gut tells me that the energy will find a way to get around the barrier to make the equilibrium work.
So just HOW can we justify that the outflow in the computer MUST be less than inflow for the 250 years of the computer run, when clearly the daily temperature cycle will re-establish the equilibrium (at least for the atmosphere & ground – not sure about deep ocean equilibrium, BUT I also know that there is MUCH MUCH MORE energy stored in the land (e.g. the solid iron core of the earth) than in the ocean, & the GCMs do NOT address this either). I have questions that make the computer model seem to NOT work. I am asking for answers.
Any GCMers out there who know for certain that the convective feedback identified in #142, is in there? (Hint hint Gavin!!) Since apparently the mechanism for adding energy for GHGs is by calculating the extra energy that GHGs add by staying in the air longer (ie increased transit time to space) was the equivalent (reduced transit time)for convective feedback included? OR does the program just calculate how much energy is moved to space by the fixed number for convection (& conduction – which apparently does not even have a number in IPCC docs) & ignore the reduced transit time via this pathway? Just asking???
Further thought about inflow=outflow. and 142 above.
IF the energy required by the GCMs to create the rise in GHG induced temperature comes from the outflow to space (per Hank’s model in 137, which I thought was pretty reasonable), BUT IF the GCMs are required to have inflow=outflow @TOA (ie equilibrium – per #142 & the formal publications’ descriptions of the GCMs from GISS etc,) THEN WHERE IN (rhetorical) HELL does the energy come from to create GHG Global warming?
Do we have a failure to conserve energy in the GCMs? By chance does the calculation to create the GHG energy absorption (i.e. GHG spectrum absorption etc., which is as real as it gets) forget to create the mechanism to return this energy to space? Is it by chance the convective feedback mechanism?
Does this explain why the Solar can seem to account for the full measured 0.8C increase in global temperature (see #126), BUT the GCMs say that GHG warming should be 5 times larger than solar (1.5 vs 0.3 forcing -see Hansen et al above at the top.)
ie the GCMs forgot to conserve energy???
Careful with the answer to this – it implies that GHGs may NOT cause global warming.
Sorry people, BUT I am getting more and more confused.
Re #166 and “they MAY have mistakes in them or MAY be missing pieces such as convective feedback”
I’ve explained to you before that convection has been modeled in atmosphere models since 1964. Either you didn’t read my posts or you’re deliberately ignoring them. In either case, stop repeating something that isn’t true!
Also, your 4 W/m2 calculation for the Sun causing 0.6-0.8 K warming is off by a factor of three. I know because I’ve done that calculation, more than once, right here on Realclimate. Have you read what’s been previously posted here?
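The discrepancy BPL is pointing at can be made explicit in a few lines. This is a sketch under blackbody assumptions (the gray-body correction discussed further down changes the exact numbers): scaling temperature linearly with TSI, as in #162, overstates the response severalfold compared with the Stefan-Boltzmann quarter-power scaling.

```python
# Compare the naive linear scaling of temperature with TSI against the
# Stefan-Boltzmann quarter-power scaling (T ~ S^(1/4), so dT/T = dS/(4S)).

S = 1364.0          # solar "constant", W/m^2 (value used upthread)
dS = 4.0            # claimed rise since 1700, W/m^2
T_surface = 288.0   # K, surface temperature used in #162
T_emission = 255.0  # K, effective emission temperature

dT_linear = T_surface * dS / S        # the linear scaling used in #162
dT_sb = T_emission * dS / (4 * S)     # quarter-power (blackbody) scaling

print(round(dT_linear, 2))  # ~0.84 K
print(round(dT_sb, 2))      # ~0.19 K, several times smaller
```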
Maybe I am taking things out of context in relation to your comment; however, I am curious as to your assertion. If I look at the standard accepted TOA value I get a measurement of approximately 1364 Watts/meter^2. If I look at ITCZ measurements on clear sky conditions I get a surface measurement of approximately 850 Watts/meter^2. The percentage of the TOA value reaching the ITCZ region appears to be approximately 62.3%. If, as the satellites indicate, during periods of solar increase the value rises to approximately 1371 Watts/meter^2, and I multiply by the standard 62.3% incoming, then the 4 Watts/meter^2 would be valid for direct incoming solar energy. What is wrong with this value? Are you suggesting that not all things remain constant, and that as the solar energy at the TOA increases the direct energy penetration increases threefold?
Tom: Re 168. Yes, OK, the observed lapse rate is the wet adiabat. So what is the implication that I am not aware of?
When you add GHGs & the radiative absorption in the GCMs, the calculated change in temperature changes the adiabat/lapse rate also. In order for the GCM to work, the temp profile has to change to increase the ground temp by 3+ degrees (assumed to be caused by the GHGs), BUT the TOA temp has to be the same to satisfy the energy-in = energy-out condition from BPL. All I am postulating is that the change from the GHGs does not exist due to convective feedback, so the lapse rate is actually what it is today without the doubling of CO2. I do not understand the implications of your comment.
Re 163. I object to the “arbitrarily strong” characterization. IF the rise in temperature is caused by the radiative absorption by 390 ppm of GHG (I agree), then the proposed increase in convective feedback is spread out over a MILLION ppm of air. The observed change in the convection rate will be so small as to be virtually undetectable (390/1000000), & probably within the uncertainty of any parameterization of convection (that I was not aware of, thanks).
I do not understand how you can deny (implicitly) that an increase in ground/air temp by the GHG radiative effects (i.e. add 3+ degrees to the ground temp) will cause the convection/air velocity to increase. So my question remains (slightly modified & strengthened thanks to you): if convection was parameterized, and you radiatively increase the temperature by GHG absorption, then how do you account for the fact that increasing the air temp will increase the air velocity and convection and decrease the length of time that the convective air is in the globe? i.e. How do you account for “convective feedback”?
BPL: re 169. I am agreeing that convection has been modeled, & I agree that the GCMs have included an equilibrium energy-in equals energy-out at the TOA. My question is: did they model the CHANGES in convection due to the GHG warming? What I call convective feedback.
As for my calculation being off by a factor of 3, where is it off? It is a simple ratio of the change in solar insolation ( 4) to the value of the solar insolation (1364) taken from the IPCC chart ( http://www.grida.no/climate/ipcc_tar/wg1/245.htm). If you want to account for the geometry effects of the insolation hitting only one side of the spherical earth, then you have to do it to both the numerator and the denominator. If you want to use the GCM value of solar forcing of 0.3W/m2, then you have to compare it to the same thing, apples to apples.
I just wanted to share an interesting presentation I have just discovered. For a layman such as myself, I believe this presentation appears to encapsulate a number of resources and discusses their participation in the Earth’s energy budget. I think that much of the data here may address issues that do not appear to get much play in the popular press. I would be interested in hearing comments regarding the science discussed here.
In case there is confusion over the TSI indicated in the aforementioned presentation, in which the average value is attributed to be around 240-247 Watts/meter^2, the values can be much higher.
The reference below from the ARM.gov Western Pacific Solar Radiative Insolation site indicates that the values appear to be as great as 1200 Watts/meter^2 Total Solar Irradiance (and that is just for the long wave value).
(The 850 Watt estimate I used should be the average of the incoming full spectrum TSI during a clear sky day. If you average it across 24 hours and separate out the values according to the various spectrum values, then the average as indicated in the presentation above should be accurate.)
“… physical facts? We report experiments assessing people’s intuitive understanding of climate change. … The tasks require no mathematics, only an understanding of stocks and flows and basic facts about climate change. Overall performance was poor. Subjects often select trajectories that violate conservation of matter. Many believe temperature responds immediately to changes in CO2 emissions or concentrations. Still more believe that stabilizing emissions near current rates would stabilize the climate…. Such beliefs … violate basic laws of physics.”
Read the abstract; download and read the PDF. It will help understand why equilibrium isn’t reached until long, long after the carbon quits being added by burning fossil fuels. During that time, the planet heats up.
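The stock-and-flow point in that abstract can be illustrated in a few lines. The numbers below are purely illustrative (not a carbon-cycle model): so long as inflow (emissions) exceeds net outflow (uptake), holding emissions constant does not stabilize the stock – it keeps rising.

```python
# Stock-and-flow sketch of why constant emissions do not stabilize CO2:
# as long as inflow (emissions) exceeds outflow (net uptake), the stock
# (atmospheric carbon) keeps rising. All numbers are illustrative only.

def co2_stock(emissions, uptake_frac=0.5, stock=800.0, years=50):
    """stock in GtC; uptake removes a fixed fraction of annual emissions."""
    history = []
    for _ in range(years):
        stock += emissions * (1 - uptake_frac)   # net addition each year
        history.append(stock)
    return history

h = co2_stock(emissions=8.0)   # constant emissions, year after year
print(h[0], h[-1])             # the stock rises steadily: 804.0 ... 1000.0
```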
Re 171 etc. Unfortunately I don’t have time to fully respond at the moment (hope to before the thread closes) but a few notes in passing:
Re radiation vs. convection: the energy budget linked in 172 corresponds with others I’ve seen. (e.g. Fig. 2.1 in Global Warming: The Hard Science by L.D. Danny Harvey, Prentice Hall 2000) – the sensible and latent heat fluxes (conduction, convection, evapotranspiration) are 24 and 78 W m-2 vs. incoming solar of 342 and outgoing longwave of 235 W m-2. Thus the emphasis on radiation is understandable.
Re 167: I think 142 is right about convection but wrong about the TOA radiation balance constraint. Models may be tuned to get the TOA balance right in equilibrium, but TOA balance is emergent, not enforced (and given that the models have internal variability, equilibrium is a rather fuzzy notion). The correct way to ensure conservation of energy is at the grid cell level, i.e. you make sure that there’s no creation or destruction of energy within each atmospheric cell and in transport processes among cells. I agree that if models constrained inflow = outflow at TOA, there would be no source of heat to drive warming, but they don’t. So, if you instantaneously put a lot of GHGs into an atmosphere that starts in equilibrium, radiative outflow < inflow and temperature starts rising. As things warm up, outflow rises (more longwave, more convection) until equilibrium is reached at a higher temperature. Given the sensible & latent heat transport numbers above, it doesn’t seem very plausible for convection & conduction to play a role comparable to radiation (especially because latent heat transport also puts more moisture in the upper atmosphere, and that water vapor feedback traps more radiation).
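The adjustment described here – outflow temporarily less than inflow, then warming until balance is restored – is exactly the behavior of a zero-dimensional energy balance model. A sketch (the emissivity and heat capacity values are illustrative stand-ins, not GCM parameters):

```python
SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def relax(F_in, eps=0.61, C=1e8, dt=86400.0, T=288.0, steps=20000):
    """Zero-dimensional energy balance: C dT/dt = F_in - eps*sigma*T^4.
    eps is an effective emissivity standing in for the greenhouse effect;
    C is an illustrative heat capacity per unit area (J m^-2 K^-1)."""
    for _ in range(steps):
        T += (F_in - eps * SIGMA * T**4) * dt / C
    return T

T0 = relax(F_in=238.0)          # equilibrium lands near 288 K
T1 = relax(F_in=238.0 + 4.0)    # then add a ~4 W/m^2 forcing step
print(round(T1 - T0, 2))        # warms ~1.2 K as outflow rises to match
```

While the temperature is still rising, outflow is below inflow; at the new equilibrium, outflow again equals inflow, only at a higher temperature. No energy is created anywhere.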
Hank, Tom, Re 174/5
Thanks for taking the time to respond.
First comment: you say there is a long heatup time to get to equilibrium – I say it is IN THE GCM, & I think the GCM ignores convective feedback, which would eliminate the long buildup time. Nice logic circle we’re in!!
Why doesn’t the daily temp swing of 10-15 degrees on both sides of the equilibrium force the atmosphere to adopt the equilibrium in = out value? Since it passes through the equilibrium energy-in equals energy-out point, why would it go past it (except as driven by the daily forces less the inequilibrium part)? The daily temp cycles impact the entire atmosphere and a few inches of ground and ocean; there is no way to delay the effects of a GHG release until years out (unless you hide it in the deep ocean & it takes time (more than a day) to get the effect down there). The daily fluctuations will eliminate/compensate for the imbalance in the first few inches of ocean depth within a few days at most.
Corollary: does “convective feedback” exist or not? i.e. When you heat up the air by the valid GHG radiative absorption process, does this higher temperature create a natural increase (i.e. CHANGE) in convection or not? (e.g. does GHG warming cause stronger hurricanes, in one extreme)
If so, & I find it hard to deny, is it in the GCMs? I have not seen any numbers for it, & if the logic works then there should be a big negative convective/conductive feedback equal to the radiative & GHG warming increase.
Re the study – I quote Lincoln: you can fool all the people some of the time, some of the people all the time, etc. etc. This study can NEVER be scientifically conclusive. BUT very interesting anyway. Sorry guys, it just proves how gullible people are. It also proves that if you repeat a story (true or not) enough (i.e. GHGs cause warming) then people will believe it. Sorry, not conclusive – stick to the science please (to quote Gavin).
In the GCM, equilibrium is a fuzzy notion due to the modeling technique; in the world it is required for conservation of energy. It is absolute!
The Hard Science book quotes fixed numbers for convection etc. Do they ever change as a result of the GHG warming process, i.e. is there a convective feedback? True, radiative transport IS bigger, so look there first, but how can you ignore a tiny change in a whole lot of convective molecules?
Now if you look at daily temp changes, when the sun raises the temp, the sun causes winds to change, hotter air to rise faster, and at night the winds die down as the earth cools off. Why wouldn’t the same process work for GHG caused warming? ie Convective feedback exists.
I can see that adding GHGs creates a longer transport time in the air, hence warming, BUT wouldn’t the warmer temp create a shorter transport time in convection? (It would be very tiny, since a million molecules have to move faster to compensate for 390 extra absorptions.) If this is true then there is no delay time in reaching equilibrium, & the daily temp cycles don’t have to do much at all to re-establish it.
The only way I see out of this dilemma is to ask Gavin as a GCM expert: a) Is convective feedback in the GCM? Was it implemented when they added the GHG energy via the wavelength spectrum absorption process, or was it assumed that the computer program took care of the conservation of energy? And b) What is its value? i.e. My understanding of convective feedback is that it was overlooked, & so when the GHG process was added it adds in the extra energy from GHG warming and forgot to subtract it out by convective feedback. So we have an inadvertent net lack of conservation of energy.
In my view convective feedback is capable of solving all MY problems (so far) with the GCMs. but it also means that the GHG impact is simply a transfer of some of the transport process from radiative to convective. ie Mother Nature (physics) has a perfect feedback mechanism that maintains energy equilibrium instantaneously regardless of how we mess up the atmosphere.
AND since my version of the % rise in solar accounts for the observed 0.6 to 0.8 K rise, the system would work if convective feedback exists. CAN you comment on my calculation of the % temp rise in 126? It is extremely simple! Where can it be wrong? If it is correct, then how can the GHG warming be a ratio of forcings (i.e. 1.5 over 0.3) higher, which does NOT agree with the observed temperature increase?
The emission temperature of a planet, the temperature as measured from some distance away, can be found with this equation:
Te = (S (1 – A) / (4 sigma)) ^ 0.25
where Te is in kelvins, S is the Solar constant, A the Earth’s bolometric Bond albedo, and sigma the Stefan-Boltzmann constant. S at Earth’s orbit averages 1367.6 Watts per square meter, the Earth’s albedo is about 0.3 (assume this is exact for the moment), and sigma has the value 5.6704 x 10^-8 in the SI, which gives an emission temperature for Earth of 254.9 K.
Global warming since 1880 or so has been about 0.6 K. How much would the Solar constant have to have risen to provide that much of an increase? Solving for S, we have
S = 4 sigma Te^4 / (1 – A)
Plugging our results for Te back into this equation, it gives S = 1367.9 (which shows the problems of using significant digits). If we take Te = 254.9 – 0.6 = 254.3, we get S = 1355.1. In other words, the Solar constant would have to have increased by 12.8 Watts per square meter to get the observed warming. The Solar constant has, in fact, risen by about 1 Watt per square meter over this time period. Solar can’t do it alone without violating conservation of energy.
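BPL's blackbody arithmetic is easy to reproduce; a minimal sketch of the same calculation (the 0.6 K and the solar-constant values are the figures quoted above):

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.3

def emission_temp(S):
    """Effective emission temperature (K) for solar constant S (W/m^2)."""
    return (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

def solar_constant_for(Te):
    """Invert the balance: the S needed to sustain emission temperature Te."""
    return 4 * SIGMA * Te**4 / (1 - ALBEDO)

Te = emission_temp(1367.6)                  # ~254.9 K
dS = solar_constant_for(Te + 0.6) - 1367.6  # ~13 W/m^2 needed for 0.6 K
print(round(Te, 1), round(dS, 1))           # vs the ~1 W/m^2 observed
```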
There may be some feedback in the Earth system that “multiplies” changes in the Solar constant. But until the Solar freaks identify what that feedback is, their theory fails on basic scientific grounds.
[Response: Paul, the argument is basically sound (and the answer qualitatively correct) but one needs to take into account that the earth is really a ‘gray body’ and not a ‘black body’ to get the right answer. This increases the sensitivity to solar irradiance changes just as it increases the sensitivity to longwave (i.e., greenhouse) radiative forcing. There is a very nice site here at NYU which provides a simple energy balance model where you can tweak the longwave emission parameters away from their blackbody values (for example, to accommodate the existence of the greenhouse effect), and it’s easy to do simple experiments where you change the solar constant by a small amount, etc. I highly recommend this for folks who are interested (Matlab required!). One can learn quite a bit by playing around w/ simple models and getting a feeling for how the radiative balances work. -mike]
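A one-parameter version of the gray-body point. The emissivity 0.61 below is just a round value chosen so the balance lands near 288 K, not a measured quantity; the point is that a lower effective emissivity raises the base temperature, and since dT/dS = T/(4S), the sensitivity to solar changes rises with it.

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S, A = 1367.6, 0.3

def surface_temp(eps):
    """Gray-body balance: eps * sigma * T^4 = S * (1 - A) / 4.
    eps = 1 recovers the blackbody emission temperature (~255 K);
    eps < 1 is a crude stand-in for the greenhouse effect."""
    return (S * (1 - A) / (4 * SIGMA * eps)) ** 0.25

for eps in (1.0, 0.61):
    T = surface_temp(eps)
    sensitivity = T / (4 * S)   # dT/dS from T ~ S^(1/4)
    print(round(T, 1), sensitivity)  # lower eps -> higher T and sensitivity
```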
Re #176 and “The Hard Science book quotes fixed numbers for convection etc. Do they ever change as a result of the GHG warming process, i.e. is there a convective feedback?”
Why don’t you get ahold of John Houghton’s “The Physics of Atmospheres” or Grant W. Petty’s “An Introduction to Atmosphere Science” and find out? The equation for the dry adiabatic lapse rate is very simple and has nothing to do with how warm the air is. The equation for the saturated lapse rate depends on how much water vapor is in the air and does have a temperature term, but doing the math shows that the effect of a few degrees temperature change on the lapse rate is trivial. You’ve got a qualitative theory (“changes in the lapse rate offset warming from CO2”) about what should be a quantitative problem. Do the math!
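For anyone who wants to actually do the math: here is a sketch of the standard textbook saturated adiabatic lapse rate (constants are approximate round values), showing that a few degrees of warming shifts the lapse rate by only a few tenths of a K per km, while the dry adiabat has no temperature dependence at all.

```python
import math

# Approximate physical constants (SI): gravity, specific heat of air,
# gas constants for dry air and vapor, latent heat, molar mass ratio.
G, CP, RD, RV, L, EPS = 9.81, 1004.0, 287.0, 461.5, 2.5e6, 0.622

def esat(T):
    """Saturation vapor pressure (Pa), Clausius-Clapeyron approximation."""
    return 611.0 * math.exp((L / RV) * (1.0 / 273.15 - 1.0 / T))

def moist_lapse(T, p=100000.0):
    """Saturated adiabatic lapse rate (K/m), standard textbook form."""
    rs = EPS * esat(T) / (p - esat(T))          # saturation mixing ratio
    num = 1.0 + L * rs / (RD * T)
    den = CP + L**2 * rs * EPS / (RD * T**2)
    return G * num / den

print(round(G / CP * 1000, 2))              # dry adiabat: ~9.77 K/km, no T term
print(round(moist_lapse(288.0) * 1000, 2))  # ~4.7 K/km at 288 K
print(round(moist_lapse(291.0) * 1000, 2))  # ~4.4 K/km three degrees warmer
```

A ~0.3 K/km shift for 3 K of warming is the kind of "quantitative, not qualitative" answer BPL is asking for.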
However, the saturated adiabatic temperature differential at pressure does not appear trivial. At approximately 500 mb most condensation appears to occur in relation to stratus cloud formation. 300 mb appears to define cumulonimbus formation, with peaks forming near 200 mb. The temperature deviation at 500 mb runs from -20 deg C to over -40 deg C in the upper ranges (200 mb), and the change in altitude can vary as much as 6 km over a widespread area exceeding a column of 25 sq km. I find it kind of difficult to try to simplify this characteristic, which is why I was curious if maybe you could provide some clarification?
(As an aside comment, there appears to be a curious association between atmospheric phenomena and plate tectonics. As a natural phenomenon, the size of a convective plume rivals the size of a plate tectonic “hot spot”, though as to temperature the relationship is not even close. (However, this is likely related to the density of the material. If I remember correctly, the heat content of the mass is related to the density at pressure, in addition to the latent heat capacity.) And if you look at the plate boundary physics you could almost see an analogy to atmospheric fronts. It would almost seem that we have a physical model below our feet to help us understand what is happening over our heads. The problem is that the transfer of heat between the layers in solid/plastic form and in liquid/gas form may be much different, as the layers in the less dense material appear to deform to accommodate the exchange or transfer of energy.)
My thanks for your explanation. It seems so simple when you do the equations. For me, I see far too many variables to be able to comfortably base my confidence of a lack of effect on a simple extraction for a change of the order you are indicating, especially when I consider that the mechanics of the heat transfer remain to be defined. However, that is why we have scientists: to separate out the pepper from the fly speck and to keep things in perspective.
Re 177, Hank et al
Your denial that hotter air rises faster is totally unbelievable. i.e. Convective feedback must exist. The sun on a daily basis warms the air, & the hot air rises to compensate (& the atmosphere expands). My statement on the daily temperature cycling (it causes both vertical AND horizontal convection/weather) was intended to point out that the GCM contention that an energy imbalance can exist for 100s of years is impossible. If in exceeds out, and the differential MUST exist from top to bottom of the atmosphere, then before the hotter air can migrate to the deep ocean, the daily temperature cycling will force the hotter air at the bottom into an overall equilibrium: i.e. hotter air will rise – or, more correctly, since GHGs have heated the air up more at the bottom, the sun-induced daily warming will add more heat to the top & less at the bottom to force the equilibrium – i.e. effectively hot air rising, even if not in actuality. This establishes the energy-in = out equilibrium EVERY DAY, so a 100-year dis-equilibrium as calculated by the GCMs is impossible. YET another in the line of physics failures that the GCMs require. Note that IF the GCMs add in convective feedback equal to GHG warming then the problem goes away.
IF daily solar heating causes (effective) convective feedback – to re-establish in = out – then why doesn’t GHG warming, which adds 3+ degrees to the ground temp, ALSO cause convective feedback, i.e. hotter air/energy rising faster? & then WHERE IS IT IN THE GCMs?
[Response: This is probably a mistake, but here goes. The long term imbalance is due to energy imbalances at the surface of the ocean – not in the middle of the atmosphere. And these occur because it takes a long time for surface temperature anomalies to affect the deep ocean. Convection of course acts in the GCMs, and that is the principal reason why the atmosphere (particularly in the tropics) stays near a moist adiabat. This of course implies relatively constant relative humidity, which is a big part of the water-vapour feedback. Thus when you ask ‘where is the convective feedback in GCMs’, the answer is in the water vapour feedback. -gavin]
A different perspective on the same problem: (try expanding your views a little)
THE GREENHOUSE EFFECT:
There is no argument that extra GHG absorption causes warming within the radiative transport mechanism.
The mechanism, as I have been taught (painfully) on this site, is that the addition of GHG absorption causes the energy to stay in the GHG for a few extra microseconds of residence time before the energy is (mostly) returned to the air by molecular collisions, as the energy is transported from ground to space in a series of millions(?) of absorptions.
NOW if you look at all three transport mechanisms – convection, conduction and radiation – it is actually the sum of the residence times in ALL three that determines what the extra temperature rise is due to transport through the atmosphere. (Note that this INCLUDES GHG warming, i.e. the greenhouse gas heating effect, as just a single part.)
i.e. The greenhouse effect says that the absorption by GHGs causes the actual ground temp (288 K) to be higher than the theoretical 255 K.
In fact the greenhouse effect is misnamed, since it is actually the TOTAL residence time of the energy as it is transported from ground to space that causes the difference in temperature: i.e. convection residence time plus conduction residence time plus radiation residence time.
I propose that the total residence time is dictated by the in = out equilibrium and by the quantity (i.e. density vs distance) of the air – the ideal gas law? (Is this valid/reasonable?) – regardless of individual constituents (limited by density changes). Because, in all the transport processes, the energy can move between transport mechanisms in the air by molecular collisions every few centimeters: i.e. energy can easily move from conduction to radiative transport & back again, etc.
Thus if you change the air constituents (i.e. add GHGs) then the energy transported by radiative effects will increase, but the increase in GHG residence time will cause a feedback and a decrease in conduction etc. residence time (i.e. hotter air rising faster). i.e. Increasing GHGs causes convective feedback to decrease the convection residence time, in order to make the air conform to the ideal gas law.
IF you can just add GHGs and add energy (by forcing a failure to comply with the in = out equilibrium for 100s of years) then the air no longer complies with the ideal gas law, because the temp increase is larger than the trivial increase calculated from the density change of substituting 390 ppm of CO2 for O2. i.e. Just adding GHG warming in the GCMs forces the ideal gas law to fail.
Now go reread #126.
If convective feedback exists nearly equal to GHG warming, then this GCM failure to comply with the Laws of Physics ceases to exist – but so does warming.
(Sorry, I told you I was looking at the same problem!!)
In all seriousness,
Please think about it before you fire off a no it can’t be comment.
Conservation of energy and equilibrium are not the same thing. If the world required equilibrium in order to conserve energy, a pendulum couldn’t oscillate. Equilibrium is a convenient notion for comparative static analysis in both physics and economics, but the atmosphere and the rest of the world are in more or less constant disequilibrium, and GCMs reflect that.
The convection numbers I cited were from an energy balance diagram, not a model. The point is that a small flow (convection) can’t make up for changes in a large flow (radiation) unless you postulate an unrealistically high gain. To draw another bad analogy, it’s like assuming that a rise in the GDP of Delaware could compensate for a fall in California.
Instead of badgering Gavin, why not just Google & read a few papers on convective parameterizations in GCMs? There are lots.
“….. a significant increase in the height of the tropopause – the boundary between the turbulent troposphere, which is the atmosphere’s lowest layer, and the more stable stratosphere that lies above it…..
The team’s results show that human-induced (anthropogenic) changes in well-mixed greenhouse gases, which are fairly evenly distributed in the atmosphere, and ozone, a greenhouse gas that is found in higher concentrations in the stratosphere, are the primary causes of the approximately 200-meter rise in the tropopause that has occurred since 1979. In their research, team members used advanced computer models of the climate system to estimate changes in the tropopause height that likely result from anthropogenic effects. They then searched for, and positively identified, these model-predicted ‘fingerprints’ in observations of tropopause height change.”
The tropopause is among the most fundamental structures in the atmosphere. It is the interface between the water vapour-rich and dynamically active troposphere below and the ozone-rich and relatively quiescent stratosphere above. Traditional radiative-dynamical theories suggest that the position of the tropopause is determined by radiative-convective equilibrium in tropical latitudes. But this theory is not really consistent in the extratropics and we require a new theory to explain the position of the tropopause here. Dr I.Held (GFDL, Princeton) has already suggested that the position of the extratropical tropopause is determined by baroclinic eddies, which are the predominant form of large-scale motion in the extratropical troposphere. Professor Haynes and I are using numerical models, new diagnostic tools, and analytical techniques to try to assess Held’s suggestion. ….”
You can _see_ the tropopause as well as read about it.
“When the rising cumulus columns meet the tropopause, or base of the stratosphere, at about 15,000 meters (50,000 feet), they reach a ceiling and can no longer rise buoyantly by convection. The stable temperature of the stratosphere suppresses further adiabatic ascent of moisture that has been driven through the troposphere by the 5-6.8 degree/kilometer (8-11 degree/mile) lapse rate.”
Re response to 182.
Hi Gavin, yes it was a mistake to start talking to me again. :) BUT I appreciate knowing that you read all this stuff. As you well know I am stubborn BUT I WILL learn when you show me the physics, or the process or my errors.
If convective feedback is in the WV feedback (i.e. the negative convective feedback reduces the positive WV feedback), then where is the convective feedback for the increased vertical velocity of the O2 & N2 in the air? GHG absorption results in GHG collisions with ALL the air, thus raising its temperature. (i.e. Hotter AIR, not just GHGs, rises faster.)
Your statement is not very convincing – what are the values for it? How was it calculated? Did you calculate just the energy transport, or did you calculate the reduced temperature effect from the reduced residence time in the AIR also? (i.e. the reverse of the GHG residence time effect)
Any comment on 183 as a new way to view the greenhouse effect, which is really a residence time effect NOT limited to GHGs?
Continuing your thought, the mean residence time is the harmonic sum of the individual residence times, so tau = 1/(1/tauRad+1/tauConv+1/tauCond). Conductivity of air is lousy so assume for the moment that tauCond is large so its term disappears. In equilibrium, the radiative and convective heat flows are Q/tauRad and Q/tauConv, where Q is heat. If the average flows are as in 175, 24 W m-2 for sensible and 235 W m-2 for outbound longwave, then tauConv must be about 10x tauRad, so for any change in tauRad (from GHGs) you need tauConv to change about 10x as much to compensate. Even if you take the convective flux as 24+78 W m-2 (including latent) you need better than a 2:1 change in convection, and some way to explain away the radiative effect of moving more water vapor up.
I think things get even worse when you consider the real physics rather than the linear back of the envelope. Convection depends on the temperature gradient, not absolute temperature, so tauConv is only going to change if the gradient changes. But strong convective feedback would tend to hold the gradient constant, so radiation would again dominate.
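The 1/tau arithmetic above can be checked in a few lines. This is a sketch only: the 24, 78, and 235 W m-2 fluxes are the energy-balance figures quoted in the thread, and the tau ratios follow from the fact that, in equilibrium, each flow equals Q/tau for the same heat store Q.

```python
# Check of the residence-time ratio argument.  In equilibrium each flow
# is Q/tau_i for the same heat store Q, so the ratio of time constants
# is the inverse of the flux ratio.  Fluxes (W/m^2) are those quoted in
# the thread's energy-balance figures.
sensible = 24.0                # convective (sensible heat) flux
latent = 78.0                  # latent heat flux
olr = 235.0                    # outgoing longwave radiation

ratio_sensible = olr / sensible                # tauConv/tauRad, sensible only
ratio_with_latent = olr / (sensible + latent)  # including latent heat

print(f"tauConv/tauRad (sensible only): {ratio_sensible:.1f}")    # ~9.8
print(f"tauConv/tauRad (incl. latent):  {ratio_with_latent:.1f}")  # ~2.3
```

So a GHG-driven change in tauRad needs roughly a tenfold larger compensating change in tauConv (or better than 2:1 even counting latent heat), which is the point made above.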
Why doesn’t the daily temp swing of 10-15 degrees on both sides of the equilibrium force the atmosphere to adopt the equilibrium in=out value? Since it passes through the equilibrium energy-in equals energy-out point, why would it go past it (except as driven by the daily forces less the disequilibrium part)?
The argument that diurnal variations pass through the equilibrium point is incorrect. The equilibrium point varies with the forcing, which is low when the sun is down and high when it’s up. You are correct in noting that a first-order system won’t pass through its equilibrium point, but that’s not what’s happening here. A change in GHGs will change the balance of time during the day that temperature is above/below its equilibrium point, and thus change the average energy balance over the day.
I think I can see Mr. Dodds’ point though: the transfer of energy is not a constant 300 +/- 50 watts/meter^2. The Western Pacific ARM.gov site demonstrates a downwelling longwave value of between 500 and 1000 watts for approximately 9 hours per day. The shortwave downwelling irradiance appears to run about 400 watts 7×24, with peaks around 450 watts.
Would this not indicate a total TSI at the surface at the Western Pacific center of approximately 1200 watts for 9 hours/day and around 400 watts for the remaining 15 hours? Would not the 800 watts of energy difference cause a strong convective rise of atmospheric gases / vapors for some period of time every day? This then would also be followed by a period of relative lack of convective forces.
The character of the convective energy would seem to be an excellent application of a Granger statistical test for causality to determine the relationship between the total incoming energy and the total outgoing energy. (It would be even more interesting as the incoming shortwave would now be emitted as longwave. Hence, the combination of the downwelling longwave reradiated back out and the shortwave energy converted to longwave radiated energy should signify a major imbalance in which the total energy balance is maintained; however, the character of the measure has now changed significantly.)
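For what it’s worth, a Granger-style test of the kind suggested can be sketched with ordinary least squares on synthetic data. The series, the lag choice, and the coefficients below are all invented for illustration; a real analysis would use measured flux time series (and something like statsmodels’ Granger test with proper critical values).

```python
import numpy as np

# Minimal Granger-style test at lag 1: does adding lagged x improve an
# autoregressive prediction of y?  Synthetic data where x drives y.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

Y = y[1:]
X_r = np.column_stack([np.ones(n - 1), y[:-1]])           # restricted
X_u = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])   # unrestricted

def rss(X):
    """Residual sum of squares of an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return float(np.sum((Y - X @ beta) ** 2))

rss_r, rss_u = rss(X_r), rss(X_u)
F = (rss_r - rss_u) / (rss_u / (n - 1 - 3))   # 1 restriction, ~496 dof
print(f"F statistic: {F:.1f}")   # large F => lagged x helps predict y
```

With flux data the practical obstacle is the one raised below: direct, well-sampled measurements of convection are scarce.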
If we take that even farther we now have increased the longwave upwelling imbalance even further as the surface energy with the first 10-25 meters will act as a feedback making the downwelling Longwave a product of not only the TSI; but, will include a portion of the reflected upwelling.
Oh well, I guess this has all been taken into consideration, so neither Mr. Dodds nor I need worry.
Re #188 and “Would this not indicate a total TSI at the surface at the Western Pacific center of approximately 1200 watts for 9 hours/day and around 400 watts for the remaining 15 hours?”
No, it wouldn’t. There is no Solar Irradiance at night.
If you want the mean figure for Solar energy going into the Earth system, it’s
F = (1/4) S (1 – A)
where F is the flux in question, S the Solar constant (1367.6 Watts per square meter on average), and A is the Earth’s bolometric Bond albedo (Goode’s 1998 estimate is 0.298). F is thus about 240 Watts per square meter. The 1/4 factor integrates the facts that the Sun is down at night and that the Earth is a sphere, so some irradiation “glances off,” so to speak.
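A quick check of the arithmetic, using the S and A values quoted above:

```python
S = 1367.6   # solar constant, W/m^2 (value used in the comment)
A = 0.298    # bolometric Bond albedo (Goode 1998, as cited above)

F = 0.25 * S * (1 - A)   # mean absorbed solar flux per unit surface area
print(f"F = {F:.1f} W/m^2")   # -> about 240 W/m^2
```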
The .25 value in relation to the area of direct energy seems a little contrived. If I look at most of the UV-A/B warnings, they usually indicate that the local hours of 10 AM through 2 PM are the time of highest incoming energy. This indicates that 4 hours are the point of highest gain, or 1/6th the rotation of the earth. Are you suggesting that this value should be expanded to 9 AM through 3 PM? (It does seem interesting that in Oct. the Western Pacific radiative pyrometer seems to indicate that the incoming longwave energy exceeds 800 watts for nearly 8 hours per day, or 1/3rd the rotation.)
You are correct that, using the single 1 sq. km column definition, there would not be a constant TSI; however, on a global scale that is not accurate, as the black (gray) body has a constant TSI over roughly 40% of its surface and a constant radiative sink over about 60%. The fact that the black (gray) body is rotating must play a part in the difference in the radiative transfer, for if it were not rotating the total heat transfer would be much different. (It becomes kind of confusing when I am discussing the apparent input and output of energy based on one model while your formula applies to the evidence under a different model; but that was my fault, my apologies.)
If you were to place an IR sensor in space, opposite the sun, to view the earth’s surface, you would clearly see a glowing surface of higher intensity on the left with a very low level emission on the right. This would indicate that the emission levels are not constant, as Mr. Dodds was attempting to indicate. Also, this would seem to indicate that the emission center of highest intensity is not directed back at the source but offset by about 25% from the angle of incidence. (I wonder if there is a chance that standing waves or saturation can play a part in limiting the incoming energy?)
I begin to wonder if a simple radiative modeling analysis is useful for anything other than obtaining a ballpark value, with a variation of up to roughly 40% from the apparent median. It just concerns me that, with this high a variation, a level of confidence such as you seem to share can be so high. Then again, it could just be a matter of you dumbing the data down so that a layman such as I can understand it. My thanks for your contribution.
I don’t in any way dispute that daily energy flows vary. I was just pointing out that daily variation doesn’t restore balance by driving the system through its equilibrium point. Even if the atmosphere is roughly first order, the equilibrium point varies with daily insolation, and radiation varies as much as convection.
Granger testing convection is an interesting idea, but I suspect that there aren’t many good direct measurements of convection to work with.
I think the downwelling/upwelling radiation you discuss is fully accounted for.
All these quantitative analyses are academic. One has to remember that most of Earth’s surface is ocean, and we really don’t have nearly enough of an idea about ocean circulation. Until we do and can say what the rate of heat uptake by the ocean is, there’s no realistic computation possible about joules in vs. joules out (in the centuries or less time frame) which is, after all, the ultimate measure of “global warming”, correct? The atmosphere holds about as much heat as the top two meters of ocean.
Then, after we have a handle on that, we need to consider the changes that surface heat concentration will induce in evaporation, which will affect cloudiness, iterate…
Thanks for the clarification. Are you suggesting that the equilibrium point is based globally or on a column model? If the cloud/aerosol contribution is as low as I have seen it attributed, would this not say that globally the equilibrium point should be fairly stable? Or is it that the aerosol/cloud contributions are much higher and the equilibrium is much more variable than seems attributed in most GCMs?
As to calculating values for convection, I can see that would be very difficult, as you apparently could not use cloud height and dimensions or water vapor from satellite sampling. However, as you said it is very difficult to determine the various photometric energy balances, between reflection, frequency transformation, absorption, radiation, convection, conduction, forcing (direct and indirect), feedbacks (direct and indirect), etc., …
I have to admit the current state of the art in climate-change source data seems sparse, and yet so many intelligent individuals seem so adamant about the physical processes that I wonder if the reported hypotheses are worth the read. I guess that is why there are so many questioning the premises in regards to climate change?
Would you all define “equilibrium point” as you’re using it?
I used it meaning — the final temperature reached after the planet stabilizes its heat balance, the way it’s used in defining climate sensitivity (the change in temperature starting at equilibrium, then doubling CO2 and waiting a few centuries til equilibrium is reached, with the planet three degrees or so warmer).
I think others are using it meaning daytime local temperature?
As we are using it in this case, the equilibrium point should be the residual global temperature median for a given total solar energy flux level for 1 rotation of the earth for a given angular relationship between the equator of the earth and the sun in relation to the standard planetary orbital plane.
You apparently are using it as a projection into the future based on changes in the value of the radiative flux. Your application would seem to lack descriptive value in that you would not know what the “standard” equilibrium point would be comprised of. Is this the desired use of the term?
As a standard, to define the change in the trend line, may be a better application, IMHO. It would seem to make more sense to use it as a comparative analysis of the equatorial angular relationship, year to year temperature median as it changes from year to year regardless of the flux in the contributing elements.
This then could be the standard measure you then can modify in the models by changing the values of or add values for the contributing elements of the equilibrium standard. This way the equilibrium point would be very descriptive as it would be tied to a specific model or time period. I will leave it up to you to define what you believe should be the standard use of the term and comply with the consensus in the future.
But is climate not the trend in weather? Is it climate that defines change in weather over time, or the change in weather over time that defines climate? I suspect it is the latter, and hence the energy balance over time due to the various influences that drive the day to day weather is what becomes climate. Are you suggesting that weather and climate are not related? (Merriam-Webster defines climate as: 2 a : the average course or condition of the weather at a place usually over a period of years as exhibited by temperature, wind velocity, and precipitation.) Are you suggesting that meteorologists would not make good climatologists?
(Actually, I believe what you are referring to is, as your link pointed out, the IPCC definition of equilibrium climate sensitivity, and not simple solar radiative equilibrium, as we have been discussing in these last few posts.) Does this help reduce the confusion?
I meant equilibrium in the normal sense, i.e. a condition in which the states of the system are constant, with inflows=outflows. The reason I originally said that this was a fuzzy notion in GCMs (and the real system) is that they’re not low-order systems with constant average forcings, and equilibria might not even exist. Nevertheless its sometimes helpful to think about them as if they’re low-order systems and to talk about equilibrium, under the assumption that the envelope of variable behavior moves in a predictable, low-order way. But it’s important not to confuse the map with the territory. In particular, it’s not OK to invoke variability arguments in a 1st order mental model – you need to explicitly model what’s going on if you want to talk about the details, which is why there are GCMs.
Would it not be a reasonable approach to establish a gold standard of earth solar energy flux equilibrium? Why not start with a pure theoretical 24-hour rotating blackbody, with 1/2 receiving full-spectrum 1370 watts from an object of approximately 30 arc min., and 1/2 of the blackbody exposed to less than 3 K, and establish a gold-standard equilibrium point.
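For the idealized rotating blackbody proposed here, the “gold standard” equilibrium temperature follows from balancing the absorbed flux S/4 against sigma*T^4. This is only a sketch: it assumes rotation fast enough for a uniform surface temperature, and the ~3 K background is neglected as negligible.

```python
# Equilibrium of an idealized rotating blackbody: absorbed flux S/4
# (sphere, zero albedo) balanced against Stefan-Boltzmann emission.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1370.0         # full-spectrum incident flux from the comment, W/m^2

T_eq = (S / 4.0 / SIGMA) ** 0.25
print(f"blackbody equilibrium temperature: {T_eq:.0f} K")   # ~279 K
```

The real Earth sits lower (~255 K effective) because of albedo, and higher at the surface because of the greenhouse effect, which is roughly the comparison the comment is after.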
Once this has been established, why not establish the current observed base value of the real thing. By taking a known full-spectrum solar TOA value and measuring, with the same instrument, the emitted or upwelling TOA value, you should be able to discern the residual energy added to the earth. If you performed a Granger causality analysis to check for the trend or lead/lag variability, you could then clearly state the current equilibrium point.
Once you have established these two values, you can then add in the IPCC contributors and the variability of their values, plus their forcing or feedback values, to determine if the contributor values result in a positive or negative value in the relation between the gold standard and the observational data. If, as everyone suspects, there should be a positive value, then you can specify once and for all a specific measure that is only refutable by varying the contributors.
Once this is established we can start to ascertain the contributor values in a clear and concise manner and place them in the GCMs. We then increase the accuracy of the value and variability or period of each contributor as the measurements improve. The end result would be a very good model, at least of the equilibrium, and would establish the standards of measure as we move towards the final product.
Maybe this is already being done and the data simply is not being published. Maybe no one wants to measure this data this accurately? Then again, it is possible that if these standards and a standard approach were applied, then maybe the opportunity for an individual team to shine is lost. If we don’t consider bringing this approach to the table, I am afraid that there may be political forces that will, and they will directly tie future research funding to a program such as this. If the community is not out in front of a project like this, it is possible that it could lose its right to choose. Is that what we really want? If a government-mandated program for a GCM became a political priority, I am afraid this could become a crushing blow to the many diverse programs now being investigated.
Re #201 and “Maybe this is already being done and the data simply is not being published.”
Maybe it’s already being done and you’re simply not aware of where it’s been published. Most stuff about radiative equilibrium and planetary temperatures can be found in introductory astronomy texts as well as climatology papers and texts.
Thanks, that is something I plan to explore further. If the satellite is at the L1 point and my memory serves me correctly, it should place the object directly in line between the earth and the sun. I wonder if they are going to use a flat mirror and a set of tuned/filtered pyrometers, or a set of prisms and multiple pyrometers? I think the former would reduce the probability of deviation between detectors, though the latter is likely less expensive.
If you have any suggestions for links, that would be valuable. I have been researching the data sets for nearly 9 years now and have not yet found data in this basic format with a level of confidence that exceeds 97% or a margin of error less than 3%. If you have a reference, please ensure it is publicly available, as I would not qualify for most, not being in the profession. My thanks for your assistance.
I wrote the NASA project manager to see if they have an update on the status. Apparently this was one of the lost projects in the payload roster. Reviewing the instruments in the pdf was very interesting; the high sensitivity/resolution of the CCD and a lack of shielding in the wake of possible ICME solar flux in the next few years may be worrisome. It is too bad they have probably already built the craft. If they could come up with a cheap booster alternative, or hitchhike a ride for a throwaway version, at least grabbing 12 months of data for a baseline would be welcome…
In light of your link, I wonder if there is a measured source as had been suggested by another poster. Who knows there might have been other birds that have experiments that have the tools to at least capture samples that can be extrapolated.
Perhaps I was too tongue-in-cheek, because I meant that Motl goofed up the math in the process of pointing out the logarithmic effect. Since he’s a theoretical physicist we can excuse his innumeracy. :) I suspect that Motl sources his 1.0C and 0.76C numbers (2x vs. present forcing) from Lindzen, rather than using his own equation. Ironically, he cites this RC post as backing for his views, even though it clearly points out the error of neglecting thermal inertia. I guess we have to excuse his illiteracy, too.
Motl also misstates other work he cites. For example, he says that Annan’s reply to Hegerl concludes that the actual sensitivity is about 5 times smaller than the Hegerl et al. upper bound, but reading the actual reply, it’s clear that the upper bound refers to Hegerl et al.’s naive prior, not their final result, and that the 5x should be at most 4x, even if you think it makes sense to compare upper bounds to means (less than 2x otherwise).
The thermal inertia lag is nontrivial – it means that current temperature is less than the equilibrium temperature expected from current forcing by a factor of tau*g, where tau = time constant of thermal inertia and g = growth rate of emissions. That could easily be 50%, which means that even if atmospheric CO2 levels off today, there’s as much warming in the pipeline as we’ve already seen. Of course, emissions themselves are above uptake, so the equilibrium temperature implied by today’s emissions is more like 4x current (1/44%/50%). And then there’s the inertia in the economy….
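The tau*g shortfall can be illustrated with a first-order toy model. The 30-year time constant and 1.7%/yr growth rate below are assumed round numbers, not estimates: a first-order system chasing an exponentially growing equilibrium settles onto T_eq/(1 + tau*g), i.e. a fractional shortfall of tau*g/(1 + tau*g), which reduces to ~tau*g when tau*g is small.

```python
import math

# First-order system relaxing toward an exponentially growing equilibrium:
#   dT/dt = (T_eq(t) - T) / tau,   T_eq(t) = exp(g * t)
# Asymptotically T = T_eq / (1 + tau*g), a fractional lag of tau*g/(1+tau*g).
tau = 30.0   # assumed thermal time constant, years
g = 0.017    # assumed exponential growth rate of forcing, per year

dt = 0.01
T = 1.0                      # start on the equilibrium trajectory
for step in range(int(500 / dt)):
    t = step * dt
    T += dt / tau * (math.exp(g * t) - T)   # Euler relaxation step

shortfall = 1.0 - T / math.exp(g * 500)
print(f"simulated shortfall:  {shortfall:.3f}")
print(f"tau*g/(1 + tau*g):    {tau * g / (1 + tau * g):.3f}")
```

With tau*g around 0.5, the shortfall is roughly a third of the equilibrium warming, which is the order of magnitude of the "warming in the pipeline" claim above.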
On forcings, from the GISS 2005 paper cited above (I think, this is a very long thread).
“A CO2 standard seems better not only for the practical reason given above, but because actual solar forcing is complex and the climate response to it is not well known. Solar irradiance change has a strong spectral dependence [Lean, 2000], and resulting climate changes may include indirect effects of induced ozone change [RFCR; Haigh, 1999; Shindell et al., 1999a] and conceivably even cosmic ray effects on clouds [Dickinson, 1975]. Furthermore, it has been suggested that an important mechanism for solar influence on climate is via dynamical effects on the Arctic Oscillation [Shindell et al., 2001, 2003b]. Our understanding of these phenomena and our ability to model them are primitive, which argues against using solar forcing as a standard for comparing simulated climate effects. ”
Doesn’t this imply that the CO2 forcing is really a forcing for CO2 and all other factors not explained by the other forcings? In particular, wouldn’t it include solar forcings that resulted from solar effects not included by the assumptions on solar forcing? Specifically, that the true solar effect would be expected to be lagged, frequency dependent and non-linear?
Re 187 Tom
I see the reasoning for why the magnitude of the convective (+ conductive) feedback is NOT sufficient to compensate for the added GHG forcing (the 1/tau analysis); however, let’s consider that my misidentified original “convective feedback” concept should really be defined as the response to the added GHG warming at the ground & the energy disequilibrium at TOA. Any temp increase results in a feedback of convection & conduction AND radiation in order to return to equilibrium, as required by the Stefan-Boltzmann equation applied to the GHG warming effect. So I can see that the combination of all three would have the magnitude to compensate.
My question then becomes: WHEN does this return to equilibrium occur? The GCMs say that it takes many, many years, & I am NOT sure why it takes so long. It would seem to me that whenever a single added GHG raises the temperature by delta T, the Stefan-Boltzmann (SB) feedback effect (above) would immediately respond by compensating with the feedback (conv & cond AND radiation) that would return the earth system to equilibrium as FAST as the radiative (& convective) effects can transfer the GHG warming delta T to space. Isn’t the SB feedback FASTER than the added GHG warming? I do not understand why it would take years – i.e. your last paragraph in 187 & the GCM results. One related question: the energy disequilibrium (& GHG warming) accumulates to give the many-years & 3+ degrees effect. Does this change if you run a 1880-2000 case or a 1750-2000 case, or what about a 13,000+ year case, which is actually how long the GHGs have been increasing since the last ice age? Does the disequilibrium really last that long?
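On why the return to equilibrium takes years rather than happening instantly: a common back-of-envelope answer is that the response time scales as heat capacity over radiative restoring strength, tau ~ C/lambda, because the ocean must warm before the SB feedback can fully respond. The numbers below (mixed-layer depth, ocean fraction, feedback parameter) are all illustrative assumptions, not model output.

```python
# Rough climate response time: tau ~ C / lambda, where C is the heat
# capacity of the ocean mixed layer per unit Earth area and lambda the
# radiative restoring strength.  Illustrative values only.
rho = 1025.0      # seawater density, kg/m^3
cp = 3990.0       # seawater specific heat, J/(kg K)
depth = 70.0      # assumed mixed-layer depth, m
ocean_frac = 0.7  # ocean fraction of Earth's surface

C = rho * cp * depth * ocean_frac   # J m^-2 K^-1
lam = 1.2                           # assumed feedback parameter, W m^-2 K^-1

tau_years = C / lam / (3600 * 24 * 365)
print(f"tau ~ {tau_years:.0f} years")
```

Even the mixed layer alone gives a lag of several years; exchange with the deep ocean stretches the full equilibration to decades or longer, which is why the GCMs show a long-lived disequilibrium.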
BUT I leave this question for another day. I need to think about it a little. Thanks for the consideration.
[…] is the Bad, exemplified by two papers by Scafetta and West that have been discussed on RealClimate here and here. This is just normally bad science, in the sense that there is something wrong in the […]