In a recent paper in *Geophysical Research Letters*, Scafetta & West (S&W) estimate that as much as 25-35% of the global warming in the 1980-2000 period can be attributed to changes in the solar output. They used crude estimates of ‘climate sensitivity’ together with estimates of Total Solar Irradiance (TSI) to calculate a temperature signal (in the form of anomalies). They also argue that their estimate, which is based on statistical models only, has a major advantage over physically based considerations (theoretical models), because the latter would require perfect knowledge of the underlying physical and chemical mechanisms.

In their paper, they combine the Lean et al. (1995) proxy data for TSI with recent satellite TSI composites from either Willson & Mordvinov (2003) [which contains a trend] or Fröhlich & Lean (1998) [data from the same sources, but an analysis that doesn’t contain a trend; henceforth referred to as ‘FL98’]. From 1980 onwards, they see a warming associated with solar forcing, even when basing their calculations on the FL98 data. The fact that the FL98 data doesn’t contain any trend makes this finding seem a bit odd. Several independent indices of solar activity – which are direct modern measurements rather than estimates – indicate that there has been no trend in the level of solar activity since the 1950s.

But S&W have assumed a lagged response (which they state is t_{S4}~4.3 years), so that the increase prior to 1980 seems to have a delayed effect on the temperature. The delayed action is a property of the climate system, which also affects the response to greenhouse gases, and is caused by the oceans, which act as a flywheel due to their great heat capacity and thermal inertia. The oceans thus cause a planetary imbalance. When the forcing levels off, the additional response is expected to taper off as a decaying function of time. The global mean temperature, in contrast, has increased at a fairly steady rate (Fig. 1). The big problem is to explain a lag of more than 30 years when direct measurements of relevant quantities (galactic cosmic rays, 10.7 cm solar radio flux, magnetic index, level of sunspot numbers, solar cycle lengths) do not indicate any trend in solar activity since the 1950s.
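The taper-off argument can be illustrated with a one-box energy-balance model. This is a generic textbook sketch, not S&W’s actual model; the feedback parameter and heat capacity below are hypothetical values chosen only to give a response time of roughly 4.3 years:

```python
import numpy as np

# One-box energy-balance model: C dT/dt = F(t) - lam * T
# lam: feedback parameter [W m^-2 K^-1]; C: effective heat capacity [J m^-2 K^-1].
# Both values are hypothetical, chosen only to give a ~4.3-year response time.
lam = 1.0
tau = 4.3 * 3.15e7            # response time in seconds (~4.3 years)
C = lam * tau
dt = 3.15e7                   # time step: one year in seconds

years = np.arange(1950, 2011)
# Forcing ramps up until 1980, then levels off (no trend afterwards)
F = np.where(years < 1980, 0.02 * (years - 1950), 0.02 * 30)

T = np.zeros(len(years))
for i in range(1, len(years)):
    T[i] = T[i - 1] + dt / C * (F[i - 1] - lam * T[i - 1])

# Warming continues for a while after 1980, but the yearly increments decay
# toward zero rather than staying steady:
increments = np.diff(T[years >= 1980])
assert np.all(np.diff(increments) < 0)
```

In other words, a lagged response to a forcing that flattened decades ago produces a warming rate that dies away, not the fairly steady observed rise.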

*Fig. 1. Global mean temperature from GISS.*

In order to shed light on these inconsistencies, we need to look more closely at the methods and results in the GRL paper. The S&W temperature signal, when closely scrutinised (their Fig. 3), starts at the 0K anomaly-level in 1900, well above the level of the observed 1900 temperature anomalies, which lie in the range -0.3K < T < -0.1K in Fig. 1. In 1940, their temperature [anomaly] reconstruction intercepts the temperature axis near 0.12K, which is slightly higher than the GISS curve in Fig. 1 suggests. The S&W temperature peaks at 0.3K in 1960, and diverges significantly from the observations. By not plotting the curves on the same graph, the reader may easily get the wrong impression that the reconstruction follows the observations fairly closely. The differences between the curves are not discussed in the paper, nor is the time difference between when the curves indicate maxima (the global mean temperature peaks in 1945, while the estimated solar temperature signal peaks in 1960). Hence, the decrease in global temperature in the period 1945-1960 is inconsistent with the continued rise in the calculated solar temperature signal.

Another, more serious, weakness is a flawed approach to obtaining their ‘climate sensitivity’, and especially so for ‘Z_{eq}’ in their Equation 4. They assume a linear relationship between the response and the forcing: Z_{eq}=288K/1365Wm^{-2}. For one thing, the energy balance between radiative forcing and temperature response gives a non-linear relation between the forcing, F, and the temperature: the emitted flux goes as the fourth power of temperature, T^{4} (the Stefan-Boltzmann law). This is standard textbook climate physics. There is an additional shortcoming: the equilibrium temperature is also affected by the ratio of the Earth’s geometrical cross-section to its surface area, as well as by how much sunlight is reflected – the planetary albedo (A). The textbook formula for a simple radiative balance model is:

F (1-A)/4 = s T^{4}, where ‘s’ here is the Stefan-Boltzmann constant (~5.67 x 10^{-8} W m^{-2}K^{-4}).

(**‘=’ moved after Scafetta pointed out this error. **)
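As a sanity check on this formula, solving it for T with standard values gives the familiar effective temperature (a minimal sketch of the textbook zero-dimensional balance):

```python
# Solve F (1 - A) / 4 = s * T^4 for T (zero-dimensional radiative balance)
F = 1365.0      # solar constant [W m^-2]
A = 0.3         # planetary albedo
s = 5.67e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]

T_eff = (F * (1 - A) / (4 * s)) ** 0.25
print(round(T_eff))   # 255 K: well below the observed ~288 K surface mean;
                      # the ~33 K difference is the greenhouse effect
```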

S&W’s sun-climate sensitivity (Z_{eq}=0.21K/Wm^{-2}), on which the given solar influence estimates predominantly depend, is thus based solely on a very crude calculation that contradicts the knowledge of climate physics. The “equilibrium” sensitivity of the global surface temperature to solar irradiance variations, which is calculated simply by dividing the absolute temperature of the earth’s surface (288K) by the solar constant (1365Wm^{-2}), is based on the assumption that the climate response is linear over the whole temperature band starting at the zero point. This assumption is far from true. S&W argue further that this sensitivity does not only represent the direct solar forcing, but includes all the feedback mechanisms. It is well known that these feedbacks are highly non-linear. Consider just the ice-albedo feedback, which is very different at (hypothetically) e.g. a 100K surface temperature, with probably a ‘snowball earth’, and at 300K with no ice at all. In their formula for the calculation of the sun-related temperature change, the long-term changes are determined by Z_{eq}, while their ‘climate transfer sensitivity to slow secular solar variations’ (Z_{S4}) is only used to correct for a time-lag. The reason for this remains unclear.

In order to calculate the terrestrial response to more ephemeral solar variations, S&W introduce another type of ‘climate sensitivity’, which they calculate separately for each of two components representing the frequency ranges 7.3-14.7 and 14.7-29.3 years, respectively. They take the ratios of the amplitudes of band-pass filtered global temperatures to similarly band-pass filtered solar signals as the estimate of the ‘climate sensitivity’. This is a very unusual way of doing it, but S&W argue that a similar approach has been used in another study. However, calculating the climate sensitivity is not as simple as that (see here, here, here, and here). Hence, there are serious weaknesses in how the ‘climate sensitivities’ for the 11-year and the 22-year signals were estimated. For linear systems, different frequency bands may be associated with different forcings having different time scales, but chaotic systems and systems with convoluted responses are usually characterised by broad power spectra. Furthermore, it’s easy to show that band-pass filtering of two unrelated series of random values can produce a range of different values for the ratio of their amplitudes just by chance (Fig. 2). As an aside, it is also easy to get an apparent coherence between two band-pass filtered stochastic series of finite extent which are unrelated by definition – a common weakness in many studies on the solar-terrestrial climate connection. There is little doubt that the analysis involved noisy data.
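The band-pass point is easy to reproduce numerically. Below is a sketch (a simple FFT filter, not the wavelet filter S&W used): two independent white-noise series are filtered into the same 7.3-14.7 year band, and the ratio of their amplitudes (a sham ‘sensitivity’) scatters widely by chance alone.

```python
import numpy as np

def bandpass(x, short_period, long_period, dt=1.0):
    """Crude FFT band-pass keeping periods between short_period and long_period."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    keep = (freqs > 1.0 / long_period) & (freqs < 1.0 / short_period)
    spec[~keep] = 0.0
    return np.fft.irfft(spec, n=len(x))

rng = np.random.default_rng(0)
ratios = []
for _ in range(500):
    # Two series of pure noise, unrelated by construction
    # (standing in for 'temperature' and 'solar signal')
    t = bandpass(rng.normal(size=120), 7.3, 14.7)
    s = bandpass(rng.normal(size=120), 7.3, 14.7)
    ratios.append(t.std() / s.std())   # amplitude ratio posing as a 'sensitivity'

ratios = np.array(ratios)
# The ratios spread widely even though no relation exists:
print(ratios.min(), ratios.max())
```

Because only a handful of Fourier components survive in such a narrow band of a century-long record, the amplitude estimate has few degrees of freedom, and the resulting ratio is highly uncertain.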

The fact that there is poor correspondence between the individual amplitudes of the band-pass filtered signals (Fig. 4 in Scafetta & West, 2005) is another sign that the fluctuations associated with a given frequency band in temperature are not necessarily related to solar variability. In fact, the 7.3-14.7 and 14.7-29.3 year frequency bands may contain contributions from the El Niño Southern Oscillation (ENSO), even though the time scale of ENSO is 3-8 years: the fact that the amplitude of the events varies from time to time implies slower variations, just as modulations of the sunspot number have led to the proposition of the Gleissberg cycles (80-90 years). There is also volcanic activity: the last two major eruptions, in 1982 and 1991, are almost 10 years apart, and may contribute to the variance in the 7.3-14.7 year frequency range. S&W argue that their method eliminates the influences of ENSO and volcanoes because their calculated sensitivity in the higher frequency band is similar to the one derived by Douglass and Clader (2002) by regression analysis (0.11 K/Wm^{-2}). This conclusion is not valid. With signals of different origins present in the 7-15 year band, the amplitude of the signal in the higher band may correspond roughly to the 11-year signal by accident, but that doesn’t mean that there are no other influences.

S&W combined two different types of data, and it is well known that such combinations may in themselves introduce spurious trends. The paper does not address this question.

From the regression analyses cited by the authors (Douglass and Clader 2002, White et al. 1997), it seems possible that the sensitivity of global surface temperature to variations of total solar irradiance might be about 0.1K/Wm^{-2}. S&W do not present any convincing result that would point to noticeably higher sensitivities for long-term variations; their higher values are based on unrealistic assumptions. If they used a more realistic climate transfer sensitivity of 0.11K/Wm^{-2}, or even somewhat higher (0.12 or 0.13) for the long term, and used trends instead of smooth curve points, they would end up with solar contributions of 10% or less for 1950-2000, and near 0% and about 10% in 1980-2000 using the PMOD and ACRIM data, respectively.
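As a back-of-envelope illustration of the trend argument (the TSI change and observed-warming figures below are hypothetical placeholders for illustration only, not values from either paper):

```python
# All numbers here are illustrative assumptions, not published values
Z_transfer = 0.11   # K per W m^-2 (the Douglass & Clader 2002 sensitivity)
d_tsi = 0.5         # assumed secular TSI increase over 1950-2000 [W m^-2]
d_t_obs = 0.5       # assumed observed warming over 1950-2000 [K]

d_t_sun = Z_transfer * d_tsi
share = 100 * d_t_sun / d_t_obs
print(f"solar share ~ {share:.0f}%")   # ~11% with these assumed numbers
```

The point of the arithmetic: with a sensitivity of order 0.1 K/Wm^{-2} and only a modest secular TSI change, the implied solar contribution stays in the ~10% range rather than 25-35%.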

We have already discussed the connection between solar activity and climate (here, here, here, and here), and this new analysis does not alter our previous conclusions: there is not much evidence pointing to the sun being responsible for the warming since the 1950s.

**Acknowledgement:
Thanks to Urs Neu for comments and inputs.**

Gary Uderhill says

Hi,

My first post,

sorry for my ignorance and lack of the English language. I have only recently found this amazing web site. Would like to say thank you to all that is involved for trying to give a complete fair picture, and sticking to true pure science. All my knowledge so far has been self taught, as my trade in the UK is a Chef, soon to change i might add. My passion for our blue marble has begun to run so deep within me that i need to know and understand what is happening.

I have been reading so much from many sites, i’m just finding it a little hard to grasp a few things.

Firstly, referring to textbook climate physics: can someone give me a reference for the UK so that I can begin to understand the basics of the physics going on?

So as you can imagine, as I don’t have an understanding of this, I have found the above article a little hard to chew, though I think I get the overall picture. Is solar influence research part of the overall picture of global dimming? And also, is most of the data backing global dimming turning out to be unreliable?

I received this email today, regarding Antarctica’s ice cap. I would like confirmation of the source, and also what they are highlighting:

http://news.bbc.co.uk/1/hi/sci/tech/4857832.stm

thanks for your time and efforts

Gary Underhill

Timothy says

Gary – the above topic goes into some detail on what appears to be an inappropriate use of statistical techniques on climate data; such topics aren’t normally so hard for a lay person to follow.

As to the news story you link to… Antarctica is a very data sparse region, and the news story is giving the impression that a very strong conclusion is being drawn from the analysis of the available observational data. There are all sorts of quality control issues with analysing long-term climate records [ie are there biases introduced by changes in the measuring instruments used?], so the upper air warming signal might not actually exist, it’s hard to say.

However, Dr Ridley is correct when he says that the models have some problems with Antarctica, although this has little bearing* on how good they are for 21st-century predictions for the globe as a whole.

* It is possible, though, that they cause models to slightly overestimate the ice-albedo feedback. Dr Ridley mentions the winds that come off Antarctica not mixing in the models, and this would cause the sea-ice to extend further north, reflecting more of the spring sunshine. The problem arises because most of this sea ice will melt in future global warming scenarios and the warming signal will be taken as the difference between the control [which perhaps has too much sea-ice] and the sea-ice free future. Consequently the difference is exaggerated. I would have thought it would be easy to see whether this was the case though, and it has probably been shown that the effect is not too large.

Steve Latham says

Rasmus, when you say FL98 contains no trend, what do you really mean? Did the authors’ analysis fail to reject the null hypothesis? It wouldn’t be the first time that an examination of apparently trendless data found something highly significant when some correlate was taken into account. From your post it sounds as though S&W is just a data mining exercise, and I think that is a stronger criticism than the lack of trend in FL98.

[

Response: There has been a contention between FL98 and Willson & Mordvinov (2003) about the issue of a trend – long-term change – in their respective analyses. It boils down to how they sew together the various bits of data from different satellites (a bit like the MSU-trend story). The jury is still out on which analysis is the most correct one, although other solar activity indices suggest that there has not been much of a trend (http://www.agu.org/pubs/crossref/2005…/2005GL023621.shtml). The reason for the ‘trend’ in S&W after 1980 may be due to adding the FL98 series to the Lean et al. (1995) series, and due to their model assuming a lagged response. A link to the WM2003 paper is http://pubs.giss.nasa.gov/docs/2003/2003_WillsonMordvinov.pdf (see Fig. 2). When it comes to statistical significance and hypothesis testing, I do not recall whether the trends have been tested against a null hypothesis, but the short-term variability is quite high compared to the trend in the WM2003 case and the series is short, so I doubt the ‘trend’ is significant (just by eyeballing). -rasmus]

Nicola Scafetta says

Dr Benestad has written an interesting critique of our paper in Geophysical Research Letters, Scafetta & West (S&W). In my opinion Dr Benestad’s critique is very poor, for several reasons I cannot fully and extensively explain here. But I will give a few examples.

A reader of RealClimate wrote me asking to reply publicly to Dr. Benestad’s critique on this open web-site. While I would have preferred a more appropriate scientific forum for a discussion, I believe that I should not disappoint this reader, as well as other readers of RealClimate.org who might be confused by Dr Benestad’s statements.

[

Response: I appreciate Dr. Scafetta’s response, and I think that a blog like RealClimate is an appropriate forum for discussions like these. -rasmus]

First critique: mysteries about the temperature patterns?

About our supposed inconsistencies Dr. Benestad starts: “The S&W temperature signal, when closely scrutinised (their Fig. 3), starts at the 0K anomaly-level in 1900, well above the level of the observed 1900 temperature anomalies, which lie in the range -0.3K < T < -0.1K in Fig. 1.”

So, where does the mysterious “0K anomaly-level in 1900” come from?

Well, Dr. Benestad should look more carefully at the Y label of our figure 3. In fact, we are plotting the function “f(t)=T_{sun}(t)-T_{sun}(1900)”. What is the value of the function “f(t)” in 1900? Well, we have f(1900)=T(1900)-T(1900)=0K, right?

So, the first mystery is easily solved. The answer is that in figure 3 we are plotting the solar-induced temperature anomaly relative to the year 1900, and not an anomaly relative to the 1960-1990 mean, as is usually done for the temperature and as Fig. 1 shows.

In figure 3 we plotted the function “f(t)=T(t)-T(1900)” because in this way it is easier to visually estimate the warming induced by the sun since 1900, that is all.

A similar explanation clarifies the difference between the amplitudes of the peaks in the interval 1945-1960, which in figure 3 is at 0.3K while in figure 1 it is at 0.12K.

[

Response: OK, but regardless of whether the anomalies are with respect to 1900 or any other period, the curves do not match. -rasmus]

About the time-shift of the peaks between 1945 and 1960. This is a more interesting issue. It can be explained in several ways. One way is that the peak we found around 1960 in the solar-induced temperature signal is due to the fact that Lean’s solar irradiance proxy reconstruction, which we have used, presents such a peak around 1960, whereas other TSI proxies present such a peak in 1945, such as the Hoyt and Schatten reconstruction. In fact, in the literature there are several TSI proxy reconstructions and they are all different. Some of these reconstructions are here

http://www.grida.no/climate/ipcc_tar/wg1/fig6-5.htm

Thus, the reader can easily realize that there is a controversy about when the sun’s peak occurred, whether in 1945 or in 1960. We used Lean’s TSI because it is a good average among the several reconstructions, but we never claimed that Lean’s reconstruction is perfect in every pattern, nor were we interested in our paper in discussing in detail the 1945-1960 solar peak controversy.

Second critique: further mysteries about sensitivities.

Dr Benestad talks about climate sensitivity, the Stefan-Boltzmann law and non-linear physics, and I think he creates a great confusion. Well, let us clarify the issues.

We are referring to the parameters “Z” as “climate sensitivity transfer parameters”. I stress the adjective ‘transfer’ because it is what Dr. Benestad did not notice in our paper. Our “climate sensitivity transfer parameters” do not have anything to do with what in the climate textbooks are referred to as “climate sensitivity” parameters, which are calculated in a different way. In other words, we are using a different definition. Dr. Benestad has not realized it and thought it was a mistake. Dr. Benestad might not like our definitions, but he cannot criticize them because they are definitions and must be taken for what they are.

To better explain this, first let us look more carefully at the Stefan-Boltzmann law.

Dr. Benestad states: “The textbook formula for a simple radiative balance model is:

F = (1-A)/4 s T^{4}, where ‘s’ here is the Boltzmann constant (~5.67 x 10^{-8} J/s m^{2}K^{4}).”

First of all, Dr. Benestad’s equation is wrong. The right equation is:

(1) F = (1-A)/4 I = s T^4

[

Response: This is correct. Thanks for pointing this out. -rasmus]

where A=~0.3 is the albedo, I=1365 W/m^2 is the solar irradiance, s=5.67 x 10^{-8} W m^{-2}K^{-4} is the Stefan-Boltzmann constant, and T=288K is the average Earth temperature. The rationale of the above equation is easy: the term “F = (1-A)/4 I” refers to the amount of solar irradiance that is absorbed by the earth’s surface, after considering that 30% of the input irradiance I is reflected away by the albedo and that what remains spreads over the spherical surface of the earth (the factor “4”). The second part of the equation is the Stefan-Boltzmann law.

Now let us calculate both sides of the above equation (1) with the above values; we obtain:

(2) (1-A)/4 I = 239 W/m^2

(3) s T^4 = 390 W/m^2
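Both numbers check out with the stated values (plain arithmetic):

```python
A, I, s, T = 0.3, 1365.0, 5.67e-8, 288.0

absorbed = (1 - A) / 4 * I   # Eq. (2): absorbed solar flux
emitted = s * T ** 4         # Eq. (3): black-body emission at 288 K

print(round(absorbed), round(emitted))   # 239 390
```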

Why is there such a big difference? What kind of mystery is this? Well, the answer is simple: the Stefan-Boltzmann law works for a “black body”, while, as everybody knows, the Earth is not “black”!!!!

[

Response: The black-body radiation law still applies, albeit in a more complicated setting – there are many other processes at work. Thus it would be correct to say that the Earth is not just a black body. -rasmus]

Thus, Dr Benestad’s equation above, even after my correction, cannot be applied to the Earth climate system.

But let us make a further interesting calculation. Let us suppose that the earth is a black body and use the Stefan-Boltzmann law (1) to calculate the hypothetical temperature T given the solar input of I=1365 W/m^2. We get

(4) T = 255K (black-body approximation)

Now, let us reason a little bit. The black-body approximation gives T=255K; this would mean that everything on earth would be frozen, because ice melts at T=0C=273K. Now the mystery is why the climate is much warmer and the average temperature of the earth is 288K, almost 33K higher. The answer is easy: the atmosphere of the earth is full of so-called greenhouse gases (water vapor above all, CO2, CH4, etc.) that warm the atmosphere to a temperature of T=288K. In fact, greenhouse gases cause a powerful positive feedback to the solar input and warm the climate to the actual 288K.

I believe that any reader has now understood where the problem is with Dr Benestad’s argument. The Stefan-Boltzmann law does not take into consideration the feedback warming effects of the greenhouse gases, so it cannot be used to study the real earth climate. So we have to use a different approach. There are two possibilities: 1) use a climate model; this implies a perfect knowledge of all involved climatic mechanisms, and nobody has such a knowledge yet; 2) use a simpler phenomenological approach. We adopted the second approach and used a transfer methodology that defines (I stress “defines”) at equilibrium the value as

(5) Z_{eq} = T/I = 288/1365 = 0.21 K/W/m^2

[

Response: This implies a linear response between F and T, although you do not state so. Furthermore, this estimate does not involve a small interval over which the response can be approximated as being linear. Thus, I do not believe that this transfer function can be applied. -rasmus]

A curiosity: what would Z_{eq} be if the earth were a black body and the Stefan-Boltzmann law worked? The answer is easy; with a little algebra it is

(6) Z_{eq} = T/I = (1-A)/4 / (sT^3) = 0.13 K/W/m^2 (black-body approximation)

Thus, why is the value in (5) larger than the value in (6)? Answer: because in (6), according to the black-body approximation, the positive feedbacks due to the greenhouse gas effects do not exist.
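Both values of Z_{eq} follow directly from the numbers given (note that the black-body expression is evaluated at T=288K):

```python
T, I = 288.0, 1365.0   # surface temperature [K], solar constant [W m^-2]
A, s = 0.3, 5.67e-8    # albedo, Stefan-Boltzmann constant [W m^-2 K^-4]

Z_eq = T / I                        # Eq. (5): phenomenological definition
Z_bb = (1 - A) / 4 / (s * T ** 3)   # Eq. (6): black-body value at T = 288 K

print(round(Z_eq, 2), round(Z_bb, 2))   # 0.21 0.13
```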

Dr. Benestad states: “It is well known, that these feedbacks are highly non-linear. Let’s just mention the ice-albedo feedback, which is very different at (hypothetically) e.g. 100K surface temperature with probably ‘snowball earth’ and at 300K with no ice at all.”

In this statement there is much confusion, due mostly to the fact that Dr. Benestad writes much but does not do any calculation. Our estimates and calculations are supposed to study solar effects on the climate within a very small temperature interval of approximately 1K around the average of 288K. In this small interval our linear-like assumption in Eq. (5) is perfectly fine.

In fact, if in Eq. (5) instead of T=288K we put T=289K or T=287K, the changes are very small. Also, regarding the ice-albedo feedback: within a 1K temperature oscillation the albedo will change by, let us say, 10%, so for an increase of 1K the albedo will decrease from A=0.3 to A=0.27. But putting the latter value in Eq. (6), the value of Z would change by approximately 1-3%, which is a very small change and can be neglected.

Moreover, we have never stated that the value Z_{eq} is linear or constant at any temperature from 100K to 300K, as Dr. Benestad claims. Z_{eq} is the equilibrium climate transfer sensitivity to solar input at a given temperature, which in our case is T=288K, and at a given solar irradiance I=1365 W/m^2. Of course Z_{eq} will significantly change for a large change of the temperature. So, Dr. Benestad should not misquote us to build his argument; we never said that Z_{eq} is linear or constant with temperature at any value of the temperature.

Dr. Benestad states: “In their formula for the calculation of the sun-related temperature change, the long-term changes are determined by Z_{eq}, while their ‘climate transfer sensitivity to slow secular solar variations’ (Z_{S4}) is only used to correct for a time-lag. The reason for this remains unclear.”

Perhaps the “reason is unclear” to Dr. Benestad because Dr. Benestad should have read our paper more carefully. He would have realized that the use that we make of Z_{eq} is very limited. It is only a constant that is taken off when in Fig. 3 we calculate “f(t)=T(t)-T(1900)”. Contrary to what Dr. Benestad states, we have adopted Z_{S4} as our transfer climate sensitivity for the slow solar secular variation, and not only for correcting a time-lag as he states. This is clearly stated in our paper in Eq. 3. Again, Dr. Benestad should not misquote us to build his argument.

Third critique: “solar climate transfer sensitivity” or “climate sensitivity”?

Dr. Benestad states: “They take the ratios of the amplitude of band-passed filtered global temperatures to similarly band-passed filtered solar signal as the estimate for the ‘climate sensitivity’. This is a very unusual way of doing it, but S&W argue that a similar approach has been used in another study. However, it’s not as simple as that calculating the climate sensitivity.”

The reply to this comment is simple. As we have said in our paper, and above in this reply, we are not using the traditional “climate sensitivity” definition commonly found in the climate textbooks, but have introduced a novel sensitivity called the “solar climate transfer sensitivity”. The adoption of the word “transfer” should mark the difference. Because we are using a different definition than the one Dr. Benestad knows, Dr. Benestad should first quote our paper correctly and then simply make a little effort to understand our definition and accept it. In fact, we are free to use the definition that we wish and do the calculation in accordance with it. A definition is a definition and cannot be criticized by making a different definition.

[

Response: There is no guarantee that such definitions really are representative of the natural processes. I argue that it is not. -rasmus]

About our estimates of the climate transfer sensitivity to solar variations at 11 years and 22 years, Dr. Benestad again creates a great confusion by misquoting and misunderstanding our paper. Let us see why.

In fact, our finding is based on three different ways of doing the calculations. In our 2005 paper we present a way based on wavelet band-pass filtered signals, but we also referenced two other works: one by Douglass and Clader (2002) and another by White et al. (1997). Douglass uses a multivariate linear regression analysis that explicitly takes into consideration the ENSO signal and the volcano signal. White et al. adopt a Fourier band-pass filter on the interval 1900-1991. All three methods agree with what we have called the transfer sensitivity to 11-year cycles, Z11y=0.11 +/- 0.02 K/W/m^2. Thus, our conclusion was that the phenomenological climate transfer sensitivity to the 11-year solar cycle is likely given by Z11y=0.11 +/- 0.02 K/W/m^2.

The above finding reinforces our interpretation. In fact, Dr. Benestad reasons in general, while we reason about the particular case we are analyzing, where the techniques work correctly also because in 1980-2002 the ENSO oscillations are quite fast, almost 2-4 years, and are cut off by the filter, and the two volcano eruptions have a limited effect of 3-4 years as well. In fact, if Douglass and Clader, by explicitly taking off the ENSO and the volcano signals, find solar-induced oscillations of 0.1K, and we with another method find the same thing, we have to conclude that everything works sufficiently well. In any case the important thing is the value of the sensitivity at the 11-year solar cycle, and this is given by Z11y=0.11 +/- 0.02 K/W/m^2.

Fourth critique: “sensitivity at slower trends” and “spurious trends”?

Dr. Benestad states: “From regression analysis cited by the authors (Douglass and Clader 2002, White et al. 1997), it seems possible that the sensitivity of global surface temperature to variations of total solar irradiance might be about 0.1K/Wm^{-2}. S&W do not present any convincing result that would point to noticeably higher sensitivities to long-term variations. Their higher values are based on unrealistic assumptions.”

Perhaps Dr. Benestad would be more convinced after a more careful reading of our paper. About the transfer sensitivity to 22 years, Z22y=0.17+/-0.06 K/W/m^2, we have clearly explained in our paper that this is approximately 1.5 times larger than Z11y, and this is in agreement with theoretical energy balance model estimates such as Wigley (1988) or Foukal et al. (2004). (The paper by Foukal et al. 2004 is extremely clear on this larger sensitivity of climate to slower secular solar variations; see their figure 1.) In fact, for slower solar variations the climate sensitivity should be stronger than the 11-year sensitivity because of the frequency dependency of the ocean thermal inertia and general out-of-equilibrium thermodynamic effects. Moreover, with an alternative method White et al. (1997) have calculated something like 0.15 K/W/m^2 for the 22-year cycle. Thus, there are sufficient studies, both theoretical and phenomenological, confirming our result that for slower variations the climate sensitivity is stronger.

Dr. Benestad finally states: “we have already discussed the connection between solar activity and climate, and this new analysis does not alter our previous conclusions: that there is not much evidence pointing to the sun being responsible for the warming since the 1950s.”

Well, we have shown that the sun was responsible for ~25-35% of the warming since the 1950s if we adopt Lean’s proxy reconstruction and the PMOD and ACRIM satellite composites. Dr. Benestad’s reasoning is based on the erroneous assumption that if there are no significant trends in some proxies for the solar activity since the 1950s, the sun is not contributing to the global warming.

This is wrong for two reasons. First, all TSI proxy reconstructions present a clear upward trend during the period 1900-2000 (as a reader can see here: http://www.grida.no/climate/ipcc_tar/wg1/fig6-5.htm). Second, because the 1900-1950 TSI value was lower than the 1950-2000 TSI value, this would by itself induce a solar-induced climate warming of the atmosphere during 1950-2000, even if during the period 1950-2000 the sun was perfectly constant. In fact, as a reader can easily understand, if I put a pot with cold water on a fire, the temperature of the water will slowly increase even if the temperature of the heater (the fire) is perfectly constant. This is elementary out-of-equilibrium thermodynamics that everybody knows.

[

Response: Thanks for this interesting thought. One question is then how to explain why the climate system takes so long to reach equilibrium – i.e. catch up – since at some point the water in your kettle will reach a stable state where heat gained equals heat lost. One can look to other periods in history, and see if there could be a similar lag then. -rasmus]

I hope the above comments might be of help.

Nicola Scafetta, PhD

Duke University

[

Response: Thank you for taking time to write this response. -rasmus]

Adam says

Just a quick point: if a comment is first written using MS Word, “smartquotes” needs to be turned off in the options, as MS inserts non-standard quote characters which do not render in browsers other than IE (e.g. on Macs and Firefox), making some parts of those comments difficult to read.

I’m guessing that’s what’s happened above. It might be a good idea to put a note of that on the comments form.

Nicola Scafetta says

I am adding a few comments because my previous response to Dr. Benestad was partially cut.

[

Response: It wasn’t cut – you used raw < symbols which confuse the software into thinking it’s html. I’ve fixed the text above and deleted the repetition. -gavin]

About Dr. Benestad’s additional short replies:

>>>[Response:OK, but regardless whether the anomalies are with respect to 1900 or any other period, the curves do not match. -rasmus]

The curves do not have to match perfectly, because the sun is not driving 100% of the climate change. In any case, during the century it is possible to see a good correlation: both TSI and temperature increase during the first half of the century, decrease within approximately 1950-1975, and increase again afterward.

>>>[Response: The black body radiation law still applies, albeit in a more complicated setting – there are many other processes at work. Thus it would be correct to say that the Earth is not just a black body. -rasmus]

The black body radiation law applies, but only after severe corrections. In fact, it is one component among several others.

>>>[Response: This implies a linear response between F and T, although you do not state so. Furthermore, this estimate does not involve a small interval over which the response can be approximated as being linear. Thus, I do not believe that this transfer function can be applied. -rasmus]

This does NOT imply a linear response between F and T. If I take the ratio between two components, I am not implying their mutual linearity. The reason is explained one paragraph later in my reply above. If we assume a black body approximation, the dependency of F/T on T is described by Eq. 6 above.

>>>[Response: Thanks for this interesting thought. One question is then how to explain why the climate system takes so long to reach equilibrium – i.e. catch up, since at some point the water in your kettle will reach a stable state where heat gained equals heat lost. One can look to other periods in history, and see if there could be a similar lag then. -rasmus]

The reason the climate takes a significant time to reach equilibrium with the sun is that the ocean is heated from above, not from below like a kettle. Moreover, water has a low heat conductance. This means that it takes time (several years) to heat the deep ocean and to reach a new thermodynamic equilibrium. These things are basic climate thermodynamics that any serious energy balance model contains; see for example Wigley [1988].
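The response-time question at issue here can be illustrated with a minimal one-box energy-balance sketch. The heat capacity and feedback values below are assumed for illustration only; this is not the model of Wigley [1988]:

```python
import math

# Minimal one-box energy balance: C dT/dt = F - lam*T
# (illustrative parameter values, assumed for this sketch)
C = 8.0       # effective ocean heat capacity, W yr m^-2 K^-1 (assumed)
lam = 1.5     # climate feedback parameter, W m^-2 K^-1 (assumed)
F = 1.0       # step forcing switched on at t = 0, W m^-2

tau = C / lam  # e-folding response time, in years

def T(t):
    """Analytic solution with T(0) = 0: exponential approach to F/lam."""
    return (F / lam) * (1.0 - math.exp(-t / tau))

print(round(tau, 2))                  # e-folding time of ~5.33 yr
print(round(T(30.0) / (F / lam), 3))  # ~0.996: after 30 yr the response has essentially saturated
```

With a response time of this order, the extra warming left over after a forcing levels off decays within a decade or so, which is why a lag of more than 30 years is hard to reconcile with flat solar indices since the 1950s.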

Ann Church says

While most of these comments have been technical, I wanted to bring up the fact that the future of renewable energy is increasingly at risk. In 2004, an estimated 350 new coal-fired power plants were expected to be online by 2012 in the US, India and China. Nearly 100 of these are expected to be built in the US. The output from these power plants would dwarf any greenhouse gas emission savings from the Kyoto Protocol. Co-op America is currently undertaking an action regarding this to tell three major corporations in the US – Peabody, Sempra, and Dominion that coal is NOT the answer to affordable power and that they should be investing much more in solar and other renewable energy technologies. This action can be found at: http://www.coopamerica.org/takeaction/coalpower/. I urge you all to take it, let people know about it, and really help raise awareness about this issue. Imagine if those billions could be invested in solar!

Urs Neu says

Re 6

I cannot follow your argument. Just two major points:

Your formula (5), Zeq = T/I = 288/1365 = 0.21:

If you have a non-linear function f(F) = T, you can make a linear approximation over a certain interval (F1, F2) by assuming T = kF with k = (f(F2) - f(F1))/(F2 - F1). What you claim to do is calculate a linear approximation over the interval (1364 W/m2, 1366 W/m2). What you actually do, and that’s your formula, is a linear approximation over the interval (0 W/m2, 1365 W/m2). It is very unlikely that a linear approximation over the whole range and one over a small range are the same in a highly non-linear function.
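This point can be checked numerically. A minimal sketch, assuming only the values quoted in the thread (T = 288 K at I = 1365 W/m2) and the black-body quartic scaling T ∝ I^(1/4) discussed above:

```python
def temperature(I, T0=288.0, I0=1365.0):
    # Black-body (quartic) scaling of equilibrium temperature with irradiance
    return T0 * (I / I0) ** 0.25

# Secant slope over the whole interval (0, 1365 W/m2): this is what
# Zeq = T/I = 288/1365 amounts to.
z_full = temperature(1365.0) / 1365.0

# Secant slope over the narrow interval (1364, 1366 W/m2): the linear
# approximation actually valid near the present irradiance.
z_local = (temperature(1366.0) - temperature(1364.0)) / (1366.0 - 1364.0)

print(round(z_full, 4))   # ~0.211 K per W/m2
print(round(z_local, 4))  # ~0.0527 K per W/m2
```

Under the quartic scaling the local slope is T/(4I), a factor of four smaller than T/I, which illustrates why a secant over the whole range overstates the sensitivity to small irradiance changes.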

Formula (3) in your paper shows clearly that Zeq is the determining factor for the long-term trend. Term 1 multiplies Zeq by the low-frequency signal, which is a function of time and thus not constant; it determines the long-term trend. Term 2 multiplies Z4s by the difference between the low-frequency signal at time t and the low-frequency signal at time (t plus time-lag). This represents a correction for the time-lag and does not influence the long-term trend. Z4s therefore has no influence on the long-term trend, in contrast to Zeq.
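The argument can be illustrated with a toy calculation. The decomposition below follows the description of formula (3) given in this comment, with arbitrary illustrative coefficients, not the actual values from the S&W paper:

```python
import numpy as np

t = np.arange(0.0, 100.0)   # time axis, arbitrary units
L = 0.5 * t                 # low-frequency signal with a steady linear trend
tau = 4                     # assumed time-lag, in the same time units

# Term 1: Zeq times the signal itself -- it inherits the trend of L(t).
term1 = 0.21 * L[:-tau]

# Term 2: Z4s times the lagged difference L(t) - L(t + tau) -- for a
# linearly trending signal this is a constant offset, with no trend at all.
term2 = 0.05 * (L[:-tau] - L[tau:])

print(bool(np.ptp(term1) > 0))             # True: term 1 varies over time
print(bool(np.allclose(term2, term2[0])))  # True: term 2 is flat
```

The lagged-difference term only shifts the curve by a constant when the signal trends steadily, so whatever long-term trend the output has must come from the Zeq term.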

John says

Thanks, Dr. Scafetta, for your response to the incorrect critique of your paper. I look forward to reading your future work.

[Response: I still maintain my critique of the S&W paper. -rasmus]

John says

My apologies … It appeared to me that he answered each of your concerns.

Hank Roberts says

Is this news? (From the abstract alone, I’m not sure whether this is a confirmation of what’s been assumed in past years.)

http://www.agu.org/pubs/crossref/2006/2006GL025921.shtml

“it is confirmed with the new palaeomagnetic series that the Sun spends only 2–3% of the time in a state of high activity, similar to the modern episode. This strengthens the conclusion that the modern high activity level is very unusual during the last 7000 years.”

[Response: I haven’t had the time to read this paper yet, but the line ‘… strengthens the conclusion that the modern high activity level is very unusual during the last 7000 years’ suggests it’s not really breaking news. Past studies have found the highest solar activity levels since the 1940s based on proxy data, but modern instrumental measurements do not show any trend since the 1950s. -rasmus]

Hank Roberts says

I am revisiting and still trying to understand this (to be honest I am merely hoping the climate scientists will come back and continue discussion).

I think it could be more than theoretically interesting, because — if political opinion reaches a ‘tipping point’ — one obvious quick/dirty “fix” is to dump a lot of dust at the L1 Lagrange Point (where the SOHO satellite is now).

http://www.physics.montana.edu/faculty/cornish/lagrange.html

Why? Because stuff there is only temporarily “stable” and drifts out of position in less than a month, so it could be tried to “see if it helps”; and because we’ve just watched a nearby comet fall apart and learned much about reaching and fragmenting them. Because it’s the sort of big gesture any of several governments are currently capable of — it wouldn’t take all that much dust at L1 to give a temporary quick drop in insolation onto Earth (like the 9/11-13 contrail pause in 2001).

I’d sooner see the scientists in some agreement about what doing that kind of thing would accomplish before it becomes a field experiment.

—

But, science fantasy aside — I do hope you all who have been willing to actually engage on the science here will come back and talk to us more soon.