Hi,
My first post,
Sorry for my ignorance and my limited English. I have only recently found this amazing web site. I would like to say thank you to all involved for trying to give a complete, fair picture and sticking to true, pure science. All my knowledge so far has been self-taught, as my trade in the UK is a chef (soon to change, I might add). My passion for our blue marble has begun to run so deep within me that I need to know and understand what is happening.

I have been reading so much from many sites; I'm just finding it a little hard to grasp a few things.

Firstly, referring to textbook climate physics: can someone give me a reference for the UK so that I can begin to understand the basics of the physics going on?

So as you can imagine, as I don't have an understanding of this, I have found the above article a little hard to chew, though I think I get the overall picture. Is solar influence research part of the overall picture of global dimming? And also, is most of the data backing global dimming turning out to be unreliable?

I received this email today regarding Antarctica's ice cap. I would like confirmation of the source, and also of what they are highlighting: http://news.bbc.co.uk/1/hi/sci/tech/4857832.stm
Thanks for your time and efforts.

Gary Underhill

Comment by Gary Underhill — 31 Mar 2006 @ 11:39 AM

Gary – the above topic goes into some detail on what appears to be inappropriate use of statistical techniques on climate data; the posts here aren't normally so hard for a lay person to follow.

As to the news story you link to… Antarctica is a very data-sparse region, and the news story gives the impression that a very strong conclusion is being drawn from the analysis of the available observational data. There are all sorts of quality-control issues with analysing long-term climate records [i.e. are there biases introduced by changes in the measuring instruments used?], so the upper-air warming signal might not actually exist; it's hard to say.

However, Dr Ridley is correct when he says that the models have some problems with Antarctica, although this has little bearing* on how good they are for 21st-century predictions for the globe as a whole.

* It is possible, though, that these problems cause models to slightly overestimate the ice-albedo feedback. Dr Ridley mentions that the winds coming off Antarctica do not mix in the models, and this would cause the sea ice to extend further north, reflecting more of the spring sunshine. The problem arises because most of this sea ice will melt in future global warming scenarios, and the warming signal will be taken as the difference between the control [which perhaps has too much sea ice] and the sea-ice-free future. Consequently the difference is exaggerated. I would have thought it would be easy to check whether this is the case, though, and it has probably been shown that the effect is not too large.

Comment by Timothy — 31 Mar 2006 @ 12:17 PM

Rasmus, when you say FL98 contains no trend, what do you really mean? Did the authors’ analysis fail to reject the null hypothesis? It wouldn’t be the first time that an examination of apparently trendless data found something highly significant when some correlate was taken into account. From your post it sounds as though S&W is just a data mining exercise, and I think that is a stronger criticism than the lack of trend in FL98.

[Response:There has been a contention between FL98 and Willson & Mordvinov (2003) about the issue of trend - long-term change - in their respective analyses. It boils down to how they stitch together the various bits of data from different satellites (a bit like the MSU-trend story). The jury is still out on which analysis is the most correct one, although other solar activity indices suggest that there has not been much of a trend (http://www.agu.org/pubs/crossref/2005.../2005GL023621.shtml). The reason for the 'trend' in S&W after 1980 may be due to their adding the FL98 series to the Lean et al. (1995) series, and to their model assuming a lagged response. A link to the WM2003 paper is http://pubs.giss.nasa.gov/docs/2003/2003_WillsonMordvinov.pdf (see Fig. 2). When it comes to statistical significance and hypothesis testing, I do not recall whether the trends have been tested against a null hypothesis, but the short-term variability is quite high compared to the trend in the WM2003 case and the series is short, so I doubt the 'trend' is significant (just by eyeballing). -rasmus]

Comment by Steve Latham — 31 Mar 2006 @ 12:52 PM

Dr Benestad has written an interesting critique of our Geophysical Research Letters paper, Scafetta & West (S&W). In my opinion Dr Benestad's critique is very poor, for several reasons I cannot fully and extensively explain here. But I will give a few examples.

A reader of RealClimate wrote me asking to reply publicly to Dr. Benestad's critique on this open web site. While I would have preferred a more appropriate scientific forum for a discussion, I believe that I should not disappoint this reader, as well as other readers of RealClimate.org who might be confused by Dr Benestad's statements.

[Response:I appreciate Dr. Scafetta's response, and I think that a blog like RealClimate is an appropriate forum for discussions like these. -rasmus]

First critique: mysteries about the temperature patterns?

About our supposed inconsistencies Dr. Benestad starts: “The S&W temperature signal, when closely scrutinised (their Fig. 3), starts at the 0K anomaly-level in 1900, well above the level of the observed 1900 temperature anomalies, which lie in the range -3K < T < -1K in Fig. 1.”

So, where does the mysterious "0K anomaly-level in 1900" come from?
Well, Dr. Benestad should look more carefully at the Y label of our figure 3. In fact, we are plotting the function "f(t)=T_{sun}(t)-T_{sun}(1900)". What is the value of the function "f(t)" in 1900? Well, we have f(1900)=T(1900)-T(1900)=0K, right?
So, the first mystery is easily solved. The answer is that in figure 3 we are plotting the solar-induced temperature anomaly relative to the year 1900, and not an anomaly relative to the 1960-1990 mean, as is usually done for the temperature, as Fig. 1 shows.
In figure 3 we plotted the function "f(t)=T(t)-T(1900)" because in this way it is easier to visually estimate the warming induced by the sun since 1900; that is all.
A similar explanation clarifies the difference between the amplitudes of the peaks in the interval 1945-1960, which in figure 3 is at 0.3K while in figure 1 it is at 0.12K.

[Response:OK, but regardless whether the anomalies are with respect to 1900 or any other period, the curves do not match. -rasmus]

About the time-shift of the peaks between 1945 and 1960: this is a more interesting issue. It can be explained in several ways. One way is that the peak we found around 1960 in the solar-induced temperature signal is due to the fact that Lean's solar irradiance proxy reconstruction, which we have used, presents such a peak around 1960, while other TSI proxies, such as Hoyt and Schatten's reconstruction, present such a peak in 1945. In fact, in the literature there are several TSI proxy reconstructions and they are all different. Some of these reconstructions are here: http://www.grida.no/climate/ipcc_tar/wg1/fig6-5.htm
Thus, the reader can easily realize that there is a controversy about when the sun's peak occurred, whether in 1945 or in 1960. We used Lean's TSI because it is a good average among the several reconstructions, but we never claimed that Lean's reconstruction is perfect in every pattern, nor were we interested in our paper in discussing in detail the 1945-1960 solar peak controversy.

Second critique: further mysteries about sensitivities.

Dr Benestad talks about climate sensitivity, the Stefan-Boltzmann law, and non-linear physics, and I think he creates a great deal of confusion. Well, let us clarify the issues.

We are referring to the parameters "Z" as "climate sensitivity transfer parameters". I stress the adjective 'transfer' because it is what Dr. Benestad did not notice in our paper. Our "climate sensitivity transfer parameters" do not have anything to do with what the climate textbooks refer to as "climate sensitivity" parameters, which are calculated in a different way. In other words, we are using a different definition. Dr. Benestad has not realized this and thought it was a mistake. Dr. Benestad might not like our definitions, but he cannot criticize them, because they are definitions and must be taken for what they are.

To better explain this, first let us look more carefully at the Stefan-Boltzmann law.

Dr. Benestad states: "The textbook formulae for a simple radiative balance model is:
F = (1-A)/4 s T^4, where 's' here is the Boltzmann constant (~5.67 x 10^-8 J/s m^2 K^4)."

First of all, Dr. Benestad’s equation is wrong. The right equation is:

(1) F = (1-A)/4 I = s T^4

[Response:This is correct. Thanks for pointing this out. -rasmus]

where A ≈ 0.3 is the albedo, I=1365 W/m^2 is the solar irradiance, s=5.67 x 10^-8 W m^-2 K^-4 is the Stefan-Boltzmann constant, and T=288K is the average Earth temperature. The rationale of the above equation is easy: the term "F = (1-A)/4 I" refers to the amount of solar irradiance that is absorbed by the earth's surface after considering that 30% of the input irradiance I is reflected away by the albedo, and what remains spreads over the spherical surface of the earth (the factor "4"). The second part of the equation is the Stefan-Boltzmann law.

Now let us calculate both sides of the above equation (1) with the above values, we obtain:

(2) (1-A)/4 I = 239 W/m^2

(3) s T^4 = 390 W/m^2
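As a quick numerical check (a sketch using only the constants quoted above), both sides of Eq. (1) can be evaluated directly:

```python
# Evaluate both sides of Eq. (1) with the quoted constants
A = 0.3        # albedo
I = 1365.0     # solar irradiance, W/m^2
s = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0      # average surface temperature, K

absorbed = (1 - A) / 4 * I   # left-hand side, Eq. (2)
emitted = s * T**4           # right-hand side, Eq. (3)
print(round(absorbed))       # 239 W/m^2
print(round(emitted))        # 390 W/m^2
```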

Why is there such a big difference? What kind of mystery is this? Well, the answer is simple: the Stefan-Boltzmann law works for a "black body", while, as everybody knows, the Earth is not "black"!

[Response:The black body radiation law still applies, albeit in a more complicated setting - there are many other processes at work. Thus it would be correct to say that the Earth is not just a black body. -rasmus]

Thus, Dr Benestad’s equation above, even after my correction, cannot be applied to the Earth climate system.

But let us make some further interesting calculation. Let us suppose that the earth is a black-body and use the Stefan-Boltzmann law (1) to calculate the hypothetical temperature T given the solar input of I=1365W/m^2. We get

(4) T = 255K (black-body approximation)

Now, let us reason a little. The black-body approximation gives T=255K; this would mean that everything on earth would be frozen, because ice melts at T=0C=273K. Now the mystery is why the climate is much warmer and the average temperature of the earth is 288K, almost 33K higher. The answer is easy: the atmosphere of the earth is full of so-called greenhouse gases (water vapor above all, CO2, CH4, etc.) that warm the atmosphere to a temperature of T=288K. In fact, greenhouse gases cause a powerful positive feedback to the solar input and warm the climate to the actual 288K.
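Inverting Eq. (1) for the black-body temperature reproduces the 255K figure and the ~33K difference quoted above (a quick check, same constants as before):

```python
# Black-body equilibrium temperature from Eq. (1), solved for T
A, I, s = 0.3, 1365.0, 5.67e-8
T_bb = ((1 - A) * I / (4 * s)) ** 0.25   # Eq. (4)
print(round(T_bb))        # 255 K
print(round(288 - T_bb))  # 33 K: the greenhouse warming quoted in the text
```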

I believe that any reader has now understood where the problem is with Dr Benestad's argument. The Stefan-Boltzmann law does not take into consideration the feedback warming effects of the greenhouse gases, so it cannot be used to study the real earth climate. So we have to use a different approach. There are two possibilities: 1) use a climate model, which implies a perfect knowledge of all the climatic mechanisms involved, and nobody has such knowledge yet; 2) use a simpler phenomenological approach. We adopted the second approach and use a transfer methodology that defines (I stress "defines") the value at equilibrium as

(5) Z_{eq}=T/I=288/1365=0.21 K/W/m^2

[Response:This implies a linear response between F and T, although you do not state so. Furthermore, this estimate does not involve a small interval over which the response can be approximated as linear. Thus, I do not believe that this transfer function can be applied. -rasmus]

A curiosity: what would Z_{eq} be if the earth were a black body and the Stefan-Boltzmann law worked? The answer is easy; with a little algebra it is

(6) Z_{eq}=T/I=255/1365=0.19 K/W/m^2 (black-body approximation)

Thus, why is the value in (5) larger than the value in (6)? Answer: because in (6), according to the black-body approximation, the positive feedbacks due to the greenhouse gas effects do not exist.
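The two sensitivities being compared can be computed side by side (the 288K and 255K temperatures are the values quoted earlier in this comment):

```python
# Ratio-style sensitivities for the real earth and the black-body case
Z_eq = 288.0 / 1365.0   # observed surface temperature over irradiance
Z_bb = 255.0 / 1365.0   # black-body temperature over irradiance
print(round(Z_eq, 2))   # 0.21 K/(W/m^2)
print(round(Z_bb, 2))   # 0.19 K/(W/m^2)
```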

Dr. Benestad states: “It is well known, that these feedbacks are highly non-linear. Let’s just mention the ice-albedo feedback, which is very different at (hypothetically) e.g. 100K surface temperature with probably ‘snowball earth’ and at 300K with no ice at all.”

In this statement there is much confusion, due mostly to the fact that Dr. Benestad writes much but does not do any calculation. Our estimates and calculations are supposed to study solar effects on the climate within a very small temperature interval of approximately 1K around the average of 288K. In this small interval our linear-like assumption in Eq. (5) is perfectly fine.
In fact, if in Eq. (5) instead of T=288K we put T=289K or T=287K, the changes are very small. Also, regarding the ice-albedo feedback: within a 1K temperature oscillation the albedo will change by, let us say, 10%, so for an increase of 1K the albedo will decrease from A=0.3 to A=0.27. But putting the latter value in Eq. (6), the value of Z would change by approximately 1-3%, which is a very small change and can be neglected.
Moreover, we have never stated that the value Z_{eq} is linear or constant at any temperature from 100K to 300K, as Dr. Benestad claims. Z_{eq} is the equilibrium climate transfer sensitivity to the solar input at a given temperature, which in our case is T=288K, and at a given solar irradiance I=1365W/m^2. Of course Z_{eq} will change significantly for a large change of the temperature. So, Dr. Benestad should not misquote us to build his argument; we never said that Z_{eq} is linear or constant with temperature at any value of the temperature.
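The claimed insensitivity to a small albedo change can be checked directly in the black-body approximation (a sketch; the 10% albedo change is the illustrative figure from the text, not a measured value):

```python
def t_bb(albedo, irradiance=1365.0, s=5.67e-8):
    # black-body equilibrium temperature for a given albedo
    return ((1 - albedo) * irradiance / (4 * s)) ** 0.25

z_base = t_bb(0.30) / 1365.0      # Z with A = 0.30
z_pert = t_bb(0.27) / 1365.0      # Z with A = 0.27 (albedo reduced ~10%)
rel_change = (z_pert - z_base) / z_base
print(round(100 * rel_change, 1))  # about a 1% change in Z
```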

Dr. Benestad states: “In their formula for the calculation of the sun-related temperature change, the long-term changes are determined by Zeq, while their ‘climate transfer sensitivity to slow secular solar variations’ (ZS4) is only used to correct for a time-lag. The reason for this remains unclear.”

Perhaps the "reason is unclear" to Dr. Benestad because he should have read our paper more carefully. He would have realized that the use that we make of Z_{eq} is very limited. It is only a constant that drops out when in Fig. 3 we calculate "f(t)=T(t)-T(1900)". Contrary to what Dr. Benestad states, we have adopted Z_S4 as our transfer climate sensitivity for the slow solar secular variation, and not only for correcting a time-lag as he states. This is clearly stated in our paper in Eq. 3. Again, Dr. Benestad should not misquote us to build his argument.

Third critique: “solar climate transfer sensitivity” or “climate sensitivity”?

Dr. Benestad states: “They take the ratios of the amplitude of band-passed filtered global temperatures to similarly band-passed filtered solar signal as the estimate for the ‘climate sensitivity’. This is a very unusual way of doing it, but S&W argue that similar approach has been used in another study. However, it’s not as simple as that calculating the climate senstivity.”

The reply to this comment is simple. As we have said in our paper and above in this reply, we are not using the traditional "climate sensitivity" definition commonly found in the climate textbooks; we have introduced a novel sensitivity called the "solar climate transfer sensitivity". The adoption of the word "transfer" should mark the difference. Because we are using a different definition than the one Dr. Benestad knows, Dr. Benestad should first quote our paper correctly and then simply make a little effort to understand our definition and accept it. In fact, we are free to use the definition that we wish and to do the calculation in accordance with it. A definition is a definition and cannot be criticized by making a different definition.

[Response:There is no guarantee that such definitions really are representative of the natural processes. I argue that it is not. -rasmus]

About our estimates of the climate transfer sensitivity to solar variations at 11 years and 22 years, Dr. Benestad again creates great confusion by misquoting and misunderstanding our paper. Let us see why.

In fact, our finding is based on three different ways of doing the calculations. In our 2005 paper we present a way based on wavelet band-passed filtered signals, but we also referenced two other works: one by Douglass and Clader (2002) and another by White et al. (1997). Douglass uses a multivariate linear regression analysis that explicitly takes into consideration the ENSO signal and the volcano signal. White et al. adopt a Fourier band-pass filter on the interval 1900-1991. All three methods agree on what we have called the transfer sensitivity to 11-year cycles, Z11y=0.11 +/- 0.02 K/W/m^2. Thus, our conclusion was that the phenomenological climate transfer sensitivity to the 11-year solar cycle is likely given by Z11y=0.11 +/- 0.02 K/W/m^2.
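A toy version of the amplitude-ratio idea can be sketched with synthetic data (the 0.11 K/W/m^2 "true" sensitivity and the 0.5 W/m^2 cycle amplitude are assumed for illustration; this is not the wavelet method of the paper, just a plain Fourier projection):

```python
import math
import random

random.seed(0)
period = 11 * 12              # 11-year cycle, in months
n = 9 * period                # nine full cycles of monthly data
Z_true = 0.11                 # assumed sensitivity, K/(W/m^2)
tsi = [0.5 * math.sin(2 * math.pi * t / period) for t in range(n)]
temp = [Z_true * x + random.gauss(0, 0.05) for x in tsi]  # response + noise

def fourier_amp(x, period):
    # amplitude of the Fourier component at the given period
    n = len(x)
    c = sum(v * math.cos(2 * math.pi * t / period) for t, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * t / period) for t, v in enumerate(x))
    return 2 * math.hypot(c, s) / n

Z_est = fourier_amp(temp, period) / fourier_amp(tsi, period)
print(round(Z_est, 2))  # recovers roughly 0.11
```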

The above finding reinforces our interpretation. In fact, Dr. Benestad reasons in general, while we reason about the particular case we are analyzing, where the techniques work correctly, also because in 1980-2002 the ENSO oscillations are quite fast (roughly 2-4 years) and are cut off by the filter, and the two volcanic eruptions have a limited effect of 3-4 years as well. In fact, if Douglass and Clader, by explicitly removing the ENSO and volcano signals, find solar-induced oscillations of 0.1K, and we find the same thing with another method, we have to conclude that everything works sufficiently well. In any case the important thing is the value of the sensitivity at the 11-year solar cycle, and this is given by Z11y=0.11 +/- 0.02 K/W/m^2.

Fourth critique: “sensitivity at slower trends” and “spurious trends”?

Dr. Benestad states: “From regression analysis cited by the authors (Douglass and Clader 2002, White et al. 1997), it seems possible that the sensitivity of global surface temperature to variations of total solar irradiance might be about 0.1K/Wm-2. S&W do not present any convincing result that would point to noticeably higher sensitivities to long-term variations. Their higher values are based on unrealistic assumptions.”

Perhaps Dr. Benestad would be more convinced after a more careful reading of our paper. About the transfer sensitivity to 22 years, Z22y=0.17+/-0.06 K/W/m^2, we have clearly explained in our paper that this is approximately 1.5 times larger than Z11y, and this is in agreement with theoretical energy balance model estimates such as Wigley (1988) or Foukal et al. (2004). (The paper by Foukal et al. 2004 is extremely clear on this larger sensitivity of the climate to slower secular solar variations; see their figure 1.) In fact, for slower solar variations the climate sensitivity should be stronger than the 11-year sensitivity because of the frequency dependence of the ocean thermal inertia and general out-of-equilibrium thermodynamic effects. Moreover, with an alternative method White et al. (1997) have calculated something like 0.15 K/W/m^2 for the 22-year cycle. Thus, there are sufficient studies, both theoretical and phenomenological, confirming our result that for slower variations the climate sensitivity is stronger.

Dr. Benestad finally states: “we have already discussed the connection between solar activity, and this new analysis does not alter our previous conclusions: that there is not much evidence pointing to the sun being responsible for the warming since the 1950s.”

Well, we have shown that the sun was responsible for ~25-35% of the warming since the 1950s if we adopt Lean's proxy reconstruction and the PMOD and ACRIM satellite composites. Dr. Benestad's reasoning is based on the erroneous assumption that if there are no significant trends in some proxies for the solar activity since the 1950s, the sun is not contributing to the global warming.
This is wrong for two reasons. First, all TSI proxy reconstructions present a clear upward trend during the period 1900-2000 (as a reader can see here: http://www.grida.no/climate/ipcc_tar/wg1/fig6-5.htm). Second, because the 1900-1950 TSI value was lower than the 1950-2000 TSI value, this alone would induce a solar-driven warming of the atmosphere during 1950-2000, even if during the period 1950-2000 the sun was perfectly constant. In fact, as a reader can easily understand, if I put a pot of cold water on a fire, the temperature of the water will slowly increase even if the temperature of the heater (the fire) is perfectly constant. This is elementary out-of-equilibrium thermodynamics that everybody knows.
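The pot-on-a-fire argument corresponds to a simple relaxation equation; a sketch with illustrative numbers (the 9-year time constant is an assumption for the demonstration, not a value from the paper) shows the temperature still rising for years under a perfectly constant forcing:

```python
# zero-dimensional energy balance: C dT/dt = F - lam * T
C = 9.0      # heat capacity in yr * W/(m^2 K), so time constant C/lam = 9 yr
lam = 1.0    # feedback parameter, W/(m^2 K)
F = 1.0      # forcing, held perfectly constant, W/m^2
dt = 0.01    # time step, years
T = 0.0
history = []
for step in range(int(50 / dt)):
    T += dt * (F - lam * T) / C   # forward Euler integration
    history.append(T)

T_10yr = history[int(10 / dt) - 1]
T_50yr = history[-1]
print(round(T_10yr, 2))  # still well below the equilibrium F/lam = 1.0
print(round(T_50yr, 2))  # essentially at equilibrium
```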

[Response:Thanks for this interesting thought. One question is then how to explain why the climate system takes so long to reach equilibrium - i.e. to catch up - since at some point the water in your kettle will reach a stable state where heat gained equals heat lost. One can look to other periods in history and see if there could be a similar lag then. -rasmus]

I hope the above comments might be of help.

Nicola Scafetta, PhD
Duke University

[Response:Thank you for taking time to write this response. -rasmus]

Just a quick point: if a comment is first written using MS Word, "smart quotes" needs to be turned off in the options, as MS Word inserts non-standard quote characters which do not render in browsers other than IE (e.g. on Macs and in Firefox), making some parts of those comments difficult to read.

I’m guessing that’s what’s happened above. It might be a good idea to put a note of that on the comments form.

I am adding a few comments because my previous response to Dr. Benestad was partially cut.

[Response: It wasn't cut - you used raw < symbols which confuse the software into thinking it's html - I've fixed the text above and deleted the repetition. -gavin]

About Dr. Benestad's additional short replies:

>>>[Response:OK, but regardless whether the anomalies are with respect to 1900 or any other period, the curves do not match. -rasmus]

The curves do not have to match perfectly, because the sun is not driving 100% of the climate change. In any case, during the century it is possible to see a good correlation: both TSI and temperature increase during the first half of the century, decrease within approximately 1950-1975, and increase again afterward.

>>>[Response:The black body radiation law still applies, albeit in a more complicated settings - there are many other processes at work. Thus it would be correct to say that the Earth is not just a black body -rasmus]

The black-body radiation law applies, but only after severe corrections. In fact, it is one component among several others.

>>>[Response:This implies a linear response between F and T, although you do not state so. Furthermore, this estimate does not involve a small interval over which the response can be aproximate as being linear. Thus, I do not believe that this transfer function can be applied. -rasmus]

This does NOT imply a linear response between F and T. If I take the ratio between two components, I am not implying their mutual linearity. The reason is explained one paragraph later in my reply above. If we assume the black-body approximation, the dependence on T is described by Eq. 6 above.

>>>[Response:Thanks for this interesting thought. One question is then howto explain why the climate system takes so long to reach equilibrium - i.e. catch up, since at one point the water in your kettle will reach a stable state where heat gained equals heat loss. One can look to other periods in history, and see if there could be similar lag then. -rasmus]

The reason the climate takes a significant time to reach equilibrium with the sun is that the ocean is heated from above, not from below like a kettle. Moreover, water has a low heat conductance. This means that time (several years) is needed to heat the deep ocean and to reach a new thermodynamic equilibrium. These things are basic climate thermodynamics that any serious energy balance model contains; see for example Wigley [1988].
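An order-of-magnitude estimate of that "several years" timescale, assuming a wind-mixed surface layer (the 70 m depth and 1 W/m^2/K feedback are illustrative assumptions, not values from the paper):

```python
rho = 1025.0   # sea-water density, kg/m^3
cp = 3990.0    # sea-water specific heat, J/(kg K)
h = 70.0       # assumed mixed-layer depth, m
lam = 1.0      # assumed climate feedback parameter, W/(m^2 K)

C = rho * cp * h                          # heat capacity per unit area, J/(m^2 K)
tau_years = C / lam / (3600 * 24 * 365)   # e-folding time of the mixed layer
print(round(tau_years, 1))  # roughly 9 years
```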

Comment by Nicola Scafetta — 5 Apr 2006 @ 10:37 AM

While most of these comments have been technical, I wanted to bring up the fact that the future of renewable energy is increasingly at risk. In 2004, an estimated 350 new coal-fired power plants were expected to be online by 2012 in the US, India and China. Nearly 100 of these are expected to be built in the US. The output from these power plants would dwarf any greenhouse gas emission savings from the Kyoto Protocol. Co-op America is currently undertaking an action regarding this, to tell three major corporations in the US – Peabody, Sempra, and Dominion – that coal is NOT the answer to affordable power and that they should be investing much more in solar and other renewable energy technologies. This action can be found at: http://www.coopamerica.org/takeaction/coalpower/. I urge you all to take it, let people know about it, and really help raise awareness about this issue. Imagine if those billions could be invested in solar!

I cannot understand your argumentation. Just two major points:

Your formula (5), Zeq=T/I=288/1365=0.21:
If you have a non-linear function f(F)=T, you can make a linear approximation for a certain interval (F1,F2) by assuming T=kF with k=(f(F2)-f(F1))/(F2-F1). What you claim to do is calculate a linear approximation for the interval (1364 W/m2, 1366 W/m2). What you really do, and that is your formula, is a linear approximation for the interval (0 W/m2, 1365 W/m2). It is very unlikely that a linear approximation over the whole range and one over a small range are the same in a highly non-linear function.
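This point can be illustrated numerically, using the black-body relation T(I) = ((1-A) I / 4s)^{1/4} as a stand-in non-linear function (a sketch only; S&W's actual response function is not specified here):

```python
S = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
A = 0.3       # albedo

def T(I):
    # black-body temperature as a (non-linear) function of irradiance
    return ((1 - A) * I / (4 * S)) ** 0.25

k_full = (T(1365) - T(0)) / 1365.0     # secant slope over (0, 1365 W/m^2)
k_local = (T(1366) - T(1364)) / 2.0    # secant slope over (1364, 1366 W/m^2)
print(round(k_full, 3))   # ~0.187 K/(W/m^2)
print(round(k_local, 3))  # ~0.047 K/(W/m^2): four times smaller
```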

Formula (3) in your paper shows clearly that Zeq is the determining factor for the long-term trend. Term 1 multiplies Zeq by the low-frequency signal, which is a function of time and thus not constant; it determines the long-term trend. Term 2 multiplies ZS4 by the difference between the low-frequency signal at time t and the low-frequency signal at time (t plus the time-lag). This represents a correction for the time-lag and does not influence the long-term trend. ZS4 therefore has no influence on the long-term trend, in contrast to Zeq.

"it is confirmed with the new palaeomagnetic series that the Sun spends only 2-3% of the time in a state of high activity, similar to the modern episode. This strengthens the conclusion that the modern high activity level is very unusual during the last 7000 years."

[Response:I haven't had time to read this paper yet, but the line '... strengthens the conclusion that the modern high activity level is very unusual during the last 7000 years' suggests it's not really breaking news. Past studies have found the highest solar activity levels in (since) the 1940s based on proxy data, but modern instrumental measurements do not show any trend since the 1950s. -rasmus]

I am revisiting and still trying to understand this (to be honest I am merely hoping the climate scientists will come back and continue discussion).

I think it could be more than theoretically interesting, because — if political opinion reaches a ‘tipping point’ — one obvious quick/dirty “fix” is to dump a lot of dust at the L1 Lagrange Point (where the SOHO satellite is now).

Why? Because stuff there is only temporarily “stable” and drifts out of position in less than a month, so it could be tried to “see if it helps”; and because we’ve just watched a nearby comet fall apart and learned much about reaching and fragmenting them. Because it’s the sort of big gesture any of several governments are currently capable of — it wouldn’t take all that much dust at L1 to give a temporary quick drop in insolation onto Earth (like the 9/11-13 contrail pause in 2001).

I’d sooner see the scientists in some agreement about what doing that kind of thing would accomplish before it becomes a field experiment.

—
But, science fantasy aside — I do hope you all who have been willing to actually engage on the science here will come back and talk to us more soon.

[...] on this particular study here). S&W later suggested 25-35% from 1980-2000. Dr. Rasmus Benestad notes that they “used some crude estimates of ‘climate sensitivity’ and estimates of [...]


[...] you got some times to research more about the topic, you might want to try RealClimate. It is another interesting resource. I haven’t browse through it a lot, but so far… [...]

[...] (Urs Neu comments on this particular study here), or 25-35% from 1980-2000. Dr. Rasmus Benestad notes that they “used some crude estimates of ‘climate sensitivity’ and estimates of [...]


[...] Third, even the very few analyses that conclude the sun was a significant contributor in the past century find that the sun’s impact relative to carbon dioxide has been shrinking (since, of course, greenhouse gas emissions and concentrations have been soaring). So, a statement that up to about 50% of the warming in the last hundred years can be explained by the sun turns into at most 25% to 35% of the warming since 1980 can be explained by the sun in Scarfetta and West’s 2006 paper, which, in any case, was debunked by RealClimate here. [...]

Hi,

My first post,

sorry for my ignorance and lack of the English language. I have only recently found this amazing web site. Would like to say thank you to all that is involved for trying to give a complete fair picture, and sticking to true pure science. All my knowledge so far has been self taught, as my trade in the UK is a Chef, soon to change i might add. My passion for our blue marble has begun to run so deep within me that i need to know and understand what is happening.

I have been reading so much from many sites, i’m just finding it a little hard to grasp a few things.

Firstly, referring to text book climate physics. can some one give me a reference for the uk so that i can begin to understand the basics of the physics going on.

So as you can imagine, as I don’t have an understanding of this, I have found the above article a little hard to chew, I think I get the overall picture. .Is solar influence research part of the overall picture of global dimming? And also, is most of the data backing global dimming turning out to be unreliable.

I received this email today, regarding Antarctica’s ice cap. would like confirmation of the source, and also what they are highlighting

http://news.bbc.co.uk/1/hi/sci/tech/4857832.stm

thanks for your time and efforts

Gary Underhill

Comment by Gary Uderhill — 31 Mar 2006 @ 11:39 AM

Gary – the above topic goes into some detail on what appears to be inappropriate use of statistical techniques on climate data, they aren’t normally so hard for a lay person to follow.

As to the news story you link to… Antarctica is a very data sparse region, and the news story is giving the impression that a very strong conclusion is being drawn from the analysis of the available observational data. There are all sorts of quality control issues with analysing long-term climate records [ie are there biases introduced by changes in the measuring instruments used?], so the upper air warming signal might not actually exist, it’s hard to say.

However, Dr Ridley is correct when he says that the models have some problems with Antarctica, although this has little relevance* on how good they are for 21st century predictions for the globe as a whole.

* It is possible, though, that they cause models to slightly overestimate the ice-albedo feedback. Dr Ridley mentions the winds that come off Antarctica not mixing in the models, and this would cause the sea-ice to extend further north, reflecting more of the spring sunshine. The problem arises because most of this sea ice will melt in future global warming scenarios and the warming signal will be taken as the difference between the control [which perhaps has too much sea-ice] and the sea-ice free future. Consequently the difference is exaggerated. I would have thought it would be easy to see whether this was the case though, and it has probably been shown that the effect is not too large.

Comment by Timothy — 31 Mar 2006 @ 12:17 PM

Rasmus, when you say FL98 contains no trend, what do you really mean? Did the authors’ analysis fail to reject the null hypothesis? It wouldn’t be the first time that an examination of apparently trendless data found something highly significant when some correlate was taken into account. From your post it sounds as though S&W is just a data mining exercise, and I think that is a stronger criticism than the lack of trend in FL98.

[Response: There has been a contention between FL98 and Willson & Mordvinov (2003) about the issue of trend (long-term change) in their respective analyses. It boils down to how they sew together the various bits of data from different satellites (a bit like the MSU-trend story). The jury is still out on which analysis is the most correct one, although other solar activity indices suggest that there has not been much of a trend (http://www.agu.org/pubs/crossref/2005.../2005GL023621.shtml). The reason for the 'trend' in S&W after 1980 may be due to adding the FL98 series to the Lean et al. (1995) series, and due to their model assuming a lagged response. A link to the WM2003 paper is http://pubs.giss.nasa.gov/docs/2003/2003_WillsonMordvinov.pdf (see Fig. 2). When it comes to statistical significance and hypothesis testing, I do not recall whether the trends have been tested against a null hypothesis, but the short-term variability is quite high compared to the trend in the WM2003 case and the series is short, so I doubt the 'trend' is significant (just by eyeballing). -rasmus]

Comment by Steve Latham — 31 Mar 2006 @ 12:52 PM
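Rasmus's "eyeballing" point, that a short series with large short-term variability rarely yields a statistically significant trend, can be illustrated with a toy ordinary-least-squares check. This is illustrative Python only, not the actual WM2003 data; the series below is synthetic:

```python
import math
import random

def trend_t_stat(y):
    """OLS slope of an evenly spaced series and the t-statistic of that slope."""
    n = len(y)
    x = range(n)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    resid = [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, slope / se

# Synthetic 24-point series: a small trend buried in large year-to-year noise
random.seed(1)
series = [0.01 * t + random.gauss(0.0, 0.5) for t in range(24)]
slope, t = trend_t_stat(series)
print(f"slope = {slope:.3f}/step, t = {t:.2f}")  # compare |t| with ~2
```

With noise this large relative to the trend, |t| typically stays well below ~2, i.e. the fitted slope cannot be distinguished from zero.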

Dr Benestad has written an interesting critique of our Geophysical Research Letters paper, Scafetta & West (S&W). In my opinion Dr Benestad’s critique is very poor, for several reasons that I cannot fully and extensively explain here. But I will give a few examples.

A reader of RealClimate wrote to me asking me to reply publicly to Dr. Benestad’s critique on this open web site. While I would have preferred a more appropriate scientific forum for such a discussion, I believe that I should not disappoint this reader, as well as other readers of RealClimate.org who might be confused by Dr Benestad’s statements.

[Response: I appreciate Dr. Scafetta's response, and I think that a blog like RealClimate is an appropriate forum for discussions like these. -rasmus]

First critique: mysteries about the temperature patterns?

About our supposed inconsistencies, Dr. Benestad starts: “The S&W temperature signal, when closely scrutinised (their Fig. 3), starts at the 0K anomaly-level in 1900, well above the level of the observed 1900 temperature anomalies, which lie in the range -0.3K < T < -0.1K in Fig. 1.”

So, where does the mysterious “0K anomaly-level in 1900” come from?

Well, Dr. Benestad should look more carefully at the Y label of our figure 3. In fact, we are plotting the function f(t) = T_{sun}(t) - T_{sun}(1900). What is the value of f(t) in 1900? Well, we have f(1900) = T_{sun}(1900) - T_{sun}(1900) = 0K, right?

So, the first mystery is easily solved. The answer is that in figure 3 we are plotting the solar-induced temperature anomaly relative to the year 1900, not an anomaly relative to the 1960-1990 mean, as is usually done for temperature and as Fig. 1 shows.

In figure 3 we plotted the function “f(t)=T(t)-T(1900)” because in this way it is easier to visually estimate the warming induced by the sun since 1900, that is all.

A similar explanation clarifies the difference between the amplitudes of the peaks in the interval 1945-1960, which in figure 3 is at 0.3K while in figure 1 it is at 0.12K.

[Response: OK, but regardless of whether the anomalies are with respect to 1900 or any other period, the curves do not match. -rasmus]

About the time-shift of the peaks between 1945 and 1960: this is a more interesting issue. It can be explained in several ways. One is that the peak we found around 1960 in the solar-induced temperature signal is due to the fact that Lean’s solar irradiance proxy reconstruction, which we used, presents such a peak around 1960, while other TSI proxies, such as the Hoyt and Schatten reconstruction, present it in 1945. In fact, in the literature there are several TSI proxy reconstructions and they are all different. Some of these reconstructions are here:

http://www.grida.no/climate/ipcc_tar/wg1/fig6-5.htm

Thus, the reader can easily realize that there is a controversy about when the sun’s peak occurred, whether in 1945 or in 1960. We used Lean’s TSI because it is a good average among the several reconstructions, but we never claimed that Lean’s reconstruction is perfect in every pattern, nor were we interested in discussing the 1945-1960 solar peak controversy in detail in our paper.

Second critique: further mysteries about sensitivities.

Dr Benestad talks about climate sensitivity, the Stefan-Boltzmann law, and non-linear physics, and I think he creates great confusion. Well, let us clarify the issues.

We refer to the parameters Z as “climate sensitivity transfer parameters”. I stress the adjective ‘transfer’ because it is what Dr. Benestad did not notice in our paper. Our climate sensitivity transfer parameters have nothing to do with what climate textbooks call “climate sensitivity” parameters, which are calculated in a different way. In other words, we are using a different definition. Dr. Benestad did not realize this and thought it was a mistake. Dr. Benestad might not like our definitions, but he cannot criticize them as errors, because they are definitions and must be taken for what they are.

To better explain this, first let us look more carefully at the Stefan-Boltzmann law.

Dr. Benestad states: “The textbook formulae for a simple radiative balance model is: F = (1-A)/4 s T^4, where ‘s’ here is the Boltzmann constant (~5.67 x 10^-8 J/(s m^2 K^4)).”

First of all, Dr. Benestad’s equation is wrong. The right equation is:

(1) F = (1-A)/4 I = s T^4

[Response: This is correct. Thanks for pointing this out. -rasmus]

where A ≈ 0.3 is the albedo, I = 1365 W/m^2 is the solar irradiance, s = 5.67 x 10^-8 W/(m^2 K^4) is the Stefan-Boltzmann constant, and T = 288K is the average Earth temperature. The rationale of the above equation is easy: the term F = (1-A)/4 I is the amount of solar irradiance absorbed by the Earth’s surface, after considering that 30% of the input irradiance I is reflected away by the albedo and that what remains is spread over the spherical surface of the Earth (the factor 4). The second part of the equation is the Stefan-Boltzmann law.

Now let us calculate both sides of equation (1) with the above values; we obtain:

(2) (1-A)/4 I = 239 W/m^2

(3) s T^4 = 390 W/m^2

Why is there such a big difference? What kind of mystery is this? Well, the answer is simple: the Stefan-Boltzmann law works for a “black body”, while, as everybody knows, the Earth is not “black”!
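For what it is worth, the two numbers in Eqs. (2) and (3) are easy to reproduce; a minimal Python sketch using exactly the values quoted above:

```python
A = 0.3        # albedo
I = 1365.0     # total solar irradiance, W/m^2
s = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 288.0      # mean surface temperature, K

absorbed = (1 - A) / 4 * I  # left-hand side of Eq. (1)
emitted = s * T ** 4        # right-hand side of Eq. (1)
print(round(absorbed), round(emitted))  # 239 390
```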

[Response: The black-body radiation law still applies, albeit in a more complicated setting, since there are many other processes at work. Thus it would be correct to say that the Earth is not just a black body. -rasmus]

Thus, Dr Benestad’s equation above, even after my correction, cannot be applied to the Earth’s climate system.

But let us make a further interesting calculation. Let us suppose that the Earth is a black body and use the Stefan-Boltzmann law (1) to calculate the hypothetical temperature T given the solar input I = 1365 W/m^2. We get:

(4) T = 255K (black-body approximation)

Now, let us reason a little. The black-body approximation gives T = 255K; this would mean that everything on Earth would be frozen, because ice melts at T = 0C = 273K. The mystery is now why the climate is much warmer, with an average Earth temperature of 288K, almost 33K higher. The answer is easy: the atmosphere of the Earth is full of so-called greenhouse gases (water vapor above all, CO2, CH4, etc.) that warm it to T = 288K. In fact, greenhouse gases provide a powerful positive feedback to the solar input and warm the climate to the actual 288K.
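The 255K figure (and the 33K greenhouse gap) follows directly from inverting Eq. (1); a quick numerical check:

```python
A, I, s = 0.3, 1365.0, 5.67e-8
# Black-body equilibrium temperature from s*T^4 = (1-A)/4 * I
T_bb = ((1 - A) / 4 * I / s) ** 0.25
print(round(T_bb), round(288 - T_bb))  # 255 33
```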

I believe any reader now understands where the problem is with Dr Benestad’s argument. The Stefan-Boltzmann law does not take into consideration the warming feedback effects of the greenhouse gases, so it cannot be used to study the real Earth climate. We therefore have to use a different approach. There are two possibilities: 1) use a climate model, which implies a perfect knowledge of all the climatic mechanisms involved, and nobody has such knowledge yet; or 2) use a simpler phenomenological approach. We adopted the second approach and use a transfer methodology that defines (I stress “defines”) the equilibrium value as

(5) Z_{eq} = T/I = 288/1365 = 0.21 K/(W/m^2)

[Response: This implies a linear response between F and T, although you do not state so. Furthermore, this estimate does not involve a small interval over which the response can be approximated as being linear. Thus, I do not believe that this transfer function can be applied. -rasmus]

A curiosity: what would Z_{eq} be if the Earth were a black body and the Stefan-Boltzmann law worked? The answer is easy; with a little algebra it is

(6) Z_{eq} = T/I = (1-A)/4 / (sT^3) = 0.13 K/(W/m^2) (black-body approximation)

Thus, why is the value in (5) larger than the value in (6)? Answer: because in (6), under the black-body approximation, the positive feedbacks due to the greenhouse effect do not exist.
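The two sensitivity numbers in Eqs. (5) and (6) check out numerically; a sketch using the values from the text:

```python
A, I, s, T = 0.3, 1365.0, 5.67e-8, 288.0
Z_eq = T / I                       # Eq. (5)
Z_bb = (1 - A) / 4 / (s * T ** 3)  # Eq. (6), black-body approximation at T = 288K
print(round(Z_eq, 2), round(Z_bb, 2))  # 0.21 0.13
```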

Dr. Benestad states: “It is well known, that these feedbacks are highly non-linear. Let’s just mention the ice-albedo feedback, which is very different at (hypothetically) e.g. 100K surface temperature with probably ‘snowball earth’ and at 300K with no ice at all.”

In this statement there is much confusion, due mostly to the fact that Dr. Benestad writes much but does not do any calculation. Our estimates and calculations study solar effects on the climate within a very small temperature interval, approximately 1K around the average of 288K. In this small interval our linear-like assumption in Eq. (5) is perfectly fine.

In fact, if in Eq. (5) we put T = 289K or T = 287K instead of T = 288K, the changes are very small. As for the ice-albedo feedback: within a 1K temperature oscillation the albedo will change by, let us say, 10%, so for an increase of 1K the albedo would decrease from A = 0.3 to A = 0.27. But putting the latter value in Eq. (6), the value of Z would change by approximately 1-3%, which is a very small change and can be neglected.

Moreover, we never stated that the value Z_{eq} is linear or constant at any temperature from 100K to 300K, as Dr. Benestad claims. Z_{eq} is the equilibrium climate transfer sensitivity to solar input at a given temperature, which in our case is T = 288K, and at a given solar irradiance I = 1365 W/m^2. Of course Z_{eq} will change significantly for a large change of the temperature. So, Dr. Benestad should not misquote us to build his argument: we never said that Z_{eq} is linear or constant with temperature at any value of the temperature.

Dr. Benestad states: “In their formula for the calculation of the sun-related temperature change, the long-term changes are determined by Zeq, while their ‘climate transfer sensitivity to slow secular solar variations’ (ZS4) is only used to correct for a time-lag. The reason for this remains unclear.”

Perhaps the “reason is unclear” to Dr. Benestad because he did not read our paper carefully enough. He would have realized that the use we make of Z_{eq} is very limited: it is only a constant that drops out when in Fig. 3 we calculate f(t) = T(t) - T(1900). Contrary to what Dr. Benestad states, we adopted Z_S4 as our transfer climate sensitivity for the slow secular solar variation, and not only for correcting a time-lag as he states. This is clearly stated in Eq. 3 of our paper. Again, Dr. Benestad should not misquote us to build his argument.

Third critique: “solar climate transfer sensitivity” or “climate sensitivity”?

Dr. Benestad states: “They take the ratios of the amplitude of band-passed filtered global temperatures to similarly band-passed filtered solar signal as the estimate for the ‘climate sensitivity’. This is a very unusual way of doing it, but S&W argue that a similar approach has been used in another study. However, it’s not as simple as that calculating the climate sensitivity.”

The reply to this comment is simple. As we said in our paper and above in this reply, we are not using the traditional “climate sensitivity” definition commonly found in climate textbooks; we introduced a novel sensitivity called the “solar climate transfer sensitivity”. The word “transfer” marks the difference. Because we are using a different definition from the one Dr. Benestad knows, Dr. Benestad should first quote our paper correctly and then simply make a little effort to understand our definition and accept it. In fact, we are free to use whatever definition we wish and to do the calculation in accordance with it. A definition is a definition and cannot be criticized by making a different definition.

[Response: There is no guarantee that such definitions really are representative of the natural processes. I argue that it is not. -rasmus]

About our estimates of the climate transfer sensitivity to solar variations at 11 years and 22 years, Dr. Benestad again creates great confusion by misquoting and misunderstanding our paper. Let us see why.

In fact, our finding is based on three different ways of doing the calculations. In our 2005 paper we present a method based on wavelet band-passed filtered signals, but we also referenced two other works: one by Douglass and Clader (2002) and another by White et al. (1997). Douglass and Clader use a multivariate linear regression analysis that explicitly takes into consideration the ENSO signal and the volcano signal. White et al. adopt a Fourier band-pass filter on the interval 1900-1991. All three methods agree on what we have called the transfer sensitivity to 11-year cycles, Z11y = 0.11 +/- 0.02 K/(W/m^2). Thus, our conclusion was that the phenomenological climate transfer sensitivity to the 11-year solar cycle is likely given by Z11y = 0.11 +/- 0.02 K/(W/m^2).

The above finding reinforces our interpretation. Dr. Benestad reasons in general terms, while we reason about the particular case we are analyzing, where the techniques work correctly, also because in 1980-2002 the ENSO oscillations are quite fast (roughly 2-4 years) and are cut off by the filter, and the two volcanic eruptions have a limited effect of 3-4 years as well. In fact, if Douglass and Clader, by explicitly removing the ENSO and volcano signals, find solar-induced oscillations of 0.1K, and we find the same thing with another method, we have to conclude that everything works sufficiently well. In any case the important thing is the value of the sensitivity at the 11-year solar cycle, and this is Z11y = 0.11 +/- 0.02 K/(W/m^2).
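As an illustration of the amplitude-ratio idea, here is a Fourier projection on synthetic series. To be clear, this is not the wavelet band-pass actually used by S&W, and the 0.11 K amplitude is put in by hand:

```python
import math

def cycle_amplitude(series, period):
    """Amplitude of the sinusoid with the given period, by Fourier projection.
    Exact when the series length is a whole number of periods."""
    n = len(series)
    w = 2 * math.pi / period
    a = 2 / n * sum(y * math.cos(w * i) for i, y in enumerate(series))
    b = 2 / n * sum(y * math.sin(w * i) for i, y in enumerate(series))
    return math.hypot(a, b)

n = 88  # 8 complete 11-year cycles
tsi = [1.00 * math.sin(2 * math.pi * t / 11) for t in range(n)]         # W/m^2
temp = [0.11 * math.sin(2 * math.pi * t / 11 - 0.3) for t in range(n)]  # K, lagged
Z11 = cycle_amplitude(temp, 11) / cycle_amplitude(tsi, 11)
print(round(Z11, 2))  # 0.11
```

The ratio recovers the assumed sensitivity regardless of the phase lag, which is the attraction of the method; the dispute in this thread is over whether such a ratio transfers to slower variations.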

Fourth critique: “sensitivity at slower trends” and “spurious trends”?

Dr. Benestad states: “From regression analysis cited by the authors (Douglass and Clader 2002, White et al. 1997), it seems possible that the sensitivity of global surface temperature to variations of total solar irradiance might be about 0.1K/Wm-2. S&W do not present any convincing result that would point to noticeably higher sensitivities to long-term variations. Their higher values are based on unrealistic assumptions.”

Perhaps Dr. Benestad would be more convinced after a more careful reading of our paper. About the transfer sensitivity at 22 years, Z22y = 0.17 +/- 0.06 K/(W/m^2): we clearly explained in our paper that this is approximately 1.5 times larger than Z11y, in agreement with theoretical energy balance model estimates such as Wigley (1988) or Foukal et al. (2004). (The paper by Foukal et al. 2004 is extremely clear on this larger sensitivity of climate to slower secular solar variations; see their figure 1.) In fact, for slower solar variations the climate sensitivity should be stronger than the 11-year sensitivity, because of the frequency dependence of the ocean’s thermal inertia and general out-of-equilibrium thermodynamic effects. Moreover, with an alternative method White et al. (1997) calculated something like 0.15 K/(W/m^2) for the 22-year cycle. Thus, there are sufficient studies, both theoretical and phenomenological, confirming our result that for slower variations the climate sensitivity is stronger.

Dr. Benestad finally states: “we have already discussed the connection between solar activity, and this new analysis does not alter our previous conclusions: that there is not much evidence pointing to the sun being responsible for the warming since the 1950s.”

Well, we have shown that the sun was responsible for ~25-35% of the warming since the 1950s if we adopt Lean’s proxy reconstruction and the PMOD and ACRIM satellite composites. Dr. Benestad’s reasoning is based on the erroneous assumption that if there are no significant trends in some proxies of solar activity since the 1950s, the sun is not contributing to global warming.

This is wrong for two reasons. First, all TSI proxy reconstructions present a clear upward trend during the period 1900-2000 (as a reader can see here: http://www.grida.no/climate/ipcc_tar/wg1/fig6-5.htm). Second, because the 1900-1950 TSI value was lower than the 1950-2000 TSI value, this alone would induce a solar-induced warming of the atmosphere during 1950-2000, even if during the period 1950-2000 the sun was perfectly constant. In fact, as a reader can easily understand, if I put a pot of cold water on a fire, the temperature of the water will slowly increase even if the temperature of the heater (the fire) is perfectly constant. This is elementary out-of-equilibrium thermodynamics that everybody knows.
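The pot-on-the-fire argument is the behaviour of a simple one-box energy balance model, C dT/dt = F - L*T: after the forcing stops changing, the temperature keeps rising toward F/L. A toy Euler integration (all parameters arbitrary, chosen only to show the lag):

```python
C, L = 10.0, 1.0   # heat capacity and restoring coefficient (arbitrary units)
F = 1.0            # forcing: stepped up at t=0, then held perfectly constant
dt, T = 0.01, 0.0
temps = []
for step in range(int(30 / dt)):
    T += dt * (F - L * T) / C   # Euler step of C dT/dt = F - L*T
    temps.append(T)
print(round(temps[499], 2), round(temps[-1], 2))  # 0.39 0.95, still short of 1.0
```

Whether the real ocean’s lag is long enough to matter for 1950-2000 is exactly the quantitative question under dispute here.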

[Response: Thanks for this interesting thought. One question is then how to explain why the climate system takes so long to reach equilibrium, i.e. to catch up, since at some point the water in your kettle will reach a stable state where heat gained equals heat lost. One can look to other periods in history and see if there was a similar lag then. -rasmus]

I hope the above comments might be of help.

Nicola Scafetta, PhD

Duke University

[Response: Thank you for taking the time to write this response. -rasmus]

Comment by Nicola Scafetta — 4 Apr 2006 @ 1:29 PM

Just a quick point: if a comment is first written in MS Word, “smart quotes” needs to be turned off in the options, as MS Word inserts non-standard quote characters which do not render in browsers other than IE (e.g. on Macs and Firefox), making parts of those comments difficult to read.

I’m guessing that’s what’s happened above. It might be a good idea to put a note of that on the comments form.

Comment by Adam — 5 Apr 2006 @ 4:16 AM

I am adding a few comments because my previous response to Dr. Benestad was partially cut.

[Response: It wasn't cut - you used raw < symbols, which confuse the software into thinking it's HTML. I've fixed the text above and deleted the repetition. -gavin]

About Dr. Benestad’s additional short replies:

>>>[Response:OK, but regardless whether the anomalies are with respect to 1900 or any other period, the curves do not match. -rasmus]

The curves do not have to match perfectly, because the sun is not driving 100% of the climate change. In any case, over the century a good correlation is visible: both TSI and temperature increase during the first half of the century, decrease within approximately 1950-1975, and increase again afterward.

>>>[Response:The black body radiation law still applies, albeit in a more complicated settings - there are many other processes at work. Thus it would be correct to say that the Earth is not just a black body -rasmus]

The black-body radiation law applies, but only after severe corrections. In fact, it is one component among several others.

>>>[Response:This implies a linear response between F and T, although you do not state so. Furthermore, this estimate does not involve a small interval over which the response can be aproximate as being linear. Thus, I do not believe that this transfer function can be applied. -rasmus]

This does NOT imply a linear response between F and T. If I take the ratio between two quantities I am not implying their mutual linearity. The reason is explained one paragraph later in my reply above: if we assume a black-body approximation, the dependence of T/I on T is described by Eq. 6 above.

>>>[Response:Thanks for this interesting thought. One question is then howto explain why the climate system takes so long to reach equilibrium - i.e. catch up, since at one point the water in your kettle will reach a stable state where heat gained equals heat loss. One can look to other periods in history, and see if there could be similar lag then. -rasmus]

The reason the climate takes a significant time to reach equilibrium with the sun is that the ocean is heated from above, not from below like a kettle. Moreover, water has a low heat conductance. This means that time (several years) is needed to heat the deep ocean and reach a new thermodynamic equilibrium. These things are basic climate thermodynamics that any serious energy balance model contains; see for example Wigley [1988].

Comment by Nicola Scafetta — 5 Apr 2006 @ 10:37 AM

While most of these comments have been technical, I wanted to bring up the fact that the future of renewable energy is increasingly at risk. In 2004, an estimated 350 new coal-fired power plants were expected to be online by 2012 in the US, India and China. Nearly 100 of these are expected to be built in the US. The output from these power plants would dwarf any greenhouse-gas emission savings from the Kyoto Protocol. Co-op America is currently undertaking an action to tell three major US corporations (Peabody, Sempra, and Dominion) that coal is NOT the answer to affordable power and that they should be investing much more in solar and other renewable energy technologies. The action can be found at: http://www.coopamerica.org/takeaction/coalpower/. I urge you all to take it, let people know about it, and help raise awareness about this issue. Imagine if those billions could be invested in solar!

Comment by Ann Church — 6 Apr 2006 @ 2:38 PM

Re 6

I cannot follow your argument. Just two major points:

Your formula (5), Zeq = T/I = 288/1365 = 0.21 K/(W/m^2):

If you have a non-linear function f(F) = T, you can make a linear approximation for a certain interval (F1, F2) by assuming T = kF with k = (f(F2) - f(F1))/(F2 - F1). What you claim to do is calculate a linear approximation for the interval (1364 W/m2, 1366 W/m2). What you really do, and that is your formula, is a linear approximation for the interval (0 W/m2, 1365 W/m2). It is very unlikely that the linear approximation over the whole range and that over a small range are the same in a highly non-linear function.
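Urs Neu's point can be made numerically. For a black body, T(I) = ((1-A)I/(4s))^(1/4); the secant slope over (0, 1365) is very different from the local slope near I = 1365. A sketch using the constants quoted earlier in the thread:

```python
A, s = 0.3, 5.67e-8

def T(I):
    """Black-body equilibrium temperature for irradiance I."""
    return ((1 - A) * I / (4 * s)) ** 0.25

secant = T(1365.0) / 1365.0              # slope over the whole interval (0, 1365)
local = (T(1366.0) - T(1364.0)) / 2.0    # centered-difference slope near 1365 W/m^2
print(round(secant, 3), round(local, 4))  # 0.187 0.0467
```

The whole-range ratio is about four times the local slope, which is the sense in which T/I overstates the response to a small change in I.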

Formula (3) in your paper shows clearly that Zeq is the determining factor for the long-term trend. Term 1 multiplies Zeq by the low-frequency signal, which is a function of time and thus not constant; it determines the long-term trend. Term 2 multiplies Z_S4 by the difference between the low-frequency signal at time t and the low-frequency signal at time t plus the time-lag. This represents a correction for the time-lag and does not influence the long-term trend. Z_S4 therefore has no influence on the long-term trend, in contrast to Zeq.

Comment by Urs Neu — 7 Apr 2006 @ 7:24 AM

Thanks, Dr. Scafetta, for your response to the incorrect critique of your paper. I look forward to reading your future work.

[Response: I still maintain my critique of the S&W paper. -rasmus]

Comment by John — 18 Apr 2006 @ 8:18 PM

My apologies … It appeared to me that he answered each of your concerns.

Comment by John — 19 Apr 2006 @ 11:14 PM

Is this news? (I’m not sure if this is a confirmation of what’s been assumed in past years, from the abstract alone)

http://www.agu.org/pubs/crossref/2006/2006GL025921.shtml

“it is confirmed with the new palaeomagnetic series that the Sun spends only 2-3% of the time in a state of high activity, similar to the modern episode. This strengthens the conclusion that the modern high activity level is very unusual during the last 7000 years.”

[Response: I haven't had the time to read this paper yet, but the line '... strengthens the conclusion that the modern high activity level is very unusual during the last 7000 years' suggests it's not really breaking news. Past studies have found the highest solar activity levels in (since) the 1940s based on proxy data, but modern instrumental measurements do not show any trend since the 1950s. -rasmus]

Comment by Hank Roberts — 29 Apr 2006 @ 4:04 PM

I am revisiting and still trying to understand this (to be honest, I am merely hoping the climate scientists will come back and continue the discussion).

I think it could be more than theoretically interesting, because, if political opinion reaches a ‘tipping point’, one obvious quick-and-dirty “fix” is to dump a lot of dust at the L1 Lagrange point (where the SOHO satellite is now).

http://www.physics.montana.edu/faculty/cornish/lagrange.html

Why? Because material there is only temporarily “stable” and drifts out of position in less than a month, so it could be tried just to “see if it helps”; because we’ve just watched a nearby comet fall apart and learned much about reaching and fragmenting them; and because it’s the sort of big gesture any of several governments is currently capable of. It wouldn’t take all that much dust at L1 to give a temporary quick drop in insolation onto Earth (like the 9/11-13 contrail pause in 2001).
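For scale, a crude back-of-envelope I am adding here (not from the comment): offsetting a forcing of ~3.7 W/m^2, the figure often quoted for doubled CO2, would require blocking only a percent or two of the absorbed sunlight:

```python
target_offset = 3.7      # W/m^2, assumed forcing to cancel (doubled-CO2 ballpark)
absorbed_solar = 239.0   # W/m^2, mean absorbed sunlight (from the thread above)
fraction = target_offset / absorbed_solar
print(f"{fraction:.1%}")  # 1.5%
```

Small as a fraction, but a huge absolute amount of material at L1, which is part of why this stays science fantasy.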

I’d sooner see the scientists in some agreement about what doing that kind of thing would accomplish before it becomes a field experiment.

—

But, science fantasy aside — I do hope you all who have been willing to actually engage on the science here will come back and talk to us more soon.

Comment by Hank Roberts — 1 May 2006 @ 11:00 AM

[...] on this particular study here). S&W later suggested 25-35% from 1980-2000. Dr. Rasmus Benestad notes that they “used some crude estimates of ‘climate sensitivity’ and estimates of [...]

Pingback by Understanding Global Warming « Understanding Global Warming — 30 Dec 2007 @ 3:47 AM

Re: A cold spell soon to replace global warming… 1. That is not what this paper says. It says IF these assumptions are true th……

Trackback by tribe.net: www.realclimate.org — 19 Jan 2008 @ 4:40 PM

[...] you got some times to research more about the topic, you might want to try RealClimate. It is another interesting resource. I haven’t browse through it a lot, but so far… [...]

Pingback by beyond today! › GW for Global Warming, GC for …? — 10 Feb 2008 @ 7:33 AM

[...] (Urs Neu comments on this particular study here), or 25-35% from 1980-2000. Dr. Rasmus Benestad notes that they “used some crude estimates of ‘climate sensitivity’ and estimates of [...]

Pingback by Understanding the Basics of Global Holocene Climate Change « Understanding Global Warming — 16 Mar 2009 @ 8:57 PM

[...] Third, even the very few analyses that conclude the sun was a significant contributor in the past century find that the sun’s impact relative to carbon dioxide has been shrinking (since, of course, greenhouse gas emissions and concentrations have been soaring). So, a statement that up to about 50% of the warming in the last hundred years can be explained by the sun turns into at most 25% to 35% of the warming since 1980 can be explained by the sun in Scarfetta and West’s 2006 paper, which, in any case, was debunked by RealClimate here. [...]

Pingback by Climate Progress » Blog Archive » Inhofe recycles long-debunked denier talking points — will the media be fooled (again)? — 3 Apr 2009 @ 6:24 PM