In an earlier post, we discussed a review article by Frohlich et al. on solar activity and its relationship with our climate. We thought that paper was quite sound. This September saw a new article in Geophysical Research Letters with the title «Phenomenological solar signature in 400 years of reconstructed Northern Hemisphere temperature record» by Scafetta & West (henceforth referred to as SW). The article has already been cited by US Senator James Inhofe in a senate hearing that took place on 25 September 2006. SW find that solar forcing accounts for ~50% of 20th-century warming, but this conclusion relies on some rather primitive correlations and is sensitive to the assumptions made (see Gavin's recent post on attribution). We have said before that peer review is a necessary but not sufficient condition. So what's wrong with this paper…?
The greatest flaw, I think, lies in how their novel scale-by-scale transfer sensitivity model (which they call "SbS-TCSM") is constructed. The coefficients, which they call transfer functions, are estimated by taking the difference between the mean temperatures of the 18th and 17th centuries and dividing it by the corresponding difference in the century averages of total solar irradiance. Thus:
Z = [ T(18th C.) – T(17th C.) ] / [ I(18th C.) – I(17th C.) ]
Here T(.) is the century-average temperature and I(.) the century-average irradiance. If the two terms in the denominator, I(18th C.) and I(17th C.), have very similar values, then the problem is ill-conditioned: small variations in the input values lead to large changes in the answer, which implies very large error bounds. In my undergraduate physics course, we learned that one should stay away from analyses based on the difference between two large but almost equal numbers, especially when their accuracy is not exceptional. Putting such a difference in a denominator is asking for trouble.
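To see just how sensitive such a difference quotient is, consider a small numerical sketch. All the values below are made up for illustration (they are not SW's actual data); the point is only that the denominator is the difference of two large, nearly equal numbers:

```python
# Illustrative only: hypothetical century-average values, NOT SW's actual data.
def transfer_coefficient(T_late, T_early, I_late, I_early):
    """Z = Delta-T / Delta-I, following the construction described above."""
    return (T_late - T_early) / (I_late - I_early)

# Suppose two century-average irradiances differ by only 0.3 W/m^2
# out of roughly 1365 W/m^2 -- two large, almost equal numbers.
Z1 = transfer_coefficient(-0.3, -0.5, 1365.3, 1365.0)

# Now revise one irradiance estimate by a tiny 0.1 W/m^2...
Z2 = transfer_coefficient(-0.3, -0.5, 1365.2, 1365.0)

print(Z1, Z2)  # the coefficient jumps from ~0.67 to 1.0, a 50% change
```

A change of less than 0.01% in one input moves the transfer coefficient by 50%, which is exactly the ill-conditioning the paragraph above describes.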
So when SW repeated the exercise for the differences between the 19th and 17th centuries, and for three different estimates of the total solar irradiance, the results gave a wide range of values for the transfer functions: from 0.20 to 0.57! The real problem is that SW attribute all climatic fluctuations between the 17th and 19th centuries to solar activity, and hence neglect other factors such as land-use changes (North America and Europe underwent large-scale deforestation), volcanism (see IPCC TAR Fig 6-8), and internal variations due to chaotic dynamics. It is therefore possible to select two intervals over which the average total solar irradiance is the same but the temperature is not. When the difference in the denominator of their equation is small (i.e. the change in total solar irradiance is small), the model blows up, because other factors also affect the temperature (the difference in temperature is not zero). Their model is thus likely to exaggerate the importance of solar activity.
To show that the equation is close to blowing up (being "ill-conditioned"), their exercise can be repeated for the differences between the 19th and 18th centuries, which was not done in the SW paper. A back-of-the-envelope calculation using the results in their Table 1 and Figures 1-2 suggests that the transfer functions would then yield an increase of almost 1K for the period 1900-2000, most of which should have been realized by 1980! One problem is that the solar-based reconstruction now increases faster than the actual temperature proxy, which would be difficult to explain physically (without invoking a negative forcing).
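The shape of that back-of-the-envelope calculation can be sketched as follows. Every number here is an illustrative placeholder, not a value from SW's Table 1; the sketch only shows how a coefficient estimated from the 19th-18th century differences gets extrapolated to the 20th century:

```python
# Illustrative placeholders throughout -- NOT the actual values in SW's Table 1.
T_19, T_18 = -0.35, -0.55      # hypothetical century-mean temperature anomalies, K
I_19, I_18 = 1365.6, 1365.2    # hypothetical century-mean irradiances, W/m^2

# Transfer coefficient from the 19th-18th century differences
Z = (T_19 - T_18) / (I_19 - I_18)   # K per (W/m^2)

# Extrapolate with an assumed 1900-2000 irradiance rise
delta_I = 2.0                  # hypothetical 20th-century irradiance rise, W/m^2
solar_warming = Z * delta_I
print(solar_warming)           # with these placeholders: ~1 K attributed to the sun
```

With plausible-looking inputs the extrapolation hands essentially the whole 20th-century warming to the sun, which is the implausible outcome flagged above.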
The SW paper does discuss effects of land-use changes, but only to argue that the recently observed warming in the Northern Hemisphere may be over-estimated due to e.g. heat-island effects. SW fail to mention effects that may counteract warming trends, such as irrigation, better shielding of the thermometers, and increased aerosol loadings, and they overlook the fact that forests were cut down on a large scale in both Europe and North America in the earlier centuries. Another weakness is that the SW analysis relies on just one paleoclimatic temperature reconstruction; using other reconstructions would likely yield different results.
Looking at the SW curves in more detail (their Fig. 2), one of the most pronounced changes in their solar-based temperature predictions is a cooling at the beginning of the record (before 1650), but no corresponding drop is seen in the temperature curve. This cooling is of roughly similar magnitude to the increase between 1900 and 1950, yet it is not discussed in the paper. As in their earlier papers, the solar-based reconstructions are not in phase with the proxy data. SW argue, however, that by using different data for the solar irradiance, the peaks in 1940 (SW claim it is in 1950) and 1960 would be in better agreement. So why not show it then? Why use the lesser data?
The curves in Figure 2 of the SW paper (our Fig. 2 here shows the essential details of their figure) suggest that their reconstruction increases from -0.4 to 0K between 1900 and 2000, whereas the proxy temperature data from Moberg et al. (2005) change from -0.4 to more than +0.6K (by rough eye-balling). One statement made in both the abstract of the SW paper and its Discussion and Conclusions (and cited in the senate hearing) is that «the sun might have contributed to approximately 50% of the total global surface [warming] since 1900 [Scafetta and West, 2006 – an earlier paper this year]». But the figure in the SW paper suggests at most 40%! So why quote another figure? The older Scafetta and West (2006) paper which they cite (also published in Geophysical Research Letters) is discussed here, and I'm not convinced that the figures in that paper are correct either.
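The 40% figure follows directly from the eye-balled readings just quoted:

```python
# Rough readings from the curves, as eye-balled in the text above.
solar_rise = 0.0 - (-0.4)     # SW's solar-based reconstruction, 1900-2000: 0.4 K
observed_rise = 0.6 - (-0.4)  # Moberg et al. (2005) proxy, 1900-2000: at least 1.0 K

fraction = solar_rise / observed_rise
print(round(fraction * 100))  # at most 40 percent, not the quoted ~50%
```

And since the Moberg curve rises to *more* than +0.6K, 40% is itself an upper bound.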
There are some reasons to think that solar activity may have played some role in the past (at least before 1940), but I must admit, I'm far from convinced by this paper because of the method adopted. It is no coincidence that regression is a more widely used approach, especially in cases where many factors may play a role. The proper way to address this question, I think, would be to identify all the physical relationships, if possible set up the equation with the right dimensions and all appropriate non-linear terms, and then apply a regression analysis (e.g. as used in "fingerprint" methods). Last week, we discussed the importance of a physical model in making attributions, because statistical correlations are incapable of distinguishing between forcings with similar trends. Here is an example of a paper with exactly that problem.
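As a sketch of the multiple-regression alternative, one can regress a temperature series on several candidate forcings simultaneously, instead of attributing everything to one of them. The series below are entirely synthetic (made up for illustration, assuming numpy is available); the point is that a joint fit recovers each forcing's contribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic annual time steps

# Synthetic forcing series -- made up for illustration only
solar = np.sin(np.linspace(0, 4 * np.pi, n))      # quasi-cyclic solar proxy
ghg = np.linspace(0, 1, n) ** 2                   # slowly accelerating trend
volcanic = -np.abs(rng.normal(0, 0.3, n))         # episodic negative spikes

# "True" temperature: a known mix of the three forcings plus noise
temp = 0.2 * solar + 1.0 * ghg + 0.5 * volcanic + rng.normal(0, 0.05, n)

# Least-squares regression on all forcings at once (plus an intercept)
X = np.column_stack([solar, ghg, volcanic, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)

print(coeffs[:3])  # recovers roughly [0.2, 1.0, 0.5]
```

Note the caveat this illustrates from the other direction: if two forcings shared essentially the same trend, the fit could not cleanly separate their coefficients, which is why a physical model is still needed to break that degeneracy.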
There is also a new paper out on the relationship between galactic cosmic rays and low clouds by Svensmark. We will write a post on this paper shortly.