Sometimes on Realclimate we discuss important scientific uncertainties, and sometimes we try and clarify some subtle point or context, but at other times, we have a little fun in pointing out some of the absurdities that occasionally pass for serious ‘science’ on the web and in the media. These pieces look scientific to the layperson (they have equations! references to 19th Century physicists!), but like cuckoo eggs in a nest, they are only designed to look real enough to fool onlookers and crowd out the real science. A cursory glance from anyone knowledgeable is usually enough to see that concepts are being mangled, logic is being thrown to the winds, and completely unjustified conclusions are being drawn – but the tricks being used are sometimes a little subtle.
Two pieces that have recently drawn some attention fit this mold exactly. One is by Christopher Monckton (a viscount, no less, with obviously too much time on his hands), which comes complete with supplementary ‘calculations’ using his own ‘M’ model of climate, and one is on JunkScience.com (‘What Watt is what’). Junk Science is a front end for Steve Milloy, a long-time tobacco, drug and oil industry lobbyist who has been a reliable source for these ‘cuckoo science’ pieces for years. Curiously enough, both pieces use some of the same sleight-of-hand to fool the unwary (coincidence?).
But never fear, RealClimate is here!
The two pieces both spend a lot of time discussing climate sensitivity but since they don’t clearly say so upfront, it might not at first be obvious. (This is possibly because if you google the words ‘climate sensitivity’ you get very sensible discussions of the concept from Wikipedia, ourselves and the National Academies). We have often made the case here that equilibrium climate sensitivity is most likely to be around 0.75 +/- 0.25 °C/(W/m2) (corresponding to about a 3°C rise for a doubling of CO2).
Both these pieces instead purport to show using ‘common sense’ arguments that climate sensitivity must be small (more like 0.2 °C/(W/m2), or less than 1°C for 2xCO2). Our previous posts should be enough to demonstrate that this can’t be correct, but it is worth seeing how they arithmetically manage to get these answers. To save you having to wade through it all, I’ll give you the answer now: the clue is in the units of climate sensitivity – °C/(W/m2). Any temperature change (in °C) divided by any energy flux (in W/m2) will have the same unit and thus can be ‘compared’. But unless you understand how radiative forcing is defined (it’s actually quite specific), and why it’s a useful diagnostic, these similar-seeming values could be confusing. Which is presumably the point.
Readers need to be aware of at least two basic things. First off, an idealised ‘black body’ (which gives off radiation in a very uniform and predictable way as a function of temperature – encapsulated in the Stefan-Boltzmann equation) has a basic sensitivity (at Earth’s radiating temperature) of about 0.27 °C/(W/m2). That is, a change in radiative forcing of about 4 W/m2 would give around 1°C warming. The second thing to know is that the Earth is not a black body! On the real planet, there are multitudes of feedbacks that affect other greenhouse components (ice albedo, water vapour, clouds etc.) and so the true issue for climate sensitivity is what these feedbacks amount to.
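For the curious, the black body number is easy to check yourself. This is just a sketch of the standard textbook calculation (the variable names are mine, not from any particular source): differentiating the Stefan-Boltzmann law F = σT⁴ gives dT/dF = 1/(4σT³), which at Earth’s effective radiating temperature of about 255 K comes out near 0.27 °C/(W/m2):

```python
# No-feedback 'black body' sensitivity from the Stefan-Boltzmann law.
# F = sigma * T**4, so dT/dF = 1 / (4 * sigma * T**3).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m2/K^4
T_EFF = 255.0     # Earth's effective radiating temperature, K

def black_body_sensitivity(t=T_EFF):
    """No-feedback sensitivity dT/dF, in degrees C per W/m2."""
    return 1.0 / (4.0 * SIGMA * t**3)

lam = black_body_sensitivity()
print(f"{lam:.2f} degC/(W/m2)")   # ~0.27, i.e. roughly 4 W/m2 per degree
```

Note that this is exactly the no-feedback case: nothing about water vapour, clouds or ice enters the formula, which is the whole point of the paragraph above.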
So here’s the first trick. Ignore all the feedbacks – then you will obviously get to a number that is close to the ‘black body’ calculation. Duh! Any calculation that lumps together water vapour and CO2 is effectively doing this (and if anyone is in any doubt about whether water vapour is a forcing or a feedback, I’d refer them to this older post).
As we explain in our glossary item, climatologists use the concept of radiative forcing and climate sensitivity because it provides a very robust predictive tool for knowing what model results will be, given a change of forcing. The climate sensitivity is an output of complex models (it is not decided ahead of time) and it doesn’t help as much with the details of the response (i.e. regional patterns or changes in variance), but it’s still quite useful for many broad brush responses. Empirically, we know that for a particular model, once you know its climate sensitivity you can easily predict how much it will warm or cool if you change one of the forcings (like CO2 or solar). We also know that the best definition of the forcing is the change in flux at the tropopause, and that the most predictable diagnostic is the global mean surface temperature anomaly. Thus it is natural to look at the real world and see whether there is evidence that it behaves in the same way (and it appears to, since model hindcasts of past changes match observations very well).
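To make the ‘robust predictive tool’ point concrete, here is a minimal sketch (my own illustration, with the standard ~3.7 W/m2 forcing for doubled CO2 assumed): once you know a model’s sensitivity, the expected equilibrium warming for any forcing change is just the product of the two.

```python
# Equilibrium warming predicted from a known sensitivity:
#   delta_T = sensitivity * delta_F
# Function name and structure are mine; numbers follow the post.
def predicted_warming(sensitivity, forcing):
    """Equilibrium temperature change (degC) for a given forcing (W/m2)."""
    return sensitivity * forcing

F_2XCO2 = 3.7  # W/m2, commonly used forcing for a doubling of CO2

# The post's range of 0.75 +/- 0.25 degC/(W/m2):
for lam in (0.5, 0.75, 1.0):
    print(f"sensitivity {lam}: {predicted_warming(lam, F_2XCO2):.1f} degC for 2xCO2")
```

With the central value of 0.75 °C/(W/m2) this gives about 2.8°C for doubled CO2 – the ‘about 3°C’ quoted above.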
So for our next trick, try dividing energy fluxes at the surface by temperature changes at the surface. As is obvious, this isn’t the same as the definition of climate sensitivity – it is in fact the same as the black body (no feedback case) discussed above – and so, again it’s no surprise when the numbers come up as similar to the black body case.
But we are still not done! The next thing to conveniently forget is that climate sensitivity is an equilibrium concept. It tells you the temperature that you get to eventually. In a transient situation (such as we have at present), there is a lag related to the slow warm up of the oceans, which implies that the temperature takes a number of decades to catch up with the forcings. This lag is associated with the planetary energy imbalance and the rise in ocean heat content. If you don’t take that into account it will always make the observed ‘sensitivity’ smaller than it should be. Therefore if you take the observed warming (0.6°C) and divide by the estimated total forcings (~1.6 +/- 1 W/m2) you get a number that is roughly half the one expected. You can even go one better – if you ignore the fact that there are negative forcings in the system as well (chiefly aerosols and land use changes), the forcing from all the warming effects is larger still (~2.6 W/m2), and so the implied sensitivity even smaller! Of course, you could take the imbalance (~0.33 +/- 0.23 W/m2 in a recent paper) into account and use the total net forcing, but that would give you something that includes 3°C for 2xCO2 in the error bars, and that wouldn’t be useful, would it?
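The arithmetic of that trick can be laid out in a few lines. This sketch uses only the central values quoted above (the error bars are ignored here, so the ‘corrected’ figure is illustrative rather than a proper estimate):

```python
# Numbers from the paragraph above (central values only):
dT = 0.6            # degC, observed warming
F_total = 1.6       # W/m2, estimated total (net) forcing
F_warming_only = 2.6  # W/m2, if negative forcings (aerosols etc.) are ignored
imbalance = 0.33    # W/m2, planetary energy imbalance (ocean heat uptake)

# Trick: divide transient warming by the full forcing -> looks too small.
naive = dT / F_total                      # ~0.38, 'roughly half' of 0.75
# Go one better: ignore the negative forcings -> smaller still.
naive_ghg = dT / F_warming_only           # ~0.23
# Proper transient accounting subtracts the unrealised (imbalance) part:
corrected = dT / (F_total - imbalance)    # ~0.47 central value; with the
# quoted error bars the range comfortably includes 0.75 (i.e. 3 degC/2xCO2)

print(f"naive: {naive:.2f}, naive_ghg: {naive_ghg:.2f}, corrected: {corrected:.2f}")
```

The point is not the exact numbers but the direction: every omission (equilibrium lag, negative forcings) pushes the apparent ‘sensitivity’ down, which is precisely why those omissions are made.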
And finally, you can completely contradict all your prior working by implying that all the warming is due to solar forcing. Why is this contradictory? Because all of the above tricks work for solar forcings as well as greenhouse gas forcings. Either there are important feedbacks or there aren’t. You can’t have them for solar and not for greenhouse gases. Our best estimates of solar are that it is about 10 to 15% of the magnitude of the greenhouse gas forcing over the 20th Century. Even if that is wrong by a factor of 2 (which is conceivable), it’s still less than half of the GHG changes. And of course, when you look at the last 50 years, there are no trends in solar forcing at all. Maybe it’s best not to mention that.
There you have it. The cuckoo has come in and displaced the whole field of climate science. Impressive, yes? Errrr…. not really.