There is a new critique of IPCC climate projections doing the rounds of the blogosphere from two ‘scientific forecasters’, Kesten Green and Scott Armstrong, who claim that since the IPCC projections are not ‘scientific forecasts’ they must perforce be wrong, and that a naive model of no future change is likely to be more accurate than any IPCC conclusion. This ignores the fact that IPCC projections have already proved themselves better than such a naive model, but their critique is novel enough to be worth a mention.
The authors of this paper actually have a much larger agenda, and that is to improve the quality of forecasting used in public policy and business everywhere – by the use of ‘scientific forecasting principles’ (of which they have enumerated 140). Most of these principles seem commonsensical (don’t overfit a statistical model, test models on out-of-sample data, etc.) and are listed on one of their many websites. Basically, you assign a subjective numerical score reflecting how well you match each principle, and at the end you get a ‘scientific’ number that says how well you are doing.
Armstrong helped set up a journal dedicated to this goal, as well as running yearly meetings for scientific forecasters. However, in a recent review of progress he notes: “the diffusion of useful forecasting methods has been disappointing”, and that “forecasting meets resistance from academics and practitioners”. This seems surprising – why wouldn’t people want better forecasts?
G+A’s recent foray into climate science might therefore be a good case study for why their principles have not won wide acceptance. In the spirit of their technique, we’ll use a scientific methodology – let’s call it ‘the principles of cross-disciplinary acceptance’ (TM pending). For each principle, we assign a numerical score between -2 and 2, and the average will be our ‘scientific’ conclusion…
Principle 1: When moving into a new field, don’t assume you know everything about it because you read a review and none of the primary literature.
Principle 2: Talk to people who are doing what you are concerned about.
Of the roughly 20 climate modelling groups in the world, and hundreds of associated researchers, G+A appear to have talked to none of them. Strike 2.
Principle 3: Be humble. If something initially doesn’t make sense, it is more likely that you’ve misunderstood than that the entire field is wrong.
For instance, G+A appear to think that climate models are not tested on ‘out of sample’ data (they gave that a ‘-2’). On the contrary, the models are used in many situations they were not tuned for – paleo-climate changes (the mid-Holocene, the last glacial maximum, the 8.2 kyr event) being good examples. Similarly, model projections for the future have been matched against actual data – for instance, forecasting the effects of Pinatubo ahead of time, or Hansen’s early projections. The amount of ‘out of sample’ testing is actually huge; the confusion stems from G+A not being aware of what the ‘sample’ data actually consist of (mainly present-day climatology). Another example: G+A appear to think that GCMs use the history of temperature changes to make their projections, since they suggest leaving some of it out as a validation. But this is just not so, as we discussed more thoroughly in a recent thread.
Principle 4: Do not ally yourself with rejectionist rumps with clear political agendas if you want to be taken seriously by the rest of the field.
The principal climatologist that G+A appear to have talked to is Bob ‘global warming stopped in 1998’ Carter, who doesn’t appear to think that the current CO2 rise is even anthropogenic. Not terribly representative…
Principle 5: Submit your paper to a reputable journal whose editors and peer reviewers will help improve your text and point out some of these subtle misconceptions.
Energy and Environment. Need we say more?
Principle 6: You can ignore all the above principles if you are only interested in gaining publicity for a book.
In summary, G+A get a rather disappointing (but scientific!) score of -1.66. This probably means that the prospects for a greater acceptance of forecasting principles within the climate community are not good. Kevin Trenberth feels the same way. Which raises the question of whether they are really serious or simply looking for a little public controversy. It may well be that there is something worth learning from the academic discipline of scientific forecasting (though they don’t seem to have come across the concept of physically-based modelling), but this kind of amateur blundering does their cause nothing but harm.
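For the curious, the scoring scheme described above – a subjective per-principle score in the range -2 to 2, averaged into a single ‘scientific’ conclusion – can be sketched in a few lines of Python. The example scores below are hypothetical placeholders, chosen only to illustrate the arithmetic (they happen to average near the -1.66 quoted above), not our actual per-principle assignments:

```python
def acceptance_score(scores, lo=-2, hi=2):
    """Average a list of per-principle scores, enforcing the allowed range."""
    if not scores:
        raise ValueError("need at least one principle score")
    for s in scores:
        if not lo <= s <= hi:
            raise ValueError(f"score {s} is outside [{lo}, {hi}]")
    return sum(scores) / len(scores)

# Hypothetical example: five principles scored -2 and one scored 0
# average to about -1.67, the same ballpark as the quoted -1.66.
example_scores = [-2, -2, -2, -2, -2, 0]
print(round(acceptance_score(example_scores), 2))  # -1.67
```

Of course, averaging subjective scores does not make the result any less subjective – which is rather the point being made here.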
In association with their critique, G+A have also launched a very poorly thought out ‘climate challenge’ that is essentially a bet on year-to-year weather noise. No one is likely to take them up on that, and they don’t seem to be interested in the rather better thought through bets on offer from James Annan and Brian Schmidt. Thus again, the conclusion must be that they are not serious about their stated goals. That’s a shame.
Shorter Armstrong and Green: If our publications are not cited, climate sensitivity is zero.
‘Shorter’ concept by Daniel Davies and Elton Beard