Dennis Bray and Hans von Storch have been conducting surveys of climate scientists for a number of years, with the reasonable aim of seeing what the community thinks (about the IPCC, climate change, attribution, etc.). They have unfortunately not always been as successful as one might like – problems have ranged from deciding who is qualified to respond, to questions that were not specific enough or could be interpreted in very different ways, to losing control of who answered the questionnaire (one time the password and website were broadcast on a mailing list of climate ‘sceptics’). These problems have meant that the results were less useful than they could have been, and indeed they have occasionally been used to spread disinformation. How these surveys are used obviously plays into how willing scientists are to participate: if your answers are misinterpreted once, you will be less keen next time. Others have attempted similar surveys, with similar problems.
As people should know, designing truly objective surveys is very tricky. However, if you are after a specific response, it’s easy to craft questions that will favour your initial bias. We discussed an egregious example of that from Steven Milloy a while ago. A bigger problem is not overt bias, but more subtle kinds – such as assuming that respondents have exactly the same background as the questioners and know exactly what you are talking about, or simply using questions that don’t actually tell you what you really want to know. There are guides available to help in crafting such surveys which outline many of the inadvertent pitfalls.
Well, Bray and von Storch have sent out a new survey.
The questions can be seen here (pdf) (but no answers, so you can’t cheat!), and according to Wikipedia, the survey respondents are controlled so that each anonymised invite can only generate one response. Hopefully therefore, the sampling will not be corrupted as in past years (response rates might still be a problem though). However, the reason why we are writing this post is to comment on the usefulness of the questions. Unfortunately, our opinion won’t change anything (since the survey has already gone out), but maybe it will help improve the interpretations, and any subsequent survey.
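The single-response control described above – one anonymised invite, one answer – can be sketched with single-use tokens. This is a hypothetical illustration of the general idea, not a description of the actual mechanism Bray and von Storch used; the class and method names are invented for the example:

```python
import secrets

class SurveyInvites:
    """Toy sketch of single-use, anonymised survey invites.

    Each invited respondent gets a random token; a token is consumed
    on first use, so a forwarded or leaked token cannot generate
    additional responses. Answers are stored without the token, so
    responses stay anonymous. (Illustrative only.)
    """

    def __init__(self):
        self._unused = set()   # tokens issued but not yet redeemed
        self.responses = []    # anonymised answer sets

    def issue_invite(self) -> str:
        token = secrets.token_urlsafe(16)
        self._unused.add(token)
        return token

    def submit(self, token: str, answers: dict) -> bool:
        if token not in self._unused:
            return False       # unknown or already-used token: rejected
        self._unused.remove(token)
        self.responses.append(answers)  # stored without the token
        return True

survey = SurveyInvites()
t = survey.issue_invite()
print(survey.submit(t, {"Q15": "agree"}))   # True: first use accepted
print(survey.submit(t, {"Q15": "agree"}))   # False: token already consumed
```

Note that this prevents ballot-stuffing via a shared link (the failure mode of the earlier surveys), but does nothing about response rates or who chooses to reply – which is why sampling bias can remain a problem.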
There are too many questions in this survey to go over each one in detail, and so we’ll just discuss a few specific examples (perhaps the comments can address some of the others). The series of questions Q15 through Q17 typifies a key issue – precision. Q15 asks whether the “current state of scientific knowledge is developed well enough to allow for a reasonable assessment of the effects of turbulence, surface albedo, etc..”. But the subtext “well enough for what?” is not specified. Global energy balance? Regional weather forecasting? Climate sensitivity? Ocean circulation? Thus any respondent needs to form their own judgment about what the question is referring to. For instance, turbulence is clearly a huge scientific challenge, but how important is it in determining climate sensitivity? Or radiative transfer? Not very. But for ocean heat transports, it might very well be key. By aggregating multiple questions into one, and by not providing enough other questions to determine what the respondent means exactly, the answers to these questions will be worth little.
The notion of ‘temperature observations’ used in Q16 and Q17 is similarly undefined. Do they mean the global average temperature change over the 20th Century, or the climatology of temperature at a regional or local scale? Or its variability? You might think the first is most relevant, but the question is also asked about ‘precipitation observations’, for which a century-scale global trend simply doesn’t exist. Therefore it must be one of the other options. But which one? Asking about the ability of models to simulate the next 10 years is similarly undefined, and in fact unanswerable (since we don’t know how well they will do). Implicit is an assumption that models are producing predictions (which they aren’t – though at least that is vaguely addressed in questions 45 and 46). What ‘extreme events’ are being referred to in the last part? Tornadoes? (skill level zero), heat waves (higher), drought (lower), Atlantic hurricanes (uncertain). Because of this imprecision, the likely conclusion – that respondents feel global climate models lack the ability to model extreme events – is again meaningless.
Q52 is a classic example of a leading question. “Some scientists present extreme accounts of catastrophic impacts related to climate change in a popular format with the claim that it is their task to alert the public. How much do you agree with this practice?” There is obviously only one sensible answer (not at all). However, the question neither defines what the questioners mean by ‘extreme’ or ‘catastrophic’, nor who those ‘scientists’ might be or where they have justified such practices. The conclusion will be that the survey shows that most scientists do not approve of presenting extreme accounts of catastrophic impacts in popular formats with the aim of alerting the public. Surprise! A much better question could have been asked if actual examples had been used. That would likely have found that what is considered ‘extreme’ varies widely, and that there is plenty of support for public discussions of potential catastrophes (rapid sea level rise, for instance) and the associated uncertainties. The implication of this question will be that no popular summaries can do justice to the uncertainties inherent in the science of abrupt change. Yet this is not likely to have been the answer had that question been directly addressed. Instead, a much more nuanced (and interesting) picture would have emerged.
Two questions of some relevance to us are Q61 and Q62, which ask whether making discussions of climate science open to potentially everyone through the use of “blogs on the w.w.w.” is a good or bad idea, and whether the level of discussion on these blogs is any good. These questions are unfortunately very poorly posed. Who thinks that anyone has any control over what gets discussed on blogs in general? The issue is not whether that discussion should take place (it surely will), it is whether scientists should participate or not. If all blogs are considered, then obviously the quality on average is abysmal (sorry blogosphere!). If the goal of the question was to be able to say that the level of discussion on specific blogs is good or not, then specific questions should have been asked (for instance a list of prominent blogs could have been rated). As it is, the conclusion will be that discussion of climate science on blogs on the w.w.w. is a good idea but the discussion is thought to be poor. But that is hardly news.
One set of questions (Q68+Q69) obviously comes from a social rather than a climate scientist: Q68 asks whether science has as its main activity to falsify or verify existing hypotheses, or something else; and Q69 whether the role of science tends towards the delegitimization or the legitimization of existing ‘facts’, or something else. What is one to make of them? There are shades of Karl Popper and social constructivism in there, but we’d be very surprised if any working scientist answered anything other than ‘other’. Science and scientists generally want to find out things that people didn’t know before – which mostly means choosing between hypotheses and both examining old ‘facts’ as well as creating new ones. Even the idea that one fact is more legitimate than another is odd. If a ‘fact’ isn’t legitimate, then why is it a fact at all? Presumably this is all made clear in some science studies textbook (though nothing comes up in Google), but our guess is that most working scientists will have no idea what is really behind this. You would probably need a whole survey devoted just to how scientists think about what they do to get anything useful from this.
To summarise, we aren’t in principle opposed to asking scientists what they think, but given the track record of problems with these kinds of surveys (and their remaining flaws), we do suggest that they be done better in future. In particular, we strongly recommend that in setting up future surveys, the questions should be openly and widely discussed – on a wiki or a blog – before the surveys are sent out. There are a huge number of sensible people out there whose expertise could help in crafting the questions to improve both their precision and usefulness.