A new survey of scientists

Dennis Bray and Hans von Storch have been conducting surveys of climate scientists for a number of years, with the reasonable aim of seeing what the community thinks (about the IPCC, climate change, attribution etc.). They have unfortunately not always been as successful as one might like: problems have ranged from deciding who is qualified to respond, to questions that were not specific enough or that could be interpreted in very different ways, to losing control of who answered the questionnaire (one time the password and website were broadcast on a mailing list of climate ‘sceptics’). These problems have meant that the results were less useful than they could have been, and in fact they have occasionally been used to spread disinformation. How these surveys are used obviously plays into how willing scientists are to participate, since if your answers are misinterpreted once, you will be less keen next time. Others have attempted similar surveys, with similar problems.

As people should know, designing truly objective surveys is very tricky. If you are after a specific response, it’s easy to craft questions that will favour your initial bias. We discussed an egregious example of that from Steven Milloy a while ago. A bigger problem is not overt bias, but more subtle kinds – such as assuming that respondents have exactly the same background as the questioners and know exactly what you are talking about, or simply using questions that don’t actually tell you what you really want to know. There are guides available for crafting such surveys that outline many of the inadvertent pitfalls.

Well, Bray and von Storch have sent out a new survey.

The questions can be seen here (pdf) (but no answers, so you can’t cheat!), and according to Wikipedia, the survey respondents are controlled so that each anonymised invite can only generate one response. Hopefully therefore, the sampling will not be corrupted as in past years (response rates might still be a problem though). However, the reason why we are writing this post is to comment on the usefulness of the questions. Unfortunately, our opinion won’t change anything (since the survey has already gone out), but maybe it will help improve the interpretations, and any subsequent survey.

There are too many questions in this survey to go over each one in detail, so we’ll just discuss a few specific examples (perhaps the comments can address some of the others). The series of questions Q15 through Q17 typifies a key issue – precision. Q15 asks whether the “current state of scientific knowledge is developed well enough to allow for a reasonable assessment of the effects of turbulence, surface albedo, etc.”. But the subtext – “well enough for what?” – is not specified. Global energy balance? Regional weather forecasting? Climate sensitivity? Ocean circulation? Thus any respondent needs to form their own judgment about what the question is referring to. For instance, turbulence is clearly a huge scientific challenge, but how important is it in determining climate sensitivity? Or radiative transfer? Not very. But for ocean heat transports, it might very well be key. By aggregating multiple issues into one question, and not providing enough other questions to determine what the respondent means exactly, the answers to these questions will be worth little.
