{"id":296,"date":"2006-04-21T15:37:45","date_gmt":"2006-04-21T19:37:45","guid":{"rendered":"\/?p=296"},"modified":"2008-03-26T14:03:10","modified_gmt":"2008-03-26T19:03:10","slug":"how-not-to-write-a-press-release","status":"publish","type":"post","link":"https:\/\/www.realclimate.org\/index.php\/archives\/2006\/04\/how-not-to-write-a-press-release\/","title":{"rendered":"How not to write a press release"},"content":{"rendered":"<div class=\"kcite-section\" kcite-section-id=\"296\">\n<p>A recent BBC radio <a href=\"http:\/\/news.bbc.co.uk\/2\/hi\/uk_news\/magazine\/4923504.stm\">documentary<\/a> on the possible over-selling of climate change focussed on the link between high-profile papers appearing in <em>Nature<\/em> or <em>Science<\/em>, the press releases and the subsequent press coverage. One of the examples chosen was the <a href=\"http:\/\/www.climateprediction.net\/science\/pubs\/nature_first_results.pdf\">Stainforth et al<\/a> climateprediction.net paper that reported the ranges of climate sensitivity within their super-ensemble of perturbed physics runs. While there was a lot of interesting science in this paper (the new methodology, the range of results etc.) which fully justified its appearance in <em>Nature<\/em>, we were <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2005\/01\/climatepredictionnet-climate-challenges-and-climate-sensitivity\/\">quite critical<\/a> of their basic conclusion &#8211; that climate sensitivities significantly higher than the standard range (1.5 &#8211; 4.5\u00baC) were plausible &#8211; because there is significant other data, predominantly from paleo-climate, that pretty much rules those high numbers out (as we discussed again <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2006\/03\/climate-sensitivity-plus-a-change\/\">recently<\/a>).
The press coverage of the paper mostly picked up on the very high-end sensitivities (up to 11\u00baC) and often confused the notion of an equilibrium sensitivity with an actual prediction for 2100, and this led to some pretty way-out headlines. I think all involved would agree that this was not a big step forward in the public understanding of science.<\/p>\n<p>Why did this happen? Is it because the scientists were being &#8216;alarmist&#8217;, or was it more related to a certain naivety in how public relations and the media work? And more importantly, what can scientists do to help ensure that media coverage is a fair reflection of their work?<!--more--><\/p>\n<p>A point that shouldn&#8217;t need repeating is that the media like a dramatic statement, and stories that say something is going to be worse than previously thought get more coverage than those which say it&#8217;s not going to be as bad. It&#8217;s not quite a fair comparison, but witness the difference in coverage for the recent Hegerl et al paper, which presented evidence that really high sensitivities are unlikely (a half dozen stories), and the Stainforth et al paper (hundreds of stories). (As an aside, a comment in the documentary that the recent <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2006\/03\/climate-sensitivity-plus-a-change\/\">Annan and Hargreaves<\/a> paper was deliberately ignored by the media is without foundation &#8211; GRL is not Nature, <strike>and no press release was issued<\/strike> (a press release <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2006\/04\/how-not-to-write-a-press-release\/#comment-12111\">was issued <\/a>&#8211; apologies). Expecting mainstream press coverage in such circumstances would be extremely optimistic).<\/p>\n<p>Secondly, the scientists also need to appreciate that most journalists will only read the press release, and possibly only the first couple of paragraphs of the press release. 
Very, very few will read the whole paper. This implies that the press release itself is the biggest determinant of the quality of the press coverage, and of course, the press release is generally not written directly by the scientists.<\/p>\n<p>Thirdly, though we are trying to do something about it here, most journalists are not experienced enough in scientific topics to be able to place new results in context without outside help. Often they have a small number of preconceived frames into which they will place the story &#8211; common ones involve forecasts of possible disasters, conflict within the community (the more personal the better), plucky Galileos fighting the establishment, and of course anything that interacts directly with politics, or political interference with science. This can be helpful if the scientific story fits neatly into one of the boxes, but can cause big problems if the story is either more complex or orthogonal to the obvious frames. Scientists are aware of this, but often are not pro-active enough in preventing obvious mis-framing. This implies that even if a press release is a 100% scientifically accurate reflection of the original paper, the press coverage can still be terrible.<\/p>\n<p>So what went wrong with the Stainforth et al paper? The press release is available <a href=\"http:\/\/climateprediction.net\/science\/pubs\/climateprediction_press_release.pdf\">here<\/a>. The only science result in the press release referred to the 11\u00baC outlier, but the release itself is not incorrect. However, both the title &#8216;Bleak first results&#8230;&#8217; and the first paragraphs do not provide any context that would correctly lead a (relatively ignorant) journalist to appreciate that there was even a distribution function of climate sensitivities. 
I&#8217;m pretty sure that the point they were trying to make was that relatively small tweaks to climate models can change the sensitivity a lot, and that you can&#8217;t rule out high sensitivities based on model results alone, but that was not clear to people who didn&#8217;t already know the context.<\/p>\n<p>Myles Allen, for whom I have the utmost respect, made what I think was a rather poor argument in the BBC program. He stated that &#8220;if journalists embroider the press release without reference to the original paper, [the scientists] are not responsible for that&#8221;. I disagree. Looking at the press release, one could have predicted with high confidence that much of the coverage would focus solely on the 11\u00baC number and that journalists would assume that this was a new prediction. As scientists, I would argue that we have to take responsibility for how our work is portrayed &#8211; and if that means we need to provide better context, then we need to insist that that is included in the release. Myles is on much stronger ground when he argues that the mean model response (~3\u00baC sensitivity) wasn&#8217;t terribly interesting because it is just a reflection of the basic model they started with before any perturbations, which is true. However, without some statement about the relative likelihood of any of the high-end numbers, I find it hard to see how the journalists could have got the message right. Having said that, implications aired in the program that the scientists deliberately misled the journalists or said things that they knew would be misunderstood are completely without foundation. (<strong>Update:<\/strong> Please see the response of the journalists listed in <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2006\/04\/how-not-to-write-a-press-release\/#comment-12122\">this comment<\/a> below to really underline that).<\/p>\n<p>What can we learn from this? 
The first and most fundamental lesson is that scientists should not relinquish control of the press releases. Public relations professionals are talented and useful when it comes to writing releases for media consumption, but the scientists have to be fully involved in the process. If there are obvious frames that the scientists want to avoid, they need to be specific within the press release about what their results <strong>do not<\/strong> imply as well as what they might. A clear statement in the Stainforth et al release that placed the 11\u00baC result in the context of how unlikely it was, and that specifically stated it wasn&#8217;t a prediction, would have gone a long way to averting some of the worst coverage.<\/p>\n<p>For an example of how this can work, the Solanki et al paper on solar sunspot reconstructions had a specific statement that their results did not contradict ideas of strong greenhouse warming in recent decades, neatly heading off simplistic (and erroneous) interpretations of their paper. On the other hand, much of the poor reporting related to the &#8216;methane from plants&#8217; <a href=\"http:\/\/www.realclimate.org\/index.php\/archives\/2006\/01\/scientists-baffled\/\">story<\/a> could have been avoided if the authors had been more upfront in their release that their work was not related to greenhouse gas <em>changes<\/em> and had no significant implications for reforestation credits under Kyoto! <\/p>\n<p>In summary, I would emphasise that the scientists and the actual papers discussed here and in the BBC documentary were not &#8216;alarmist&#8217;; however, there is a clear danger that when these results get translated into media reports (and headlines), scientifically unsupportable claims can be made. Scientists and the press professionals they work with need to be very clear that, for the field as a whole, the widest possible coverage for any one paper should not be the only aim of a press release. 
<\/p>\n<p>All publicity is not good publicity.<\/p>\n<!-- kcite active, but no citations found -->\n<\/div> <!-- kcite-section 296 -->","protected":false},"excerpt":{"rendered":"<p>A recent BBC radio documentary on the possible over-selling of climate change, focussed on the link between high profile papers appearing in Nature or Science, the press releases and the subsequent press coverage. One of the examples chosen was the Stainforth et al climateprediction.net paper that reported the ranges of climate sensitivity within their super-ensemble [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[5,1,24],"tags":[],"class_list":{"0":"post-296","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-climate-modelling","7":"category-climate-science","8":"category-reporting-on-climate","9":"entry"},"aioseo_notices":[],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts\/296","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/comments?post=296"}],"version-history":[{"count":0,"href":"https:\/\/www.realclimat
e.org\/index.php\/wp-json\/wp\/v2\/posts\/296\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/media?parent=296"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/categories?post=296"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.realclimate.org\/index.php\/wp-json\/wp\/v2\/tags?post=296"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}