RealClimate

Comments

RSS feed for comments on this post.

  1. You are raising an interesting problem, which seems to me similar to the general question of whether scientists should speak up about an issue in any way other than through the peer-reviewed literature. I often hear the argument that scientists should be careful not to lose credibility, and that the only way not to lose it is by publishing research in peer-reviewed journals.
    However, I strongly believe that scientists have to reach out a lot more to the general public, so that non-scientists learn to better understand the scientific process, learn to better weigh the inherent uncertainty in all areas of science, and learn to know the issues at stake. By reading peer-reviewed articles it is not possible to follow the process of thinking and critique, whereas a blog like RC is an ideal way not only for scientists to discuss important issues, but also for the non-scientists to follow the thinking process and arguments of scientists.
    I don’t think there is any conflict between peer review and science blogs; rather, they wonderfully complement each other – one to present new results to the scientific community, the other to discuss the results with anybody interested. Thank you for doing that so well on your blog!

    Comment by Maiken Winter — 3 Apr 2008 @ 4:58 PM

  2. So many journals these days publish dubious or flawed or merely unclear analyses that quickly get picked up and respun lamely by the traditional media like a bad game of telephone.

    So I think we can’t wait for a journal to publish some short, edited letter weeks or months later. Exhibit A, by the way, is Pielke et al.’s very misleading Commentary in Nature — is that stuff even peer-reviewed? Who knows? Anyway, it needed fast debunking.
    http://climateprogress.org/2008/04/02/nature-pielke-pointless-misleading-embarrassing-ipcc-technology/

    As did Nature’s unbelievable quoting of a colleague of Pielke’s (Marty Hoffert) on behalf of the paper in an accompanying news article, without actually identifying them as colleagues!
    http://climateprogress.org/2008/04/02/shame-on-nature-for-quoting-hoffert-on-behalf-of-pielke-without-noting-theyre-colleagues/

    Long live the blogosphere!

    Comment by Joseph Romm (ClimateProgress) — 3 Apr 2008 @ 5:33 PM

  3. With all due respect to the good Dr. Allen, his argument is pants.

    The discussion of his paper will go on in a wide variety of forums, whether he likes it or not. The question is whether that discussion should or should not be informed by the contributions of actual scientists who understand it.

    The answer seems obvious.

    Comment by John Fleck — 3 Apr 2008 @ 5:34 PM

  4. Blogs don’t serve very well for communication among scientists. Peer review does more than just protect us from being inundated with substandard work; it protects authors from their own mistakes and improves the quality of what we write. Peer review itself is an immensely valuable avenue of communication; who among us hasn’t at some time included a phrase like “We thank an anonymous referee for comments and suggestions which dramatically improved the final manuscript”?

    But as bad as blogs are for actual research, peer-reviewed journals are far worse for communicating with and educating the lay reader. Yet when it comes to climate science the lay public is hungry for knowledge, and many of them are eager, and well-prepared, for a level of sophistication and detail that can’t be found in lay journalism or even popular literature; An Inconvenient Truth isn’t enough. So blogs serve an incredibly useful purpose, enabling the interested and well-educated reader to share insights with researchers who are at the cutting edge of new knowledge.

    There’s yet another aspect which features prominently when it comes to climate science. We’re in a “propaganda war” in which one of the strategies used by the forces of ignorance and greed to sabotage action is to spread fear, uncertainty, and doubt. We’re not just scientists (climate or otherwise), we’re also human beings with a moral obligation to leave the next generation a world worth inheriting. The blogosphere has been a primary target of the disinformation campaign; it’s the “trenches” of the propaganda war. We have to fight the enemy on this crucial battleground.

    One of their strategies is to take legitimate peer-reviewed work out of context, blow it all out of proportion, or misrepresent it most falsely, to give the impression that it overthrows the reality of global warming when in fact such was never the intent or conclusion of the authors. The Schwartz paper is a fine example of this; it doesn’t overthrow any aspect of climate science — nor does it pretend to — but it was heralded by so many as doing exactly that. In a sense, Schwartz himself was the most direct and most damaged victim of their efforts. Refuting such nonsense can’t be done in the peer-reviewed literature; no respectable journal publishes that kind of stuff and no substantial fraction of the general public would ever see it.

    So, while blogs aren’t part of the machinery for legitimate scientific research, they’re an indispensable tool for communication and combating misinformation. RealClimate is the best of the best; keep up the good work.

    Comment by tamino — 3 Apr 2008 @ 5:35 PM

  5. Dear Gavin,

    You may feel you didn’t criticize the Stainforth et al paper, but you did misunderstand it in one crucial respect, in that you said explicitly that “the most important result … is that by far most of the models had climate sensitivities between 2ºC and 4ºC, giving additional support to the widely accepted range.” I didn’t think it mattered at the time, but it turned out it definitely did matter, because this was the first thing those BBC journalists threw at me, and I continue to meet journalists and even scientists who are convinced that your analysis was in fact our “true” result, and that we only drew attention to the fat tail extending out to much higher sensitivities because we wanted to alarm people. As you know (and noted in your later post, thanks), the cluster around 3 degrees wasn’t a result at all, but a simple artifact of the experimental design, so the fat tail was indeed the main noteworthy result of the experiment.

    It would have been really easy to have picked up on this had you e-mailed me a draft of your post before posting it on the internet (and published our response along with your post if necessary). It’s all water under the bridge now, but this whole exchange does demonstrate the serious collateral damage that can be caused by relatively minor mistakes. Now that RealClimate has proved such a success and is so heavily used by journalists for source material, perhaps it is time to tighten up your procedures a little. I personally would never comment critically in public on a peer-reviewed paper even to point out “obvious problems” (who is the judge of what is obvious here?) without at least exchanging e-mails with the authors to make sure I had understood it correctly (I’m more than happy to criticize non-peer-reviewed material on Channel 4).

    While I’m posting (I can see how you guys get into this) I’m also very uncomfortable with your notion of “tacit knowledge:” it certainly seems to be tacit knowledge in the blogosphere that the chances of the climate sensitivity (equilibrium warming on indefinite stabilization at 560ppm CO2, for the non-enthusiasts) being greater than or equal to 6 degrees are too small to be worth worrying about (meaning down at the level of an asteroid strike). If we accept this tacit knowledge, then your original post and the fuss over the press coverage of Stainforth et al make a lot more sense: we would have had no business drawing attention to these high-sensitivity models if we were as confident as you all seem to be that the real world sensitivity just cannot be that high. But (although of course I sincerely hope you’re all right) I just don’t see the evidence for this level of certainty in the peer-reviewed literature. In an environment without peer-review, it seems to me to be much easier for such “everyone knows X” myths to develop.

    I appreciate that publish-first-and-ask-questions-later is “traditional” practice in blogging, but perhaps, as scientists, we should be challenging that practice. After all, if the New York Times can pass articles through a simple fact-check procedure before publishing them, why can’t RealClimate?

    [Response: Dear Myles, thanks for your thoughts. I was only able to join this discussion late, so you find my response down at comment #116. -stefan]

    Comment by Myles Allen — 3 Apr 2008 @ 5:55 PM

  6. I remember a boss telling us all that he wanted to hear what we thought of his style of management, but we rapidly discovered he didn’t want to hear complaints. I suspect this plea is in the same vein. Your treatment of Douglass et al. was far from fair or even correct, as Christy would have told you if he had been contacted first or been given right of reply. He did list your errors elsewhere, but you didn’t acknowledge that either. And that is where you go wrong. Pointing out that obs and theory don’t match is what real science is about. It should actually be done as part of the normal validation procedure of models. At least, though, you brought out the truth about the real spread of uncertainties in the models, which had been cunningly disguised by the IPCC reports. I guess it was the only argument you had, but it was quite funny to hear that model results are acceptable if their huge error bars manage to clip the observation error bars. How scientific is that?

    As for blaming journalists for parroting the press release given to them by a university: what else would you expect? Surely the scientists see and approve the press release, so the fault lies squarely with them if it is grossly misleading.

    Yes Tamino, it is about propaganda, but the fear, uncertainty and doubt is being spread by both sides. Some of you just don’t see it because you feel innately so much more virtuous than the other side. Bending the truth is then recast as countering the arguments of the evildoers. But it’s still bending the truth and it diminishes you.

    [Response: Sorry, but you are just wrong. The error in the Douglass et al paper is clear and obvious and does not require much thought to discern. It is simply that the statistical test they apply would reject ~80% of samples drawn from an identical distribution. To make it clearer, take a fair die - the mean number of points is 3.5, and with around 100 throws the mean will be known to within 0.1 or so. Then take the same die and imagine you get a 2; the Douglass et al test would claim that this throw doesn't match, since it is below 3.3 (3.5 minus two standard errors of the mean). This is absurd. How checking with John Christy would have changed the situation is unclear. Matching data and theory is indeed the main activity of science, but it needs to be done correctly. - gavin]
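    The flawed test described in the response above can be checked with a quick simulation. The sketch below is illustrative only — standard normal values stand in for model trends, and the sample sizes are arbitrary rather than taken from Douglass et al: it draws 100 "model" values, then tests fresh draws from the very same distribution against the ensemble mean ± 2 standard errors of the mean, and most of them are wrongly "rejected".

    ```python
    import random
    import statistics

    random.seed(42)

    # 100 "model" values; the "observations" will come from the SAME distribution.
    models = [random.gauss(0, 1) for _ in range(100)]
    mean = statistics.mean(models)
    se = statistics.stdev(models) / len(models) ** 0.5  # standard error of the mean, ~0.1

    # Flawed test: call an observation "inconsistent" if it falls outside
    # mean +/- 2 standard errors (rather than 2 standard deviations).
    trials = 100_000
    rejected = sum(abs(random.gauss(0, 1) - mean) > 2 * se for _ in range(trials))
    print(rejected / trials)  # ~0.84: most same-distribution draws fail the flawed test
    ```

    Because the standard error shrinks as the ensemble grows, the acceptance window becomes arbitrarily narrow, so the test rejects nearly everything drawn from the distribution it was built from — exactly the absurdity of the die example.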

    Comment by JamesG — 3 Apr 2008 @ 6:23 PM

  7. RE “most of what scientists know is ‘tacit’” — I was just in a social science research meeting and someone brought up the problem of “if it isn’t quantifiable, it doesn’t exist”; she used ethics and other issues as examples. I brought up some issues in climate science, like the possible disintegration of the Greenland ice sheet, and how it wasn’t included in a calculation of sea-level rise in the IPCC assessment because its timing wasn’t quantifiable, being a non-linear process. And there are many other aspects of climate science, I gather, that are not easily quantifiable, or for which there is no neat formula.

    I think climate scientists know about these, but can’t really make definitive claims, so they don’t get into peer-reviewed articles much…or else other scientists might attack them with ferocity (even subtracting denialists from the equation here).

    It is that tacit knowledge that is so important, and needs to be used on blogs such as this, to let us know what makes sense at an expert level. Laypersons can proffer all sorts of weird notions that come to their minds, and scientists with their valuable (tacit and explicit) knowledge can put limits on it (what’s possible, impossible, not likely, likely, etc). And we need their assessment of the peer-reviewed articles and IPCC assessment, as well.

    Comment by Lynn Vincentnathan — 3 Apr 2008 @ 7:08 PM

  8. I’ve thought a lot about this since launching my blog a couple years ago. Blogs have created a forum for many people to informally discuss science. They are also a great forum for scientists to provide context to their own work and to the work of others in their field.

    And some of the time, that may take the form of criticism. That’s ok. But we do have to be careful not to blur the line too much between peer-reviewed publications and blogs. The first problem is the obvious one. Blog posts are unfiltered, un-reviewed, and often written off the cuff, while journal articles are screened, reviewed, and (should be) meticulously researched. It is far easier to write a criticism of a paper on one’s blog than to write a response and submit it to the journal. The issue isn’t just that no reviewers check the work… the blog author is unlikely to do even close to the amount of research and analysis, or give the wording nearly the same level of consideration, as is expected in a paper submitted to a high-quality peer-reviewed journal.

    The other problem is that blog posts are readily accessible to anyone at any time, both in language and in the unlicensed nature of the internet. To a huge swath of the public, blog posts are the face of science. Like it or not, bloggers with scientific credentials are like self-appointed ambassadors for science. If we are going to write about our science, we should do it with thought, and we should do it well. That is a standard that I and many other science bloggers often struggle to meet.

    Comment by Simon D — 3 Apr 2008 @ 7:22 PM

  9. Tacit knowledge is an inescapable reality in all endeavours of skill (eg masonry), as far as I can tell, so I see no particular problem with Gavin being explicit about it. I can read any peer-reviewed article I like on modern climate models, but until I go through much of the process of building, running, validation, discussing with colleagues how they solved particular wrinkles etc of some models, I am unlikely to fully comprehend climate modelling as a skilled craft. That is where the “tacit knowledge” resides, in the main; knowing where the bodies are buried, to mal-appropriate an expression.

    As for science blogs such as this one, I think on the whole they are invaluable. Having an interactive forum as opposed to a passive one such as a book or TV documentary is great. As far as I can see, so long as criticisms of scientific articles are explained on a factual basis, the readers can decide if the criticism seems warranted or not.

    Perhaps the inevitable mistakes and spoke-too-soon cases could be dealt with by adding a “Corrections/Elaborations” section to the site.

    Don.

    Comment by Donald Oats — 3 Apr 2008 @ 7:32 PM

  10. A major area where tacit knowledge plays an important role is that people in a field know a) what didn’t work, although at first glance it should have or might have, b) what was published and is worth ignoring, and c) the small tricks that make a lot of things work. Don’s mention of masonry is exactly correct. You don’t find that stuff in books; you do find it in discussions around the coffee pot, and that is the chief value of graduate school. As the bunny put it long ago

    Uncle Eli has always admired astronomy, botany, and zoology as sciences with important amateur participation. By nurturing the large community of those interested in the science these fields have built important support groups, and amateurs have made important contributions. Many amateurs become obsessed with relatively narrow and previously trodden areas. Within those areas their knowledge often exceeds that of professionals. To Eli the most important thing is that people get to experience the joy of science. The smartest thing NASA ever did was reserve time on the Hubble for amateurs and some good science has resulted.

    What amateurs lack as a group is perspective, an understanding of how everything fits together and a sense of proportion. Graduate training is designed to pass lore from advisers to students. You learn much about things that didn’t work and therefore were never published [hey Prof. I have a great idea!...Well actually son, we did that back in 06 and wasted two years on it], whose papers to trust, and which to be suspicious of [Hey Prof. here's a great new paper!... Son, don't trust that clown.] In short the kind of local knowledge that allows one to cut through the published literature thicket.

    But this lack makes amateurs prone to get caught in the traps that entangled the professionals’ grandfathers, and it can be difficult to disabuse them of their discoveries. Especially problematical are those who want science to validate preconceived political notions, and those willing to believe they are Einstein and the professionals are fools. Put these two types together and you get a witches brew of ignorance and attitude.

    Unfortunately climate science is as sugar to flies for those types.

    Comment by Eli Rabett — 3 Apr 2008 @ 10:39 PM

  11. Interesting thread. This sounds like the kind of discussion related to the Lockwood and Frohlich (2007) (L&F) paper on solar variation. The L&F paper was peer-reviewed, but two criticisms I found were published on websites (their analyses were not vetted).

    The L&F paper is here:

    http://publishing.royalsociety.org/media/proceedings_a/rspa20071880.pdf

    The critiques are here:

    http://members.shaw.ca/sch25/FOS/Lockwood/Gregory-CritiqueLockwood.pdf (Beware. “Friends” of Science did this one.)

    http://www.spacecenter.dk/publications/scientific-report-series/Scient_No._3.pdf (By Svensmark and Friis-Christensen, whose research is solely on the cosmic ray theory.)

    Comment by Stephen Berg — 3 Apr 2008 @ 10:44 PM

  12. One of the major benefits of a science blog like RealClimate, which is not replicated in formal comments submitted through the peer-reviewed literature, is that it allows hundreds of people around the world to participate in an online discussion about issues of common interest. This provides a great vehicle for active learning and disseminating knowledge in both the scientific and general community. If there is an error in the original post, such as Myles (#5) suggests, or in one of the subsequent comments, this can be ironed out through the discussion.

    Comment by Chris McGrath — 4 Apr 2008 @ 1:55 AM

  13. Thanks Gavin,

    Myles’ paper lies behind a wall, although a free one in this case, which pretty much sums up the issue for a passionately interested observer such as myself. Much scientific literature is not accessible to non-experts (including other scientists), either for this reason or because they lack the necessary ‘tacit’ knowledge to read it critically.

    Without blogs such as RealClimate I would not know how to recognise and counter denialist baloney. I would also not fully appreciate many of the implications of particular climate research, including the socio-political implications. And I wouldn’t have much insight into how the science is done and how solid the conclusions are. My views are very much in agreement with Tamino in post 4. Blogs are a vital and necessary part of opening the ‘universe’ of climate science to a broader non-expert audience.

    Ultimately science blogs may make a fundamental difference in our response to climate change – as they help create the critical democratic mass to counter the ‘corporatocratic’ tendency to business as usual present in most developed countries.

    PS – I struggled to find a better word than Corporatocracy. For a definition see:
    http://en.wikipedia.org/wiki/Corporatocracy

    Comment by Bruce Tabor — 4 Apr 2008 @ 2:24 AM

  14. One of Myles’ complaints in his piece is that his response got deleted from a blog, so the criticism was allowed to stand unchallenged and, perhaps by inference, supreme. Yes, blogs are often more censored and less open than they appear.

    [Response: You are misreading his comment. Myles did not ever publicly post his reply, while the comment was posted by its author. No deletion of anything occurred. - gavin]

    But should serious criticism have to wade through peer review, in order to be taken seriously? Many serious criticisms are simplistic, such as important results that were not cited and discussed in the paper. If scientist X publishes a new generation model, and I want to point out that several issues published as widespread in the previous generation of models were not discussed in the paper, so we are left ignorant of whether those issues are resolved or addressed in the new model, will that be deemed as worthy of being published as a response in esteemed journals?

    More likely, I would first inquire of the authors directly. Even if I found that the issues were not addressed and thus were still likely present in the new model, the absence of the references might be assumed to be evidence that the problems were not addressed. But is such an “assumption” publishable? Is a new model entitled to be a complete unknown except for the information published in the article, and thus not subject to all the limitations of previously published model results?

    Yet, the peer review process is often assumed to be an attempt to guarantee that peers familiar with the literature would not have let the paper through if it didn’t address known issues. Thus, the model is allowed to stand uncriticised in the peer-reviewed literature, as the apparent new pinnacle of model achievement. After all, the published criticisms don’t apply, since the model paper was published after the diagnostic studies of the previous generation.

    Perhaps blogs and peer review should be combined, and perhaps the best way is with moderated blogs appended to the electronic version of the paper. That way potentially serious issues that don’t rise to the level of peer-reviewed publication themselves can be raised and hopefully addressed.

    Comment by Martin Lewitt — 4 Apr 2008 @ 2:50 AM

  15. Peer-reviewed papers are the top of the pyramid of information and knowledge about climate change. But there are very few lay-folk who are able to truly validate, let alone understand and usefully interpret, the methodology, the equipment, the models, the inputs or the outputs, let alone see the full scope of the social, political and economic implications of these heady documents.

    So all we can really do is watch as the real scientists gently debate the worth of each new revelation and eventually accept that which they in the end conclude is good, pragmatic and generally safe to hang our hats on.

    It is within these blogs that we discover how you the true scientists feel about the import of the most recent revelations, and it is upon that consensus that we tend to rely.

    The ‘contributions’ to these blogs by us non-scientists are the only way we have of exploring and eliminating our own uncertainties, and we are most grateful to RC for that opportunity. We hope we do not abuse it!

    Please keep on doing what you’re doing. Thanks.

    Comment by Nigel Williams — 4 Apr 2008 @ 3:39 AM

  16. If criticism of papers and projects should be kept to the peer-reviewed literature, perhaps promotion of said papers/projects should be also? Myles Allen might dislike BBC Radio 4’s implication that CPDN was deliberately misleading, but he was a regular sight on British television favourably discussing the project and encouraging people to get involved (for obvious reasons).

    This is special pleading. If a project is going to be discussed in the public arena, then criticism (knowledgeable or otherwise) must be part of that discussion. If it were removed, then the public understanding of both the science and the issues would likely be even more distorted, having ceded most of the battlefield to unscientific projects and dubious criticism.

    Myles Allen’s comments smack too much of “leave science to the scientists” for me. Science will be discussed by the public, and many will want to know other scientists’ views on particular topics; to say they must be at a level to understand the peer-reviewed literature to do so is elitist. Yes, talking to the public in forums such as RC, Pharyngula, Neurophilosophy and other science blogs can be prone to errors or misrepresentations. But the answer is not to shut down the blogs, it is to point out the errors and misrepresentations.

    Comment by David B. — 4 Apr 2008 @ 4:11 AM

  17. Don hits the nail on the head. He describes climate modeling as a ‘skilled craft’. That is what it is: a craft, not science, which must be arrived at by generating and testing hypotheses in empirical studies. A good recent description of computer models is that they are the mathematical representation of the modeler’s opinion. That is not science, people.

    Tacit knowledge is not necessarily accurate knowledge. The two may or may not be the same. Because you have tacit knowledge, it does not mean you are right or even knowledgeable. It has been shown in numerous studies that when making predictions, expert opinion is no more accurate than non-expert opinion. By Gavin saying that the tacit knowledge he has gives him the ability to sort the important papers from the frass, he is really saying that he is sorting his preferred papers from his non-preferred papers, i.e. the papers biased to his leaning. This is not the same as saying that he can sort worthy from unworthy papers.

    Tacit knowledge should never ever, ever be regarded as truth until it has withstood empirical investigation.

    [Response: Your idea about what science is, and is not, is deeply flawed. Modelling (of all sorts) is fundamental to science because it is only through modelling that the quantitative implications of basic theory can be found. All predictions (whether in a laboratory or natural setting) are based on such models and it is only from comparing predictions to observations that one progresses. - gavin]

    Comment by Richard — 4 Apr 2008 @ 4:43 AM

  18. Interesting. Also in the light of an extensive debate on the Nature website about peer review.

    http://www.nature.com/nature/peerreview/debate/index.html

    It seems that one should not overstate the value of peer review. It has its value and it has its weak spots.

    The one thing I am convinced of, though, is that science progresses by debate. Anyone who suggests otherwise simply does not understand the relevance of the scientific debate [think about Al Gore and claims that the debate is over and that anyone who disagrees too much with him apparently is a fraud or paid by oil companies or whatever ...].

    You can differ in opinion about what the best format is for this debate to take place, but science should be about facts, observations, experiments, arguments, interpretations and discussing them. This can sometimes be a hard, long, difficult process, but science is definitely not decided by committees or majority votes, as some politicians want us to believe.

    Scientists should embrace the open scientific debate, and it should be made very, very clear to anyone who challenges it that without open debate there simply is no science, no matter how much one is in favor of or opposed to particular people, statements and actions.

    Jenne

    Comment by Jenne — 4 Apr 2008 @ 5:28 AM

  19. I produced the Radio 4 programme you described as “rather scurrilous”. Unsurprisingly I think this is unfair, and I am very proud of the programme. I invite people to make their own minds up. The programme is what climate change journalism should be: it is challenging and rigorous. It does not feature any sceptics to falsely balance the debate, and it allows people we criticise to defend themselves.

    The page below has the programme available in full in the top right-hand corner.

    http://news.bbc.co.uk/1/hi/magazine/4923504.stm

    Many scientists have contacted me privately to commend the programme but I will mention a couple of people close to CPDN who have written about the programme and the press release.

    Tim Palmer, the head of the Probability and Seasonal Forecasting Division at the European Centre for Medium-Range Weather Forecasts wrote the following in Physics World:

    “A recent well-researched BBC radio programme exposed a number of exaggerated press releases by climate institutes.”

    Bryan Lawrence of NERC, which funds CPDN, said on his blog of the infamous press release:

    “I was staggered to read the actual press release that caused all the fuss (predictions of 11C climate sensitivity etc). The bottom line is that had I read that press release without any prior knowledge I too might have believed that an 11 degree increase in global mean temperature was what they had predicted (which is not what they said in the paper). I can’t help putting some of the blame back on the ClimatePrediction.net team – the press release didn’t reflect the message of their results at all properly, and they shouldn’t have let that happen. I’m still naive enough to believe it’s incumbent on us as scientists to at least make sure the release is accurate, even if we can’t affect the resulting reporting.”

    I can imagine that this is exactly the kind of blog comment that needs stopping.

    In the original programme we also touch on frogs and the huge amount of coverage given to the frogs-killed-by-climate-change story. The brilliant Andrew Revkin wrote about this recently:

    http://dotearth.blogs.nytimes.com/2008/03/24/vanishing-frogs-climate-and-the-front-page/

    It looks like we were right in our approach to the story. It was a story I couldn’t have hoped to have covered without blogs. They helped me realise there was a story, but only when I had spoken to many scientists in the peer-reviewed literature did we broadcast.

    [Response: I have edited out the term 'scurrilous', since that was perhaps a little strong. However, the implication that the CPDN scientists were trying to deliberately mislead the public is unfounded. Neither that implication nor the accusation that they were being alarmist arose from our original blog posting. - gavin]

    [Response: Please also note Myles' response below. - gavin]

    Comment by Richard Vadon — 4 Apr 2008 @ 6:28 AM

  20. I too am struggling with Gavin’s tacit comment

    My piece tries to make the point that most of what scientists know is “tacit” (i.e. not explicitly or often written down in the technical literature)…

    Not writing down all the details would imply that there is universal agreement, but that’s where I have a problem. Universally we can agree on a range of foundational things – mathematical symbols, simple formulas, etc. – but as we move higher in the structure of the logic, constructs become supportive, and their use less accepted. Somewhere past that point the uses are far less agreed upon or understood in this new context. A “tacit” use of a construct becomes problematical.

    Where this line of acceptance is depends on the readers’ understanding and agreement with the writer’s arguments.

    So my concern is how reviewers, unless in full agreement, are supposed to accurately review a scientific paper. I will not go beyond this point, as we get deeply into a minefield of intent, bias, egos and even writing/language skills.

    [Response: I wasn't trying to suggest that tacit knowledge was some kind of opinion that all scientists must agree with, but rather it is the shared background that, say, everyone using climate models has - i.e. why we use initial condition ensembles, how we decide that a change in the code is significant, what data comparisons are appropriate etc. This knowledge builds up with experience, but the reasons for it are rarely spelt out in the technical literature. - gavin]

    Comment by CoRev — 4 Apr 2008 @ 7:00 AM
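[Editor's note: Gavin's inline response above cites initial-condition ensembles as an example of tacit modelling practice. A toy sketch (my own illustration, not from this thread, using the Lorenz-63 system rather than a real climate model) shows why modellers run them: in a chaotic system, perturbations of one part in a million grow until single runs diverge completely, so only the ensemble spread is informative.]

```python
# Toy illustration of initial-condition ensembles: integrate the Lorenz-63
# equations from ten nearly identical starting states and measure how far
# apart the trajectories end up. A single run is just one sample; the
# ensemble maps out the spread of possible outcomes.
import random

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def integrate(state, steps=2000):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

random.seed(42)
base = (1.0, 1.0, 1.0)
# Ten ensemble members, each with a ~1e-6 perturbation to the initial x
ensemble = [integrate((base[0] + random.gauss(0.0, 1e-6), base[1], base[2]))
            for _ in range(10)]
xs = [s[0] for s in ensemble]
spread = max(xs) - min(xs)
print(spread)  # tiny initial differences have grown to macroscopic spread
```

The parameter values and step counts here are arbitrary choices for the sketch; the point is only that the final spread dwarfs the initial perturbations, which is why a single model run cannot be read as a prediction.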

  21. Once upon a time on a mountain I met a prominent scientist who liked the idea of me acquiring data not collected anywhere else in the world. He stated, “not enough observations, too many thinkers,” and I agreed. In the final analysis,
    it’s not a press report that does good science; it is hard work, and, as it turns out, not letting all this hard work get distorted by any media source is the job of anyone interested in good science.

    Comment by wayne davidson — 4 Apr 2008 @ 7:03 AM

  22. Peer reviewed science and Blogs complement each other.

    Without Realclimate’s explanations I’d probably still be a climate change sceptic (or at least an agnostic).

    Without other blogs like Rabbett Run / Stoat / and Tamino’s Blog (Aghh! can’t recall the name), there’d still be many issues I’d be confused about, even with reading the peer reviewed science.

    It’s taken me years of hard study to get to the stage where I can read the majority of papers I find (without paying for them) and understand them, or see problems, on a single read. That’s for an upper-second graduate in electronics with years of wider science reading and hobby-level application. Yet even now I am amazed at the kick-yourself-simple objections real scientists make to papers I’ve read and thought were OK. The guidance of working scientists in the field is crucial for non-professionals like me.

    I also find I have reached the stage where I don’t swallow what I see in the media uncritically; virtually every story could bring some nit-picking objection from me. However, apart from the obvious buffoonery of elements of the press (I’m thinking in particular of the entrenched, ill-considered denial in the right-wing British press), I do think one has to be lenient with the press. In explaining issues such as climate science they are caught between the public’s demand for easy-to-understand sound bites and the complexity of the reality. I find that more and more I have to bite my tongue to stop myself constantly correcting the understanding of friends and colleagues. They have the basic message; the details will just confuse them.

    This problem of the existing limits to public grasp is probably best shown by my experience during my degree. We had an excellent lecturer who, as the course proceeded, would more and more say things like: “You remember how last term I explained…. Well, I was lying…”

    In order to bring his pre-graduate students up to a graduate level of understanding, my lecturer had to let us first grasp a simple (and “wrong”) level of understanding so that we could then proceed to grasping the deeper reality and gain a “less wrong” level of understanding. Likewise, it’s impossible to get most of the public beyond that first level of the message, and even among the vast majority of professionals who accept the broad scientific consensus, the details of their respective understandings mean a plethora of different messages for the public.

    From my position as an amateur following this issue I need blogs as much as I need access to the peer-reviewed science. But they are apples and oranges, blogs are in essence like part of the press, the journals are the true arena for resolving issues in the science.

    However I have sympathy with Myles Allen feeling aggrieved that he was not informed of the RealClimate post he felt unfair (although I had taken on board the point Myles makes about models producing a wide range of apparent sensitivities). I think that any blog run by active scientists should contact the authors of papers they criticise as a matter of course.

    If anything I’d like to see more scientists willing to write here to explain their own papers. I’ve often thought when reading posts here that having 2 scientists discuss with each other in a public forum could be very enlightening.

    Myles Allen has posted here, and with RC planning another climate sensitivity post, perhaps Myles could spare the time…
    I know you’re all busy, I know it’s cheeky, but if ya don’t ask… ;)

    Comment by Cobblyworlds — 4 Apr 2008 @ 7:58 AM

  23. Isn’t it two days late for the suggestion that science is like chicken sexing?

    Comment by Dr Slop — 4 Apr 2008 @ 8:11 AM

  24. The big problem with blogs is that there is no way for an outsider to know which are reasonably careful creations of informed scientists, which are opinions of the scientifically illiterate, and which are astroturf creations designed to confuse critics of science that is in conflict with an industry.

    Given that terrain, I would rather have something like RealClimate than not: it helps to balance things out. Errors tend to be corrected quickly here as a consequence of a large informed readership (even if it is sometimes annoying that you get drive-by ignoramuses who don’t benefit from getting their misconceptions answered).

    As far as critiquing peer reviewed work goes, if the work is really no good, does it deserve to have an inflated citation count by attracting a flood of peer reviewed rebuttals? A site like this is of value in providing a forum for such rebuttals — and developing a consensus on whether a formally published rebuttal is worth the effort.

    What I see as missing is something that has the interactivity of a blog but leads to a final agreed version of an article. A possible model is the RFC (Request for Comments) approach used for agreeing standards in the Internet world. You publish a draft, and keep working at it until everyone likes it, then it becomes a standard.

    Comment by Philip Machanick — 4 Apr 2008 @ 8:11 AM

  25. Interesting question. I’ve discussed related topics over the years among scientists and science journalists. I have actually had science writers who are nonscientists say that they felt their status as laymen made it easier to relate to their audience, while I have had science journalists who are scientists claim they don’t see how one could be effective without experience as a scientist. Realclimate fulfills an important niche in that it provides a forum for scientists to speak ex cathedra as it were, while still providing refuge from the anarchy that pervades the blogosphere. I believe that the contributors are careful to present the consensus science and to note when they are expressing their own individual opinions. This important distinction makes Realclimate a science blog, rather than a blog about science.
    As a blog, it is important that Realclimate not be bogged down in a lengthy formal peer review process, although I’m sure they share their entries with each other before posting and benefit from the criticism. As such, I think that Donald Oats’s suggestion of a Corrections/Elaborations section, cross-referenced to the relevant posts, would be a reasonable addition. It would provide a mechanism for redressing errors/grievances… that sometimes (albeit rarely) occur.

    Comment by Ray Ladbury — 4 Apr 2008 @ 8:29 AM

  26. I believe the science community needs to be more involved in blogging. How many times have I heard a presentation or read a paper and thought, wow, that is great, or that is ridiculous, or that gives me a great idea? Many. However, I have submitted very few such comments through a peer-reviewed process. It is such a slow, unwieldy process that it seldom advances the science much. Science builds on itself. With blogs we can express questions, concerns, confirming analogs, etc. in a manner that speeds science faster toward better answers. Collectively we are aware of much more than we are individually, but until recently we labored mostly in our small groups, with input from colleagues who managed to attend the meetings we were at, and even then comments were not usually collectively shared.

    Comment by mauri pelto — 4 Apr 2008 @ 8:52 AM

  27. I have the exciting and often infuriating privilege of working in wildlife science, where decisions are guided as much by “gut” feelings as by scientifically derived knowledge. I can’t count the number of times I’ve heard that decision-making in our field is an “art” as much as it is a “science”. It has allowed some land management actions, for example, to go on long after they should have been put to bed (e.g. the “science” behind the Savory grazing system, or clearcut logging to replicate natural forest disturbance).

    My recommendation is to be careful. I’ve noticed journalists sometimes give as much weight to information from other news articles and blog entries as a scientist would to a peer-reviewed article. Look at Andy Revkin’s blog for numerous examples including yesterday’s (April 3) entry. That is no different than a preacher citing scripture as proof of this or that.

    Blogging is a great communication tool and I very much appreciate your work with this one, but it could also be a slippery slope that could lead to the degradation of science, simply because most of the end users don’t understand the scientific method. Maybe a blog entry about repeatable results and the ability to make predictions, etc. would be helpful.

    Comment by Andrew Sipocz — 4 Apr 2008 @ 9:23 AM

  28. I think Myles makes a good suggestion, that blog entries that point out possible ‘errors’ be offered to the original authors for comment prior to posting. This is equivalent to the comment-and-response format used in peer-reviewed journals, and every major newspaper offers the subject of an ‘expose’ the opportunity to comment before the papers hit the street. I find the give and take of these exchanges to be very useful in understanding the significance of the original article. Journal articles, especially in ‘leading’ journals such as Nature and Science, are getting ridiculously short; many details of analysis are necessarily omitted, and much can be buried in a simple figure. The comment-and-response format in postings on an RC-type forum would add greatly to the general understanding of new papers.

    Somewhere, perhaps in RC, someone posted the suggestion that if you think you have found a fundamental error in a major scientific work, you probably haven’t. If you think you have, contact the author(s) for comment. Any responsible scientist is willing to openly and honestly discuss his or her work with scientific colleagues.

    Comment by Mark Stewart — 4 Apr 2008 @ 9:43 AM

  29. I would like to follow up on # 16. Many years ago, 1946 to be specific, I had the privilege of working in the Cavendish Laboratories as a Research Student. Prof. J. J. Thomson left 1000 GB pounds (a lot of money in those days) in his will to provide free tea in the afternoon. As an aside, J. J. Thomson’s office was called “The Tea Room”, because he spent a lot of time there, talking to people over a cup of tea. Naturally, every afternoon at 3 pm, the scientists in the Cavendish Labs collected to get their free tea, and it was here that an enormous amount of interchange of ideas took place. I can remember one memorable afternoon when I was able to converse with Prof. Dirac. I hope this ritual continues to this day. I believe that the informal exchange of ideas between scientists is something that should be encouraged as much as possible.

    Comment by Jim Cripwell — 4 Apr 2008 @ 9:52 AM

  30. Gavin’s use of the phrase “tacit knowledge” seems to have caused confusion among some readers, who see it as some sort of attempt to push a specific scientific view. When a chess master discounts a side variation or mentions that advancing a pawn certainly loses, it is not immediately obvious to the lay person why this is so. He is using tacit knowledge that is gained from experience. This is no different than what a scientist uses.
    I would also add that people who have not been involved with academic research or research conferences do not understand the harsh tone taken by some contributors. Scientists have to develop a thick skin due to the competitive environment within their research community. These criticisms are often interpreted by outsiders as vehement personal attacks, but that is usually not their intent.

    Comment by Jeff — 4 Apr 2008 @ 10:28 AM

  31. Richard #17: This is pure, unadulterated horse puckey (and yes, I have wanted to say that about some of the papers I’ve reviewed, as well–another value to a blog). You clearly know nothing about climate models. They have nothing to do with any single modeller’s opinion. Rather, they put in the most important forcers (as suggested by data and studies) with the strengths constrained by data and then see how they reproduce the observations. These are experiments that validate or invalidate the models. The fact that they occur in a computer does not invalidate them.
    Richard, if you want to peddle this stuff, you had best find an audience that is as ignorant of science as you seem to be.

    Comment by Ray Ladbury — 4 Apr 2008 @ 10:42 AM

  32. Let’s not confuse realclimate discussions with the general question of how modern scientific discussions should be conducted. In the first place, as others have noted, climate change is unique as an intensely politicized public policy controversy (in fact, I think it’s unique in the entire history of science). In the second place, climate change involves experts in dozens of radically different fields, from aerosol chemistry to tree-ring chronology, so communication and validation of expertise is far more difficult than in any other major scientific topic (again, I think, unique in the entire history of science). Discussion of the possible role of online forums in typical scientific work, or even ordinarily controversial work like stem cell research, is all interesting, but it’s a very separate matter.

    Comment by Spencer Weart — 4 Apr 2008 @ 10:57 AM

  33. RE #10 & “Al Gore and claims that the debate is over.”

    I took that statement to mean that the general debate re whether or not AGW is happening is over. Now, of course, it’s a free country, and scientists can spend the next 30 years trying to disprove AGW if they wish (and there should be some doing just that).

    It’s like saying the debate about evolution is over; it is, as far as general scientific knowledge goes. The evolutionists will go on arguing vociferously about the details, but the main debate is over. And there is this outside chance that God will appear (or we will find out in heaven or hell) and reveal that he planted all those fossils as an April Fool’s joke, and that he actually did creation with his magic wand.

    Science in the final analysis is provisional and contingent. It keeps changing…..but broad scientific debates (e.g. re the earth goes around the sun) come to a close, even if only a provisional and contingent close, and most scientists turn to putting their energies into other areas where there still is some debate (usually over what I call “the details”).

    So, I agree with Al Gore (and most, if not all, climate scientists) that the general debate about AGW is over (tho some keep arguing on and on to the contrary like zombies), even though the scientists are still doing climate science and ironing out “the details.” I mean, how much science does the average person have to know to screw in a compact fluorescent bulb?

    Okay, here’s my lightbulb joke. How many people does it take to screw in a compact fluorescent bulb?
    1. One layperson; or
    2. 3 scientists – one to collect the data, one to do the analysis and write the report, and one to screw it in; or
    3. 6 rocket scientists – one to collect the data, one to do a dynamics study, one to do an ergonomic study, one to do a human impact study, one to report to NASA, and one to screw it in.
    4. Denialists? They are unable to screw in compact fluorescent bulbs, but they are writing a 500 page thesis on why it’s impossible to do so.

    Of course, peer-review adds a whole other set of scientists needed for this lightbulb project.

    Comment by Lynn Vincentnathan — 4 Apr 2008 @ 11:03 AM

  34. Oh, and one more thing. Gavin is coming from an unusual place. After reading widely in the climate science literature, I realized that computer modeling depends on tacit knowledge to an extent far beyond any other field… by which I mean important knowledge not documented in peer-reviewed publications, nor even in textbooks and gray literature like conference proceedings, but widely shared. Even the very earliest papers, back to Arrhenius, basically just gave equations and results plus a few hints about how the one led to the other. As for the crucial model results in the latest IPCC report, they are essentially published in the report itself. Maybe I’m wrong, but as far as I can figure out, if I wanted to understand what lies behind the results I would need an expert guide just to figure out how to use the relevant databases etc.

    Comment by Spencer Weart — 4 Apr 2008 @ 11:19 AM

  35. I couldn’t get in to read Myles’s piece, but it dawned on me that he’s the one behind the climate prediction project, which used people’s computers from around the world (unfortunately I had a bad computer and dial-up connection at the time).

    Yes, this work is important, even though what’s in the high-end fat tail is less likely than what’s in the center glob (around 3C). For scientists from Planet Zork studying earth it’s just an academic issue, but for people living on planet earth, we really need to be aware of all possibilities in such a dire situation that we are slowly (in human terms, but fast in geological terms) creeping into. The media not only need to give accurate summaries of scientific findings in understandable, lay language, they also need to reflect people’s concerns about real extreme dangers, even if those extreme dangers are not as likely as lesser dangers.

    I personally think the media on the whole, especially here in the U.S., stink on AGW. Their silent treatment for nearly 15 years was interspersed with some pro-con formats (e.g., NIGHTLINE’s “IS SCIENCE FOR SALE” in 1995), making it seem there was an even debate, while never raising the issue of the dangers if the pro side were correct. And the sponsor for that program was Texaco, which led me to believe that, whether or not science was for sale, the media sure are.

    Comment by Lynn Vincentnathan — 4 Apr 2008 @ 11:20 AM

  36. Gavin’s point about tacit knowledge is important. When it comes to peer reviewed papers, one has to presume the reader will have a minimum level of familiarity with the subject matter. One also presumes that the reader will have a day job, and so the question becomes whether the information in the paper is of sufficient interest to the average scientist in the community to say, “Hey, take a look at this. It looks mostly correct to me and has some interesting information/insights/methods…” This is not in any way the gold standard in the sciences. The gold standard comes when the community as a whole says, “Hey, cool, I can use this.” The paper is cited. The techniques are used. Science advances. Eventually, what was in the paper becomes part of the tacit knowledge assumed by reviewers.
    The tacit knowledge one can presume for a blog like Realclimate is much lower. One presumes there is an interest in the subject; why else would the reader be perusing the blog? One presumes at least a passing acquaintance with the scientific method, and maybe some familiarity with basic results like conservation of energy, etc. One could perhaps assume that the average reader has taken the time to acquaint him- or herself with the material to which one is vectored via the “Start Here” button, although this is far from universal.
    For the average newspaper reader of a science story, the tacit knowledge is nearly nonexistent, or worse, wrong. And then we have the blogosphere, where information density is, at best, rarefied, and often toxic.
    It may be too much to ask that people become discriminating consumers of information. God knows well-meaning friends unintentionally bombard me with emails that never would have been sent had the sender paid a quick visit to Snopes. However, in an information economy, it seems that all too many readers and journalists are content to remain paupers.

    Comment by Ray Ladbury — 4 Apr 2008 @ 11:21 AM

  37. Not writing out all the details would imply that there is universal agreement, and that’s where I have a problem. We can agree universally on a range of foundational things: mathematical symbols, simple formulas, and so on. But as we move higher in the structure of the logic, constructs become merely supportive and their use less accepted. Somewhere past that point the uses are far less agreed upon or understood in the new context, and a “tacit” use of a construct becomes problematical.

    Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you. But how easy was it to apply that theory the first time out? How much worse would it have been if they didn’t tell you explicitly which piece of theory you needed to use to solve the first few questions?

    You can learn all the theory in the world, but in most cases you must first practice it so you know what theory or technique to apply in a given situation. Knowing stuff by itself is never enough; you must also have the experience to use what you know, and this comes from years of practice and isn’t going to be part of the formal literature. This is the difference between a professional and an amateur, and is the difference between debate in peer-reviewed journals and in blogs.

    That’s not to say blogs don’t serve an important purpose. Good blogs like this one can play a very big role in helping readers understand and digest the debate playing out in the peer-reviewed literature, but you shouldn’t fool yourself into thinking that blogs are where the real debate is going on. What you should be looking for from a blog is insight into the debate playing out in the peer-reviewed literature.

    When blogs strike out on their own and try to do “original” work there is going to be a real problem, because blogs are fundamentally appealing to, and often written by, amateurs who don’t have the experience to apply what knowledge they have.

    Comment by L Miller — 4 Apr 2008 @ 11:26 AM

  38. Were it not for blogs such as this one, I, a professional engineer, but strictly an amateur scientist, would have far fewer opportunities to learn about the topic of climate change, and the controversies surrounding it. I do not regard what I read here as peer-reviewed scientific papers, unless you say so, but rather, more like what I read about such papers in publications like Science News. I would be much more poorly informed without RealClimate. Occasionally it happens that serious scientists with the best of motives do disagree on conclusions and details. It is also the case that there are “scientists” who have dubious motives and more dubious “papers” used more for political activism than for actually advancing scientific knowledge. Reading RealClimate is very helpful to us who are not professional scientists to discern the difference. Please, keep up the excellent work you are doing.

    Comment by Gene Hawkridge — 4 Apr 2008 @ 12:38 PM

  39. Re. 19 Dear Richard Vadon,

    I took the trouble to email you about your Radio 4 program. I pointed out several inconsistencies between your program and the scientific literature. You need to read the journals Science, Nature, Climate, and Geophysical Letters.

    The people that you list as approving your program do not publish about climate change in peer-reviewed journals. They have no system of checks and balances, and as such can say anything they want without having evidence to back it up.

    I am afraid that your comments are very misleading and show a complete lack of knowledge about how science works.

    Comment by Richard Ordway — 4 Apr 2008 @ 12:57 PM

  40. Re Jenne @ 18: “Scientists should embrace the open scientific debate”

    And they do, within the system of conferences, symposia and peer-reviewed journals set up to discuss and debate scientific issues. A problem arises when the debate changes venue into public fora where any and all comers are free to participate, no matter what their level of scientific knowledge or lack thereof, no matter what their economic, ideological or political agenda is, and to present misunderstood scientific concepts, misinformation, deliberate disinformation, and outright fabricated untruths as equally valid points of argument. Don’t mistake such a free-for-all for scientific debate.

    Comment by Jim Eager — 4 Apr 2008 @ 1:38 PM

  41. Peer reviews save you from publishing nonsense. Blogs don’t.

    Comment by Hardy B. Granberg — 4 Apr 2008 @ 2:20 PM

  42. Re: Ray #36, in the field of climate science, I don’t think you can argue for assuming a different level of tacit understanding by the reader for the peer-reviewed literature than for blogs like this one. Climate science is particularly multidisciplinary, with contributions from several disciplines of physics, chemistry, oceanography, geology, meteorology, biology and historical records. Peer-reviewed writing that crosses the disciplines, rather than staying within one specialty, should assume much less tacit knowledge and be more explicit about everything it is doing. Unfortunately, especially unifying parts of the science, such as coupled high-top models, will probably have only a subset of the disciplines represented among the reviewers. The modelers especially should aspire to writing of high enough quality, accessible and complete enough in explanation, to be reviewable by the scientifically literate of any discipline.

    Comment by Martin Lewitt — 4 Apr 2008 @ 2:39 PM

  43. It seems obvious to me that science must reach beyond the peer-reviewed literature. Science education is one important activity that necessarily reaches beyond the literature; it deals in simplified versions of scientific models and procedures in order to introduce people to those ideas. (Jack Cohen and Ian Stewart have called these simplifications, “stories for children”, which is perhaps too condescending). In addition to formal education, which only reaches the next generation, science also has to provide materials for the public at large, such as popular science books.

    ClimatePrediction.net was important not only for its results, but because it did engage the public, via the distributed computing infrastructure, via the BBC Horizon programmes, and via material for use in schools. It helped to teach people about climate prediction and more generally about computer modelling.

    Computer modelling is an increasingly important part of many scientific endeavours. It is one aspect of the use of computing infrastructure in science – or “e-science” as it is sometimes called. This “in-silico” modelling has been called a third pillar of science, in addition to the long established pillars of experiment and theory.

    Little of this is yet taught in schools or even in some undergraduate courses. We need a large effort to increase people’s awareness and understanding of these approaches and we can’t do that within the peer-reviewed literature.

    The question of how blogs fit into this space is interesting. Clearly they are rather different from traditional educational material, which is subject to its own review processes.

    It’s perhaps worth noting that members of the open science movement are not just discussing science on blogs; they’re actually recording their science on blogs. This seems to be happening more in the life sciences – take a look at openwetware.org as an example.

    Comment by Dave Berry — 4 Apr 2008 @ 4:18 PM

  44. #42 martin

    This is a laudable goal and I’ve long been a huge fan of interdisciplinary work, and smart authors try to check things out with an appropriate mix before they submit a paper.

    BUT:

    1) Have you run peer review for papers? How easy is it to collect enough good reviewers for a journal issue? [Note that with revisions, somebody may have to look at a paper several times.]

    How much credit does an academic get for reviewing papers?

    2) Page counts, especially in prestigious journals, are limited. A certain level of expertise has to be assumed. One can argue about what that should be, but the devil is in the details of how much time it takes, schedule limitations, page counts, etc.

    At best, peer review gives a coarse screen, i.e., it’s necessary, but not sufficient. Lots of junk gets through, but there’s a cost/performance tradeoff like anywhere else: perfection is very, very expensive. Junk tends to get refuted, or just lie there like a dead fish and not get cited.

    Regarding blogs: I think science blogs are pretty useful, if one wants to use them properly. After all, blog-like things are hardly new – USENET discussion groups started in the mid-1980s, and some have actually been quite productive. [I just wish modern blogs were as sophisticated about killfiles. :-)]

    Comment by John Mashey — 4 Apr 2008 @ 6:20 PM

  45. I don’t see the problem here. If there is a blogposting by an expert that criticizes a published article, then that may prompt other scientists to take a more critical look at the article to see if the criticism is justified or not. If the criticism is seen to be justified by most other experts then that puts pressure on the journal to improve their peer review standards.

    Complaining about the peer review process when your article is rejected is also a good thing. E.g. read this rejected article by Michael Duff. It is a comment on an article by Paul Davies in Nature. Davies’s article was obviously flawed due to a very elementary units issue. Unfortunately, Duff’s Comment was rejected. Duff included the referee reports in his preprint, and everyone can now see how the referees and Paul Davies himself made fools of themselves (see page 3 and onward of the preprint). Global warming also comes up in the exchange; see the last remark before the start of Appendix B on page 9 :)

    Two more examples:

    Doron Zeilberger exploded in anger when his article was rejected :)

    Another case of scientists complaining about flawed peer review process :)

    Comment by Count Iblis — 4 Apr 2008 @ 8:08 PM

  46. Martin Lewitt: Actually, interdisciplinary science is now more the norm than the exception. My own specialty (radiation effects in semiconductors) combines nuclear physics, semiconductor physics, electromagnetism, spacecraft design, radiation transport and details of semiconductor fabrication (and maybe a wee bit o’ psychology as well). If one does climate science, one has to be up on all the contributing disciplines, at least to the extent that one knows the basics and the real experts, and could at least review a paper for general interest in the field. Similar considerations apply in materials science, planetary physics, particle astrophysics and on and on. This ain’t your father’s science.

    Comment by Ray Ladbury — 4 Apr 2008 @ 8:14 PM

  47. As a concerned citizen but non-scientist, I ask that we always keep in mind that humans are facing a compressed timeline of looming change. It is encouraging to see scientists, policy makers and journalists efficiently exchange climate information, research progress and discuss solutions.

    I really do not want to see any delay in meeting climate challenges. To me, ‘leisurely discourse’ is the equivalent of ‘business as usual’.

    Comment by Richard Pauli — 4 Apr 2008 @ 9:57 PM

  48. Gavin’s reply to my post #17 cannot go unchallenged. Computer modeling is NOT fundamental to science. Hypothesis generation and confirmation by empirical study is. Computer modeling was not around when Darwin postulated evolution or Einstein postulated relativity. It was not fundamental to the development of the laws of gravity and thermodynamics or any other major theory. It is merely a tool for observing possible changes that might occur in a process given changes in the assumptions underlying that process.

    How many of the climate models which are used as predictors of future temperature (i.e. climate forecasting) have actually undergone a forecast audit (yes there is such a thing)? I would suspect none of them. Probably because they would not pass such an audit. The paper by Green and Armstrong (2007) is essential reading for anyone who wants to understand the pitfalls of forecasts using climate models.

    As for my lack of knowledge of science, I guess that my science degrees (note the plural) are just not as good as yours.

    [Response: I have no comment on the worth of your degrees, but modelling (in the most general sense - i.e. not just GCMs) is fundamental to all science. Working out the consequence of hypotheses you generate in any particular system requires a model. Sometimes it's simple and you can do it analytically (ie. a two body gravity problem), sometimes it's not. In climate, it's mostly not. The G+A paper is a great demonstration of what happens when people think they understand what they read when they clearly don't. - gavin]
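    For non-specialist readers, Gavin’s point above can be made concrete with the simplest climate model of all: working out the consequence of a hypothesis (here, that surface temperature is set by radiative balance) requires a model, even one that fits in a few lines. A minimal sketch, using standard textbook values rather than anything from this thread:

    ```python
    # Zero-dimensional energy-balance model: the simplest
    # consequence-of-a-hypothesis calculation in climate science.
    # Hypothesis (radiative balance): absorbed solar = emitted thermal,
    #   S * (1 - albedo) / 4 = sigma * T^4
    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0         # solar constant, W m^-2
    ALBEDO = 0.3       # planetary albedo

    # Solve the balance equation for the effective temperature T:
    T = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
    print(f"effective temperature: {T:.0f} K")
    # ~255 K; the ~33 K gap to the observed ~288 K surface temperature
    # is the greenhouse effect, and accounting for it is where this
    # back-of-envelope model ends and more complex models begin.
    ```

    The same logic scales up: a GCM is doing nothing conceptually different, just with vastly more processes that cannot be solved analytically.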

    Comment by Richard — 4 Apr 2008 @ 10:50 PM

  49. gavin (inline to 20) wrote:

    I wasn’t trying to suggest that tacit knowledge was some kind of opinion that all scientists must agree with, but rather it is the shared background that, say, everyone using climate models has – i.e. why we use initial condition ensembles, how we decide that a change in the code is significant, what data comparisons are appropriate etc. This knowledge builds up with experience, but the reasons for it are rarely spelt out in the technical literature.

    Reminds me a bit of the math professor who skips from step 1 to step 10 in a proof because the steps in between are obvious to him. Of course those steps are probably obvious to other math professors or graduate students – but wouldn’t be to freshmen in college — which is why a professor will have to make a special effort not to skip steps when teaching freshmen. Of course, math is formal, and much of what you are speaking of is a familiarity with past literature, with problems which have been dealt with in the past and which everyone now knows how they were solved, etc. — things which one no longer has to articulate, because your audience of peers will recognize the short-hand and so forth.

    Comment by Timothy Chase — 4 Apr 2008 @ 10:51 PM

  50. Dear Richard (post 19),

    I can understand that you made your programme with the right intentions, given that you felt the climateprediction.net team was a bunch of dishonest scaremongers from the outset (why else would you have taken such pains to disguise from us what the programme was actually about when you originally approached us for interview?).

    And you could have been forgiven for getting that impression from what was available on the internet. Of course, Gavin and Stefan didn’t suggest we were dishonest, but if they were right that our most important result was the 2-4 degree cluster, then it would certainly have been dishonest not to have made sure that this cluster was mentioned in any press releases. But, to reiterate, that wasn’t a result at all: the study itself didn’t tell us anything about the likelihood or otherwise of the traditional range.

    Since I am not aware that you actually interviewed any of the journalists who originally covered the story or who were present at the original press conference (please correct me if I’m wrong here), I can see how your views must have evolved. It is a very nice illustration of the dangers of getting too much information from cyberspace: internet discussions have their own momentum (“tacit knowledge”) that may not reflect what has actually gone on in the real world.

    As I said in the discussion of your programme on RealClimate, we asked Fiona Fox of the Royal Institution to follow up with those who had actually covered the story. She kindly wrote to all the journalists she had on record who were at the press conference asking them for their reaction to your accusations, stating:

    ========

    My own clear memory of this briefing is that the scientists were very clear that the results showed a range of warming between 2 degrees and 11 degrees and that each time they were asked about the impact of 11 degrees they reminded journalists that this was the worst case scenario and it could just as easily be at the lower end. Obviously we all knew (the press officers that is) that you would report 11 degrees and the fact that this was twice the level suggested by previous studies was clearly a significant news story. However I believe that the scientists themselves were very measured and did not emphasise the 11 degrees.

    Fiona Fox, Director
    Science Media Centre
    The Royal Institution
    ========

    The responses Fiona received were as follows:
    ========
    Hi Fiona,

    My memory tallies with yours. They presented the range, they described the concept of the ensemble, they emphasised (in response to a very perceptive question from some star BBC journalist) the role of clouds in the uncertainty, they mentioned 6 main reasons for uncertainty.

    If anyone went for the exaggeration it was the journalists – we all mentioned 11 degrees I’m sure but as far as I recall, PA and Metro presented it virtually as a fait accompli.

    Richard Black, BBC
    ========
    Thanks Fiona, my memory is as yours. Let me know what feedback you get and I’ll write you something properly tomorrow.

    Ruth Francis, Nature
    ========
    Hi Fiona,

    As I recall, the researchers, and Myles Allen in particular, emphasised the fact that the bottom end of the range (ie the 2 in 2-11 degrees C) corresponded to previous predictions of 2-5 degrees C. I seem to remember that they said this gave strength to the prediction that there would be a warming of *at least* 2 degrees C, but that there was a greater degree of uncertainty at the top-end. This last point was definitely underlined. To back that up, refer to Myles’ quote in my article:
    http://www.scidev.net/News/index.cfm?fuseaction=readNews&itemid=1878&language=1.
    Hope this helps.
    Catherine.
    Catherine Brahic
    Senior correspondent
    Science and Development Network (SciDev.Net)
    ========
    I’d agree with Catherine’s interpretation – as far as I recall, they were all quite careful to stress the greater temperature change the greater the degree of uncertainty. I’ll try and dig up the bulletins report.

    Sarah Mukherjee, BBC
    ========
    Hi Fiona – my memory is that the scientists took pains to point out that it was a range and quite a broad range at that. I also remember Myles in a rather vivid phrase saying that we had to remember that we could still take actions to avert the worst warming and that we shouldn’t assume “that our children will stand by and watch as the seas boil around them”, showing that the worst case wasn’t necessarily the most likely outcome.
    Thanks,

    Fiona Harvey
    Environment Correspondent
    Financial Times
    ========

    I am not aware of anyone who covered the paper who did not either attend the press conference, speak to a project team member, or use an agency report from someone who had done (again, if your research revealed otherwise, please correct me). The Natural Environment Research Council Press Office assured me that all recipients of their press release (including all those quoted above) would have received it attached to the paper and would have known that it was intended simply to draw attention to some interesting results in the paper, not to provide a comprehensive summary. Judging from the responses above, it appears they were absolutely right.

    [Other readers may like to know that all this information was available to Richard before the airing of his programme: since Richard is still encouraging you to go and listen to it, you might like to ask yourselves how balanced it really is in the light of the above responses from the people who were actually there.]

    Tim Palmer and Bryan Lawrence would not have known about this context (nor, indeed, would any of the scientists you interviewed for your programme), since despite the fact that your programme was about coverage of a scientific story, you apparently didn’t want to talk to anyone who had actually covered it.

    Gavin is probably right that scurrilous was a bit strong, since I accept your intentions were in the right place. Misguided would have been a better word.

    Regards,

    Myles Allen

    Gavin: is there any way this response could be pushed up next to Richard’s? I’ve made these points before, but as far as I can tell no one noticed because they were too far down the thread (another example of the fallacy of the “you can always correct mistakes by responding on the blog” argument).

    Comment by Myles Allen — 5 Apr 2008 @ 3:33 AM

  51. Have the people who actually wrote the original press release spoken up in this thread or elsewhere? I mean by that the people who put the words together in the form sent out, probably by a marketing or PR department staffer. Their job is getting the organization’s name into the news, not writing abstracts with real info.

    Comment by Hank Roberts — 5 Apr 2008 @ 7:04 AM

  52. Richard #48: Although you claim to have “science degrees,” it’s a pretty safe assumption that you haven’t done any science in, say, the past 30 years, as otherwise you would realize that modeling (computer or otherwise) is central to science. How else are we to study Earth’s core, the explosion of supernovae, ecological systems, many aspects of materials science, and really the majority of cutting-edge science?
    And I would also urge you to look into model validation in fields outside your own narrow discipline (whatever that may be). Different validation techniques are appropriate to different models and fields of inquiry. The fact that you do not understand this means you really know nothing of how science is actually done.

    Comment by Ray Ladbury — 5 Apr 2008 @ 7:22 AM

  53. In response to Hank (51):

    I think the offending paragraph was written by a long-suffering Natural Environment Research Council press officer who has since moved on to other things. But I don’t think it’s fair to tee off on the press officers, who have a pretty thankless task. If I recall correctly the 11 degree number went in and out of successive drafts like a yoyo, and ended up being left in on the grounds that it had to highlight something “new and concrete” — not, I might add, “alarming”: my impression was that the Press Officer would have just as happily drawn attention to zero-sensitivity models, if we’d have found any.

    Anyway, I eventually signed it off on the understanding that no serious mass-circulation journalist would rely on the press release in reporting the story, and that its sole purpose was to encourage journalists to find out more. It seems, judging from the responses Fiona got and despite Richard Vadon’s claims, that this understanding was correct.

    The press release could undoubtedly have been clearer, but it seems no-one who reported the story directly actually misunderstood what had been done, so it didn’t in fact do any damage. But of course, if Richard had stuck to “scientists issue a press release that might have been misunderstood but wasn’t” his editors probably wouldn’t have been very impressed.

    Of course, if Richard can come up with journalists who did report the story solely on the basis of the press release and did not understand that 11 degrees was the top end of a large range, then that is a different matter. So far, no one has come forward to my knowledge.

    Comment by Myles Allen — 5 Apr 2008 @ 8:13 AM

  54. I would like to add a few points based on my experience with this blog:

    1) A lot of educated people put in some interesting posts but rarely have detailed training/education in the area they are discussing.
    2) A lot of strange claims are made with no supporting proof.
    3) These posts are always reviewed by many readers and comments are allowed.

    Relative to these points, overall I feel that these are good things – waiting only for experts to write on a given topic is not a good way to get a large amount and diversity of science knowledge out into the layman world.
    However, a lot of junk science, incorrect statements and some great insight will result.
    I feel that the overall result is that the basic job is being done. The issue of peer-review is fully addressed because of feedback that is allowed.
    However, I believe that your site needs one major change:
    You should post a notice that anyone wishing to post should be careful, when they state facts or make broad claims, to qualify these statements as opinions. Otherwise, they need to offer links or proof based on known science in the field.
    (PS: anyone who thinks computer modeling is ‘central’ is going way overboard – I can do massive amounts of research in many areas of advanced physics and never need to use a computer-based modeling program, and in fact many great advances in science didn’t need any such thing; in some fields it is critical, but ‘central’? No way.
    PPS: as for media hype, that is life in a media-driven, ‘8-second’ sound-bite world – that’s just the way it is, and it is something blogs are great at handling compared to the regular print/cable noise machines.)

    Comment by DBrown — 5 Apr 2008 @ 9:31 AM

  55. I teach large gen ed classes for non-science majors and I love RC. In fact, I love any venue where I can get a “quick and dirty” education from those who really know. I prowl the posters at fall AGU stalking authors of posters on subjects of interest to me (pretty much everything) because scientists know so much and have to communicate so narrowly in their peer-reviewed papers. I can get caught up on the latest thinking in just a few minutes when talking to an expert or reading a blog, whereas I simply don’t have the time, background, or fortitude to wade through the literature on all of the subjects of interest to my students to look for exciting, new developments.

    Comment by Rich Thompson — 5 Apr 2008 @ 10:29 AM

  56. Re # 54 DBrown “I can do massive amounts of research in many areas of advanced physics and never need to use a computer based modeling program and in fact, many great advances in science didn’t need any such thing”

    I would argue that any explanation of empirical data that attempts to describe physical reality constitutes a model, whether that model is depicted in words only, in a graph or other diagram, in mathematical equations, or in computer code. Bohr’s 1913 depiction of atomic structure as a central nucleus surrounded by circling electrons was a model (http://en.wikipedia.org/wiki/Bohr_model), and a roadmap depicting the spatial relationships of cities, roads, and other geographical features is a model. Without the model, you have nothing but disparate “facts,” with no way to link them.

    Comment by Chuck Booth — 5 Apr 2008 @ 10:32 AM

  57. #54, & “anyone who thinks computer modeling is ‘central’ is going way over board”

    That’s probably true as a general statement. It would be much better from a scientific POV to have 2 earths, one the control & one the experimental earth. Or we could do an O X O type of experiment, where we make observations, then apply the treatment to our one earth (emit GHGs), then make further observations (of course, this may take a long time due to the lag of T to GHGs, such as ocean thermal inertia, etc). Or we could do an O X O -X O type of experiment, where we make observations, then apply the treatment to our one earth (emit GHGs), make further observations, then remove treatment (stop emitting GHGs) and make further observations (which again would require a long time frame due to the lag).

    Or, we can use models.

    Comment by Lynn Vincentnathan — 5 Apr 2008 @ 11:43 AM

  58. Myles, I understand your point about press officers, but I think it should be cautionary — you did what most big organizations do. It blew up because what these people do — whether you call it “press officer” or “marketer” or “PR expert” — is advertising.

    Advertising is such a big industry because
    – the law says no one is going to believe it
    – the practice says people believe it
    – the law says ‘puffery’ is ok, nobody believes it
    – the practice *including the science* says it works.

    Your press officer, you say, “had to highlight something “new and concrete”
    and you “eventually signed it off on the understanding that no serious mass-circulation journalist would rely on the press release in reporting the story, and that its sole purpose was to encourage journalists to find out more.”

    How can you believe this? Even though it’s the assumption the marketing business works on, even though it’s the legal presumption made about puffery and ads, even though everyone in the business world _pretends_ they believe this — nobody really believes it.

    If anyone believed it, they wouldn’t spend money on doing something knowing it wouldn’t work.

    Advertising works. It fools people all the time.
    Media people are people.
    In a hurry.
    Looking for filler.
    Looking for something to catch people’s interest.
    Looking to attract the readers their publication exists to sell to its advertisers.

    None of this is a secret. It’s only awkward when the contradiction between theory and practice surfaces.

    Comment by Hank Roberts — 5 Apr 2008 @ 1:11 PM

  59. DBrown, Whether a model is done on a computer or on the back of a napkin is immaterial. It is still a model, and you absolutely cannot do physics in any meaningful way without models. Whether you are doing measurements or theory, at some point you will need to model your errors at the very least. Looking at discrete events during a time interval? You model your errors as Poisson. Estimating probabilities by looking at proportions of “success” and “failure”? Your error model is binomial. I can do this on a parallel supercomputer or in Microsoft Excel or on a chalkboard – but if I do not resort to a model, I have not done physics.
    I would be curious what sort of physics you think is possible without modeling.
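    Ray’s two stock error models take only a few lines to apply; a sketch with made-up numbers, not taken from any experiment in this thread:

    ```python
    import math

    # Counting experiment: N discrete events observed in a time interval.
    # The Poisson error model puts a statistical uncertainty of sqrt(N)
    # on the count.
    n_events = 400
    poisson_err = math.sqrt(n_events)   # 20 counts, i.e. a 5% relative error
    print(f"count = {n_events} +/- {poisson_err:.0f}")

    # Proportion experiment: k "successes" in n trials.
    # The binomial error model puts sqrt(p * (1 - p) / n) on the
    # estimated success rate p = k / n.
    k, n = 30, 200
    p = k / n
    binom_err = math.sqrt(p * (1 - p) / n)
    print(f"rate = {p:.3f} +/- {binom_err:.3f}")
    ```

    Neither calculation needs a computer, which is exactly the point: the model is the error distribution itself, and the computer is just a convenience for evaluating it.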

    Comment by Ray Ladbury — 5 Apr 2008 @ 1:33 PM

  60. #59 – who said anything about not needing models? Computers, yes. Please read my post, then comment – thanks.

    Comment by DBrown — 5 Apr 2008 @ 2:07 PM

  61. Dear Hank,

    I know it’s almost as fashionable to knock journalists as it is to knock press officers, but to be honest, I was very reassured by the results of Fiona Fox’s inquiry (which, after Richard’s initial allegations, I was rather dreading). The journalists not only clearly understood the story perfectly well (in spite of, you might say, the unclear wording of the press release), they could remember all about it in remarkable detail more than a year on (Fiona Harvey of the FT could even quote me verbatim).

    In many ways the worst libel in Richard’s piece was the claim (which has since been repeated by other BBC journalists) that his colleagues were just copying out the press release, when he knew perfectly well they were doing nothing of the kind.

    Myles

    Comment by Myles Allen — 5 Apr 2008 @ 2:11 PM

  62. Picking up on Mark Stewart’s comment (# 24), the condensing of journal article lengths seems to be occurring in many journals. Whatever the reason for this, it can very much work against the goal of a clear and elaborate explanation of the information needed for proper understanding of a paper’s major and minor points. The introduction/background and materials/methods sections are often especially hard hit in this regard, although even important results can be shunted to a digital appendix if they take up too much space (and are thereby made inaccessible to those with access only to a paper copy of the journal). Thus I find, as a scientist, that even papers in one’s own field can sometimes be difficult to understand, or at the very least more laborious and inconvenient than need be. So imagine what non-scientists are increasingly faced with. Scientist-based blogs like RC help counter this trend by fleshing out this ultra-terse language that we’re increasingly bound to. I see this as part of the “tacit knowledge” that Gavin mentions.

    Also, until journals (or some new form of information dissemination) can provide a real-time, interactive, and somewhat informal discussion among working scientists in those fields having very high societal/political relevance, scientist-based blogs like RC are filling a very important void. A function which I very much appreciate.

    Comment by Jim Bouldin — 5 Apr 2008 @ 2:29 PM

  63. Re 2
    Joseph Romm complains http://climateprogress.org/2008/04/02/nature-pielke-pointless-misleading-embarrassing-ipcc-technology/ of Pielke’s Nature commentary that:

    “Since this paper doesn’t define the word ‘innovation,’ it is very hard to tell what precisely the authors’ point is (other than to lead us into the technology trap)… this is characteristic of Pielke’s work — he doesn’t define terms specifically enough to make policy-relevant conclusions [emphasis in the original]… He says ‘all the regular readers of this blog know why the technology trap is dangerous (it leads to delay, which is fatal to the planet’s livability)… failing to stabilize well below, say, 700 parts per million of CO2 is really, really, really suicidal’… So what is the point of the piece? To convince people the situation is hopeless? [Nature actually runs a side piece on the commentary titled, ‘Are the IPCC scenarios unachievable?’ — and people call me an alarmist!]”

    While Romm neglects to define ‘fatal’ or ‘suicidal’, his own commentary has elicited a reader response that indeed qualifies as ‘Alarming’:

    http://adamant.typepad.com/seitz/2008/04/the-last-carbon.html

    Comment by Russell Seitz — 5 Apr 2008 @ 3:01 PM

  64. Hi Myles

    I can’t believe we are doing this again. Let’s stop soon ;-)

    As you know my position is that you judge the journalists on what they write. The broadsheet articles on your story did not give the readers a proper understanding of your research. We quote them in the programme. They make it sound like the world was about to end. Let’s be clear that this is first and foremost the responsibility of the journalists and headline writers involved. I know that you at CPDN were appalled at some of the coverage. The question we asked in the programme was, did the press release play a part in this?

    I think you are correct when you say “The press release could undoubtedly have been clearer”. In our programme you didn’t say that. I think if the press release had been clearer the coverage wouldn’t have been so apocalyptic. I know you disagree. I suggest people listen to our interview with you. You put your view strongly and clearly.

    The programme is available here :

    http://news.bbc.co.uk/1/hi/magazine/4923504.stm

    Comment by Richard Vadon — 5 Apr 2008 @ 3:36 PM

  65. Although traditional peer review may be useful in technical communications between scientists, it has failed the public on climate change science. A new process is needed for use in climate change science blogs.

    Comment by Pat Neuman — 5 Apr 2008 @ 3:45 PM

  66. Dear Richard,

    The journalists who covered the story clearly understood it, so while the press release might have been mis-understandable (what press release isn’t?), we made sure no-one actually misunderstood. What the headline writers (who wouldn’t have even read the press release) chose to say was beyond our (and, I understand, even the journalists’) control.

    Did you interview anyone who actually covered the story who you were accusing of acting highly unprofessionally in just copying out the press release? If so, why did you not include that in the final version of the programme? I think you (and Bryan Lawrence, Tim Palmer and all) would have got a very different impression of what happened if you had done.

    I appreciate that by the time you got the results of Fiona Fox’s inquiry the BBC had already invested too much in the programme to change it, but I don’t see why you are defending it now.

    Myles

    Comment by Myles Allen — 5 Apr 2008 @ 3:58 PM

  67. [The error in the Douglass et al paper is clear and obvious and does not require much thought to discern. It is simply that the statistical test they apply would reject ~80% of samples drawn from an exactly similar distribution. To make it clearer, take a fair die - the mean number of points is 3.5 and with around 100 throws, the mean will be known to within 0.1 or so. Then take the same die and imagine you get a 2; the Douglass et al test would claim that this throw doesn’t match, since it is below 3.3 (3.5 - 2 standard deviations). This is absurd. - gavin]
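    Gavin’s point — that comparing single samples against the ensemble mean ± 2 standard errors rejects most samples drawn from the very same distribution — is easy to check numerically. A sketch using Gaussian draws (illustrative only, not the actual Douglass et al. data):

    ```python
    import random
    import statistics

    random.seed(42)

    # An "ensemble" of 100 runs, all drawn from one distribution.
    N = 100
    runs = [random.gauss(0.0, 1.0) for _ in range(N)]
    ens_mean = statistics.mean(runs)

    # Standard error of the ensemble MEAN - the (too narrow) spread the
    # flawed test compares individual samples against:
    se = statistics.stdev(runs) / N ** 0.5

    # Fraction of fresh samples from the SAME distribution that fall
    # outside ensemble mean +/- 2 standard errors, i.e. that the test
    # would declare "inconsistent with the models":
    trials = [random.gauss(0.0, 1.0) for _ in range(100_000)]
    rejected = sum(abs(x - ens_mean) > 2 * se for x in trials) / len(trials)
    print(f"rejected: {rejected:.0%}")
    # Roughly 80-85% of perfectly consistent samples get rejected: the
    # test uses the spread of the mean, not the spread of the samples.
    ```

    The fix Gavin describes is to compare the observation (with its own uncertainty) against the distribution of individual simulations, not against the much tighter uncertainty on the ensemble mean.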

    Gavin: Are you saying you think all model results are equally likely, or just that Douglass et al. should have included a bell curve to show us which were more likely? If you believe the former then your critique of Myles Allen’s work seems contradictory. If it’s the latter, then if Douglass et al. had added that bell curve and shown that only the very unlikely model results clipped the observation results, would you then agree that the model predictions don’t match observations and that consequently the theory behind the model needs to be revised? And would you really include ALL models, even the obviously barmy projections and the obviously poor models, or should you use only the ones that the IPCC use, which would seem to me to make more sense?

    [Response: If they had shown the distribution of the individual simulations (varying due to different models, different 'weather' etc.), the observations (with their own uncertainties) would have been shown to fall well within the distribution of the models. Thus their main conclusion would have been refuted. If they wanted to do something else they could have, but they didn't. All of the model runs used are in the IPCC archive. - gavin]

    Myles Allen is confusing. He has now proven by citation that he argued at the press conference that the 11 degree outlier result should be treated as far less significant than the 2-3 degree spread, yet he argues on this blog that the 11 degree result IS the significant result of the work. Which is it? Anyway, I remember seeing that there were results that showed cooling in the sensitivity analysis and which were specifically rejected as “obviously wrong”. Now a sensitivity analysis with such huge variance on the aerosol parameter will always produce some cooling curves, so his later statement that non-warming results weren’t obtained was just confirmation bias via data culling. The scientific way would be to say that if the cooling results are obviously wrong then the 11 degree result is just as wrong, because they both represent extremes of the error bars. [edit]

    [Response: You are misunderstanding the CPDN simulations. Try reading some of the papers arising from the project. Sanderson et al (2007) or Knutti et al (2006) are quite good - or go to the cpdn website and read what is available there. There was no variation of the aerosols, and the 'cooling curves' you mention were obvious errors in the control runs due to an inability of the simple ocean treatment they used to deal with extreme conditions in the Equatorial East Pacific. Absolutely nothing to do with climate sensitivity. - gavin]

    Comment by JamesG — 5 Apr 2008 @ 4:40 PM

  68. Response to 67: It was the huge range, 2-11 degrees, and the asymmetry relative to the traditional 2-4 degree range, that we felt was the important result, not the fact that the “vast majority of their results showed that doubling CO2 would lead to a temperature rise of about 3C”, as Richard Vadon put it in the web version of his programme (this cluster was simply an artifact of the way we had imposed the perturbations). It was also important that it wasn’t just a “tiny percentage” showing high values, as Richard claims, but a systematically fat-tailed distribution (20% higher than 7 degrees, if I recall correctly).

    I think our interpretation of the Stainforth et al results was correct. Certainly, the paper has been cited quite a few times (including by the IPCC, with appropriate caveats, of course) as evidence that climate sensitivity could be a lot higher than the traditional range. No one, to my knowledge, has ever cited that study as providing support to the 3 degree traditional value. To this extent, it seems that the journalists Richard Vadon was criticizing appear to have understood the significance of the study rather better than he did. It’s a shame he doesn’t appear to have talked to any of them.

    Myles

    Comment by Myles Allen — 5 Apr 2008 @ 5:54 PM

  69. DBrown, Computers have been central to science since they were invented in the ’50s. And coincidentally, most of the great physics done without a computer was done before there were computers. And as I said, it makes no difference whether model predictions are done by computer or on the back of an envelope. All a computer does is make it possible to look at more complicated models and apply more powerful techniques – that’s why they are in fact central to many disciplines on the frontiers of physics… or biology, chemistry and even mathematics, for that matter.

    Comment by Ray Ladbury — 5 Apr 2008 @ 5:58 PM

  70. Well, cautionary.

    I’d suggest every scientific organization adopt the wording above, and print what was said about the press release ON every press release.

    I realize the British notion of “libel” differs from that under US law and I can’t comment on that.

    But I do agree the European adoption of the Precautionary Principle is a good idea and I wish it would be adopted by US journalists.

    In that spirit, a cautionary text box along these lines, paraphrasing the above language, would be well advised for any press officer. This stuff’s too serious and too easy to spin to put press releases out without some warning that they’re not news releases.

    “No serious mass-circulation journalist will rely on this press release in reporting a story. Its sole purpose is to encourage journalists to find out more.”

    Comment by Hank Roberts — 5 Apr 2008 @ 6:51 PM

  71. Oh, I realize I’m saying the same thing Gavin wrote in the prior thread:

    In How not to write a press release, Gavin wrote:
    “… the scientists also need to appreciate that most journalists will only read the press release, … This implies that the press release itself is the biggest determinant of quality of the press coverage …”

    Comment by Hank Roberts — 5 Apr 2008 @ 9:45 PM

  72. “whose papers to trust, and which to be suspicious of [Hey Prof. here’s a great new paper!… Son, don’t trust that clown.] In short the kind of local knowledge that allows one to cut through the published literature thicket.

    But this lack makes amateurs prone to get caught in the traps that entangled the professionals’ grandfathers, and it can be difficult to disabuse them of their discoveries. Especially problematical are those who want science to validate preconceived political notions, and those willing to believe they are Einstein and the professionals are fools. Put these two types together and you get a witches brew of ignorance and attitude.”

    If peer review is so great, then why is this the case?

    Comment by Curious — 6 Apr 2008 @ 1:10 AM

  73. As an interested amateur, albeit one with two Physics degrees, may I raise my hand in support of blogs and forums. On the topic of Climate Change, we of the general public obtain our information and understanding from:

    the environmentalist lobby which tends to be sensationalist
    the skeptic lobby which tends to be brutal and unprincipled
    the media who cherry pick for the sake of populism
    the realclimate website and similar

    The first three I mention have really muddied the waters. We depend on the fourth for up-to-date information and appraisal. Communicating to a wider audience is necessary in this case, where competing lobbies divert our attention.

    Comment by Simeon — 6 Apr 2008 @ 4:10 AM

  74. Ray Ladbury: This thread is getting old. As I have been saying, COMPUTER modeling is not critical to all science. MODELING IS essential, but (try reading this part carefully, please) NOT computer modeling: computers are not central to science or to the empirical reasoning it is based on. I do not see why you keep defending modeling as such; I have NOT said modeling is not critical, only that using computers is not central (valuable, sure). Please try reading this post and my others more carefully, because you are defending an issue that I have not disagreed with you about.

    Comment by DBrown — 6 Apr 2008 @ 8:02 AM

  75. Curious,
    > if peer review is so great

    Wrong mood, you mean “if peer review were so great”: it’s the worst form of scientific publication except for all the others tried so far.

    Look here:

    http://www.realclimate.org/index.php/archives/2005/01/peer-review-a-necessary-but-not-sufficient-condition/

    Comment by Hank Roberts — 6 Apr 2008 @ 9:24 AM

  76. DBrown, “not central” may mean something special to you, but what does it mean for work that can’t be done without the computer as a tool?

    Heinlein once told a student that he and his wife had spent many days calculating in pencil on rolls of butcher paper to get the math right for “Destination Moon” — and the student had asked why he didn’t just use a calculator. But at the time, ‘calculators’ didn’t exist.

    From reading Dr. Weart’s history, computers made climate modeling possible — there wasn’t ever enough time to do the math otherwise.

    Comment by Hank Roberts — 6 Apr 2008 @ 9:27 AM

  77. Re: 50 Myles Allen

    It was very thoughtful of Fiona Fox to provide the journalists who attended the press conference with her own recollections of what had happened more than a year previously when asking for theirs.

    Comment by TonyN — 6 Apr 2008 @ 9:48 AM

  78. The only difference between computer models and back-of-the-envelope (or 60 pages of equations) models, is that in the former case a computer does the arithmetic. Doing the arithmetic without a computer takes so long, and is so prone to error, that computer modelling has become *central* (and essential) to many fields of science.

    If you really want to reject the importance of computer models, you might as well claim that arithmetic (or mathematics in general) isn’t central to modern science.
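
    tamino’s point can be made concrete. Below is a minimal sketch (invented for illustration, not any published model) of a zero-dimensional energy-balance model: every update step is arithmetic you could do in pencil, and the computer’s only contribution is repeating it a few thousand times. All parameter values are assumptions chosen to give textbook-like numbers.

```python
# Toy zero-dimensional energy-balance model. Each step is simple
# arithmetic; the computer's only role is repeating it many times.
# All numbers below are illustrative assumptions, not from any
# published model.

def equilibrium_warming(forcing_wm2, feedback_wm2_per_k=1.25,
                        heat_capacity_j_m2_k=4.2e8,
                        dt_seconds=3.15e7, n_steps=5000):
    """Forward-Euler integration of C * dT/dt = F - lambda * T."""
    temp = 0.0
    for _ in range(n_steps):
        temp += dt_seconds * (
            forcing_wm2 - feedback_wm2_per_k * temp
        ) / heat_capacity_j_m2_k
    return temp

# With these assumed numbers, doubled-CO2 forcing (~3.7 W/m^2)
# relaxes to its analytic equilibrium F / lambda = 2.96 K.
print(round(equilibrium_warming(3.7), 2))
```

    The same recursion on butcher paper would take days per scenario, which is the whole point: the model is the equation, and the computer just does the arithmetic.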

    Comment by tamino — 6 Apr 2008 @ 9:52 AM

  79. DBrown, we are talking at cross purposes because you are making a distinction (between modeling and computer modeling) without a difference. Many of the most active and exciting fields in physics simply cannot be done without computer modeling, and this includes astrophysics, planetary physics, geophysics, particle physics… and yes, climate physics. What I fail to understand is how you can then maintain that computers are not central to these fields. Even in fields where experiments can still be done on desktops, the error analysis must be done by computer (e.g. Monte Carlo methods), and yes, this is modeling. Hell, computers are even becoming central to mathematics, as in the proof of the four-color theorem some years ago.
    If in fact you are a scientist, I can only assume that you have been so deeply immersed in your own research that you haven’t noticed the passage of the past 40 years. Time to come up for air. If you are not aware of the importance computers have assumed, it’s time to reacquaint yourself with physics. You’re missing more than half the fun.
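
    A concrete (and entirely hypothetical) illustration of the Monte Carlo error analysis Ray mentions: propagate measurement uncertainties through a formula by sampling, something trivial on a computer and hopeless by hand at this sample size. The measured values and error bars below are made up.

```python
# Minimal Monte Carlo error propagation. Hypothetical example:
# estimate gravity from a pendulum, g = 4*pi^2 * L / T^2, given
# length L and period T measured with independent Gaussian errors.
import math
import random

random.seed(0)

L_MEAN, L_SIGMA = 1.000, 0.005   # length in metres (assumed)
T_MEAN, T_SIGMA = 2.007, 0.010   # period in seconds (assumed)

samples = []
for _ in range(100_000):
    length = random.gauss(L_MEAN, L_SIGMA)
    period = random.gauss(T_MEAN, T_SIGMA)
    samples.append(4 * math.pi ** 2 * length / period ** 2)

mean = sum(samples) / len(samples)
std = (sum((g - mean) ** 2 for g in samples) / len(samples)) ** 0.5
print(f"g = {mean:.2f} +/- {std:.2f} m/s^2")
```

    The sampled spread reproduces what the analytic error-propagation formula would give, but the Monte Carlo approach works unchanged for models far too nonlinear for the formula.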

    Comment by Ray Ladbury — 6 Apr 2008 @ 11:47 AM

  80. In the case of Stainforth et al., both the agreement of model predictions AND the thick tail are important results. The fact that the predictions agree by and large supports the contention that the most important contributors to climate are well understood. This is crucial in exposing the lies of the denialists.
    On the other hand, the long tail on the positive side is crucial because the costs of climate change rise nonlinearly with temperature. This means that from a risk management perspective, there is still much to be gained in better understanding climate even as we work to mitigate the effects that are a virtual certainty. It may be a difference in perspective between science and engineering, but they are both crucial viewpoints.
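
    Ray’s risk-management point is easy to illustrate with invented numbers. The sketch below assumes a discrete sensitivity distribution with a small high-end tail and a convex (here cubic) damage function; both are illustrative assumptions, not estimates from the literature.

```python
# Illustration with invented numbers: a low-probability tail of high
# climate sensitivity dominates expected cost when damages grow
# nonlinearly with warming.

# Hypothetical discrete distribution of sensitivity (deg C -> prob):
sensitivity_probs = {2.0: 0.25, 3.0: 0.50, 4.0: 0.15, 7.0: 0.07, 11.0: 0.03}

def damage(delta_t):
    """Assumed convex damage function (arbitrary units): cost ~ dT^3."""
    return delta_t ** 3

expected_cost = sum(p * damage(dt) for dt, p in sensitivity_probs.items())
tail_cost = sum(p * damage(dt)
                for dt, p in sensitivity_probs.items() if dt >= 7.0)

print(f"tail share of expected cost: {tail_cost / expected_cost:.0%}")
```

    With these numbers the tail above 7 °C carries only 10% of the probability but roughly 70% of the expected cost, which is the sense in which fat tails can dominate the risk.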

    Comment by Ray Ladbury — 6 Apr 2008 @ 12:14 PM

  81. Tony,

    Yes, if we’d known this was turning into some kind of forensic examination, it would have been better for her not to have written the e-mail like that (which is why I included it along with the responses). But all she knew was that a concern had been raised: we had no idea Richard Vadon was going to go to such lengths to pin the blame for the headlines on us.

    Anyway, the only relevant point here is that Richard Vadon clearly thought (and presumably still thinks, since he hasn’t revised the web post on their programme) that the percentage of models we found with different sensitivities somehow told us something about the real world. It didn’t (because of the way parameters were sampled etc. etc.), and no scientist has ever suggested otherwise in the peer-reviewed literature. The only relevant finding was the fact that the high-sensitivity models were not significantly less realistic than the normal-sensitivity models, together with the fact that there were enough of them to rule out a pure fluke (20% greater than 7 degrees is hardly a tiny percentage). He was starting out from a blogosphere myth, and working out a way to stop such myths propagating in the first place is what this debate should actually be about.

    Myles

    [Response: Myles, It is not a 'blogosphere myth' that the most likely value for the climate sensitivity is around 3 deg C - you, me and the IPCC all agree with that. The CPDN results did not change that (and haven't in subsequent papers either). You are conflating a minor technical misinterpretation of the histogram of CPDN results with a much bigger issue of where the CPDN results fall in the wider context. Most of the erroneous headlines were based on an interpretation of the Stainforth paper as implying that sensitivities greater than 7 (and up to 11 deg C) were now much more likely and that the predictions of climate change in the next 50 to one hundred years needed to be dramatically revised upwards. That is not the case since, as you state above, the histogram of CPDN results tells us nothing about the real world. What Vadon and Cox saw was based on this much more obvious disconnect, not one line in our original post (which after all came after all the headlines). I fundamentally disagree that the problem was our post or some ill-defined 'myth' that you imply we are propagating. The problem was one of insufficient context. The solution is what you found at the press conference - if the time is taken to explain things properly and make sure that people understand, then you get a better outcome (most of the time). Our efforts are therefore better directed at continually trying to improve the background context that journalists have, and not trying to find convenient scapegoats in the 'blogosphere'.

    As I stated in the post above, your NG piece did end on an interesting point, but continually dragging this conversation back to an inappropriate single example is diluting it - because we end up arguing about the specific and not the general. We will probably need to agree to differ on how influential our post was compared to the headlines in multiple mass-media outlets, but I don't think I am alone in thinking your fire is aimed at the wrong target. - gavin]

    Comment by Myles Allen — 6 Apr 2008 @ 12:29 PM

  82. arXiv.org (now hosted by Cornell Univ. Library) seems to occupy an interesting place in the science communications continuum. Does anyone know how many of the articles posted there find their way into peer-reviewed journals or does posting there make that impossible?

    Comment by Bill S — 6 Apr 2008 @ 12:41 PM

  83. quote When blogs strike out on their own and try to do “original” work there is going to be a real problem because blogs are fundamentally appealing to and often written by amateurs who don’t have the experience to apply what knowledge they have. unquote

    Don’t take them seriously. They’re just winding you up.

    JF

    Comment by Julian Flood — 6 Apr 2008 @ 2:08 PM

  84. #37 Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you.

    Would you say that 2nd grader had “tacit knowledge” of math?

    Comment by BlogReader — 6 Apr 2008 @ 3:42 PM

  85. Bill S (82) — At least one peer-reviewed journal requires submission by, in part, posting the ms on arXiv.

    Comment by David B. Benson — 6 Apr 2008 @ 5:23 PM

  86. Hi Gavin,

    Sorry, I didn’t mean to imply (didn’t think I implied) it was a blogosphere myth that the most likely value of climate sensitivity was 3 degrees. You’re right that’s the consensus for the most likely value. The myth I was referring to was the idea that the cluster around 3 degrees in the Stainforth et al results somehow gave support to this value, which Richard seems to be convinced was our “real” result (and in which case his programme would have made complete sense).

    But you’re right, it’s just an example of what can go wrong, and there is no point in getting lost in the details of who thought what when and why (in self-defence, I only got dragged back into this one because Richard popped up).

    I’m not sure I agree with you that Stainforth et al told us nothing about sensitivities greater than 7, given that no-one had reported GCMs behaving like that before so these fat tails could, until then, have been dismissed as a simple-modelling artifact. But that would bring us dangerously close to discussing climate sensitivity, which is off-topic.

    I’m afraid I’ll have to drop out of blogging for a few days: it’s been an interesting weekend, which has left me with a much better impression of blogs than I had before, not least because of your very measured moderation (particularly impressive in view of my article — someone told me you had to be provocative to be noticed in the blogosphere). Thanks, and enjoy the rest of the discussion. Let me know what you all decide to do (if anything).

    Regards,

    Myles

    Comment by Myles Allen — 6 Apr 2008 @ 5:30 PM

  87. “#37 Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you.

    Would you say that 2nd grader had “tacit knowledge” of math?”

    If you had read the very next line you would know. Here is the rest of that paragraph.

    “Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you. But how easy was it to apply that theory the first time out? How much worse would it have been if they didn’t tell you explicitly which piece of theory you needed to use to solve the first few questions?”

    Comment by L Miller — 6 Apr 2008 @ 6:22 PM

  88. arXiv is an interesting mix. People in my fields (me too) use it as a placeholder for important (we think) results that later appear in the peer-reviewed literature. In that way it functions as a preprint server (preprints were the samizdat that circulated by post or at conferences in the pre-web world). On the other hand, in extremely rapidly moving areas of theoretical physics, it has become the primary means of exchange; and, last but not least, some very curious stuff has appeared there. If the curious stuff starts to make noise, often a comment is inserted as an arXiv manuscript, for example, what Arthur Smith did about the Gerlich and Tscheuschner paper. So in that respect arXiv is self-correcting, but, as with much of the peer-reviewed literature, there is a lot of stuff that no one ever looks at, deservedly so. We might also point out that arXiv is not designed for the lay reader.

    Comment by Eli Rabett — 6 Apr 2008 @ 10:03 PM

  89. Re 79:
    Ray, I AGREE w/ you. However, pls control your rhetoric; in astrophysics and particle physics (where I “live”), computer modeling is NOT critical (except in the experimental aspects), nor even all that common.

    Re 82:
    Bill, it actually depends on the area under consideration. Nearly all astrophysics entries are published. Elementary particle physics papers are published, mostly as an afterthought in order to satisfy tenure committees and so on. It is generally accepted that submission to the arXiv IS the primary means of communication. And so it goes… I’ll see if I can come up w/ some numbers (the arXiv has utilities for just such a thing).

    Comment by Costanza — 7 Apr 2008 @ 12:01 AM

  90. You raised an important point. An editor’s decision is usually influenced by the external reviewers (two or more). It would be fairer if there were a blog where reviewers’ comments could be posted for public comment before the final decision is made, based on how the authors responded to the comments.

    Comment by Surya — 7 Apr 2008 @ 1:01 AM

  91. This thread is utterly fascinating, and Gavin appears to have given a good account of himself here (especially when talking about the nature of science, and to Myles Allen, who seems to be a well-balanced scientist at heart).

    I personally thought that science was the study of cause and effect through the use of models. As Einstein once said, you should account for all of the facts as simply as possible. Doesn’t greenhouse gas theory do just that?

    If it were not for RealClimate I would have been lost, but now I count myself as having reasonable earth-science knowledge in regard to climate change (although some of the terms here still get me; doh). Indeed, in the UK newspaper the Sunday Telegraph there is an article (yesterday) with Lord Lawson on the age of unreason, in which he argues that global warming has stopped since 1988. I immediately knew this was an erroneous assumption, remembering the recent article here on this very subject (the 8-year trend bars), which concerns weather rather than climate because such timelines are not long enough. However, I believe that some scientists have responded to these arguments incorrectly, as the article also states that global warming will kick in again come 2009. As far as I know it has not gone away!

    Apparently, according to Lord Lawson, we are all being overly religious and zealous in our claims for future AGW. Not likely, say I.

    Comment by pete best — 7 Apr 2008 @ 5:25 AM

  92. Jenne posts:

    [[Scientists should embrace the open scientific debate, and anyone who challenges that should be made very, very clear that without open debate, there simply is no science, no matter how much one is in favor of or opposes to particular people, statements and actions.]]

    What do you mean by “open debate?” Allowing unqualified people to stick their two cents in? They can do that already. What they can’t do is have their ignorant ramblings accepted by scientists. Like it or not, all opinions on a subject are not equally valid, and the opinion of someone familiar with the subject always outweighs the opinion of someone who has never studied it.

    Comment by Barton Paul Levenson — 7 Apr 2008 @ 6:48 AM

  93. It’s a pity that Myles Allen has had to drop out.

    I would have liked to ask him why he thinks that testimonies obtained by Fiona Fox (comment 50) from journalists who attended his 2005 press briefing are reliable evidence of what happened when she had so obviously jogged their memories. It would also be interesting to see the whole letter, rather than just a single paragraph.

    Richard Black’s coverage for the BBC can be found here:
    http://news.bbc.co.uk/1/hi/sci/tech/4210629.stm

    An interview with Myles Allen on the BBC’s Today programme on 27th Jan 2005 can be found here:
    http://www.bbc.co.uk/radio4/today/listenagain/ram/today1_climate_20050127.ram

    A transcript from Simon Cox and Richard Vadon’s BBC Radio4 programme ‘Overselling Climate Change’, including the interview with Myles Allen, can be found here:
    http://ccgi.newbery1.plus.com/blog/?p=70

    Comment by TonyN — 7 Apr 2008 @ 8:11 AM

  94. Costanza, I’m curious how one would develop a theoretical model of a supernova w/o computer modeling. Or do orbital dynamics calculations? Or extract signal from noise for extra-solar planets?
    In particle physics, how would one establish error bars w/o Monte Carlo simulations? How do you know your acceptances? In the dark and distant past, when I was a grad student in particle physics, lattice gauge theory was central to understanding quantum chromodynamics.
    Look, I agree that computer modeling doesn’t give you “the answers”. Rather, as with all models, it should be used to obtain insight into the central mechanisms of the phenomena under study. And what I object to is the distinction being drawn between computer modeling and modeling of any other type. The tools used to model do not matter. What matters is the insight gained.

    Comment by Ray Ladbury — 7 Apr 2008 @ 9:48 AM

  95. Here’s my memory. I remember reading about the Climate Prediction project both before and after the results. I distinctly remember a range being mentioned, from some low number up to 11C, and I distinctly remember that the 11C was less likely. And, of course, I was more focused on the high end.

    In any case people living up in the cold north are not impressed with 3C warming, and would not be very impressed with 11C warming. To them it’s very tiny (as the temp fluctuates even 20C within 24 hours sometimes). Most people just don’t know what 3 or 11C means.

    Which brings me to accusations of alarmism. There is just no way anyone can say anything that would count as alarmism re GW. We’ve increased our GHG emissions here in the U.S., I believe by 20% since 1990. I guess the logic goes, “if I don’t perceive I’m suffering from it right now, then it’s not a problem.”

    When and if real alarmist talk starts happening, then we’d see it in the results — people lowering their GHG emissions.

    Comment by Lynn Vincentnathan — 7 Apr 2008 @ 9:48 AM

  96. Re 94:
    Ray,
    Point taken on the astrophysics. I will, however, split a supersymmetric hair. The Monte Carlo analysis and acceptances calculations are in the realm of experimental particle physics, which I excluded, and lattice work is a fairly small corner (theory and phenomenology are where the action is…look at the publication statistics). But again, you and I are in agreement on this issue. I simply think there’s too much rhetoric and hyperbole around this place.

    Comment by Costanza — 7 Apr 2008 @ 12:50 PM

  97. Costanza, You have to understand that one of the constant refrains coming from the denialosphere is that “climate change needn’t be taken seriously because the only support for it comes from computer models and blah, blah blah…” That this is false is easily demonstrable given the daily increasing evidence that climate change is happening before our eyes. That it also denigrates and ignores the critical importance of modeling (computer or otherwise) in science today is inexcusably ignorant. As such, one way to ensure a reception involving both barrels is to imply that computer modeling is unreliable or unimportant. Unfortunately, on-line we tend to be unaware of the impression our rhetoric may be making–and given the prevalence of anti-science types this issue attracts, it is not uncommon to shoot first and clarify later.

    Comment by Ray Ladbury — 7 Apr 2008 @ 1:25 PM

  98. Your quote
    “we like to think that RC plays a positive role in highlighting some of the more important and exciting results that appear.”

    What are your most outstanding 5 examples, as you see them?

    Would you include your opening paragraph to this thread?

    Comment by Geoff Sherrington — 7 Apr 2008 @ 7:58 PM

  99. Re # 10 Eli Rabett

    Amateurs dabbling in science.

    You are rather mixed up. The amateurs who write on this blog are good for an occasional laugh, as are some of the pros.

    Why don’t you include Medicine and Dentistry as sciences where Joe Citizen can futz around and make significant contributions through “tacit understanding” of the poor quality of the qualified work?

    Comment by Geoff Sherrington — 7 Apr 2008 @ 8:10 PM

  100. re: models and such, for people who don’t do them, or for people who do one kind and want to understand others [because they can be very different].

    Albeit a little old now, I recommend “Supercomputing and the Transformation of Science” by William J. Kaufmann III & Larry Smarr, W. H. Freeman, 1993.

    It’s a beautiful book, and you can get one from Amazon for about the price of shipping.

    Comment by John Mashey — 7 Apr 2008 @ 11:56 PM

  101. Does 11C have any more merit in relation to the new CO2 article just posted? If Hansen is projecting an increased climate sensitivity of around 6C, up from the previous central estimate of 3C, does 11C then become more likely, or have I got this wrong?

    In addition, does it matter much that AGW is happening 30x faster than previous episodes did?

    Comment by pete best — 8 Apr 2008 @ 6:40 AM

  102. Geoff Sherrington, I think you may have misinterpreted Eli’s post. He was not suggesting that amateurs contribute significantly to the state of knowledge in complex fields such as climate change, but rather that their participation where they can allows them the experience of doing science.
    Moreover, I think it is beyond doubt that amateurs have contributed significantly to the fields he mentioned–or do you consider the Shoemaker-Levy collision with Jupiter to be trivial?

    Comment by Ray Ladbury — 8 Apr 2008 @ 7:58 AM

  103. pete best (101) wrote “…does it matter much that AGW is happening 30x faster than previous episodes did?” Yes. It is difficult for organisms to evolve quickly enough to survive in the new conditions. So The Sixth Mass Extinction seems to be underway…

    Comment by David B. Benson — 8 Apr 2008 @ 1:50 PM

  104. Ray, the models do offer “insights” and many times have helped us make sense of the data and new insights have sent us back to the data to find confirmation that we weren’t aware of before.

    However, there are issues about when and if the models transition from providing “insight” into the plausibility of hypotheses to having the skill and accuracy to resolve a quantitative issue such as attribution of the 20th century warming.

    The “peer review” of climate models does not seem up to this task. Models have often been used in several peer-reviewed articles touting bold results about the climate, before they have been subjected to diagnostic studies and found wanting in various ways. What part of the early publications was a real scientific result? The authors certainly don’t have to retract the model results, because those were what they were and, barring mistake or fraud, model results are model results. But the bold extrapolations from model to climate are not retracted, involved embarrassing assumptions in reasoning that should not have passed peer review, and unfortunately are being repeated even today.

    For example, should a model based leap in reasoning such as “The observed signal falls outside the range expected from natural variability with high confidence (P

    Comment by Martin Lewitt — 8 Apr 2008 @ 2:28 PM

  105. Martin, the left angle bracket symbol is mistaken by the blog software for the beginning of an HTML tag, and the stupid-paperclip feature fills in what it imagines you wanted to type. “View Source” and see what it did to your posting. There’s a trick to avoid this.
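
    For what it’s worth, the usual trick on anything that interprets HTML (an assumption about this particular blog software, but standard practice on the web) is to type the entity &lt; instead of a literal <. Python’s standard library shows the mapping:

```python
# Standard HTML escaping: a literal "<" in submitted text can be
# swallowed as the start of a tag, so it is written as "&lt;".
import html

raw = "P < 0.01"
escaped = html.escape(raw)      # "<" becomes the entity "&lt;"
print(escaped)
print(html.unescape(escaped))   # round-trips back to the original
```
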

    Comment by Hank Roberts — 8 Apr 2008 @ 2:49 PM

  106. Ray, the models do offer “insights” and many times have helped us make sense of the data and new insights have sent us back to the data to find confirmation that we weren’t aware of before.

    However, there are issues about when and if the models transition from providing “insight” into the plausibility of hypotheses to having the skill and accuracy to resolve a quantitative issue such as attribution of the 20th century warming.

    The “peer review” of climate models does not seem up to this task. Models have often been used in several peer-reviewed articles touting bold results about the climate, before they have been subjected to diagnostic studies and found wanting in various ways. What part of the early publications was a real scientific result? The authors certainly don’t have to retract the model results, because those were what they were and, barring mistake or fraud, model results are model results. But the bold extrapolations from model to climate are not retracted, involved embarrassing assumptions in reasoning that should not have passed peer review, and unfortunately are being repeated even today.

    For example, should a model based leap in reasoning such as “The observed signal falls outside the range expected from natural variability with high confidence (P less than 0.01). … We conclude that natural internal climate variability alone cannot explain either the observed or simulated changes.” pass peer review muster, when there are peer review results showing that the model is unable to reproduce the signature of the solar cycle found in the observations and has other published issues in the diagnostic studies? I wouldn’t let such a leap get past conference reviews such as I have participated in, yet even prestigious journals seem to have a lower standard of review for climate science.

    Models and climate are said to “match” each other or to be “realistic” in peer reviewed papers, usually without any rigorous analysis or explanation of what is being called a match, and whether that match is sufficient to prove the skill required for the bold results claimed.

    I submit that assessing quantitative climate model results and claims is beyond the scope of normal journal peer review, and must await the diagnostic studies that often occur years later. Yes, it is professionally limiting to have to wait years after you have completed your work to be able to publish original research results that might be argued to be relevant to the climate. By then you will already be deep into working on version n+1, which hopefully has higher resolution and fewer and better parameterizations. If it is any consolation, I’ve heard that other fields, such as fusion research, have it worse.

    Comment by Martin Lewitt — 8 Apr 2008 @ 4:27 PM

  107. Martin, the attribution of the current warming epoch to greenhouse gases is in no way dependent on computer models. The fact that warming is occurring and that the warming is exceptional rests purely on empirical observations, historical data and analyses of various proxies. Now you could argue that some of these could be flawed; indeed, some are. However, when they all point to the same conclusion, that is what provides high confidence. The level of greenhouse forcing is determined from a variety of sources, and this says we should have warming, whether you put it into a computer model or not.
    The models are not a fit to temperature data, but rather a best fit of forcers to a variety of independent data, with the goodness of fit to temperature data (among other things) serving as a validation. And overall, the models fare very well in these validations.
    I agree that there are many residual uncertainties in climate models; greenhouse forcing just doesn’t happen to be one of them.

    Comment by Ray Ladbury — 8 Apr 2008 @ 4:29 PM

  108. Ray, Agreed. The GHG forcings are well established as exceptional. The uncertainties are in the feedbacks, the resultant signatures and in the solar forcing and coupling. The current plateau in solar forcing is arguably just as exceptional (and questionable), as the recent climate is. Unfortunately, anthropogenic greenhouse forcing alone only gets you less than a third of the way to the recent warming. We need modeling of the feedbacks, etc. for a reason, I just don’t see evidence that they are ready for this particular task yet.

    Comment by Martin Lewitt — 8 Apr 2008 @ 5:10 PM

  109. Gavin: A sensitivity analysis of models in my line of work would be to take all the parameters to their error bar extremes and find out what is the total potential locus of results. I’d say that aerosols at their extreme values would always produce cooling curves in some scenarios. This is not climate sensitivity but model parameter sensitivity. If this kind of analysis is never done then I’d be amazed.

    [Response: You confuse testing the model against climatology (which has a fixed (but uncertain) aerosol distribution) with a transient experiment with changing aerosol distributions. There are no limits whatsoever on what you could have for a theoretical transient - so there is not much use in trying anything other than good estimates of the observed trends. And for dealing with the climatology you aren't looking at the transient signal at all. - gavin]

    Ray Ladbury:”both the agreement of model predictions AND the thick tail are important results. The fact that predictions agree by and large supports the contention that the most important contributors to climate are well understood.”
    No. The only proper validation test for model runs is how well they agree with reality. If we understood climate well then of course the model results would agree with each other, but the converse is very obviously untrue. I have seen this argument used before (e.g. Held in relation to droughts) and I still can’t believe any scientist thinks in this backwards fashion.

    Comment by JamesG — 9 Apr 2008 @ 9:11 AM

  110. James G., I think you misunderstand my point: the agreement of the models says that there is agreement on the most important forcers; it suggests consensus.
    James G and Martin,
    The fact that the models agree as well as they do with overall trends in the very noisy climate data suggests that on the whole they are correct. The uncertainty comes in for the less well constrained aspects, and the take-away message here is that the uncertainty is overwhelmingly on the positive side. Since that is also where the highest costs are, those fat tails could wind up dominating the risk.

    Comment by Ray Ladbury — 9 Apr 2008 @ 10:03 AM

  111. > model runs … agree with reality
    http://www.inscc.utah.edu/~reichler/publications/papers/Reichler_07_BAMS_CMIP.pdf

    Comment by Hank Roberts — 9 Apr 2008 @ 10:46 AM

  112. I worked on a government research project and the results were not released. Neither I nor my project lead, JD Reynvaan, are allowed (under the NDA we signed) to make public any of the results independently, so if the agency we were working with doesn’t release them itself, the data will not be publicized. I wonder how much research data gets suppressed under NDA terms?

    Comment by Lydia Montoya — 9 Apr 2008 @ 6:35 PM

  113. Speaking of the denialosphere, sound editorial policies etc., The Australian on 9 April published a page 1 op ed on why AGW is rubbish by an eminent (wait for it) political scientist.

    The only thing memorable about this piece was he managed to assemble a large fraction of the current canards in one place, so I thought it worth rebutting on my blog.

    I’d appreciate it if anyone here with the time would mosey over and comment (corrections and clarifications especially welcome).

    Comment by Philip Machanick — 9 Apr 2008 @ 7:08 PM

  114. Lydia, in aerospace research we are not allowed NOT TO PUBLISH our results, although if a particular vendor is involved we may call them “Vendor A”. I’m not sure how suppressing results is legal if the research is government funded.

    Comment by Ray Ladbury — 9 Apr 2008 @ 8:18 PM

  115. > I wonder how much research data gets suppressed utilizing NDA terms?

    A question one’s Senator or Congressperson might well want to ask the Government Accountability Office to answer.

    Comment by Hank Roberts — 9 Apr 2008 @ 9:47 PM

  116. Dear Myles,

    thanks for coming over here to discuss this issue with us.
    In my view our contentious sentence “we feel that the most important result … is that by far most of the models had climate sensitivities between 2°C and 4°C” is not wrong; it is certainly not an indication of a lack of fact-checking at RealClimate, and it is not a criticism of your paper. Rather it is a valid interpretation of your results. Other scientists may interpret your or my data differently – this is sometimes annoying but part of healthy scientific discourse. Even today I still think that this is in fact your most important result. Let me try to explain.

    You argue that you started from a model with about 3 °C climate sensitivity, and then randomly perturbed parameters in all directions – so of course the peak of your distribution is near 3 °C. I agree with that. As you know we also work with model ensembles, and what I found the most interesting question both with our own and with your ensembles is: how broad is this peak? I.e., how quickly do you get away from those 3 °C when you change the parameters? How strongly do you need to tweak a model to get a really different climate sensitivity? Note that we did not write that the most interesting result is that you get a peak near 3 °C – we wrote that the most interesting result is that most models remain inside the range 2-4 °C when you perturb the parameters. I still find this small spread far more interesting than the outliers.

    Concerning the outliers, my prior expectation (and maybe this is where we differ) is that a model version with 11 °C climate sensitivity is very likely just an unrealistic model, which would fail a number of quality checks – we discuss this in detail in our original post. This is why per se I do not find those tails very interesting – yes, by tweaking parameters enough I can get a model to behave very strangely, but so what? I certainly would not go to the mass media suggesting that 11 °C could be a realistic climate sensitivity (not even as an extreme case of a wide range), before I had performed some pretty rigorous testing on these high-sensitivity models. (Now this is a criticism – not of your paper but of your media outreach.) If I had a climate model with 11 °C climate sensitivity that had passed the kind of validation tests discussed in the IPCC report – e.g., which gives a realistic present-day climate, including seasonal cycle, a realistic response to perturbations like the Pinatubo eruption, a realistic 20th-century climate evolution and a realistic Ice Age climate – well, then I would call in a press conference. But I don’t think anyone has ever produced such a model.

    Thanks also to all the other contributors to this discussion – I think this is excellent, and a good advertisement for science blogging.

    Comment by stefan — 10 Apr 2008 @ 1:07 AM

  117. Philip Machanick posts:

    I’d appreciate it if anyone here with the time would mosey over and comment (corrections and clarifications especially welcome).

    You got it.

    Comment by Barton Paul Levenson — 10 Apr 2008 @ 8:31 AM

  118. If I may semi-digress into the issue of skeptics: one thing to pin them down on, given their rejection of the clarity of “global warming”: ask, “OK, so you’re not sure that global warming is definitely occurring, or that if so it is mostly man-made. But do you at least accept the action and presence of greenhouse gases? Water vapor is one [some skeptics like to talk of how the effects of water vapor swamp those of CO2]; do you also accept that CO2 is a greenhouse gas? And if so, shouldn’t we at least be concerned about a long-term rise in its concentration, even if we can’t agree on just what the outcome has been and will be?” That would take away a lot of steam from their evasions, and force some acknowledgment of the causal stresses in any case.

    BTW, with possible solar cooling etc. we really should IMHO take a closer look at the interaction of all stimuli and not focus narrowly on the CO2 issue.

    tyrannogenius

    Comment by Neil B. — 10 Apr 2008 @ 12:03 PM

  119. Some of you seem oblivious to the circular reasoning:
    . If a climate sensitivity of 3 degrees is an input then it isn’t too surprising that 2-4 degrees should be your dominant output range.
    . If you make a best guess on the highly uncertain aerosol parameter then you are obviously selecting it on the basis of what it takes to match real world temp trends. It is then guaranteed that the output temperature trends will match.
    . If models all come from similar roots with similar parameters, it’s not surprising they’d have similar results. In fact even if they had different equations and/or different inputs but still achieved similar outputs we still wouldn’t be confident which inputs were actually correct. So model run matching is virtually meaningless.
    . And how do you know a “good estimate” in an uncertain parameter? Is it by a show of hands? I know that sparsity of data makes such estimates necessary but we still need to match predictions with reality to be confident in the models.

    Hank: Now point out where the models don’t agree with the observations and look for the obvious shortcomings in that paper. I’m happy though that coupled models are improving – the fewer the inputs and the greater the internal dependencies, the more we can trust them.

    Comment by JamesG — 11 Apr 2008 @ 5:48 AM

  120. JamesG, your missive makes clear that you are utterly ignorant of how climate modeling is done. First, parameters are not “fit” to the temperature data. Rather they are independently constrained by other datasets. The process of constraint and datasets vary from one modeling team to the next, so the models really are independent tests. The fact that the models agree as well as they do demonstrates that the conclusions of these independent analyses converge more or less on the forcings. The validation of the models is the degree to which they match trends (not values) in the temperature data and other tests (e.g. Pinatubo response).
    Until you understand how the modeling is done, it’s hard to take your criticisms seriously.

    Comment by Ray Ladbury — 11 Apr 2008 @ 7:17 AM

  121. A short reply to Neil (118): I accept the action (physics) and presence of “greenhouse gases”. I also accept the greenhouse-gas capability of CO2 and H2O (though maybe not in the exact proportion professed). I think we ought to be concerned (or at least very curious) over the potential increase in global CO2 and curious about its effect on global climate. (I’m not using “curious” as a throw-away or a pejorative.) I’m not clear at all how this “takes the steam” out of my skepticism.

    Comment by Rod B — 11 Apr 2008 @ 3:33 PM

  122. Rod B: I suppose that by “curious” you mean that just how much effect CO2 has had/would have is not patently clear. But its stimulative effect surely calls out for some sort of mitigation strategy (albeit in “reasonable” moderation), does it not?

    PS Feel free to explain more of your skepticism, something I surely tolerate in modest doses.

    Comment by Neil B. — 11 Apr 2008 @ 8:23 PM

  123. Neil (re 122, and 121, 118): That’s a close enough definition of “curious” in this context to understand my point. But, no, I don’t think the mathematical precision and uncertainty in some of the processes yet warrant mitigation. With one philosophical exception that hangs in the back of my mind, and that is the “inevitable doomsday” scenario — holding off any mitigation until (and assuming) my skepticism is scientifically answered, only to find it’s then too late to make any difference in the progression to extinction. Plus a small pragmatic exception for those “mitigation” activities whose costs are insignificant or that produce other desirable benefits. For instance, putting some effort and resources into reducing our use of or dependency on fossil fuels, like alternate energy sources, is helpful in its own right (to a degree), and if it also buys a little insurance against AGW, that seems a good thing.

    One example of my maybe three areas of skepticism relates to the mathematical and physical relation between concentration and forcing (the old ln of the fifth power of the concentration ratios), the degree of radiation absorption broadening as concentration increases, and in the general area of molecular absorption, transfer, redistribution and/or re-radiation of energy and/or temperature. However, my current mode is an onus on me to research the science a bit more before I resurrect it fully in RC. My areas of skepticism have been discussed quite a bit on RC in the past. It became evident I had to improve the science behind my questioning so others didn’t waste their time reciting the same arguments over and over.

    Comment by Rod B — 12 Apr 2008 @ 12:34 PM
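    For reference, the logarithmic relation alluded to above appears to be the standard simplified fit for CO2 radiative forcing (Myhre et al. 1998); pulling the coefficient inside the logarithm is what gives it the look of a log of a power of the concentration ratio:

$$
\Delta F \;\approx\; 5.35\,\ln\!\frac{C}{C_0}\ \mathrm{W\,m^{-2}} \;=\; \ln\!\left[\left(\frac{C}{C_0}\right)^{5.35}\right]\ \mathrm{W\,m^{-2}}
$$

    so a doubling of CO2 gives roughly 5.35 ln 2 ≈ 3.7 W m⁻².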

  124. Be that as it may, it sure would be nice if there were a proper peer-reviewed debunking of Douglass et al. 2007 and Spencer 2007.

    Kind of like how William Connolley finally went out and published a peer-reviewed paper squashing the whole global-cooling myth.
    http://www.terradaily.com/reports/Study_Global_cooling_a_1970s_myth_999.html

    Removing all doubt is very important for public perception of climate change, since skeptics will leverage any amount of it.

    Comment by GreyFlcn — 13 Apr 2008 @ 11:25 AM

  125. Re: Ray Ladbury #120

    I would think that the model results should converge on the climate, not the forcings, so I am not quite sure what you are saying.

    The benefit of considering the model efforts “independent” disappears if, despite this independence, they are documented to have correlated errors; the AR4 models have been shown to share a positive surface-albedo bias, a delay or deficit in Arctic ice-cap melting, and a missing or weak solar-cycle signature, all relative to observations of the actual climate.

    The real question for the peer reviewers is what level of agreement of the models with the climate, or with each other, should be required before the claims that authors want to make in peer-reviewed papers are allowed to pass.

    The diagnostic studies show that the models have significant disagreement with the climate that is arguably relevant to the relative attribution to solar activity and anthropogenic GHGs, and thus to any validation of the models for projection of GHG scenarios. A positive albedo bias differentially impacts the solar response, and a failure to reproduce the solar-cycle signature present in the observations hurts the models’ credibility in attributing warming to GHGs rather than solar.

    Comment by Martin Lewitt — 14 Apr 2008 @ 4:17 AM

  126. Re # 103 Ray Ladbury

    What is really misunderstood is that model comparisons, in spite of a lot of swapping of code and data, are all over the place like a mad woman’s breakfast. It is ludicrous to say that good model agreement endorses the modelling process, because there is no good model agreement.

    Oh, I forgot. There is one comparison in Douglass et al., Int. J. Climatol. (2007), “Comparison of tropical temperature trends with model predictions”, which has one model agreeing in places with the mean of 22 models. The units are millidegrees C per decade, and they calculate delta T for 13 altitudes. How’s this for a match?
    Good model in the first column, average of 22 models next, then SD in the last column.

    163 156 64 (SURFACE)
    213 198 443
    174 166 72
    181 177 70
    199 191 82
    204 203 96
    226 227 109
    271 272 131
    307 314 148
    299 320 149
    255 307 154
    166 268 160
    53 78 124 (100 hPa)

    Must be a good model if it agrees to one part in a thousand degrees C/decade for 3 different altitudes. Now how do you suppose that might have happened? Is this reality or is it Dreamtime? How come we use all those significant figures, when the overall error is perhaps of the order of 5,000 millidegrees?

    Are these models good when the SD several times exceeds the value? (Why you guys don’t work in proper units like degrees Kelvin eludes this scientist).

    Sorry Ray, It’s not me who misunderstands. I understand only too well.

    [Response: Not really. It's easy to find metrics where the SD among models is larger than the expected signal. Just make it regionally restrictive enough and use a short time period. I personally would not use such cases to argue for model robustness, and so picking a clearly coincidental match to argue against robustness is just odd. - gavin]

    Comment by Geoff Sherrington — 14 Apr 2008 @ 6:05 AM

  127. Come off it Gavin, you are supposed to be a math person. What are the odds of 3 consecutive numbers agreeing so closely? 10^n, will you provide the n?

    As a person used to looking at numbers, does this not arouse even a teeny bit of innate suspicion?

    You must have looked at spaghetti plots of model comparisons lately. You must agree they are a mess. I suggest it is well past time that modellers held a conference to solidify criteria that must be met in order to justify any further modelling. It must be costing zillions for the tiny increments of progress that come year after year. Maybe the whole premise is impossible given present technology and data. Intractable problems are real and known.

    [Response: Wooo..... numerology! And if you take the digits of the numbers that match you get my telephone number. Must mean something right? 'Zillions'? - I wish. - gavin]

    Comment by Geoff Sherrington — 15 Apr 2008 @ 1:52 AM

  128. Martin Lewitt posts:

    a failure to reproduce the solar cycle signature that is present in the observations hurts the models credibility in attribution to GHGs rather than solar.

    WHAT solar cycle signature? As far as I know nobody has been able to detect a clear effect from the 11- or 22-year cycles, though many people have tried. Every time someone seems to find one, someone else points out a statistical flaw in their method. Read Tamino’s “Open Mind” blog for a thorough discussion of this issue:

    Open Mind

    The empirical fact seems to be that if there is a solar cycle effect on Earth’s temperature — and there probably is — it is trivially small in size.

    Comment by Barton Paul Levenson — 15 Apr 2008 @ 7:22 AM

  129. Re: Barton Paul Levinson #128,

    Here is some of the very recent work that finds a signature of the solar cycle. The temperature effect is about 0.2 degrees C, and the pattern includes polar amplification.

    Tung, K. K., and C. D. Camp (2008), Solar cycle warming at the Earth’s surface in NCEP and ERA-40 data: A linear discriminant analysis, J. Geophys. Res., 113, D05114, doi:10.1029/2007JD009164

    Although the title appears to have been changed in peer review, this appears to be the full text, perhaps not the peer reviewed version however:

    http://www.amath.washington.edu/research/articles/Tung/journals/solar-jgr.pdf

    The pdf at least discusses the methods used.

    Comment by Martin Lewitt — 15 Apr 2008 @ 3:16 PM

  130. Re: “Scientists should embrace the open scientific debate, and anyone who challenges that should be made very, very clear that without open debate, there simply is no science, no matter how much one is in favor of or opposes to particular people, statements and actions.”
    Yes, but open scientific debate should happen among scientists (and it often does). A climate scientist and professor I recently met commented on such wisdom by asking, “What would you do if your kid were sick? Would you go to a doctor, or find 100 random people and take a democratic vote (or hold an open scientific debate) about what they think may be ailing your child?” I love the question because I think it makes obvious that if you want/need a real, scientific answer, you go to an expert/authority, not to an open debate.
    BUT I don’t mean I am against science blogging. Absolutely not. I have become an avid science blogger thanks to RealClimate and know it is a great way of reaching and interacting with the lay public. It is a necessary and very effective tool for educating the lay public about the science. And it’s fun, and RC is the best of the best. But it is not necessarily intended to debate the science… IMHO.

    Comment by Figen Mekik — 15 Apr 2008 @ 6:41 PM

  131. > Tung

    Worth looking up.

    Raypierre wrote, in an inline response here:
    http://www.realclimate.org/?comments_popup=504#comment-77327

    “…the puzzling thing about the C&T results, which will need to be sorted out by a more detailed analysis of the oceanic response in the IPCC AR4 models. I can only explain their result by a combination of high sensitivity and low thermal inertia. But why should the thermal inertia that explains the solar cycle response be so much lower than the thermal inertia needed to explain the seasonal cycle over oceans. Camp and Tung are on to something interesting, but I don’t think it can be said that we understand what is going on yet. …”

    Tamino commented here:
    http://tamino.wordpress.com/2008/04/01/how-not-to-analyze-data-part-3/#comment-16096

    I haven’t found any papers citing them; has anyone? (If so there’s probably a current thread where it’s more likely to see useful discussion)

    Comment by Hank Roberts — 15 Apr 2008 @ 10:48 PM

  132. Martin Lewitt,

    I read the abstract. They don’t account for the Earth’s albedo in their heating calculation. That’s a 31% error right there. If the rest of the paper is that sloppy I don’t think it will get very far.

    Comment by Barton Paul Levenson — 16 Apr 2008 @ 7:01 AM

  133. Re 132. In a quick read of the paper, I did not find albedo either, though a more careful look might reveal that it is there. Hard to see how they would miss something that basic.

    In any case, this paper does not in any way that I can see undermine our current understanding of AGW. The authors analyse the solar cycle as an oscillating, 11-year-cycle forcing that adds to or subtracts from AGW, depending on the part of the cycle one is in.

    They state in the abstract: “From solar min to solar max, the TSI reaching the earth’s surface increases at a rate comparable to the radiative heating due to a 1% per year increase in greenhouse gases, and will probably add, during the next five to six years in the advancing phase of Solar Cycle 24, almost 0.2 °K to the globally-averaged temperature, thus doubling the amount of transient global warming expected from greenhouse warming alone.”

    Comment by Ron Taylor — 16 Apr 2008 @ 8:30 AM

  134. Re: Ron Taylor #133,

    The paper doesn’t undermine the current understanding of AGW; it undermines the evidence for credible quantitative attribution (e.g., claims of “most”) of the recent warming to GHGs. Such claims are largely model-based. Note this result from the Camp and Tung paper:

    “Currently no GCM has succeeded in simulating a solar-cycle response of the observed amplitude near the surface. Clearly a correct simulation of a global-scale warming on decadal time scale is needed before predictions into the future on multi-decadal scale can be accepted with confidence.”

    Of course, confidence in attribution (not just projection) of the recent warming, probably would also be enhanced by models able to reproduce the solar response, since solar is the competing hypothesis for the recent warming. I don’t see models as ready yet to differentiate between the competing hypotheses.

    Based on conference presentations by Tung and Camp, raypierre believes there is another paper that will be more detailed in its diagnostics on the various AR4 models.

    Comment by Martin Lewitt — 16 Apr 2008 @ 11:41 AM

  135. Ray Ladbury
    I last clashed with you when I said that picking a few “good” rural stations from Anthony Watts’s effort should independently verify the GISS US48 graph. You vehemently disagreed, somehow arguing that there could be a cooling signal introduced. I had argued that small amounts of very accurate data are intrinsically better than huge amounts of dross that need adjusting but you argued that all data was valuable, even obviously bad data. Well I was right and you were wrong. But then I was only stating the obvious. There are always certain things that are blindingly obvious in science, and yet there are always frustrating characters like you who fight tooth and nail to avoid seeing them. The many circular arguments in climate science, some of which I highlighted, are indeed obvious too, if you are prepared to step out of your comfort zone of blind faith and start behaving like a scientist.

    Of course the models are tuned to match 20th-century temperature trends, and the latitude in the aerosol parameter “constraint” is what allows a lot of that tuning. It’s as plain as the nose on your face, and even if it weren’t, you could turn to the peer-reviewed studies from aerosol experts (non-skeptics, I stress) who confirm it.

    Your original reasoning was that if the model runs matched then they must be correct, which was by itself laughable in its naiveté. Now you say that if they all match the real trends then they must be correct. Well, no! Because (a) if they are set to match the trends then it would be a pretty bad model that didn’t manage it, so all this exercise accomplishes is to show up the really bad models; and (b) hindcasting is a necessary but not a sufficient condition of modeling, as your teachers should have drilled into you before you were ever allowed to practice science. It is very easy to reach the “correct” result by the wrong method in modeling, which you’d know if you’d done any modeling. Such is the value of knowing the answer beforehand. Prediction is the only proper method of finally validating models, and even that isn’t really sufficient if you (like Hansen) make such a wide range of predictions that it is almost impossible to fall outside it.

    Comment by JamesG — 16 Apr 2008 @ 7:02 PM

  136. Wait, is this what you’re referring to?
    http://www.nature.com/nature/journal/v430/n7001/full/430737a.html

    found via
    http://www.google.com/search?q=climate+models+are+tuned

    Comment by Hank Roberts — 16 Apr 2008 @ 7:09 PM

  137. JamesG, I don’t have the first idea of what you are blathering about. Do you? You seem to be under the misapprehension that the only purpose data can serve is to give you “the answer”. They also serve the purpose of error modeling, etc. Sensitivities are not “tuned” to temperature datasets. They are independently constrained to the extent possible. Aerosols are less constrained than other forcers. However, there are multiple studies that suggest that aerosol forcing is in the right ball park, including those on the Mt. Pinatubo eruption, etc.
    I see you’ve gotten no better at providing supporting documentation for your assertions. As such, I do not feel bound to respond to groundless assertions except to say that it is clear from your posts that you don’t have much experience analyzing data. So, have a good life with your pseudoscience. I’ll stick with physics.

    Comment by Ray Ladbury — 16 Apr 2008 @ 7:49 PM

  138. JamesG said:

    I had argued that small amounts of very accurate data are intrinsically better than huge amounts of dross that need adjusting but you argued that all data was valuable, even obviously bad data. Well I was right and you were wrong.

    Ray Ladbury replied:

    it is clear from your posts that you don’t have much experience analyzing data

    I do have “much experience analyzing data.” It’s my life’s work.

    I’ve often told skeptical astronomers that visual photometry — imprecise estimates made by eye, by amateurs — can easily outperform more precise instrumental photometry, by virtue of sheer numbers. At first they generally refuse to believe it. Then I show them results from visual photometry which detected fine structure which was nowhere to be found in instrumental work. Usually they still refuse to believe it. Then I show that the visual-data results were later confirmed by instrumental measurements — but only when they became numerous enough to do the job.

    Ray is right, JamesG is wrong.

    Comment by tamino — 16 Apr 2008 @ 9:55 PM
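    Tamino’s point that many imprecise measurements can beat a few precise ones is just the 1/√N behavior of the standard error of a mean. A minimal sketch, with made-up noise levels and sample counts (nothing here comes from actual photometry data):

```python
import math
import random

random.seed(0)

TRUE_VALUE = 5.0  # the hypothetical quantity being measured

def mean_estimate(n, noise_sd):
    """Average of n independent, unbiased measurements with given noise."""
    return sum(TRUE_VALUE + random.gauss(0, noise_sd) for _ in range(n)) / n

# Many crude "visual" estimates: standard error = 1.0 / sqrt(10000) = 0.010
visual = mean_estimate(10_000, noise_sd=1.0)
# Few precise "instrumental" ones: standard error = 0.1 / sqrt(10) ~ 0.032
instrumental = mean_estimate(10, noise_sd=0.1)

print(f"visual-ensemble error:       {abs(visual - TRUE_VALUE):.4f}")
print(f"instrumental-ensemble error: {abs(instrumental - TRUE_VALUE):.4f}")
```

    With these numbers the crowd of crude estimates has the smaller standard error, which is the sense in which sheer numbers can outweigh individual precision.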

  139. Re # 137 Ray Ladbury

    By sticking to physics, might you please have the courtesy to give a physics explanation for my # 126, which Gavin tried to deflect with small talk?

    Neat, precise physics, not impressions or armwaving. I’ve provided the overly-precise numbers, might you explain the physics please?

    Comment by Geoff Sherrington — 17 Apr 2008 @ 5:31 AM

  140. JamesG writes:

    Of course the models are tuned to match 20th century temperature trends

    Of course nothing. Global climate models are not in any way, shape or form tuned to 20th century temperature trends. The only climate data that goes into those models is grid data — albedo, elevation, terrain type, normal cloud cover, etc. for each grid square — and things like the composition of the atmosphere and the solar constant. The rest is physics. When a GCM is revised, it’s because someone has found a way to represent the physics better.

    Comment by Barton Paul Levenson — 17 Apr 2008 @ 6:54 AM

  141. Re 134 Martin, it seems to me that you are really grasping at straws. I see nothing in the C&T paper that would justify your statement that “it undermines the evidence for credible quantitative attribution (.e.g. claims of “most”) of the recent warming to GHGs.” Nor do the authors make such a claim.

    Comment by Ron Taylor — 17 Apr 2008 @ 11:51 AM

  142. Re #126 Geoff Sherrington:

    … There is one comparison in Douglass et al., Int. J. Climatol. (2007), “Comparison of tropical temperature trends with model predictions”, which has one model agreeing in places with the mean of 22 models. The units are millidegrees C per decade, and they calculate delta T for 13 altitudes. How’s this for a match?
    Good model in the first column, average of 22 models next, then SD in the last column.
    163 156 64 (SURFACE)
    213 198 443
    174 166 72
    181 177 70
    199 191 82
    204 203 96
    226 227 109
    271 272 131
    307 314 148
    299 320 149
    255 307 154
    166 268 160
    53 78 124 (100 hPa)

    Must be a good model if it agrees to one part in a thousand degrees C/decade for 3 different altitudes. Now how do you suppose that might have happened? Is this reality or is it Dreamtime? How come we use all those significant figures, when the overall error is perhaps of the order of 5,000 millidegrees?

    Are these models good when the SD several times exceeds the value? (Why you guys don’t work in proper units like degrees Kelvin eludes this scientist).

    First off, the paper you are referring to has “issues”. Big ones. See

    http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends#more-509

    BTW when you say that the model results are “all over the place”, you shouldn’t quote Douglass et al., because all they really show is that model runs are all over the place… it’s called weather. Natural, unforced variability, on many different time scales. The fact that models somewhat faithfully display this real-world behaviour shows that they are good, not bad.

    But to get to your question, your first column is not a “good” model, it is model 15, which just happens to be very close to the average of all 22 models. Very close, because it was picked to be close, out of 22 alternatives on offer. And indeed it is, as the table shows: two very similar, smooth curves crossing once. Very close in three points. Duh.

    “All those significant figures” were chosen by Douglass et al., and for good reason if you do numerical manipulations. Agreed, they don’t mean much, but you don’t want to drop precision along the way.

    About the SDs (which contain weather variability on top of true model uncertainty), none of them exceed the value “several times”, or even once — except the second one for the 100 hPa level, and the reason for that is in the paper (hint: it’s based on corrupted values.) And yes, the last one, but that happens to be a small value. (A measurement value of zero is invariably exceeded by its own SD — wonder why?)

    And BTW trends are the same in C and K. And it’s Kelvin, not “degrees Kelvin”, if nit-picking is the game :-)

    Comment by Martin Vermeer — 18 Apr 2008 @ 1:33 AM
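    Martin Vermeer’s selection-effect argument can be illustrated with a toy simulation: generate 22 smooth “model” profiles, pick the one closest to the ensemble mean, and close agreement at several levels follows from the picking, not from any special model skill. All numbers below are invented for illustration and have nothing to do with the actual Douglass et al. data:

```python
import random

random.seed(1)

LEVELS = 13    # number of altitude levels, as in the quoted table
N_MODELS = 22

# Smooth, arbitrary base profile (think millidegrees per decade vs. level).
base = [150 + 15 * k - 1.2 * k * k for k in range(LEVELS)]

def make_model():
    # Each "model" differs from the base by a smooth perturbation:
    # a constant offset plus a linear tilt across levels.
    offset, tilt = random.gauss(0, 40), random.gauss(0, 8)
    return [base[k] + offset + tilt * k for k in range(LEVELS)]

models = [make_model() for _ in range(N_MODELS)]
mean = [sum(m[k] for m in models) / N_MODELS for k in range(LEVELS)]

def dist(m):
    """Root-sum-square distance of a model profile from the ensemble mean."""
    return sum((m[k] - mean[k]) ** 2 for k in range(LEVELS)) ** 0.5

# The selection step: out of 22 alternatives, keep the one nearest the mean.
closest = min(models, key=dist)
near = sum(abs(closest[k] - mean[k]) < 10 for k in range(LEVELS))
print(f"levels where the selected model sits within 10 units of the mean: {near}")
```

    Because two smooth curves of similar shape cross and hug each other over a stretch of levels, the model selected this way will typically match the mean closely at several altitudes, which is why such agreement is unremarkable.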

  143. Re: Ron Taylor 141,

    The authors only claim that model credibility should be questioned for multidecadal prediction (should be “projection” of course). However, put on your peer reviewer hat for a moment, are you going to accept model based attribution claims to AGW, now that there is a question about the model skill in reproducing solar effects on the climate that are in the observations? Wouldn’t that be like accepting the results of an election, when it appears that the electronic machine doesn’t count the votes of one of the leading candidates? There are other documented reasons to question the skill of the AR4 models, this failure goes specifically to the question of whether the current models can by used to reject or dismiss the relative importance of the possible solar contribution.

    Peer reviewers will be aware of this and other diagnostic literature, and will reject attribution and projection claims based on models that continue to demonstrate these problems. Past published results based on these and presumably also earlier models will be questioned as well. This is the scientific process.

    Comment by Martin Lewitt — 18 Apr 2008 @ 3:42 AM

  144. Martin, I am not really qualified to debate this, so I will leave that to others who are. But it seems to me that C&T are dealing with an oscillating forcing of 11 years period, with an amplitude of roughly the same order as the annual increase in GHG forcing. Though this may be an area where the models need additional refinement, it is hard to see how it could overturn the current conclusions about the role of AGW. I will give the paper a more careful read.

    Comment by Ron Taylor — 18 Apr 2008 @ 8:16 AM

  145. Re # 142 Martin Vermeer

    What evidence can you adduce that it was Douglass et al who composed the averages I cited – and not the people producing the model simulations?

    Are not model runs that are all over the place called “disagreement” or “failure to work”? It is not my argument that this effect arises because these are “weather” figures. I would question them no matter what the nature of their origin.

    How does a model “just happen” to be close to the average, so really darned close, given the lottery-like odds of the numbers?

    Don’t you accept there is the possibility of more than statistical coincidence? If you don’t, never ask me for a job.

    I used the term “degrees Kelvin” because I thought some scientists should be reminded of a name they seldom use, Kelvin, when it should be widely used.

    The referenced critique of Douglass et al. is full of holes and unreliable. For example, the rebuttal states:
    “However, the chances that any one realisation would be within those error bars, would become smaller and smaller. Instead, the key standard deviation is simply sigma itself. That defines the likelihood that one realisation (i.e. the real world) is conceivably drawn from the distribution defined by the models.”

    There is no innate relationship between the convergence of models and the connection with the real world. All there is, is a convergence of models. How, in the future, do you model volcanic eruptions and ENSO effects? That’s plain stupid.

    Is it not time to cast in stone the criteria that must be met before more modelling is deemed worthwhile? The payback on investment so far makes good spaghetti graphs, but tell me one – just one – REAL WORLD physical thing of commensurate value that it has done.

    [Response: You seem to have relevant experience to bring to this discussion, yet you spend your time indulging in rhetorical excesses. That's just a waste of everyone's time. If you really think that people are just making up terabytes of data in order to fake some desired convergence, there is no point in further discussion. It's just paranoia. As to the real world practicality of modelling, it's easy: Attribution and prediction. You can choose to go into the future blind to the consequences of your actions, or you can learn from the past and adjust your future behaviour accordingly. To the extent that models can consistently explain past climate change (and they can), they have credibility when projecting those changes ahead. For metrics where they indicate that weather and interannual variability dominate, one would expect that to continue. That is practical information, whether you agree or not. - gavin]

    Comment by Geoff Sherrington — 19 Apr 2008 @ 8:35 AM

  146. Re: Gavin’s response to #145,

    I think you understate the value of the models; there is much qualitative value independent of attribution and projection. There have been insights into ocean circulation subsequently verified by observations, notably in the southern ocean. There have been insights into mode changes that are possible in ocean circulation that would have significant climatic effects and probably explain data in the paleo record. Exploration of the effects of the closing of the Isthmus of Panama due to plate tectonics has generated insights and hypotheses. There have even been insights into the ENSO and PDO phenomena, and the behavior of stratospheric gravity waves and other phenomena. Questions about the models in resolving issues around this recent warming require a level of skill and accuracy higher than that which produced the many past successes of the models. We are moving from qualitative insight and hypothesis generation into quantitative attribution and projection of near-term climate forced by small changes in current levels of forcing.

    While we would expect progress to be marked by a convergence in model results due to the shared physics of the climate, such convergence is not a guarantee of progress and may be partially due to correlated error or to poorly constrained degrees of freedom.

    Comment by Martin Lewitt — 20 Apr 2008 @ 2:12 AM

  147. And who produced the figures? Douglass et al, or the modellers themselves? Do answer the question, not avoid it.

    Comment by Geoff Sherrington — 20 Apr 2008 @ 3:21 AM

  148. #145 Geoff Sherrington:

    As to who did the ensemble averaging, I don’t know. It shouldn’t matter if it was done correctly. The number of runs per ensemble is given in the table.

    “Are not model runs that are all over the place named ‘disagreement’ or ‘failure to work’?”

    No, only disagreements between properly constructed ensemble averages may be called “disagreement”. This here is (largely) weather / interannual variability. Weather really exists, you know; just look out of the window :-)

    “How, in the future, do you model volcanic eruptions and ENSO effects?”

    You don’t. The models also don’t prepare breakfast for you; expecting them to is plain stupid. (Now, hindcasting for model verification can take these things into account. Different story.)

    About the statistical agreement, it’s a mix of physics and dumb luck. The physics is in the general arc shape that the models have in common: in the lower troposphere, the adiabatic lapse rate gets smaller when it warms; in the upper troposphere, which is dry, it doesn’t; even further up it is cooling.(*) Net result: an arc. Your “coincidence” is just two such smooth curves cutting under a small angle. Result: three successive values agree within 0.001 degrees.

    (*) See the article I referred to for a good explanation.

    Riddle: how big should a group of people be for two of them to have the same birthday with 50% probability? (A year having 365 days and probability being evenly distributed.) People very commonly overrate the unlikelihood of coincidences — especially ones that weren’t specified in advance…

    BTW I’m left wondering where Gavin Schmidt got the impression you know what you are talking about… polite as always, I guess.

    Comment by Martin Vermeer — 20 Apr 2008 @ 4:39 AM
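    [The birthday riddle in comment 148 is easy to check numerically; a minimal sketch, assuming, as the riddle does, 365 equally likely birthdays:]

    ```python
    # Smallest group size where P(some shared birthday) >= 50%.
    # Works by computing P(all birthdays distinct) exactly.
    def p_shared(n):
        p_distinct = 1.0
        for k in range(n):
            p_distinct *= (365 - k) / 365
        return 1 - p_distinct

    n = 1
    while p_shared(n) < 0.5:
        n += 1
    print(n)                        # 23
    print(round(p_shared(23), 3))   # 0.507
    ```

    [Just 23 people suffice, which is Vermeer's point: intuition badly overrates how unlikely such coincidences are.]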

  149. Still re #145 Geoff Sherrington: I see that your “refutation” of the critique of Douglass et al. shows you don’t even understand what the critique is about. The point is that the radiosonde results presented there all contain one and the same realization of “weather noise”, that of the one and only real world we live in — but this is artfully covered up by showing the nice agreement among several (in one case cherry-picked) radiosonde post-processing result sets.

    Your questioning of the value of climate modelling reminds me of a story (IIRC from Robert Jungk, “Brighter than a Thousand Suns”) in which General Groves, during the Manhattan project, was told by his scientists that the sturdy steel-frame tower in the desert, with the complicated apparatus on top, would completely evaporate as a result of the experiment in preparation. “Yeah, sure,” he is reported to have answered.

    It is hard to sell your stuff when working on something that has never ever happened before…

    Comment by Martin Vermeer — 20 Apr 2008 @ 7:54 AM

  150. Geoff Sherrington, you seem to have some fundamental misunderstandings of how scientific modeling–and climate modeling in particular–is done. Many of the climate models use quite different datasets to constrain the forcers, so the convergence of the models does in fact provide evidence in support of anthropogenic causation. What is more, the fact that most of the uncertainty lies on the high side of the consensus projection is also significant, since this high side may well drive risk. I would suggest that you become familiar with how climate models are implemented and used before blindly assuming your experience is 100% applicable without modification.
    Might I suggest that RealClimate serves as a wonderful resource for scientists not directly involved in climate science to come and learn about the subject. The peer-reviewed literature is the more appropriate venue if you feel you have substantive criticisms of methodology.

    Comment by Ray Ladbury — 20 Apr 2008 @ 8:17 AM

  151. #147 Geoff Sherrington:

    “And who produced the figures? Douglass et al, or the modellers themselves? Do answer the question, not avoid it.”

    If you mean the values in Table II, the answer is in Section 2.2: the individual runs were archived by teams participating in the GCM intercomparison project; Douglass did the ensemble averaging, and presumably all calculations after that.

    Comment by Martin Vermeer — 20 Apr 2008 @ 9:28 AM

  152. The Sherrington who once worked for CSIRO and wrote “There are greenie scientists in CSIRO and there are honest ones.”?

    Comment by Hank Roberts — 20 Apr 2008 @ 9:41 AM

  153. Or this Sherrington? http://tamino.wordpress.com/2007/10/19/not-alike/

    Comment by Hank Roberts — 20 Apr 2008 @ 9:44 AM

  154. Is there a broadly accepted figure on the ideal range beneficial to living things insofar as the percentage of CO2 in the atmosphere?
    Assuming that we are aspiring to control this level, one assumes that there is a specific target. My question is, what is this target?

    Comment by Don Worley — 20 Apr 2008 @ 11:36 AM

  155. Geoff Sherrington (145) — The SI unit for temperature is the ‘kelvin’, not ‘degrees Kelvin’. But to confuse matters, the derived SI unit for temperature (often used) is ‘degrees Celsius’, equal to kelvin − 273.15, not ‘Celsius’.

    Comment by David B. Benson — 20 Apr 2008 @ 1:35 PM

  156. Re #152 Hank: you mean the self-same CSIRO that contributed model run number 15 in the Douglass et al. paper? What’s this about, a claim of scientific fraud?

    Comment by Martin Vermeer — 20 Apr 2008 @ 2:26 PM

  157. I’m not sure what it’s about, Martin; Google the quote and you’ll know as much as I do. No way for me to even tell it’s the same person, just wondering.

    Comment by Hank Roberts — 20 Apr 2008 @ 2:59 PM

  158. Maybe this one: http://www.jennifermarohasy.com/blog/archives/001281.html

    Comment by Hank Roberts — 20 Apr 2008 @ 3:31 PM

  159. Don Worley (154) — The atmospheric carbon dioxide is now about 385 ppm. Dr. James Hansen states ‘under 350 ppm’. But at 315 ppm in 1958 CE, the glaciers in the Swiss Alps were already melting back at about 4 m/y. However, at 288 ppm in 1850 CE, those glaciers were advancing. So somewhere in between seems best to me, but in any case vastly lower than now.

    Comment by David B. Benson — 20 Apr 2008 @ 4:38 PM

  160. If blogs should be peer-reviewed, should lectures also be peer-reviewed?

    For example, http://cires.colorado.edu/events/lectures/allen/

    For example, past emissions of carbon dioxide amount to approximately 500 gigatonnes of carbon, and preliminary results suggest that releasing 800 more gigatonnes would yield approximately a 20% risk of a carbon-dioxide-induced global warming exceeding 2 degrees, provided emissions cease altogether after that containment target is reached.

    That statement – should it have been subjected to peer review?

    Peer review should be reserved for things like journal publications, funding decisions, and faculty hiring decisions.

    Comment by Ike Solem — 20 Apr 2008 @ 5:44 PM

  161. Re 156 Martin Vermeer

    You quote -
    “Re #152 Hank: you mean the self-same CSIRO that contributed model run number 15 in the Douglass et al. paper? What’s this about, a claim of scientific fraud?”

    When you have gained the experience that I have, if you ever do, you will know when to reexamine numbers that appear to have unusual characteristics.

    Please guys, don’t talk down to me. Assume I understand unless or until I show otherwise, by proper scientific criteria, not made-up stories.

    [Response: Well, O wise one, tell us what conclusions should we draw? Don't just leave us with implicit accusations - remember that we do not have your vast experience in these matters. Perhaps you would like to demonstrate that the Douglass et al analysis is somehow amiss by actually doing the calculation yourself? All the data is available at the PCMDI website.... - gavin]

    Comment by Geoff Sherrington — 20 Apr 2008 @ 10:42 PM

  162. Re #161 Geoff Sherrington

    So the implied story is that the good folks at CSIRO doing the simulations somehow looked at what got checked in to the GCM Intercomparison Project archive, computed the average of the stuff already there, and fraudulently submitted a result close (but not equal; too obvious) to that average…

    Incredible, the lengths to which AGW middle-of-the-roaders will go in order to have their beliefs accepted :-)

    Comment by Martin Vermeer — 21 Apr 2008 @ 4:26 AM

  163. At 110,

    “Ray Ladbury Says:
    9 April 2008 at 10:03
    James G. I think you misunderstand my point–the agreement of the models says that there is agreement on the most important forcers–it suggests consensus.
    James G and Martin,
    The fact that the models agree as well with overall trends in the very noisy climate data suggest that on the whole they are correct. The uncertainty comes in for the less well constrained aspects, and the takeaway message here is that that uncertainty is overwhelmingly on the positive side. Since that is also where the highest costs are, those fat tails could wind up dominating the risk.”

    This seeded the discussion. I gave a rather good example of agreement that looks odd to a numbers man. It’s up to you guys to admit or disagree that the numbers look odd. That way we can tell a bit about how objective you are.

    Comment by Geoff Sherrington — 21 Apr 2008 @ 5:08 AM

  164. Geoff, I’m afraid I agree with Gavin: I don’t see anything out of the ordinary. I also fail to understand the point you are trying to make. Why is it surprising that at least one element in a group should be near the mean? It would be much more surprising to me if none of the models were near the mean. Also, keep in mind that much of the physics is common between the models. It is usually the data used to constrain the forcings that varies from model to model.
    You say: “Please guys, don’t talk down to me. Assume I understand unless or until I show otherwise, by proper scientific criteria, not made-up stories.”

    Well, Geoff, I’d say we’re there.

    Comment by Ray Ladbury — 21 Apr 2008 @ 9:29 AM
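    [Ray’s point in comment 164 — that it is unsurprising for at least one of a group of model results to sit very close to their own mean — can be illustrated with a quick Monte Carlo. The numbers here (22 models, a tolerance of 5% of the spread, a normal distribution) are purely illustrative assumptions, not values taken from Douglass et al.:]

    ```python
    import random

    # Fraction of trials in which at least one of MODELS draws
    # lands within TOL standard deviations of the sample mean.
    random.seed(1)
    MODELS, TRIALS, TOL = 22, 20000, 0.05

    hits = 0
    for _ in range(TRIALS):
        runs = [random.gauss(0.0, 1.0) for _ in range(MODELS)]
        mean = sum(runs) / MODELS
        if min(abs(r - mean) for r in runs) < TOL:
            hits += 1
    print(hits / TRIALS)  # prints the fraction of trials with a "near-mean" model
    ```

    [Even with this tight tolerance, a majority of trials produce at least one model sitting right on the ensemble mean by chance alone, which is why a single close agreement carries no evidential weight.]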

  165. Any relation to the Sherrington linked above?

    Comment by Hank Roberts — 21 Apr 2008 @ 9:47 AM

  166. Re 125

    I know 4 people with the same first and last name as me, with tertiary qualifications, 3 in Australia, one a Prof, one a maths/surveyor, one a generalist, all of whom post on the Internet. There might be more. At least I’m not ‘Jones’. The name is not important, the science is.

    I do not live on the coat tails of Sir Charles Scott Sherrington, who, upon his death in 1952, had the longest entry ever in Who’s Who, a Nobel Prize, a Knighthood and twice presidency of the Royal Society of London. The name is not important, only the science.

    Indeed, under a pseudonym, I have had several hundred Letters to the Editor published in the largest National newspaper here. It proves nothing except that perhaps successive Editors liked my style and content. I also have cases of others using the same pseudonyms as I use (used because of death threats). The name does not matter.

    I have not made an implied story or any allegations of scientific fraud. I am testing how critically people on this blog accept data without study of contained anomalous characteristics. As I have written above, the fact that the data relate to climate is somewhat incidental. A set of numbers can fail because they agree too well, just as they can fail because they agree too poorly. It’s the science that matters.

    Comment by Geoff Sherrington — 21 Apr 2008 @ 9:01 PM

  167. Ah, this is a test. Okay. Bye.

    Comment by Hank Roberts — 21 Apr 2008 @ 9:48 PM

  168. Bye Hank. Sorry you are afraid of failing tests.

    Next?

    Does just one of you admit that the numbers I noted are extraordinary? Does not one of you see the surprise of the explanation that “It would be much more surprising to me if none of the models were near the mean” when most are not. Does any one of you really comprehend the real maths of the physical world?

    [Response: Now we are back to numerology. Let's assume I picked out my telephone number from a series of unrelated figures. Would that be extraordinary? Yes and no. Yes because picking exactly those numbers would be unusual, but no because I would have been equally astonished if I had seen any number of different telephone numbers (or birthdays, or PIN numbers or license plates etc). Thus since you didn't define what would be extraordinary ahead of time, your definition of extraordinary having already looked at the data is simply an exercise in finding numerical coincidences. Since the alternatives in front of us are a) that a modelling group fixed their output to produce something that was nearly the same as an average of an arbitrary set of other models for a metric that spanned an arbitrary number of years two years before the analysis was done, b) that the authors of the paper themselves made up the data despite it being publicly available and checkable by anyone or c) it's just a coincidence, it is hardly surprising that everyone has gone for (c). If you think otherwise, it is incumbent upon you to demonstrate that by checking the calculation (which you could easily do). Despite my opinion that the Douglass paper is flawed, your continued insinuations are out of line. Either put up or shut up. - gavin]

    Comment by Geoff Sherrington — 22 Apr 2008 @ 7:39 AM

  169. Re #168: I doubt that we’re thinking of the same thing, but I’m inclined to agree with you that my objectivity can be measured by my willingness to buy your coy hints that scientific fraud and conspiracy are afoot.

    Comment by spilgard — 22 Apr 2008 @ 8:40 AM

  170. Geoff Sherrington, I comprehend the Central Limit Theorem. Do you? I also comprehend that it is kind of difficult to draw conclusions from a sample size of one–particularly when that sample is cherry-picked. I also comprehend that scientific fraud is rare precisely because it is so easily detected, that the results of a single model do not reveal much about the current understanding of climate science as a whole and that on the basis of 3 numbers (while ignoring 10 others), you are alleging scientific fraud.
    If you are really surprised by such a fortuitous coincidence of 3 numbers, all I can say, is that this must be one of the first times you’ve analyzed a dataset.

    Comment by Ray Ladbury — 22 Apr 2008 @ 11:03 AM

  171. On the central point of the main article — accuracy of what gets published: The Australian today published an op ed alleging that not only was global warming over but we are headed for a new ice age, because temperatures in 2007 dipped by 0.7°C over 2006. Unfortunately this crucial fact on which the whole article hung was wrong, as was pointed out in several comments including mine. So, guess what? The paper took the reference away from their main page, and deleted all the comments! That’s honesty for you.

    Comment by Philip Machanick — 23 Apr 2008 @ 7:56 AM

  172. Geoff Sherrington (126 et seq.). Are you unfamiliar with the concept of “coincidence”? Yesterday I went shopping. At the deli counter, I asked for “about 200g” of something in irregular sizes. The deli guy was obviously new at the job, and his first attempt was 103g. His second attempt was exactly 200g to the accuracy of the scale. Was he replaced between the two attempts by a space alien with a gramme-accurate sense of weight? Must be something like that — what are the odds otherwise?

    Here’s an exercise for you. Look at 100 papers with similar comparisons of numbers. You won’t often see the effect you saw here. So no conspiracy. You were just lucky this one time.

    I also publish often in letters to the press (#166), using my own name (try google). My observation: the Australian press shuns factual corrections. I have a good hit rate unless I point out an error in an article, which is almost never published — particularly not in matters of climate change. So I wouldn’t consider a high count of letters in the Australian press as a badge of honour.

    Update on my #171: I found the original article with comments (hidden, not in the usual places for articles with lots of comments, not deleted — and they didn’t publish any corrections in letters — including mine — the following day).

    Comment by Philip Machanick — 25 Apr 2008 @ 2:26 AM

  173. Gavin, you are plain evasive. If you saw a set of numbers like I have shown, would you reexamine them or not? Yes or No? Either put up or shut up?

    If NO, perhaps that’s why you are where you are and I am where I am – able to see such evasion from 30 paces.

    That stuff about numerology is so 1970s. We have moved on since then. I first saw the coincidence of birthdays in a party group in a Martin Gardiner column about 35 years ago, from memory. (You might not even know who Martin Gardiner was. If you ask nicely I’ll give you a clue).

    Comment by Geoff Sherrington — 25 Apr 2008 @ 6:01 AM

  174. Geoff Sherrington says:

    Gavin, you are plain evasive. If you saw a set of numbers like I have shown, would you reexamine them or not? Yes or No? Either put up or shut up?

    If NO, perhaps that’s why you are where you are and I am where I am – able to see such evasion from 30 paces.

    That stuff about numerology is so 1970s. We have moved on since then. I first saw the coincidence of birthdays in a party group in a Martin Gardiner column about 35 years ago, from memory. (You might not even know who Martin Gardiner was. If you ask nicely I’ll give you a clue).

    Gee, Geoff, I have no idea who Martin Gardiner was, either. Do you by any chance have him confused with Martin Gardner, the late science writer?

    And maybe Gavin isn’t so much being evasive as just refusing to entertain a crackpot in the style the latter would prefer. Playing with numbers and hunting patterns in them, however much fun it may be, is not the same thing as statistical analysis.

    Comment by Barton Paul Levenson — 25 Apr 2008 @ 6:46 AM

  175. Geoff, the only thing that is amazing in this is that you are amazed by it. If you look at the actual data, you find that the large standard deviations are dominated by a couple of models for each altitude. Other than that, the agreement between the models is pretty darned good–to the point where your coincidence is not significant at the 95% confidence level, and that is purely statistical modeling, not taking dynamics into account. I think I’d want better than 95% significance to issue an allegation of fraud, but you seem to have much lower standards.

    Comment by Ray Ladbury — 25 Apr 2008 @ 8:51 AM

  176. Gavin, thanks for the assistance.

    (this is cleaned up and clarified a little from eariler)

    As long as scientists are working on the taxpayer’s tab, the work must be open to public scrutiny. The worst thing that will happen is that those producing the work will be forced to do a better job of packaging the methods, facts, findings and conclusions. If the work CANNOT be defended in a public forum, it doesn’t meet the basic requirements. The best thing that will happen is that the public will become more educated about the work, and the scientists will gain better public support.

    If a scientist wishes to start a career as a political lobbyist or a textbook censor, so be it. They are, however, no longer useful as an objective scientist. It’s a simple matter of personal choice. The chosen course of action is one or the other, but not both. And certainly not on the taxpayer’s tab if the attempt is for both. The private sector would be a separate issue, although an employer would have the final say, not the individual.

    I can’t think of ANY other adequate way to keep scientists on task other than open peer review. Not saying all are dishonest, but there are a few who need closer public review.

    It seems ALL TOO OFTEN some forget who the customers / owners really are. Public servants are employed with the blessing of the public, a blessing which can, and from time to time should, be revoked.

    Operating behind closed doors on public issues is a BAD idea.

    So, while peer review of advanced science papers is obviously best conducted by people who know the language and understand the underlying science, review in the open is an opportunity, not a hindrance. Further, peer review of the issues is absolutely required, given the way the media and politicians use the science for personal and political gain. The citizens deserve far better than they’re getting in this regard.

    Comment by SteamGeek — 25 Apr 2008 @ 7:52 PM

  177. I’m a skeptic and I think Geoff is making much ado about nothing. The models’ agreement with the average occurs at a few spots, is not exact, and the standard deviations are high regardless. Also, why/how would people build a model to match an average of other models? Who goes first? Does the faker match the others, or do the others match him? (And nothing is told us about the timing of the different runs, i.e. whether the “faker” had access to knowledge of the results of the others.)

    Geoff: Please. This kind of fever swamp thing makes us look paranoid. Also makes us look stupid. And when expressed with too much bluster, makes us look pompous, pretentious and windbagged. Settle down and be a good skeptic.

    Comment by TCO — 27 Apr 2008 @ 9:03 AM

  178. TCO (#177): well put. The role of a genuine sceptic is to keep scientists on their toes and make them recheck their results. There’s precious little of that going on under the climate-sceptic brand. Why bluster and the reshooting of spent cartridges have to be part of the armoury, if there is a case to be made, escapes me.

    May I quote some Shakespeare here? “… full of sound and fury, signifying nothing.” Look up what comes before this fragment to see how apt it is.

    Comment by Philip Machanick — 28 Apr 2008 @ 6:22 AM

  179. re 168,

    Gavin, you say that your opinion is that Douglass’ paper is flawed.
    Fine, but in the way that science is meant to progress via ‘peer review’ surely you need to publish a paper rebutting it?

    A blog statement, after all can’t count for much because that’s the sort of thing sceptics do!?!

    [Response: The flaw is obvious and doesn't need a peer-reviewed paper to demonstrate it, but who knows what tomorrow will bring... - gavin]

    Comment by Dave Andrews — 28 Apr 2008 @ 3:19 PM

  180. Dave Andrews, how many refutations do you require be published?
    Read at least the last paragraph, first post, in the previous thread:
    http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
    which references
    http://www.agu.org/pubs/crossref/2007/2007GL029875.shtml

    Comment by Hank Roberts — 28 Apr 2008 @ 9:13 PM

  181. Mark Stewart #28 says, incorrectly so far as Nature is concerned: “Journal articles, especially in ‘leading’ journals such as Nature and Science, are getting ridiculously short, and many details of analysis are necessarily omitted, and much can be buried in a simple figure.”

    Has he read Nature in the past 5 years? Nature Articles and Letters are several (between 4 and 8, usually) pages long in the print/online edition of the journal, and have associated with them online-only, peer-reviewed supplementary information and methods, sometimes extremely extensive. We also publish other journals of high quality and impact, to allow further publications in a topic– for example Nature Geoscience, in which the two commentaries on blogging have formed the basis for this fascinating discussion. (Which by the way I am highlighting on Nature’s Peer to Peer blog at http://blogs.nature.com/peer-to-peer . Our peer-review debate, which is mentioned by a kind person in the comment thread above, is also archived here.) Best wishes, Maxine, an editor at Nature.

    Comment by Maxine Clarke — 29 Apr 2008 @ 4:45 AM

  182. Re #177 TCO:

    “Geoff: Please. This kind of fever swamp thing makes us look paranoid. Also makes us look stupid. And when expressed with too much bluster, makes us look pompous, pretentious and windbagged. Settle down and be a good skeptic.”

    What makes you think there is any “us” left after you take away the paranoia, the stupidity and the mendacity? “Skepticism with an honest face”, Gorbachev style?

    “Good skepticism” already exists, TCO. It’s called science. That’s the “us” you should be joining.

    Comment by Martin Vermeer — 29 Apr 2008 @ 9:21 PM

  183. I think blog posting and commenting are a wonderful aspect of “new science” and I’d hate to see the legitimate but overblown concerns about how this might impact the peer review process lead to less blogging and commentary. I’d wildly guess that something like 1000 times as many people read blogs about a paper as read the paper itself. That presents some very important interpretation issues, but we should recognize that finally there is a mechanism for much broader exposure of scientific research and work to improve that mechanism, not reject it in favor of what many believe to be a seriously compromised peer review process.

    Comment by Joseph Hunkins — 2 May 2008 @ 4:24 PM

  184. It would be curious to have a machine, in the Turing sense, that compared people’s accuracy of answers to clear questions which will be answered within a time period tau, and publicly ranked them.

    Is there ‘judgment’? How does Bush rank versus, e.g. Clinton? How about Lee Smolin vs Newt Gingrich? Peer review committee members?

    Lots of clusters needed, but not much software.

    ‘twould be curious.

    Comment by Jay Nickson — 29 Oct 2008 @ 11:41 PM

  185. a late comment on this post:

    we need the whole continuum from informal oral comments in private lab meetings, to public science blogging, to open post-publication peer review, to peer-reviewed response papers to peer-reviewed target papers.

    the missing link in this continuum is open post-publication peer review (OPR). OPR is different from science blogging in that it is crystallized (each review is an official digitally signed publication with a doi). OPR is different from publishing a real paper in that a review will typically have limited original content, will refer mainly to one paper, and will include numerical ratings of the reviewed paper.

    i’m exploring this idea here: futureofscipub.wordpress.com

    –nikolaus kriegeskorte

    Comment by Nikolaus Kriegeskorte — 17 Feb 2009 @ 6:05 PM

Sorry, the comment form is closed at this time.
