Some Thoughts on Personal Responsibility and the Peer Review Process
Ryan O’Donnell made a series of serious allegations against me at ClimateAudit, in the context of our friendly dispute about whether his new paper in the Journal of Climate supports or ‘refutes’ my own results, published in Nature.
To his credit, Ryan has offered to retract these allegations, now that he is a little better acquainted with the facts. However, it is still important, I think, to set the record straight from my point of view. There were such a great number of claims about my “dishonesty,” “duplicity” and [implied] stupidity, all of which are untrue, that it really isn’t worth trying to respond in any detail. Just responding to the main two ought to suffice to make the point.
“Eric recommends that we replace our TTLS results with the ridge regression ones (which required a major rewrite of both the paper and the SI) and then agrees with us that the iRidge results are likely to be better . . . and promptly attempts to turn his own recommendation against us.”
“[in his RealClimate post]…he tries to … misrepresent the Mann article to support his claim [about the iridge routine] when he already knew otherwise. How do I know he knew otherwise? Because I told him so in the review response.”
While it is quite possible that O’Donnell believes both of these claims, they are both false, as it is rather easy to demonstrate.
First, I never suggested to the authors that they use ‘iridge’. This was an innovation of O’Donnell and his co-authors, and I merely stated that it ‘seems’ reasonable. As O’Donnell’s co-authors are fond of pointing out, I am not a statistician, and I did not try to argue with them on this point. I did, however, note that previously published work had shown this method to be problematic:
“The use of the ‘iridge’ procedure makes sense to me, and I suspect it really does give the best results. But O’Donnell et al. do not address the issue with this procedure raised by Mann et al., 2008, which Steig et al. cite as being the reason for using ttls in the regem algorithm. The reason given in Mann et al. is not computational efficiency — as O’Donnell et al. state — but rather a bias that results when extrapolating (‘reconstruction’) rather than infilling is done.”
Second, I was the reviewer of the first three drafts of the O’Donnell et al. submission. However, I did not review draft four, which was the published one, and which is markedly different from draft 3 [note correction: it has been pointed out that it’s not really very different; in other words, my criticisms of draft 3 were ignored]. Nor was I ever shown their response to my comments on draft 3, so I did not in fact ‘already know’ what O’Donnell claims I did. It appears that the editor was swayed by their arguments that I was not a helpful reviewer. In other words, even if one believes that I was “bullying” them into showing particular results, they still had the last word (as any author should).
The fact of the matter is that my reviews of O’Donnell’s paper were on balance quite positive. I wrote in the confidential comments to the editor in my very first review that
I emphasize that I think that a fundamentally reworked version of this manuscript could potentially provide a useful scientific contribution, and many of the points made do indeed have scientific merit. Indeed, the authors have done a very thorough analysis, and are to be congratulated on this.
In my second review, I wrote that “O’Donnell et al. have substantially improved their manuscript … and clarified a series of items that led to some confusion on my part.”
With respect to O’Donnell’s lengthy discussion of the technical aspects of the difference between our papers, I’m not complaining. It is possible to have a disagreement — or even to be wrong — about the technical aspects of a paper without being ‘duplicitous’. The dependence of any analysis on the technical aspects of the methodology is a completely legitimate subject of discussion, and it is important to be clear about what does and what does not depend on those choices. People who want to see what the data are saying about the real world will focus on the similarities; people who are focused on proving others wrong will focus on the differences. This is how O’Donnell and I can (legitimately) disagree about what their results mean.
The reality is that editors, not reviewers, make decisions about what is acceptable and what is not. Any comments I made as a reviewer of O’Donnell et al.’s work would have been weighed against what other reviewers said (and obviously were, since the main criticism I had of the paper was never addressed), not to mention the responses of the authors themselves. And the decision about what content eventually winds up published is still ultimately up to the authors. If the authors feel that they are being bullied into presenting their results in a particular way (as is the allegation here), then they have the choice to withdraw the paper and submit it elsewhere, or to complain to the editor. But once they have signed off on the paper, it is their paper, and blaming someone else — reviewer or editor — for its content is simply passing the buck.
It’s perhaps also worth pointing out that the *main* criticism I had of O’Donnell’s paper was never addressed. If you’re interested in this detail, it has to do with the choice of the parameter ‘k_gnd’, which I wrote about in my last post. In my very first review, I pointed out that as shown in their Table S3, using k_gnd = 7,
“results in estimates of the missing data in West Antarctica stations that is further from climatology (which would result, for example, from an artificial negative trend) than using lower values of k_gnd.”
Mysteriously, this table is now absent in the final paper (which I was not given a chance to review).
Some months ago, O’Donnell cordially (though quite inappropriately) asked me if I was one of the reviewers, and also promised not to reveal it publicly if I didn’t want him to. I told him I was, but that I would prefer this not be public since the ‘opportunity for abuse’ was simply too great. Talk about prescience!
Many of my colleagues have warned me repeatedly not to trust the good intentions of O’Donnell, Condon, and McIntyre. I have ignored them, evidently to my peril. But you know what has given me the most pause? The fact that a number of my colleagues and many otherwise intelligent-seeming people still seem to treat these guys as legitimate, honest commenters, whose words have equal weight with, say, those of Susan Solomon or J. Michael Wallace, or, for that matter, Gavin Schmidt or Mike Mann or myself. As a reporter wrote to me today, “it’s simply impossible for a lay observer to make a judgement on his/her own.” Really?!
Perhaps there is a silver lining here. Perhaps the utter silliness of the shrill accusations that O’Donnell made against me — based on a version of the facts, in his head, that is demonstrably and unequivocally false — coupled with the fact that he then retracted them (or at least has promised to do so), will help more people see what the steadily growing list of other scientists who’ve been accused by McIntyre and his associates of plagiarism, dishonesty, data manipulation, fraud, deceit, and duplicity have been telling me for years: these people are willing to say anything, regardless of the cost to others’ reputations and to the progress of legitimate science, to advance their paranoid worldview.
I’ve even got a name for the clarity this affair would seem to offer: O’Donnellgate.
Sadly, attacking climate scientists by mis-quoting and mis-representing private correspondence or confidential materials appears now to be the primary modus operandi of climate change deniers. To those that still don’t get this — and who continue to believe that these people can be trusted to present their scientific results honestly, and who continue to speculate that there may be truth in the allegations made over the years against Mike Mann, Ben Santer, Phil Jones, Stephen Schneider, Andrew Weaver, Kevin Trenberth, Keith Briffa, Gavin Schmidt, Darrell Kaufmann, and many, many others, just because they ‘read it on a blog somewhere’ — I’d be happy to share with you some of the more, err, ‘colorful’ emails I’ve gotten from O’Donnell and his coauthors.
If you still don’t get it, then I have a suggestion for a classic short story you should read. It’s called “The Lottery,” by Shirley Jackson.