The publication ‘Learning from mistakes in climate research’ is the result of a long-winded story with a number of surprises. At least to me.
I have decided to share this story with our readers, since in some respects it is closely linked with RealClimate.
The core of this story is the reproduction and assessment of controversial results, and it culminated in this publication.
Almost at the same time, discussions from the session on reproducibility at the Royal Society meeting on the Future of Scholarly Scientific Communication were released. The similarities suggest that my story is not unique.
The story I want to share started in 2012, after a response to my blog post here on RealClimate and a plea to publish a formal rebuttal of Humlum et al. (2011).
Rather than assessing one specific paper, however, I wanted to know: why are there conflicting answers concerning climate change in the scientific literature?
So what is the best strategy for answering this question? I started off by replicating the past analyses, and both the results and the code (Mac/Linux & Windows) for doing the analysis have been made openly available on Figshare.com. It is important that the replication itself is also replicable.
I also managed to assemble a team for writing this paper, which also included people from SkepticalScience.
I was naive at first, thinking that we could persuade readers by providing open source code and a detailed description of why a study is invalid. But it is not uncommon for the publication process to be long-winded, as Bill Ruddiman explains in A Scientific Debate.
We first submitted our work to a journal called ‘Climate Research’.
The opinion of one of the reviewers on our manuscript was “profoundly negative”, with a recommendation to reject it (29 June 2012):
“The manuscript is not a scientific study. It is just a summary of purported errors in collection of papers, arbitrarily selected by the authors.”
But what does it mean to be a “scientific study”? The term is perhaps not very well defined, and Google returned only one hit when I searched, with a vague description on Wikipedia.
A clue about the alleged lack of science could be gleaned from the reviewer’s comment:
“It is also quite remarkable that all the papers selected by these authors can be qualified in some way or another as papers that express skepticism to anthropogenic climate change. I wonder why is this so?”
The same reviewer also observed that “I guess that any one of us could collect their own favorite list of bad papers. My list would start with the present manuscript“, and remarked that “It may be published in a blog if the authors wish, but not in a scientific journal”.
That’s an opinion, and perhaps a reaction caused by expecting something different to what our paper had to offer. Apparently, our paper did not fit the traditional format:
“This manuscript itself is not a research paper but rather a review, more in the style that can be found nowadays in internet blogs, as the authors acknowledge.”
Because we disagreed with the view of the anonymous reviewers, we tried the journal ‘Climatic Change’ after some revisions (26 July 2012).
When the verdict came, we were informed that it had been very difficult to decide whether to accept or reject: “We have engaged our editorial team repeatedly (as well as a collection of referees), and the decision was not unanimous.”
The manuscript was rejected, and we were in for a surprise when we learned the reason the editor gave us:
“Nonetheless, we have agreed with reviewers who have offered some serious sources of concern and have not been persuaded (one way or the other) by blog conversations. Some of the issues revolve around the use of case studies; others focus on the appropriateness of criticizing others’ work in a different journal wherein response would not be expected.”
We were not entirely discouraged as the editor said the board was “intrigued” by our arguments, and suggested trying a different journal.
After rejection by ‘Climatic Change’, we decided to try an open discussion journal called ‘Earth System Dynamics Discussion’ (ESDD), where the manuscript and supporting material were openly accessible and anybody could post comments and criticism.
It was from the discussion on ESDD that I learned that the critical reviewer swaying the decision at ‘Climate Research’ or ‘Climatic Change’ had been Ross McKitrick (link). He was an author of one of the papers we included in our selection of contrarian papers that we had replicated.
McKitrick had apparently not declared any conflict of interest, but had taken on the role of gatekeeper, after having accused climate scientists of doing exactly that in connection with the hack and the so-called “ClimateGate” incident:
“But academics reading the emails could see quite clearly the tribalism at work, and in comparison to other fields, climatology comes off looking juvenile, corrupt and in the grip of a handful of self-appointed gatekeepers and bullies”.
Nevertheless, ESDD (May 3 2013) turned down our manuscript for publication in the final and formal version ‘Earth System Dynamics’ (ESD). The editor of ESD saw several problems with our manuscript: its “structure“, since there was “no paper in this paper” and “no actual science (hypothesis, testing of a hypothesis) in the main body”; that the case studies were presented in an “inflammatory and insufficiently supported fashion” and reflected the “authors’ stated opinion”; and that R-scripts do “not reveal mistakes”.
I guess we had not explained the objective of our paper carefully enough, and again, the editor expected a different type of paper. We also thought the verdict from ESDD was unfair, but it is not uncommon for authors and editors to hold different opinions during the review process.
The paper was revised further according to many of the reviewers’ comments, explaining some of the critical points more carefully and taking the counter-arguments into account. Since it did not fit the format of ESD, we thought it could be more suitable for the journal ‘Nature Climate Change‘ (6 February 2014).
Surprisingly, it was rejected despite quite positive reviews.
One reviewer thought it had “potential of being a very important publication”, and found the “information in the Supplementary Material to be important, compelling and well-presented”.
We were pleased by the view that it was “an important contribution to the climate science debate”, but were intrigued by the response that it was “unlike any other paper I have been asked to review in the past”.
Another comment also suggested that our work was fairly unique: “Reviewing the manuscript has been an unusual assignment”. Nevertheless, the tone was quite positive: “The manuscript is clearly written”, noting the effort going into reproducing the controversial analyses: “extensive discussion of the specific replication attempts in the supporting material, including computer code (written in R) that is available to the reader.”
But another reviewer took a different stance, thinking our manuscript was “poorly written” and that it failed “to adequately capture the importance of the project or convey its findings in an interesting and attractive manner”.
Since the reviews were, in general, quite encouraging, we decided not to give up. We thought our paper could be suitable for another journal, ‘Environmental Research Letters’ (ERL).
The rejection from ERL came fairly promptly (21 February 2014) with the statement that our paper was an “intriguing” but “not sufficiently methodologically based for consideration as an ERL letter”.
The reviewer thought the manuscript was not suitable for the journal, as it was “more of an essay than a scientific study”, and recommended some kind of perspective-type outlet. Reason for rejection: (a) “not a research Article in the ERL style” and (b) “number of methodological concerns”.
It appeared, however, that the greatest problem with our paper was again its structure and style rather than its substance. I therefore contacted the editor of the journal ‘Theoretical and Applied Climatology’ (TAAC) in May 2014 and asked if they would be interested in our manuscript.
The TAAC was interested. It has now been published (Benestad et al., 2015).
So what was special with our manuscript that was “intriguing”, “unlike any other paper“ but did not fit the profile of most of the journals we tried?
Not only was Humlum et al. (2011) rebutted; we had examined 38 contrarian papers to find out why different efforts give conflicting results. We drew the line at 38 papers, but could have included more.
Many of these papers have been discussed here on RealClimate and on Skeptical Science, and form the basis for the output of think tanks such as the Heartland Institute, e.g. the “NIPCC”. The relevance for our readers is that many of these papers have now been formally rebutted.
We had been up-front about our work not being a statistical study, because it did not involve a random sample of papers. Had we presented it as a statistical study, it would itself have been severely flawed, as it would violate the requirement of random sampling.
Instead, we deliberately made a targeted selection to find out why these papers got different answers, and the easiest way to do so was to select the most visible contrarian papers.
Of course, we could have replicated papers following the mainstream, as pointed out in some comments, but that would not address the question why there are different answers.
The important point was also to learn from mistakes. Indeed, we should always try to learn from mistakes, as trial and error often is an effective way of learning. There must also be room for disagreement and scholarly dialogue.
Our selection suited this purpose, as it would be harder to spot flaws in papers following mainstream ideas: the chance of finding errors among the outliers is higher than among more mainstream papers.
Our hypothesis was that each chosen contrarian paper was valid, and our approach was to try to falsify this hypothesis by repeating the work with a critical eye.
With open-access data and code, such as the R-scripts we provided, colleagues can know exactly what has been done in our analyses and how the results were reached. They provide the recipe behind the conclusions.
If we could find flaws or weaknesses, then we would be able to explain why the results were different from the mainstream. Otherwise, the differences would be a result of genuine uncertainty.
Everybody makes mistakes and errors sometimes, but progress is made when we learn from trial and error. A scientist’s job is to be as to-the-point and clear as possible, not to cosy up to colleagues. So, is it really “inflammatory” to point out the weaknesses in other analyses?
Hence, an emphasis on similarities and dissimilarities between the contrarian papers was a main subject of our study: are there any similarities between these high-profile “contrarian” papers other than being contrarian?
So what were our main conclusions?
After all this, the conclusions were surprisingly unsurprising in my mind. The replication revealed a wide range of types of errors, shortcomings, and flaws involving both statistics and physics.
It turned out that most of the authors of the contrarian papers did not have long careers within climate research. Newcomers to a scientific discipline may easily err, because they often lack tacit knowledge and a comprehensive overview of the field. Several of them had authored more than one of the papers, and had failed to cite relevant literature or include relevant and important information.
The motivation for the original plea for a formal rebuttal paper was that educators should be able to point to the peer-reviewed literature “to fight skepticism about the fundamentals on climate change”.
Now, educators can also teach their students to learn from mistakes through replication of case studies.
The important question to ask is: where does the answer or information come from? If it is a universally true result, then anybody should get similar answers. It is important to avoid being dogmatic in science.
- R.E. Benestad, D. Nuccitelli, S. Lewandowsky, K. Hayhoe, H.O. Hygen, R. van Dorland, and J. Cook, "Learning from mistakes in climate research", Theoretical and Applied Climatology, vol. 126, pp. 699-703, 2015. http://dx.doi.org/10.1007/s00704-015-1597-5
- O. Humlum, J. Solheim, and K. Stordahl, "Identifying natural contributions to late Holocene climate change", Global and Planetary Change, vol. 79, pp. 145-156, 2011. http://dx.doi.org/10.1016/j.gloplacha.2011.09.005
99 Responses to "Let’s learn from mistakes"
forrest curo says
Will your paper be helpful for contrarian publicists in need of methods for producing new invalid results?
Hank Roberts says
One thought, solely from reading the original post, before reading the paper:
the phrase “why they got different answers” might refer to two different kinds of comparison:
A) papers about, say, sunspots (“it’s the Sun”) inconsistent with the larger number of papers on the same subject (“solar variation is very small compared to anthropogenic CO2”)
B) a widely noted oddity about contrarian papers, where many different papers proclaim many different conclusions (“it’s the Sun” or “it’s the wobbles” or “it’s subsea volcanos” or “it’s not happening” or “it’s the ozone layer” …) — all agree that it must be “anything but the IPCC”
Possibly the original abstract/summary could clarify “they got different answers” to make this clear. I think it means “these skeptic authors got different answers than [other skeptic authors] or [IPCC authors]”?
That’s for the benefit of those who never read past the summary text.
Uma Bhatt says
Kudos that you persisted this long to get this study published.
Tim Jones says
I for one welcome the introduction of this paper. A comprehensive review and rebuttal of contrarian deception is a valuable contribution to the literature and a useful guide for refreshing our arguments in the face of intransigent right wing climate denial.
We here in Wimberley, Texas are on the front lines of climate change. The Memorial Day floods that swept through the Blanco River valley in the middle of the night have been devastating to the natural environment, public infrastructure, and human lives. We have gotten our taste of the results of our warming oceans and the freaky jet stream loop amplification of the current El Niño.
When what is depicted below occurs at the beginning of an El Niño weather event
…and the newspapers warn of a “Godzilla” around labor day we need much more pressure from the scientific community on elected officials than we’re getting through our erudite research journals. This paper should have been published in all of them.
Nick Stokes says
“McKitrick had apparently not declared any conflict of interest, but had taken on the role as a gatekeeper…”
I often hear this kind of thing from the contrarian side, and it is reasonable to point out the hypocrisy involved. But not to switch sides. McK was presumably selected as referee precisely because he was one of those criticised. An editor should listen to his arguments, to see if there are any not otherwise before him, but should not be influenced by a predictably negative overall view. I don’t think McK would normally be expected to declare a conflict, and there would be no point anyway, since the conflict is why he was selected. There is no reason why he should refrain from expressing a negative view – I don’t think we should legitimise “gatekeeper” talk. It is the editor who should use his judgment and make a decision.
richard pauli says
Terrific post. Thank you so much.
Looking forward to AGU and sessions on Scientific Reticence. Tobis discussed it in his blog in 2007 http://initforthegold.blogspot.com/2007/06/reticence-and-excess.html It may be worthy of greater attention.
Joel Shore says
I am no fan of Ross McKitrick, to put it mildly, but I don’t think that his work being critiqued in your paper would constitute a conflict of interest that should necessarily prevent him from reviewing it. In my experience in the field of physics, editors have twice sent me papers to review that were, to a very great degree in one case and a somewhat lesser degree in the other, saying “Shore et al. are wrong” and this was presumably exactly why they chose to send it to me for my input.
I would hope, however, that the editors would have submitted it to him for review with the understanding of his lack of impartiality and have weighed the opinions accordingly. And, on the reviewer’ side, I think it is important to recognize your own biases and account for that accordingly. In the two cases that I was involved in, that amounted to being very explicit in explaining exactly where the authors were incorrect in the first case (and, eventually, in collaboration with them, agreeing with…and helping them flesh out…a smaller part of their argument where they were correct). In the second case, I frankly admitted that I remained somewhat skeptical of the authors’ numerical results but that I thought the evidence was probably strong enough to convince a more impartial observer and hence that it should be published. (I did pose several questions to the authors, prompting the authors in their response to thank me for my conscientious review and to note, a bit tongue-in-cheek, that in fact they were quite sure that they had never seen a longer positive review!)
[Response: A fair point, but in the two first submissions, there was just one person that had a very strong position against the publication: One was Richard Tol (disclosed in the thread here) and the other Ross McKitrick. Their criticism was strong but not very to-the-point (on par with the arguments they offer in this thread). The other reviews were quite fair as far as I remember. -rasmus]
dave andrews says
Your abstract says 97% of the papers stating a position endorse AGW and 2% reject AGW. I’m no scientist but even I know that doesn’t add up to 100%. As a non-scientist, but as someone who has read extensively on the subject, the big question I have is… how are scientists so certain about the greenhouse effect of CO2 in the real world, and outside of a laboratory? I lean towards the opinion that climate scientists do not really understand it as well as they claim they do. Certainly the model results have not supported the hypothesis, theory, or whatever it is being called. I know there are many scientists in the climate science field who also reject the “CO2 as control knob” assertion because the facts over the past couple of decades of observations just do not support the hypothesis.
Chris McGrath says
Thanks for sharing your story of endurance through the peer review process. It is inspiring for others who have been discouraged by poor reviews.
Ross McKitrick says
You have a strange notion of conflict of interest (not to mention “gatekeeping”). I hardly needed to explain to the editors that you were critical of my work. I would have insulted their intelligence if I had done so. It was right there in the paper itself, and it is what prompted the editors to send it to me for response. You would have me preface my response by explaining to them something they obviously knew? But clearly they did not hand me a veto, since they solicited other reviews, and then they rejected the paper. And of the two journals you mention (CR and CC) I provided a review at only one. I must be some kind of remarkable gatekeeper that I can get a paper rejected at a journal by telepathic messages to the editor when I don’t even know it’s under review. Alas but sorry, the reality is simply that other reviewers obviously felt the same way as I did and the editors did too.
When I saw the paper a second time at ESDD I indicated that I had previously refereed it, which is one of those courtesies to editors so they know you don’t bring fresh eyes to the paper, so they should get others to read it as well. Which they did, and rejected it. And, evidently, this process repeated itself several more times.
I see you have added more authors to the paper, and may have fixed some of the more egregious errors in the versions I reviewed. Incidentally, thank you for linking to my review of your paper. I encourage Realclimate readers to read it and make up their own minds about the possible reasons why so many reviewers and editors did not think the paper to which I referred should be published.
Richard Tol says
Like Ross McKitrick, whose comment has yet to be displayed, I reviewed this paper for one of the journals listed. I forget which one.
My verdict was negative. 38 papers are too many to replicate, and too few to discover any patterns. Comments on the papers were often peripheral to their core findings, some of the objections raised reflected the poor understanding of the current authors, and some of the suggested alternatives were a methodological step back. The earlier versions of the paper were simply not very good.
[Response: Sometimes, it takes a few iterations to improve a paper ;-) But why are 38 papers too many to replicate? I think it depends on their nature. Sometimes you can easily replicate 100 really poor papers where you spot the flaw straight away; other times it takes more analysis and thought. And why do you think that you are a better judge of understanding of the subject? Your comments then were just as vague as those you provide here, even when you had access to the source code. You could have pointed out that the function so-and-so gives the wrong answer. One great trick when it comes to falsification is to use Monte Carlo simulation techniques and ask the question: what if there was no real connection? Other cases are simple to check just by looking at the statistics. Misapplications of statistical tests are surprisingly frequent. -rasmus]
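The Monte Carlo idea mentioned in the response can be sketched in a few lines. This is an illustrative toy rather than the paper's actual analysis, and Python stands in here for the R-scripts the paper provides: generate many pairs of series with no real connection, and see how large a correlation can arise by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 5000  # illustrative sample length and number of trials

# Null hypothesis: no real connection. Generate independent white-noise
# pairs and record the correlation that arises purely by coincidence.
null_corr = np.array([
    np.corrcoef(rng.standard_normal(n), rng.standard_normal(n))[0, 1]
    for _ in range(trials)
])

# 95% of the coincidental correlations fall inside this band; an observed
# correlation outside it is unlikely to be chance alone.
lo, hi = np.quantile(null_corr, [0.025, 0.975])
print(f"95% of chance correlations lie in [{lo:.2f}, {hi:.2f}]")
```

For autocorrelated series, the usual case for climate records, the same experiment with red noise gives a much wider band, which is why treating serially correlated data as independent inflates apparent significance.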
Edward Greisch says
Arrived at the R download page. Not sure what to download.
This paper is interesting and useful.
[There are two R-packages, one for Windows and one for Mac/Linux. They can be installed in R (free software: http://cran.r-project.org), and they also provide the source code. However, they require some familiarity with using R. -rasmus]
Oh sh*t… G&T is still something to comment on. As a German reader, seeing this shame of a paper still “cited” is very disturbing. Thank you very much for this shame, EIKE & their ilk.
Richard Tol says
Replicating N papers in one go only makes sense if (a) the papers all used the same methods and data and (b) they all have similar drawbacks.
For instance, I can imagine a useful paper on Yule-Slutsky: Explain what it is, select N papers that need improvement, and show a table with original and revised findings.
Unless there is such strong commonality, you just end up with a laundry list and indeed, earlier versions of the paper were just that.
Think about it this way: You could have written 38 useful papers. Instead, you wrote one useless one.
[Richard, this is where we see things differently. You still don’t get it… Learning from mistakes? What amazes me is that economists have been asked to participate in peer review of manuscripts submitted to climate science journals. But economists and climate researchers are not peers. Why would a climate scientist like me review economics articles? -rasmus]
I followed the ESDD discussion back when. I can only admire the work the authors put into rebutting mistakes in the literature, so I could only regret having to agree with Matthew Huber, the editor, that the writing style made it unpublishable. But when “skeptics” have referred triumphantly to this paper’s rejection in the past, I have enjoyed reminding them that the paper was rejected also because many of the “skeptic” papers criticized were anyway too obviously silly to be worth rebutting at such length in a scientific journal. I think Huber’s words are worth quoting here as well.
[I can see that point. In the end, the paper benefited from those discussions and we were able to take into account the responses from the discussion. And you are probably right that ESD maybe wasn’t the right outlet. -rasmus]
The authors of this paper have to couch their conclusions in dispassionate terms, but the obvious truth is that ‘contrarians’ are not honestly mistaken scientists; they are dishonest individuals who seek to deceive the public by getting knowingly false studies published in nefarious ways, claiming a scientific credibility that is not deserved, and thereby polluting the discussion about AGW and our response to it.
Kevin McKinney says
“Your abstract says 97% of the papers stating a position endorse AGW and 2% reject AGW. I’m no scientist but even I know that doesn’t add up to 100%.”
Well, to quote from the abstract of “Learning from mistakes…”, “Other typical weaknesses include false dichotomies…”
The 97% figure surely comes from Cook et al 2013 (since John Cook is a co-author of the present paper). Referring to the abstract of that paper, we find that:
Multiplying out the 32.6% and the .03% factors gives the ‘uncertain’ 1%.
Hope you’re learning from this, dave!
Kevin McKinney says
Typo correction: 0.3%, not .03%.
But you knew that…
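The percentages in the two comments above can be reconciled with a few lines of arithmetic, using the rounded category fractions reported in the Cook et al. (2013) abstract (66.4% of abstracts took no position, 32.6% endorsed AGW, 0.7% rejected it, 0.3% were uncertain):

```python
# Fractions of all abstracts (Cook et al. 2013, rounded values).
endorse, reject, uncertain = 0.326, 0.007, 0.003
stating = endorse + reject + uncertain  # abstracts expressing a position

# Shares among the abstracts that express a position:
shares = {name: round(100 * frac / stating)
          for name, frac in [("endorse", endorse), ("reject", reject),
                             ("uncertain", uncertain)]}
print(shares)  # {'endorse': 97, 'reject': 2, 'uncertain': 1}
```

The “missing” percent in 97% + 2% is simply the uncertain category, not an arithmetic error.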
Barton Paul Levenson says
DA 6: how are scientists so certain about the greenhouse effect of CO2 in the real world, and outside of a laboratory?
BPL: Because they’ve been measuring it since the 1950s. Google “pyrgeometer.”
Barton Paul Levenson says
RM 8 — well, the central point of the paper, that papers like yours are pretty much worthless, was well supported. That’s one point you can’t get around.
Mal Adapted says
Since your comment is off the topic of this RC post, my reply is too. But here goes…
If you can ask that question despite having “read extensively on the subject”, you obviously haven’t read enough (hint: uncertainty is quantifiable)! All non-scientists interested in this topic should read The Discovery of Global Warming by Spencer R. Weart, published by the American Institute of Physics. The AIP is the parent body of a number of scientific societies, including the American Physical Society and the American Meteorological Society. You can read the hypertext version at my link, or the one under “Science Links” in the RC right-hand menu above. The printed version was easier for me to read, with the web version handy as a reference. Copies should be available at your local library, or you can purchase one for a few dollars from online sellers.
Rebuttals/responses to high-profile ‘contrarian’ papers are certainly valuable to have, but if the aim was to determine why there are conflicting answers concerning climate change in the scientific literature, why didn’t you examine a random sample of the ‘contrarian’ papers in the literature?
Toby Joyce says
Can you name some (a half-dozen?) of the “many scientists” you assert reject the “CO2 as control knob”, and where this rejection may be objectively reviewed (as in a published paper)?
Could you also provide a more extensive description of “the facts” in the sentence “the facts over the past couple of decades of observations just do not support the hypothesis”?
Toby Joyce says
I would tend to discount Ross McKitrick’s review on the grounds of “protesting too much”, and give other reviewers more weight.
Lars Karlsson says
Why did two of the journals each appoint an economist to review this paper? It is only marginally related to economics.
It’s nice to see that the auteurs Rasmus E. Benestad, Dana Nuccitelli, Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland and John Cook concluded that it is important to avoid being dogmatic in science.
Jim Baird says
When it comes to climate change and its implications, would it not be better to learn from the natural analogies of when sea levels fell in 2010 and atmospheric warming was held to a minimum most of this century as opposed to mistakes?
The latter was the consequence of ocean heat being moved to deeper water; ergo, produce energy in heat engines through which surface heat is moved to waters three times deeper than where Nature moved it in producing the hiatus.
The drop in sea level rise was the result of extreme movements of ocean volume to land induced by a strong La Niña cycle. (http://www.skepticalscience.com/sea-level-fall-2010.htm).
To get electricity generated in mid-ocean to shore, it is necessary to convert it to an energy carrier like hydrogen. A 100 MW ocean thermal energy conversion plant produces about 35,000 kg of H2 per day, or 12,775,000 kg a year. It is estimated the oceans can produce 14 terawatts worth of energy from OTEC, which would yield 16 trillion kg of water when the hydrogen is converted back to water and energy on shore in a fuel cell or combustion engine. That is 600 gallons of water per year for every person on the planet, and is water that would not have to be pumped from aquifers, which in turn exacerbates sea level rise because most of it ends up back in the ocean. Instead, this transference of liquid volume from the sea to the land would reduce sea level rise just as it did in 2010.
Richard Tol says
That decision is, of course, made by the editor. I would guess that the editors reckoned that both McKitrick and I have a track record of publications in peer-reviewed journals on the application of time series analysis to climate records. That is, we were asked in our capacity of statisticians rather than economists.
[Response: Funny! Then you should read the criticism of McKitrick’s paper in the supporting material (SM) of our paper, Richard. It exposes a lack of statistical understanding, e.g. the assumption that the data in a sample are independent. Also, the discussion in ESDD reveals that he does not know the difference between the confidence interval of a mean estimate and the spread of the sample population, which is now spelled out in the SM. -rasmus]
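The distinction raised in the response, between the confidence interval of a mean estimate and the spread of the sample population, can be illustrated with a quick sketch; the numbers below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=400)  # a synthetic sample

sd = x.std(ddof=1)          # spread of the sample population
se = sd / np.sqrt(x.size)   # standard error of the mean estimate

# The population spread stays near +/-1.96*sd no matter how many points
# we collect, while the confidence interval of the mean shrinks as
# 1/sqrt(n). Mixing the two up is a classic statistical mistake.
print(f"population spread: +/-{1.96 * sd:.2f}")
print(f"95% CI of mean:    +/-{1.96 * se:.2f}")
```

Confusing the two bands makes individual observations look far more (or less) anomalous than they really are; only the confidence interval of the mean narrows as the sample grows.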
I can see why this may have had a rough ride through the review process of technical journals. How interesting are weak or fatally-flawed publications, after all?
James Powell, whose own submitted paper concerned the minuscule number of peer-reviewed papers that explicitly reject the AGW consensus, notes that this number is now so small that the papers might be better examined individually, rather than trying to derive indirect or ad hoc statistical metrics of what climate scientists ‘think’.
Susan Anderson says
Terrific material. Richard Tol appears not to perceive how his bias shows him out in public in the figurative nude sometimes. If you have time to follow him through the various arguments about how income denotes authority, it would be funny if it weren’t so sad. Those gremlins!
And since I was fossicking around at aTTP to look for this, I found this well presented metaphorical treatment of how people search for answers they want and ways out instead of facing facts. Terrific stuff:
It’s good enough that I’m going to provide the headers here for flavor (remember it’s metaphorical):
“What amazes me is that economists have been asked to participate in peer review of manuscripts submitted climate science journals. But economists and climate researchers are not peers. Why would a climate scientist like me review economy articles?”
It seems hardly a revelation that members of other sciences are asked to participate in climate science publications (or even offer their own). Social scientists seem to be dabbling a bit more in it these days, so why not economists? After all, climate policy has put a lot of pressure on economic policy.
How much farther should it go? “Contrarian” scientists are not peers either? Eventually, wouldn’t only “pals” suffice?
Susan Anderson says
Oops, important to add: the medical situation just mentioned is real. So regardless of the metaphorical usefulness of the line of logic, easy on the snark. I don’t regret reading it how I did, but am embarrassed not to have noticed some earlier comments … best wishes to him.
Edward Greisch says
9 Ross McKitrick: Why would an economist be a reviewer of a climate science paper?
10 Richard Tol: Another economist. I agree with Rasmus’ comment at 13.
Money is a human invention and therefore not natural/real. Food is not a human invention. All organisms need food. Global Warming [GW] will take away our food supply soon if GW continues as usual. Humans existed long before the invention of money. Humans will not survive the non-existence of food, if the non-existence of food happens. There is nothing more immoral than risking the extinction of humans. Therefore: Economics is immoral if the economist cannot think outside of his field.
Economics is useful as long as it is remembered that economics works on a subset of human experience, not on universal reality as science does. Since economics works only on a human-contrived subset, economics is not a science.
Climate science works on the reality outside of and surrounding economics. Therefore, climate science overrules economics.
Reading this exchange one imagines a gaggle of Washington Times editors expressing their deep shock at letters accusing them of propaganda, and pointing in mitigation to the lack of statistical rigor in complaints from non-journalists about the objectivity of Pravda or Godwin help us, the weather reports in Der Stürmer.
When will sociologists address the historical question of when self-parody became the Climate Wars’ lowest common denominator?
Barton Paul Levenson says
Interestingly enough, although it’s in a different field, I recently had a paper published which was about roughly the same thing–the flaws in a previous paper. The difference was that the scientist whose work I was analyzing was, in fact, a real scientist who knew what he was talking about, but who was tripped up by assumptions and mistakes in previous work by others.
AFAIK with respect to peer-reviewed journal articles social scientists are “dabbling” in social science, not climate science, which of course makes sense because they are experts in social science, not climate science.
Economists too are not experts in climate science; non-experts “dabbling” in fields that they are not experts in leads to a high rate of “beginner” mistakes.
For example, per the supplementary material of the paper this article is about, economist Ross McKitrick made the “beginner” mistake of incorrectly assuming data independence in his climate science paper.
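As an aside, the statistical pitfall being referred to—treating autocorrelated data as if the observations were independent—is easy to illustrate. The sketch below is purely hypothetical (it does not use McKitrick's actual data or method); it shows how the naive standard error of the mean of an AR(1) series understates the true uncertainty, inflating apparent significance, and how an effective-sample-size correction fixes it:

```python
import numpy as np

# Hypothetical illustration, not a reconstruction of any published analysis:
# an AR(1) series treated as i.i.d. yields a standard error that is far too
# small, because the n observations carry much less than n points' worth of
# independent information.
rng = np.random.default_rng(42)
n, phi = 2000, 0.9          # series length and AR(1) autocorrelation

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Naive standard error assumes n independent observations.
se_naive = x.std(ddof=1) / np.sqrt(n)

# Effective sample size for an AR(1) process: n * (1 - phi) / (1 + phi).
n_eff = n * (1 - phi) / (1 + phi)
se_adjusted = x.std(ddof=1) / np.sqrt(n_eff)

print(f"naive SE    : {se_naive:.4f}")
print(f"adjusted SE : {se_adjusted:.4f}")  # ~4.4x the naive SE for phi = 0.9
```

With phi = 0.9 the adjusted standard error is sqrt((1 + phi)/(1 - phi)) ≈ 4.4 times the naive one, so a t-statistic computed under the independence assumption would look roughly four times more significant than the data warrant.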
Here you are fallaciously conflating science with policy, which is a “beginner” mistake that innumerable non-scientists continually make.
Hank Roberts says
Because if some scientists don’t review economics articles, the unquestioned assumptions that economists aren’t aware they’re making will persist.
“Biologically rational decisions may not be politically possible once investment has occurred.”
Science v315, 5 Jan. 2007, at 45, DOI: 10.1126/science.1135767
t marvell says
It would be helpful to provide a link to the paper w/o the pay wall.
Did TAAC provide the authors of the criticized studies a chance to rebut the criticisms? That is the procedure when critical comments are published in the journal where the original paper was published.
If not, this is raw meat for the contrarians and deniers. Judging from climate-gate and the like, they search for instances where they can argue that climate scientists do not operate fairly.
[Response: The paper is open access. No pay wall.]
I agree with a reviewer: the study is pretty exceptional. I for one wouldn’t have taken it upon myself to try this with papers already published in the literature, but would have used some easier material, such as university students’ essays or their first attempts at writing a scientific paper. Of course, the errors present in those would very likely differ from the ones presented here; such source material would likely contain far more statistics errors, or errors of other kinds. I skimmed through your article looking for some sort of table tallying the error types you present, but didn’t find one. To try to publish something like this you have to be pretty sure of yourself. Also, I’ve seen such a study done on student essays, and I think it is rather common practice to anonymize the source material in these sorts of studies; why you have done this differently I can’t say—perhaps this is common ethics between practicing scientists.
I did read your paper and the supplement. I knew some of the papers you analyzed, and read some more that were somewhat new to me (and on the whole I wish they still were new to me).
But I must agree with the journals who declined to publish.
1) Why bother? The solar thing and the cycle things are numerology, and the statistics things are what the likes of Tol and McKitrick et al. peddle. Anyone versed in the field and unbiased would start reading the examined papers and quit quickly, because they know more of the context and the physics than the authors do.
2) So you must be addressing more of a lay audience. But then, why not publish in a lay medium? Something like, I hate to say it, Scientific American … sorry …
3) That said, I am mystified by the choices of Tol and McKitrick as referees. Tol has a terrible track record on assessment models of damages and crop yields, and has no particular climate expertise or physical insight, while McKitrick has been quite thoroughly exposed as merely ludicrous by Mashey, DeSmogBlog and Deep Climate.
4) You guys are good at climate science. I think exposing deniers is a waste of your time.
John Mashey says
I’m glad this finally got published, and it is really worth enumerating the patterns of error. I especially liked Table S2, p.63 of the Supplement, which gives the various periodicities that people have managed to find. Cycles, cycles everywhere, just different ones.
That is a lot of work, thanks.
Richard Caldwell says
“The chance of finding errors among the outliers is higher than from more mainstream papers.”
I’ve been hoping somebody would research something like this for years. Glad you got published. Congratulations.
I would have liked to have seen a rating of error severity. (I just scanned, so I’m not sure if you only included severe or fatal errors.) I also agree with those who desired a control group of some sort. You could expand your paper by analyzing the least-cited paper from each of five top climatologists. When differences are large, it doesn’t take much data to say something, especially when your selection process deliberately handicaps the expected “winners”.
“So, is it really “inflammatory” to point out the weakness in other analyses?”
It is personal in effect. Regardless of intent, the input was selected based on perceived beliefs and the output includes a list of names and performance. Of course, in science “inflammatory” can be a compliment.
“We first submitted out work to a journal called ‘Climate Research’.
“out” should be “our”
Climate scientists cannot claim the mantle of expertise in all branches of learning when publishing (including statistics, btw). You’re acting like economists can’t ‘dabble’ in the climate arena when their economic research leads them into it, yet climate scientists have carte blanche.
Consider what’s all-the-rage today — the pricing of carbon. You really think that economists have no contribution on this; that climate scientists alone decide what all goes into the equation? That’s some moxie.
And it’s not me conflating climate policy with climate science. Apparently you don’t get out of the tower much, because the two are already well conflated. These days, hardly a science conference with a big sign that says “climate” on the front of the building fails to put out a document with “policy” (and “economy”) written all over it.
You can say all you want that climate scientists always/only stick to the science, and that they are the only pure-as-the-wind-driven-snow field to do so, but it will just continue the tone deafness that plagues the field and continues to give rise to more skepticism.
Barton Paul Levenson says
Salamano, the skepticism of science illiterates based on political opposition has not yet succeeded in unrounding the world. The evidence is what it is. Deal with it.
You haven’t clarified who you are referring to here; apparently, however, you are referring to me. In any event, I don’t see anyone “acting” like economists can’t “dabble” in climate science; perhaps what you are instead trying to say is that economists can apply their statistical expertise to other fields? Of course they can, but again, when an expert in one field applies their expertise to another field, they are far more prone to make “beginner” mistakes.
Did I mention, for example, that per the supplementary material of the paper this article is about, economist Ross McKitrick made the “beginner” mistake of incorrectly assuming independence of the data in his climate science paper?
I do believe that I did.
You argued above that economists should “dabble” in climate science “since climate policy has put a lot of pressure on economic policy,” which is to say you are conflating policy with science.
“I’m not doing X because others do X and did it first.”
Yeah, that’s valid “logic.” /sarcasm
Creating a strawman as an excuse for “skepticism” isn’t real skepticism.
Ray Ladbury says
I share Rasmus’s frustration with crap papers and their longevity. I find the same in my own discipline. I think that many reviewers are reluctant to reject a paper that deviates from the discipline consensus, especially if the reviewer is not an expert in the subject. The attitude seems to be, “Well, publish it and let the community decide.” The thing is that when a crappy paper gets published, it just sits there waiting to trap other inexperienced or otherwise inexpert researchers. It serves as a basis for other incorrect papers.
In some cases, a convincing slap down can serve to blunt the subsequent influence of the bad paper, but in other cases, the paper is so bad that it isn’t even wrong. (Think G&T, which illustrates that bullshit is forever.)
re: 45. “So many science conferences these days have a big sign that says “climate” on the front of the building that doesn’t put out a document that has “policy” (and “economy”) written all over it.”
That is pure rubbish and amounts to a thinly disguised anti-science diatribe. There is not one shred of evidence to support that absurd statement. None.
The supplemental material linked from Springer appears to lack Table S3, the overview of the contrarian papers examined, entirely – pages 64-66 are blank except for the label for the missing table.
Does Theoretical and Applied Climatology accept letters? If so you might expect the journal to receive a whole bunch of letters from disgruntled authors whose “work” you critique. In fact it would be extraordinary if they didn’t feel compelled to respond to attempt to justify their “efforts”.
In the normal course of things you should be able to have the last word – i.e. you should be able to respond to any letters either individually or en masse….
… it could be rather interesting!