Introducing PubPeer.com

Guest post from PubPeer.com

The process of reviewing published science is constantly occurring and is now commonly called post-publication peer review. It happens in many places, including on blogs such as this one, in review articles, at conferences around the world, and it has even been encouraged on the websites of some journals. Unfortunately, the process of recording and searching these comments is inefficient and underused by the larger scientific community. To successfully impact the publication process, this database of knowledge has to accomplish two important tasks. First, it requires participation by a large part of a given scientific community, so that it reflects an average impression rather than an outlier’s. Second, the collective knowledge must be centralized and easy to search, so that anyone can find out what the community collectively thinks about an individual paper or a body of work. A recent initiative, the San Francisco Declaration on Research Assessment (DORA), echoes many of these same concerns.

In an attempt to assemble such a database, a team of scientists has put together a website called PubPeer.com that is searchable and encourages participation by the larger scientific community. With a critical mass of usage, an organized system of post-publication review could improve both the process of scientific publication and the research that underlies those publications.

Those of us involved in the creation of PubPeer.com believe that, in an ideal world, a scientist’s main goal would be to discover something interesting about the world and simply report it to other scientists to use and build upon. This idealistic view of the scientific process is, however, not matched in reality because, for academic scientists, our publications count for much more than a simple contribution to the scientific record. For example, the majority of candidates are eliminated from consideration for tenure-track positions at major universities based on the names of the journals that have published their recent findings.

Review committees use this method because publications are the best measure of past and potential scientific output, but by potentially overvaluing “high impact” journal names, these committees and study sections effectively defer to journal editors to help them choose the best candidates for jobs and grants. However, these journals select their articles based on more than just good science – the papers also need to be of ‘wider interest’, which can skew publications towards ‘exciting’ results over those that are more measured, and perhaps more likely to be correct. The sometimes disproportionate attention given to a high-profile paper also makes it a tempting target for more unscrupulous scientists.

It’s never going to be possible for us to thoroughly read all of the papers submitted in response to a job advertisement, nor all of the papers referenced in grant applications, but we can easily reduce the importance that journal names play in decisions and replace it with something more meaningful, held directly in our hands instead of the hands of publishers. After reading any publication, we all form impressions about whether the reported observations are useful, interesting, elegant, irrelevant, flawed, etc. If the scientific field interested in a given publication could compile all of its impressions of that publication, that collective information would be far more useful to search committees and study sections than the name of the journal in which it was published.
