Presubmittal peer review for high-impact research

By Karen Shashok, translator and editorial consultant, Granada, Spain; kshashok@kshashok.com

We thank Karen Shashok for this post, which argues that pre-submission peer review can benefit scholarly communication by improving the quality of published research and reducing retractions.

In more collegial, less competitive times, researchers used pre-submission peer review to obtain frank, constructive feedback from trusted colleagues before sending their manuscript to a journal. In today's publish-or-perish environment, however, pre-submission review appears to be rare.

Journal-managed peer review sometimes fails to screen out unreliable research. The reasons why peer review can fail as a quality filter have been debated, and many calls have been made for better training for reviewers and editors. Yet dissatisfaction with journal peer review remains widespread and appears to be on the rise. And good reviewers are becoming increasingly hard to find, according to editors in some fields. Although reviewing for journals is undertaken voluntarily, problems with late, superficial, biased and promised but never-delivered reports suggest that journal reviewers do not always take their commitment very seriously.

Post-publication peer review identifies potentially unreliable information (see, for example, PubPeer 2014). Although often necessary to correct the record, post-publication review on popular sites such as PubPeer, Retraction Watch or ChemBark can become rancorous. If the errors that readers find are serious enough, the result can be retraction: a frustrating and demoralizing experience for researchers who submitted their manuscript in good faith and trusted the journal’s peer reviewers to save them from public embarrassment.

Retractions of peer-reviewed, published articles are on the rise. They occur even in the most prestigious journals, where “groundbreaking” work is more likely to be published, and where editorial policies to correct the scientific record are more likely to be in place. Prompt, transparent retraction is laudable, but every time a peer-reviewed article is retracted because of errors that were identified by readers shortly after publication, I can’t help wondering why the peer reviewers did not notice the errors before publication.

A retraction is stressful for all stakeholders because it is often slow, painful and expensive. For the authors, a retraction is potentially career-threatening. For readers who used the flawed article as a basis for their own work, a retraction can be a major setback if it means they have wasted time and resources. For journal editors and publishers, retractions are problematic because they may be viewed not as a sign that the journal cares about correcting the record, but as an admission of quality control failure.

Even reviewers for glamour journals, where editorial quality control is assumed to be the most rigorous, make mistakes and overlook errors. So even at journals with a double-digit impact factor, “passing peer review” is no guarantee that potentially career-making research is ready for prime time.

Researchers who plan to submit to very-high-impact journals perhaps stand to benefit most from LIBRE review. LIBRE provides an environment where pre-submission peer review can take place in the spirit of collegiality, without the pressures of deadlines, nagging reminders from the editorial office, conflicts of interest and gaps in specialized knowledge, all of which can make reviewing a challenge. Feedback provided spontaneously by researchers with expertise in the subject is more likely to be constructive and helpful than feedback provided anonymously and perhaps grudgingly, out of a sense of duty or obligation, possibly by reviewers who lack the required expertise but are unable to admit this to themselves or to the editor.

Motivations for reviewing vary (Kumar 2009, Shashok 2005). LIBRE reviewers are more likely to be motivated by altruism, e.g., a desire to contribute to the field by helping colleagues. For some, public credit for their reviews will of course be an additional attraction. Moreover, because they sign and date their reports, LIBRE reviewers renounce any temptation to appropriate new ideas and scoop the authors. In contrast, journal-selected reviewers, especially if they are anonymous, may be motivated by self-interest, e.g., privileged access to information about new developments ahead of the crowd, and the ego gratification of being approached by a prestigious journal and having power over the fate of the manuscript.

Pre-submission LIBRE review can thus benefit the entire research communication system. Assuming that LIBRE reviewers help authors detect errors that might undermine their conclusions, subsequent journal peer review would be easier because there would be fewer shortcomings to criticize, the risk of retraction could be reduced, and reliable information could be disseminated more efficiently.

 

References

Kumar M. A review of the review process: manuscript peer-review in biomedical research. Biology and Medicine 2009; 1(4): Rev3. http://www.biolmedonline.com/Articles/vol1_4_Rev3.pdf. Accessed 13 March 2014.

PubPeer (the online journal club). Science self-corrects – instantly. Posted 11 March 2014. http://blog.pubpeer.com/?p=117. Accessed 13 March 2014.

Shashok K. Standardization vs Diversity: How Can We Push Peer Review Research Forward? Medscape 2005; 7(1): 11. http://www.medscape.com/viewarticle/498238_4. Accessed 13 March 2014.

3 thoughts on “Presubmittal peer review for high-impact research”

  1. Many thanks for this post. One of the problems we face with current research on peer review is that it is based on an ‘unscientific’ understanding of peer review and is usually laden with assumptions… What you propose here comes close to what journals such as those at Copernicus (see publications by Ulrich Poeschl) do with post-publication review: the manuscript is visible, referee reports are open to the public, and editors’ decisions are also visible… My sociological research into the relations in peer review helps show that when a referee is non-anonymous, the original manuscript is visible, and judgements and decisions are open to review, there is maximal scientific exchange and the potential for infinite review, and these types of relations also contribute to the maximal potential for rational decision-making by referees and editors. This is based on legal scholarship that grapples with ‘faceless courts’. I have posted models and preprints at . Your input is welcome.

  2. This may be interesting as well: a software platform that I have developed over the past few weeks, motivated by my own experience as a PhD student:

    DocRev – //www.docrev.org

    Briefly, the idea is to shift the effort of reviewing documents to a crowd-sourced platform: users provide feedback on other users’ documents and, in return, receive feedback on their own documents.

    My motivation stems from “yet another review” of a research paper of my own that I had already read many times before re-submission. By that point I was so fed up with reading it that I would rather have read someone else’s work instead, hoping that another person would in turn read mine.
    Not only would that be refreshing, since I would not always be re-reading the same thing and might even learn something new, but a fresh pair of eyes is also far more effective at catching problems in a document.

    In fact, this is just a use case specific to research, but I believe the concept is applicable to any area and any type of document. Examples:

    – You are sending a formal letter to a lawyer, which requires familiarity with certain terms. Why not ask a law student or even a young lawyer to look at it in exchange for your own expertise?

    – Maybe you wrote a blog post or some meaningful content and would like to get some feedback before publishing it.

    – You have some work in progress in a specific area in which you know no one, and would love some feedback.

    – You have a camera-ready research paper that you have read countless times but that probably still contains some bugs or typos.

    – You are sending an application for a job, grant, or project, and would love a review of it.

    If you are still reading, then I really encourage you to try it out:

    DocRev – //www.docrev.org
