Fun with F1000: publish it and the peers will come

This content will be cross-posted to Synthetic Daisies. Please also see the update before the notes section.


For the last several months, I have been working on a paper called “Animal-oriented Virtual Environments: illusion, dilation, and discovery” [1], which is now published at F1000 Research (and also available as a pre-print at PeerJ). The paper has gone through several iterations, growing from a short 1,800-word piece (first draft) into a full-length article. The process included several stages of editor-driven peer review [2] and took approximately nine months. Because of its speculative nature, this paper could be an excellent candidate for testing out this review method.

The paper is now live at F1000 Research.

Evolution of a research paper. The manuscript has been hosted at PeerJ Preprints since Draft 2.

F1000 Research uses a method of peer review called post-publication peer review. For those who are not aware, F1000 approaches peer review in two steps: a submission-and-approval stage handled by an editor, followed by publication and review by selected peers. Let’s walk through these.

The first step is to submit an article. Some articles (data-driven ones) are published to the website immediately. However, for position pieces and theoretically-driven articles such as this one, a developmental editor is consulted to provide pre-publication feedback. This helps to tighten the arguments for the next stage: post-publication peer review.

The next stage is to garner comments and reviews from other academics and the public (most likely unsolicited academics). While this might take some time, the reviews (edited for relevance and brevity) will appear alongside the paper, and the paper’s “success” will then be judged on those comments. No matter what the peer reviewers have to say, however, the paper will remain citable in perpetuity and might well have a very different life in terms of its citation index.

Why would we want such an alternative available to us? Alternative forms of peer review and evaluation can both open up the scope of scientific debate and resolve some of the vagaries of conventional peer review [3]. This is not to say that we should strive toward the “fair-and-balanced” approach of journalistic myth. Rather, it is a recognition that scientists do a lot of work (e.g. peer review, negative results, conceptual formulation) that either falls through the cracks or never gets made public. Alternative approaches such as post-publication peer review are an attempt to remedy that and, as a consequence, also serve to enhance the scientific approach.

COURTESY: Figure from [5].

The rise of social media and digital technologies has also created a need for new scientific dissemination tools. While traditional scientific discovery operates on a relatively long time-scale [6], science communication and inspiration do not. Using an open science approach will effectively open up the scientific process, both in terms of new perspectives from the community and insights that arise purely from interactions with colleagues [7].

One proposed model of multi-staged peer review. COURTESY: Figure 1 in [8].

UPDATE: 9/2/2014:

I received an e-mail from the staff at F1000Research in appreciation of this post. They also wanted me to clarify a few points about their version of post-publication peer review. So, to make sure this process is not misrepresented, here are the major features of the F1000 approach in bullet-point form:

* input from the developmental editors is usually fairly brief. This involves checking for coherence and sentence structure. The developmental process is substantial only when a paper requires additional feedback before publication.

* most papers, regardless of article type, are published within a week to 10 days of initial submission.

* the peer reviewing process is strictly by invitation only, and only reports from the invited reviewers contribute to what is indexed along with the article.

* commenting from scientists with institutional email addresses is also allowed. However, these comments do not affect whether or not the article passes the peer review threshold (e.g. two “acceptable” or “positive” reviews).

NOTES:

[1] Alicea, B.  Animal-oriented virtual environments: illusion, dilation, and discovery [v1; ref status: awaiting peer review, https://f1000r.es/2xt]. F1000Research 2014, 3:202 (doi: 10.12688/f1000research.3557.1).

This paper is a derivative of a Nature Reviews Neuroscience paper and of several popular press interviews [ab] that resulted from it.

[2] Aside from an in-house editor at F1000, Corey Bohil (a colleague from my time at the MIND Lab) was also gracious enough to read through and offer commentary.

[3] Hunter, J.  Post-publication peer review: opening up scientific conversation. Frontiers in Computational Neuroscience, doi: 10.3389/fncom.2012.00063 (2012) AND Tscheke, T.  New Frontiers in Open Access Publishing. SlideShare, October 22 (2013) AND Torkar, M.  Whose decision is it anyway? F1000Research blog, August 4 (2014).

[4] By opening up peer review and manuscript publication, scientific discovery might become more piecemeal, with smaller discoveries and curiosities (and even negative results) getting their due. This will produce a richer and more nuanced picture of any given research endeavor.

[5] Mandavilli, A.   Trial by Twitter. Nature, 469, 286-287 (2011).

[6] One high-profile “discovery” (even one based on flashes of brilliance) can take anywhere from years to decades, with a substantial period of interpersonal peer review. Most scientists keep a lab notebook (or some other set of records) that documents many of these “pers. comm.” interactions.

[7] Sometimes, venues like F1000 can be used to feature attempts at replicating high-profile studies (such as the Stimulus-Triggered Acquisition of Pluripotency (STAP) paper, which was published in and then retracted by Nature within a span of five months).

[8] Pöschl, U.  Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Frontiers in Computational Neuroscience, doi: 10.3389/fncom.2012.00033 (2012).