A few days ago, a piece was posted at ASAPbio which reminded me of an idea I had a long time ago about a new model for publishing and peer review in the life sciences.
I described my idea on Twitter, but I decided to provide a little more detail about it.
My idea was based on submitting to a suite of journals, one that would include both a sound-science title (like PLoS One, Health Science Reports, Biology Open, PeerJ, and others) and journals using ‘significance’ or ‘impact’ as criteria for publication (or any other type of journal; the more, the merrier). These journals may differ on a number of issues besides acceptance criteria.
This idea is based on a “bottom-up” approach, which differs from the current “top-down” cascading strategy normally employed by authors and publishers. By top-down, I mean submitting to a high-profile journal and then ‘cascading’ down to other journals until you find a home. In many cases, the last stop of that ‘journal-shopping’ journey is a sound-science title. Here, I propose the opposite: you would start there.
In my proposal, you would submit your manuscript to the suite. There, it would be evaluated, through standard peer review (which could include some nice modifications), for scientific and technical soundness only. In a way, this means you would be submitting to “the bottom” title. If the manuscript, after revisions when necessary, is determined to be scientifically and technically sound, it would be accepted for publication “in the suite”.
What does that mean? It means that your manuscript will be published by one of the journals in the suite. After acceptance based on objective criteria (technical soundness), the editors of the other journals in the suite can take a look and see if they are interested. The more selective titles in the suite (those interested in featuring groundbreaking reports and studies that they think significantly advance a field) will evaluate the already accepted manuscript, and if the topic is of interest to them, they will offer to publish it in their journal. The author will be able to choose where to publish if multiple offers are received. If none of the other titles in the suite are interested, then the manuscript is published in the sound-science journal.
A crucial aspect of this proposal is that after acceptance based on technical soundness, the other journals in the suite can no longer request additional experiments: the manuscript has already been reviewed and accepted. This would eliminate one of the complaints authors usually have: “they only asked for this experiment because I submitted to [insert glam title]; if I had submitted to [insert lower tier title], they wouldn’t have asked for this“.
Now, the suite of titles can have any number of origins. It could be ‘publisher-based’, ‘society-based’ or simply a centralized system (connecting authors to many titles from different publishers), as long as the peer review is done with the criteria outlined above.
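To make the flow concrete, here is a purely illustrative Python sketch of the suite workflow described above. The class names and journal titles are invented; this models the proposed policy, not software any publisher actually runs.

```python
# Illustrative sketch only: one 'sound-science' review, then offers from the
# suite. All names here (Suite, Manuscript, journal titles) are hypothetical.
from dataclasses import dataclass

@dataclass
class Manuscript:
    title: str
    technically_sound: bool  # outcome of the single sound-science review

@dataclass
class Suite:
    sound_science_journal: str
    selective_journals: list

    def place(self, manuscript, interested):
        """One review for soundness, then offers; no further experiments can be requested."""
        if not manuscript.technically_sound:
            return None  # revise and resubmit; nothing in the suite takes it yet
        # Accepted "in the suite": selective titles may opt in based on interest alone.
        offers = [j for j in self.selective_journals if j in interested]
        # The author would choose among multiple offers; here we simply take the first.
        return offers[0] if offers else self.sound_science_journal

suite = Suite("SoundScience One", ["Selective A", "Selective B"])
ms = Manuscript("Example study", technically_sound=True)
print(suite.place(ms, interested={"Selective B"}))  # -> Selective B
```

Note how the fallback is built in: with no interested selective titles, the manuscript still lands in the sound-science journal, so a technically sound paper is always published.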
I think that uncoupling the required revisions from the journal the manuscript is submitted to is an important step to take. By this, I mean that reviewers should not be requesting experiments based on the journal the manuscript was submitted to; it either is scientifically and technically sound or it isn’t. If it is, then editors of specific journals can offer to publish it based on subjective criteria like ‘significance’ or ‘impact’. That’s fine, that’s their prerogative: they can choose what they want to publish, but this should not delay communication of technically sound manuscripts. Preprints are a way to avoid the latter, and I’m all for preprints, as you can see in the pic on the right. In my proposal, manuscripts could also be deposited as preprints; disclosure and “initial validation” are different things.
Point is, a fair, ‘sound-science’-based peer review, independent of the journal, is probably a step in the right direction. People might argue, though, that journal-independent platforms (e.g. Axios) have not worked, as journals nevertheless asked for additional reviews from reviewers they liked and trusted. Here, that wouldn’t exist: journals in the suite would not ask for additional revisions after the initial peer review. The suites would be created among journals and editors who trust each other. In some of the other ideas that have been proposed (e.g. the “Peer Feedback” proposal), however, the aforementioned problem of independent review is not fully addressed, although they mention that “reviewers will be selected through partnerships with several scientific societies”; if the journals in the ‘Peer Feedback’ project were to take the reviewed manuscripts without asking for additional experiments, then both of our proposals would be quite similar. Indeed, in the “Peer Feedback” proposal they state that “journals may indicate whether they will accept [the revised manuscript] in its revised form or request additional experiments or peer review”.
I was excited to read something similar to what I’ve been thinking of; as I said, ideas of this kind are, in my opinion, a step in the right direction.
Part 2 of “Publishing is not the end: thoughts about discussions on published manuscripts and author engagement”
Recently, I posted an idea for a mechanism journals could have in place not only to allow people to comment on published articles, but more importantly, to get authors to actually take part in these discussions, respond, and, when appropriate, amend their articles.
This is because, at present, authors don’t really engage and actually have no incentive to respond; there’s a culture of “published, ergo forgotten”. Considering the importance of discussions in science, and our present technology, there’s no need for this to continue.
Based on discussions raised by my previous post, some on Twitter (thanks Tim!), I wanted to propose another mechanism (which might actually be far simpler than the other) that journals could employ to make use of the ‘wisdom of crowds’ and encourage authors to take part in discussions raised by the community. Similarly, this mechanism makes the community part of the whole process and encourages participation, as their comments can certainly make a difference.
As I stated in that previous post,
This is particularly important considering that in our current system, only a handful of people get to see a paper before it’s published in a journal, and while such peer review can be relevant for finding some of the flaws and generally improving a paper, it’s silly to think that it will find all possible problems/issues.
Allowing the community to comment on a published paper, knowing that their comments can have a real impact, is the important issue. I strongly support preprints and think it would be crucial to get feedback at that stage, but at present, that is not common. For the sake of discussion, let’s again propose a way to encourage authors to engage with the discussions taking place around their paper published in a journal.
Here, an author would submit the article to a journal (and at the same time, to a preprint server ;)), where the editor would subject it to pre-publication peer review. The editor would choose the reviewers (s)he likes best for this manuscript and knows are suitable experts to evaluate this work. If, after the necessary rounds of review (ideally, consultative peer review or one with ‘cross commenting’), the editor considers the article to be acceptable for the journal, the manuscript would be “Conditionally Accepted”, which would basically mean that the manuscript can move onto the next stage of the process: open peer review.
I know this term can mean different things to different people (like ‘epigenetics’ 😉 ), but here, I mean “open to whoever wants to participate”. In this setting, the peer-reviewed, editor-approved, non-typeset version is made public and the community is invited to comment on the manuscript within a certain time window. Anyone could comment and, if they prefer, they can maintain their anonymity. A system could be put in place for these comments to have a DOI, if the community would like that. People commenting are also free to cross-post their comments on other platforms, for instance, if the manuscript is deposited on a preprint server.
During that window, the authors can address individual comments if they want, or wait to address them all together at the end of the time window. The authors may decide to revise their manuscript in response to these comments or simply respond to them.
A certain amount of time (which would need to be defined) after the window closes, the authors must submit their responses to the comments posted by the community. The editor will review their responses and decide, at this point, the next step: a) accept (the manuscript would be moved into production), b) revisions, or c) reject. If the manuscript is accepted, the reviewers’ comments (anonymous if they so prefer) and author responses would be posted alongside the manuscript.
This would avoid some of the problems raised about the previously proposed approach, which involves a series of steps and is, overall, more cumbersome and would likely involve a lot more work for all parties involved.
Problems with this new approach, however, include: 1) the time from submission to acceptance will increase, and 2) original peer reviewers might feel they wasted their time, particularly if the ‘open peer review’ stage leads to a decision that is different from their recommendation (I’d argue that happens today, anyway). This is, of course, disregarding the problem of ‘authors have no time to reply to comments’, commonly raised against any proposal that attempts to encourage authors to participate in discussions of their manuscripts.
Remember, the idea is to encourage authors to respond to concerns raised by the community. In this way, the incentive is clear: no response, no publication. If there are no comments, the paper is published as approved in the pre-publication peer review process.
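The decision rule just described is simple enough to sketch in a few lines of Python. The function name and outcome labels below are hypothetical; the point is that the “no response, no publication” incentive can be stated explicitly.

```python
# A minimal, hypothetical sketch of the post-window decision step. The
# "no response, no publication" incentive is encoded directly.
def editorial_decision(comments, responses, editor_verdict):
    """Outcome once the open peer review window has closed."""
    if not comments:
        return "accept"  # no community comments: published as approved pre-publication
    if not responses:
        return "reject"  # authors did not respond to community comments
    return editor_verdict  # editor weighs the responses: 'accept', 'revisions', or 'reject'

# A paper with an unanswered community comment is not published:
print(editorial_decision(["missing control"], [], "accept"))  # -> reject
```

The editor retains full discretion whenever authors do respond; the only hard rule is that silence in the face of comments blocks publication.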
Yes, at the beginning, most papers would have no comments on the ‘open peer review’ stage, but as Nikolai said:
Even big things start small… even if participation rate is low at the beginning, it might grow. The @biorxivpreprint is a great example of low participation rate exploding into a mainstream cultural shift that is transforming scientific communication
— Nikolai Slavov (@slavovLab) November 17, 2017
In a way, this system is a modified version of what is currently in place at Atmospheric Chemistry and Physics (ACP).
Of course, one could also propose a system in which the “pre-publication” peer review stage is discarded altogether, as in ACP. In this scenario, the editor would still personally invite some experts to review (and again, they could do it anonymously, in this case, in a single blind fashion), just to make sure every article is reviewed by some experts. Is this something the life science community would be willing to try? The experience with F1000 Research and the like seems to suggest so.
Lastly, note that this is not meant to compete with or replace post-publication peer review. That would still exist and could play a relevant role after the open peer review window has ended. Also, researchers would be welcome to publish rebuttals of published papers in the same journal, using mechanisms such as the one described before.
Publishing is not the end: thoughts about discussions on published manuscripts and author engagement
A few days ago I tweeted the following:
Publishing is not the end; it’s the beginning of a general discussion. This discussion can take place a lot sooner with preprints
— Alejandro Montenegro (@aemonten) November 5, 2017
I said this because I’ve been thinking of ways to encourage discussions, to maybe stop thinking of publications as the “end”, such that once a paper is published in a journal, I (author, editor) forget about it and don’t have to worry about it anymore. This is particularly important considering that in our current system, only a handful of people get to see a paper before it’s published in a journal, and while such peer review can be relevant for finding some of the flaws and generally improving a paper, it’s silly to think that it will find all possible problems/issues.
For this, we need the “wisdom of crowds”.
I mentioned that with preprints, in which a paper is accessible to everyone (and hence, everyone can -potentially- comment on it) as soon as the authors consider it ready, such a discussion can take place a lot sooner, as the manuscript would be available immediately, without the typical delays associated with publishing in a journal. For this particular post, I will only talk about encouraging discussions after a paper has been published in a journal and the way this could be handled (for discussion on preprints, see this). By “discussions”, I mean getting authors to engage with the people commenting on their publication.
Notably, Nikolai Slavov was just talking about this a few days ago, which prompted me to put into words some of the stuff I’ve been thinking about in the last couple of days. He published a “Point of View” in eLife some time ago in which, among other things, he discussed the following idea:
(…) journals should agree to consider non-anonymous post-publication comments submitted to certain platforms within a certain timeframe after the paper has been published. This timeframe could be as short as a few weeks or a month. Journals would be obliged to publish a response from the authors to all the substantive concerns contained in the comments. Concerns that would require a response would include the following:
-crucial controls that are missing
-major inconsistencies between the data and the main conclusions
-inconsistencies with the published literature that are not discussed in the paper
-failures by readers to reproduce analyses described in the paper
-errors in mathematical proofs.
Science is crucially based on discussions and when research is shared with the community, one should expect that a certain amount of discussion around it will take place. On Twitter, I mentioned that the lack of responses by the authors might be a matter of incentives; authors currently have no incentive to reply to any comments on a published paper. How can we create such an incentive? Discussions benefit the scientific community and should be encouraged.
One way that could be proposed is that journals basically force authors to respond to comments raising substantial concerns, such as the ones mentioned by Nikolai. How could this work? One way could be the following, which is similar to the system in place at eLife. This would work for a journal that normally publishes the peer reviewers’ comments (regardless of anonymity):
a) People who have several concerns about the validity of a published paper would reach out to the authors for comment. (As you can see, in such a case, the authors have no incentive to reply.)
b) If the original authors ignore or disregard the correspondence, then the people raising the concerns (let’s call them the authors of a “letter”) can contact the journal, detailing the concern(s). They should provide evidence that they have tried to contact the authors to engage in a discussion, and a defined deadline for receiving a response should pass before they can reach out to the journal. After this time has passed, the authors of the letter can decide to summarize their concerns instead of immediately writing a full analysis. This should take place within a certain time frame after the original manuscript has been published (although I’m not sure what the best time window would be). Now, if the original authors do reply to the authors of the letter in their initial correspondence, then depending on their response, we could move to either point g) or i) below.
c) The editors would evaluate the letter and determine whether further evaluation is warranted, i.e. whether the letter represents a scientifically supported challenge to the manuscript. If they agree, and a full critique has not already been submitted, the editors would invite the letter’s authors to submit one. The letter may be subjected to peer review, to help the editors reach a decision.
d) After the full critique is received (and peer reviewed, in cases where this is deemed necessary), the editors would reach out to the authors and ask them to address/comment on these issues, stating that failure to reply could result in the publication of an expression of concern, or even a retraction, depending on the concerns raised. This would be “the incentive”. The authors would be given 15 days for a “first reply”, in which they would state whether they agree with the comments and are willing to revise, or whether they disagree with the critique.
e) If the editors, on the other hand, decide that a response or revision is not absolutely required, and that the concerns raised do not represent a challenge to the article (either because they decided that on their initial assessment or after peer review), then the journal will offer the letter’s authors the option of publishing their letter alongside the article as an online comment. The editors would explain their decision, provide the comments of the reviewers (if available), and leave it to the letter’s authors to decide whether they want to publish their original or revised letter as a comment on the original article.
In this scenario, the authors of the original article would be informed about the letter and asked if they want to reply before the letter is published as a comment (but it would not be mandatory). This avoids “silencing” any comment that may exist about the article and allows the community to evaluate it, such that if the community feels that the comment does indeed merit a response, the original editorial decision could be reversed. The authors would, of course, be free to publish their letter on PubMed Commons, PubPeer, a preprint server, etc. at any moment.
f) If the editors agree with the concern, and the original authors agree with the full critique or parts of it and are willing to revise the manuscript, then they are invited to submit a revised version that addresses these comments. If the editors agree with the extent of the revision, then a new version of the manuscript is published, acknowledging the changes. This would use article versioning instead of a “corrigendum” or other “legacy” terms. The new version would have a new DOI denoting the versioning (e.g. DOI XXX.1, XXX.2, etc.; this is already in place in journals like F1000 Research). In this case, as the original authors have agreed to the comments in the critique, the authors of the letter would be acknowledged in the article and their letter would be published as non-anonymous supplementary material, together with the comments of the original peer reviewers (as they are also peer reviewers). As mentioned in the beginning, this proposal is oriented towards journals that publish the peer review comments alongside the article.
g) If, alternatively, a response is received that opposes the critique, and the authors refuse to make any changes, then, similarly, they will be invited to submit their full response; the response will be analyzed by the editors, subjected to peer review if necessary, and published alongside the article if deemed scientifically valid. In this case, the full letter would also be published. This would end the discussion.
h) If the response is deemed unsatisfactory by the editors, and the remaining concerns are major, the editors can reject the response (and publish the critique anyway). They may also decide to publish the response, just for full disclosure. At this point, the editors can decide to publish an expression of concern or retract the manuscript if they consider it necessary.
i) If the authors of the letter are not satisfied with the response and would like to continue the discussion, they will be invited to submit an article to the journal detailing their full critique; the manuscript would be peer reviewed and, if accepted, would be published in the journal as a “Discussion”. This article would be fully linked to the original article and indexed and citable on its own. If the original authors would like to respond, then the whole cycle starts anew.
The best thing about versioning is that you will always land on the latest version of the article, and you will be able not only to see the changes/updates made, but also to check out the previous versions if you want. This would be on top of being able to review the reviewers’ comments.
Now, an important point for discussion (and I’d really like your input on this) is: if faced with such a system, would authors simply choose a journal that doesn’t have such a mechanism in place, i.e. one in which they know they won’t have to engage with criticisms? This, of course, would defeat the whole purpose. One way around this would be for the conversation to take place first at the level of societies and publishers, so that many journals would embrace it. One could argue, however, that it is not a matter of authors; it is in fact journals and editors that not only have no incentive to promote such discussions, but would actually prefer not to go down this route, as it would involve more work and lead to potentially problematic discussions and, in some cases, author alienation. This is another aspect to discuss.
Now, as Michael Hoffmann pointed out to me, there’s also the other aspect to all this: what’s the incentive for researchers to do post-publication peer review in the first place?
You need two to tango.
PubPeer shows us that many scientists do find it relevant to comment on work related to their research, and this might be further encouraged if journals gave a strong signal that they would consider these comments. I’d say that at least for the few people who are currently doing it, and who are willing to publicly discuss and criticize a manuscript published in a journal, in a civil and scientific way, a formal mechanism should be in place, one that would guarantee a debate or, at least, disseminate their concerns.
On a related note, one might argue, “why would I go out of my way to antagonize people who will very likely be reviewers of my next paper/grant?”. One could propose that the authors of the letter be able to remain anonymous to the public and the original authors, but not to the editors, i.e. a single-blind review. An argument against this is that there may be bias in either supporting or rejecting the letter based on the identity of the letter’s authors. For instance, editors might be more willing to consider a letter coming from a big lab. A system of fully anonymous review would resemble the system in place at PubPeer.
Anyway, I just wanted to mention some of the issues I’ve been thinking about that can impact the post-publication discussion of manuscripts published in journals, and author engagement. I’d love to hear your thoughts on how we can address these issues.
For further comments and discussion, follow the threads on Twitter:
I think this might be a matter of incentives. Authors have no incentive (of those used currently in academia) for revising a published paper. You suggest that journals force authors to do it; should they retract if author’s don’t engage? That’s an incentive 🙂
— Alejandro Montenegro (@aemonten) November 14, 2017
I’d love to hear your thoughts! Latest blog post, “Publishing is not the end: thoughts about discussions on published manuscripts and author engagement”. Thanks to @michaelhoffman and @slavovLab for (pre-publication) discussions. https://t.co/gzDhxSVbFT
— Alejandro Montenegro (@aemonten) November 17, 2017
I was interviewed, along with other fellow Chilean scientists, for a piece (in Spanish) on the publish-or-perish culture, the tyranny of the Impact Factor and the pressure on scientists to publish in top journals for career advancement, among other things. The article, written by Tania Opazo and entitled “The Tyranny of Scientific Publications”, appeared today in “La Tercera”.
Go check it out!
Embracing minimal guidelines for the reporting of RT-qPCR experiments: responsibility lies on both ends
In mid-2012, Stephen A. Bustin, Jo Vandesompele and I decided to send a letter to the editor of a glam magazine asking journals to demand that authors provide at least minimal information for the critical evaluation and reproducibility of published RT-qPCR experiments. The amount of information reported for these experiments is inversely proportional to the IF of the journal: the higher the IF, the less information provided (see Nat Methods. 2013 Nov;10(11):1063-7). It was no surprise then, considering that they were the ones we targeted in the letter (although not explicitly), that glam journals (you know which…) refused to publish the letter.
I found the letter while searching for something else on my computer and decided to share it with you, just as it was written back in 2012. The main theme remains as true today as it was back then.
Stephen A. Bustin b
Jo Vandesompele c,d
a Department of Molecular Genetics and Microbiology, Faculty of Biological Sciences, Pontificia Universidad Católica de Chile.
Email: firstname.lastname@example.org. Tel: (+562) 6862348
b Queen Mary University of London, UK
Email: email@example.com. Tel: (+44) 2073777000
c Center for Medical Genetics, Ghent University, Belgium
d Biogazelle, Zwijnaarde, Belgium.
Email: Joke.Vandesompele@UGent.be. Tel: (+32) 479353563.
To the Editor:
Reverse transcription real-time quantitative PCR (RT-qPCR) is currently the most widely used molecular method for the detection and quantification of RNA. RNA integrity and purity, primer sequences and their specificity, assay efficiency and identification of appropriate reference genes are just a few of the essential parameters that must be assessed and reported when using this quantitative technique. Most authors, however, fail to include them, either in the methods or the online supplementary sections, with some arguably not even having performed the appropriate controls. As has been previously reported, this can lead to flawed data and wasted efforts in trying to reproduce results that can be, in some cases, artifacts and thus not biologically relevant (1).
The MIQE guidelines (2,3) were proposed to enhance experimental accuracy and transparency and to enable the research community to assess reported data and reproduce published qPCR experiments. The guidelines should be considered a complete “checklist” to be used as a reference for the reporting of results. While the response to these guidelines has been largely positive, in both commercial and research settings, most currently published articles that include qPCR experiments fail to properly report experimental procedures.
Although it is the authors’ task to embrace minimal guidelines that allow for reproducibility and transparency of their studies, journal editors have a responsibility to demand such information. Even though some publishers have implemented the MIQE guidelines, or at least the bulk of the recommendations, many top-tier journals continue to publish research that grossly lacks the information required not only to reproduce the reported experiments, but also to properly evaluate the conclusions derived from them. Given that the number of retractions is on the increase, that the majority of retractions are caused by poor experimental protocols, and that once a paper is published in the peer-reviewed literature even a rebuttal does not affect its frequency of citation, we would like to issue a wake-up call to journal editors and publishers.
We urge journals to demand that all manuscripts include such minimum information, in the form of the MIQE or other guidelines, to ensure the accuracy and reproducibility of the reported results. The use of online supplementary sections renders the often-used “space constraint” argument no longer valid.
1. Lanoix, D. et al. Quantitative PCR Pitfalls: The Case of the Human Placenta. Mol Biotechnol (2012).
2. Bustin, S.A. et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem 55, 611-622 (2009).
3. Bustin, S.A. et al. Primer sequence disclosure: a clarification of the MIQE guidelines. Clin Chem 57, 919-921 (2011).
Quick and simple post, considering it is Jan 1st and I’m still tired from last night, and the fact that I just came back from, you guessed it, the lab.
Anyway, I wanted to know what the most recurring topics were in the top two glam journals during 2013, so I obtained the 2013 PubMed-indexed abstracts from Nature and Science using EBOT and then used Wordle to generate a word cloud.
Here are the results:
It’s pretty easy to do this for any other journal or any other query in PubMed using EBOT. If you want to do something similar for, say, your country or institution and you need help, let me know.
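For those who prefer to skip Wordle, here is a hedged sketch of the frequency step behind a word cloud: given abstract text (e.g. fetched from PubMed via EBOT or the NCBI E-utilities), count the most common words after dropping stopwords. The stopword list and example abstracts are illustrative only, and this is a toy version of what a word-cloud tool does internally.

```python
# Toy word-frequency counter for abstracts; stopwords and inputs are
# illustrative. A real word cloud would feed these counts to a renderer.
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "in", "a", "to", "we", "that", "is", "for"}

def top_words(abstracts, n=5):
    """Return the n most frequent non-stopword terms across all abstracts."""
    words = re.findall(r"[a-z]+", " ".join(abstracts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(n)

abstracts = [
    "We show that the protein regulates cell growth.",
    "The protein is required for cell division in vivo.",
]
print(top_words(abstracts, n=3))
```

The resulting (word, count) pairs are exactly what word-cloud libraries consume, so scaling this to a full year of Nature or Science abstracts is just a matter of swapping in the real text.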
Percentage-wise, there is less bullshit in specialized journals. But there is still a lot of it there. Notably, the percent of bullshit that draws attention and is dealt with in decisive fashion is definitely higher in glamour mags.
-DK, as a comment to a post discussing peer review.