A few days ago, a piece was posted at ASAPbio which reminded me of an idea I had a long time ago about a new model for publishing and peer review in the life sciences.
I described my idea on Twitter, but I decided to provide a little more detail about it.
My idea was based on submitting to a suite of journals, one that would include both a sound-science title (like PLoS One, Health Science Reports, Biology Open, PeerJ, and others) and journals using ‘significance’ or ‘impact’ as a criterion for publication (or any other type of journal; the more, the merrier). These journals may differ on a number of issues besides acceptance criteria.
This idea is based on a “bottom-up” approach, which differs from the current “top-down” cascading strategy normally employed by authors and publishers. By top-down, I mean submitting to a high-profile journal and then ‘cascading’ down to other journals until you find a home. In many cases, the last stop on that ‘journal-shopping’ journey is a sound-science title. Here, I propose the opposite: you would start there.
In my proposal, you would submit your manuscript to the suite. There, it would be evaluated, through standard peer review (which could include some nice modifications), for scientific and technical soundness only. In a way, this means you would be submitting to “the bottom” title. If the manuscript, after revisions when necessary, is determined to be scientifically and technically sound, it would be accepted for publication “in the suite”.
What does that mean? It means that after acceptance, your manuscript will be published by one of the journals in the suite. Once a manuscript has been accepted based on objective criteria (technical soundness), the editors of the other journals in the suite can take a look and see if they are interested in it for their journals. The more selective titles in the suite (i.e. those interested in featuring groundbreaking reports and studies that they think significantly advance a field) will evaluate the already-accepted manuscript, and if the topic is of interest to them, they can offer to publish it in their journal. If multiple offers are received, the author will be able to choose where to publish. If none of the other titles in the suite are interested, or if the author declines every offer, then the manuscript is published in the sound-science journal.
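As a rough sketch, the routing logic described above could be modeled as follows. Everything here is hypothetical (the function, journal names, and manuscript fields are all made up for illustration); it's just one way to make the flow concrete:

```python
# Illustrative sketch of the "bottom-up" suite flow. All names are hypothetical.

def review_for_soundness(manuscript):
    """Standard peer review, judging scientific/technical soundness only."""
    return manuscript.get("technically_sound", False)

def route_through_suite(manuscript, selective_journals, sound_science_journal):
    """Route a manuscript through the suite: soundness review first,
    then optional offers from selective titles, defaulting to the
    sound-science journal."""
    if not review_for_soundness(manuscript):
        return None  # not accepted into the suite; revise and resubmit
    # Accepted "in the suite": selective titles may now make offers,
    # but none of them can request further experiments at this point.
    offers = [j for j in selective_journals if j["interested_in"](manuscript)]
    if offers:
        # The author picks among offers (here, simply the first one).
        return offers[0]["name"]
    return sound_science_journal  # default home: the sound-science title

selective = [{"name": "Selective Journal A",
              "interested_in": lambda m: m.get("topic") == "circadian clocks"}]
home = route_through_suite(
    {"technically_sound": True, "topic": "circadian clocks"},
    selective, "Sound Science Journal")
# → "Selective Journal A"; a sound manuscript with no offers would
#   land in "Sound Science Journal" instead.
```

The key design point the sketch captures is that the soundness check happens exactly once, before any selective title sees the manuscript, so no journal in the suite can gate publication on extra experiments.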
A crucial aspect of this proposal is that after acceptance based on technical soundness, the other journals in the suite can no longer request additional experiments: the manuscript has already been reviewed and accepted. This would eliminate one of the complaints authors usually have: “they only asked for this experiment because I submitted to [insert glam title]; if I had submitted to [insert lower-tier title], they wouldn’t have asked for this“.
Now, the suite of titles can have any number of origins. It could be ‘publisher-based’, ‘society-based’ or just a regular centralized system (connecting authors to many titles from different publishers), as long as the peer review is done with the criteria outlined above.
I think that uncoupling the required revisions from the journal the manuscript is submitted to is an important step to take. By this, I mean that reviewers should not be requesting experiments based on the journal the manuscript was submitted to; it either is scientifically and technically sound or it isn’t. If it is, then editors of specific journals can offer to publish it based on subjective criteria like ‘significance’ or ‘impact’. That’s fine, that’s their prerogative: they can choose what they want to publish, but this should not delay communication of technically sound manuscripts. Preprints are one way to avoid such delays, and I’m all for preprints. In my proposal, manuscripts could also be deposited as preprints; disclosure and “initial validation” are different things.
The point is, a fair, ‘sound-science’-based peer review, independent of the journal, is probably a step in the right direction. People might argue, though, that journal-independent platforms (e.g. Axios) have not worked, as journals nevertheless asked for additional reviews from reviewers they liked and trusted. Here, that wouldn’t exist: journals in the suite would not ask for additional revisions after the initial peer review. The suites would be created among journals and editors who trust each other. In some of the other ideas that have been proposed (e.g. the “Peer Feedback” proposal), however, the aforementioned problem of independent review is not fully addressed, although they mention that “reviewers will be selected through partnerships with several scientific societies”. If the journals in the ‘Peer Feedback’ project took the reviewed manuscripts without asking for additional experiments, then both of our proposals would be quite similar. Indeed, in the “Peer Feedback” proposal they state that “journals may indicate whether they will accept [the revised manuscript] in its revised form or request additional experiments or peer review”.
I was excited to read something similar to what I’ve been thinking of; as I said, ideas of this kind are, in my opinion, a step in the right direction.
Part 2 of “Publishing is not the end: thoughts about discussions on published manuscripts and author engagement”
Recently, I posted an idea for a mechanism journals could put in place not only to allow people to comment on published articles, but more importantly, to get authors to actually take part in these discussions and respond, and when appropriate, amend their articles.
This is because presently, authors don’t really engage and, in fact, have no incentive to respond; there’s a culture of “published, ergo forgotten”. Considering the importance of discussions in science, and our present technology, there’s no need for this to continue.
Based on discussions raised by my previous post, some of them on Twitter (thanks Tim!), I wanted to propose another mechanism (which might actually be much simpler than the other) that journals could employ to make use of the ‘wisdom of crowds’ and encourage authors to take part in discussions raised by the community. This mechanism also makes the community part of the whole process and encourages participation, as their comments can certainly make a difference.
As I stated in that previous post,
This is particularly important considering that in our current system, only a handful of people get to see a paper before it’s published in a journal, and while such peer review can be relevant for finding some of the flaws and generally improving a paper, it’s silly to think that it will find all possible problems/issues.
Allowing the community to comment on a published paper, knowing that their comments can have a real impact, is the important issue. I strongly support preprints and think it would be crucial to get feedback at that stage, but at present, that is not common. For the sake of discussion, let’s again propose a way to encourage authors to engage with the discussions taking place around their paper published in a journal.
Here, an author would submit the article to a journal (and at the same time, to a preprint server ;)), where the editor would subject it to pre-publication peer review. The editor would choose the reviewers (s)he likes best for this manuscript and knows are suitable experts to evaluate this work. If after the necessary rounds of review (ideally, consultative peer review or one with ‘cross-commenting’), the editor considers the article acceptable for the journal, the manuscript would be “Conditionally Accepted”, which would basically mean that it can move on to the next stage of the process: open peer review.
I know this term can mean different things to different people (like ‘epigenetics’ 😉 ), but here, I mean “open to whoever wants to participate”. In this setting, the peer-reviewed, editor-approved, non-typeset version is made public and the community is invited to comment on the manuscript within a certain time window. Anyone could comment and, if they prefer, maintain their anonymity. A system could be put in place for these comments to have a DOI, if the community would like that. People commenting are also free to cross-post their comments on other platforms, for instance, if the manuscript is deposited on a preprint server.
During that window, the authors can address individual comments if they want, or wait to address them all together at the end of the time window. The authors may decide to revise their manuscript in response to these comments or simply respond to them.
A certain amount of time (which would need to be defined) after the window closes, the authors must submit their responses to the comments posted by the community. The editor will review their responses and decide, at this point, the next step: a) accept (the manuscript would be moved into production), b) revisions, or c) reject. If the manuscript is accepted, the reviewers’ comments (anonymous if they so prefer) and the author responses would be posted alongside the manuscript.
This would avoid some of the problems raised about the previously proposed approach, which involves a series of steps and is, overall, more cumbersome and would likely involve a lot more work for all parties involved.
Problems with this new approach, however, include: 1) the time from submission to acceptance will increase, and 2) the original peer reviewers might feel they wasted their time, particularly if the ‘open peer review’ stage leads to a decision that differs from their recommendation (I’d argue that happens today anyway). This is, of course, disregarding the ‘authors have no time to reply to comments’ objection, commonly raised against any proposal that attempts to encourage authors to participate in discussions of their manuscripts.
Remember, the idea is to encourage authors to respond to concerns raised by the community. In this way, the incentive is clear: no response, no publication. If there are no comments, the paper is published as approved in the pre-publication peer review process.
Yes, at the beginning, most papers would have no comments on the ‘open peer review’ stage, but as Nikolai said:
Even big things start small… even if participation rate is low at the beginning, it might grow. The @biorxivpreprint is a great example of low participation rate exploding into a mainstream cultural shift that is transforming scientific communication
— Nikolai Slavov (@slavovLab) November 17, 2017
In a way, this system is a modified version of what is currently in place at Atmospheric Chemistry and Physics (ACP).
Of course, one could also propose a system in which the “pre-publication” peer review stage is discarded altogether, as at ACP. In this scenario, the editor would still personally invite some experts to review (and again, they could do it anonymously; in this case, in a single-blind fashion), just to make sure every article is reviewed by some experts. Is this something the life science community would be willing to try? The experience with F1000 Research and the like seems to suggest so.
Lastly, note that this is not meant to compete with or replace post-publication peer review. That would still exist and could play a relevant role after the open peer review window has ended. Also, researchers would be welcome to publish rebuttals of published papers in the same journal, using mechanisms such as the one described before.
Publishing is not the end: thoughts about discussions on published manuscripts and author engagement
A few days ago I tweeted the following:
Publishing is not the end; it’s the beginning of a general discussion. This discussion can take place a lot sooner with preprints
— Alejandro Montenegro (@aemonten) November 5, 2017
I said this, because I’ve been thinking of ways to encourage discussions, to maybe stop thinking of publications as the “end”, such that once a paper is published in a journal, I (author, editor) forget about it and don’t have to worry about it anymore. This is particularly important considering that in our current system, only a handful of people get to see a paper before it’s published in a journal, and while such peer review can be relevant for finding some of the flaws and generally improving a paper, it’s silly to think that it will find all possible problems/issues.
For this, we need the “wisdom of crowds”.
I mentioned that with preprints, where a paper is accessible to everyone (and hence everyone can, potentially, comment on it) as soon as the authors consider it ready, such a discussion can take place a lot sooner, as the manuscript would be available immediately, without the typical delays associated with publishing in a journal. For this particular post, I will only talk about encouraging discussions after a paper has been published in a journal and how this could be handled (for discussion on preprints, see this). By “discussions”, I mean getting authors to engage with the people commenting on their publication.
Notably, Nikolai Slavov was just talking about this a few days ago, which prompted me to put into words some of the stuff I’ve been thinking about in the last couple of days. He published a “Point of View” in eLife some time ago in which, among other things, he discussed the following idea:
(…) journals should agree to consider non-anonymous post-publication comments submitted to certain platforms within a certain timeframe after the paper has been published. This timeframe could be as short as a few weeks or a month. Journals would be obliged to publish a response from the authors to all the substantive concerns contained in the comments. Concerns that would require a response would include the following:
-crucial controls that are missing
-major inconsistencies between the data and the main conclusions
-inconsistencies with the published literature that are not discussed in the paper
-failures by readers to reproduce analyses described in the paper
-errors in mathematical proofs.
Science is fundamentally based on discussion, and when research is shared with the community, one should expect a certain amount of discussion around it to take place. On Twitter, I mentioned that the lack of responses by the authors might be a matter of incentives; authors currently have no incentive to reply to any comments on a published paper. How can we create such an incentive? Discussions benefit the scientific community and should be encouraged.
One proposal would be for journals to essentially require authors to respond to comments raising substantial concerns, such as the ones mentioned by Nikolai. How could this work? One way could be the following, which is similar to the system in place at eLife. This would work for a journal that normally publishes the peer reviewers’ comments (regardless of anonymity):
a) People who have several concerns about the validity of a published paper would reach out to the authors for comment. (As you can see, in such a case, the authors have no incentive to reply.)
b) If the original authors ignore or disregard the correspondence, the people raising the concerns (let’s call them the authors of a “letter”) can contact the journal, detailing the concern(s). They should provide evidence that they have tried to contact the authors to engage in a discussion. A deadline for receiving a response should be set before the letter’s authors can reach out to the journal; after this time has passed, they can choose to summarize their concerns instead of immediately writing a full analysis. This should take place within a certain time frame after the original manuscript has been published (although I’m not sure what the best time window would be). Now, if the original authors do reply to the authors of the letter in their initial correspondence, then depending on their response, we could move to either point g) or i) below.
c) The editors would evaluate the letter and determine whether further evaluation is warranted, i.e. whether the letter represents a scientifically supported challenge to the manuscript. If they agree, then, if not submitted already, the editors would invite the authors to submit a full critique. The letter may be subjected to peer review, to help the editors reach a decision.
d) After the full critique is received (and peer reviewed, where deemed necessary), the editors would reach out to the authors and ask them to address/comment on these issues, stating that failure to reply could result in the publication of an expression of concern, or even a retraction, depending on the concerns raised. This would be “the incentive”. The authors would be given 15 days for a “first reply”, in which they would state whether they agree with the comments and are willing to revise, or whether they disagree with the critique.
e) If, on the other hand, the editors decide that a response or revision is not absolutely required, and that the concerns raised do not represent a challenge to the article (either on their initial assessment or after peer review), then the journal would offer to publish the letter alongside the article as an online comment. The editors would explain their decision, provide the comments of the reviewers (if available), and leave it to the letter’s authors to decide whether they want to publish their original or revised letter as a comment on the original article.
In this scenario, the authors of the original article would be informed about the letter and asked if they want to reply before it is published as a comment (but a reply would not be mandatory). This avoids “silencing” any comment that may exist about the article and allows the community to evaluate it, such that if the community feels the comment does indeed merit a response, the original editorial decision could be reversed. The letter’s authors would, of course, be free to publish their letter on PubMed Commons, PubPeer, a preprint server, etc. at any moment.
f) If the editors agree with the concern, and the original authors agree with the full critique or parts of it and are willing to revise the manuscript, then they are invited to submit a revised version that addresses these comments. If the editors agree with the extent of the revision, a new version of the manuscript is published, acknowledging the changes. This would use article versioning instead of a “corrigendum” or other “legacy” terms. The new version would have a new DOI denoting the version (e.g. DOI XXX.1, XXX.2, etc.; this is already in place in journals like F1000 Research). In this case, as the original authors have agreed to the comments in the critique, the authors of the letter would be acknowledged in the article and their letter would be published as non-anonymous supplementary material, together with the comments of the original peer reviewers (as the letter’s authors are also, in effect, peer reviewers). As mentioned in the beginning, this proposal is oriented toward journals that publish the peer review comments alongside the article.
g) If, alternatively, a response is received that opposes the critique, and the authors refuse to make any changes, then they will similarly be invited to submit their full response; the response will be analyzed by the editors, subjected to peer review if necessary, and published alongside the article if deemed scientifically valid. In this case, the full letter would also be published. This would end the discussion.
h) If the response is deemed unsatisfactory by the editors, and the remaining concerns are major, the editors can reject the response (and publish the critique anyway). They may also decide to publish the response, just for full disclosure. At this point, the editors can decide to publish an expression of concern or retract the manuscript if they consider it necessary.
i) If the authors of the letter are not satisfied by the author response and would like to continue the discussion further, they will be invited to submit an article to the journal detailing their full critique; the manuscript would be peer reviewed and if accepted, would be published in the journal as a “Discussion”. This article will be fully linked to the original article and indexed and citable on its own. If the original authors would like to respond, then the whole cycle starts anew.
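The workflow in steps a) through i) can be summarized as a small state machine. This is only an illustrative sketch of the proposal (the state and event names are invented for this example), but it makes the branching explicit:

```python
# Illustrative state machine for the letter-handling workflow (steps a-i above).
# All state/event names are hypothetical; this models a proposal, not a real system.

TRANSITIONS = {
    # step c: editors triage the letter
    ("letter_received", "editors_see_no_challenge"): "comment_offered",        # step e
    ("letter_received", "editors_see_challenge"): "full_critique_invited",     # step c
    # steps f-h: how the original authors respond to the full critique
    ("full_critique_invited", "authors_agree"): "revised_version_published",   # step f
    ("full_critique_invited", "authors_oppose"): "response_published",         # step g
    ("full_critique_invited", "response_unsatisfactory"):
        "expression_of_concern_or_retraction",                                 # step h
    # step i: the letter's authors escalate to a "Discussion" article
    ("response_published", "letter_authors_unsatisfied"): "discussion_article",
}

def next_state(state, event):
    """Advance the workflow; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

For example, a letter the editors judge to be a supported challenge moves from `letter_received` to `full_critique_invited`; if the original authors then agree to revise, the process ends in `revised_version_published`.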
The best thing about versioning is that you always land on the latest version of the article, and you can not only see the changes/updates made, but also check out previous versions if you want. This would be on top of being able to review the reviewers’ comments.
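A minimal sketch of how such versioning could work, assuming a scheme where each revision gets a suffixed DOI as described above (the class name and base DOI below are purely illustrative):

```python
# Hypothetical sketch of per-version DOIs (e.g. base DOI plus .1, .2, ...),
# similar in spirit to the scheme used by F1000 Research.

class VersionedArticle:
    def __init__(self, base_doi):
        self.base_doi = base_doi
        self.versions = []  # list of (doi, text) tuples, oldest first

    def publish_revision(self, text):
        """Mint a suffixed DOI for the new version and record it."""
        doi = f"{self.base_doi}.{len(self.versions) + 1}"
        self.versions.append((doi, text))
        return doi

    def latest(self):
        """Readers always land on the most recent version..."""
        return self.versions[-1]

    def history(self):
        """...but every earlier version remains retrievable."""
        return list(self.versions)

article = VersionedArticle("10.1234/example")
article.publish_revision("original text")   # → "10.1234/example.1"
article.publish_revision("revised text")    # → "10.1234/example.2"
# article.latest() now returns the .2 version; .1 stays in history().
```

The point the sketch illustrates is that a "corrigendum" becomes unnecessary: the correction is simply the next version, and the full trail stays visible.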
Now, an important point for discussion (and I’d really like your input on this) is whether, faced with such a system, authors would simply choose a journal that doesn’t have such a mechanism in place, i.e. one in which they know they won’t have to engage with criticisms. This, of course, would defeat the whole purpose. One way around this would be for the conversation to take place first at the level of societies and publishers, so that many journals would embrace it. One could argue, however, that it is not a matter of authors: it is in fact journals and editors who not only lack an incentive to promote such discussions, but would actually prefer not to go down this route, as it would involve more work and lead to potentially problematic discussions and, in some cases, author alienation. This is another aspect to discuss.
Now, as Michael Hoffmann pointed out to me, there’s also the other aspect to all this: what’s the incentive for researchers to do post-publication peer review in the first place?
You need two to tango.
PubPeer shows us that many scientists do find it relevant to comment on work related to their research, and this might be further encouraged if journals gave a strong signal that they would consider these comments. I’d say that for at least the few people who are currently doing it, and who are willing to publicly discuss and criticize a manuscript published in a journal (in a civil and scientific way), a formal mechanism should be in place, one that would guarantee a debate, or at least disseminate their concerns.
On a related note, one might argue, “why would I go out of my way to antagonize people who will very likely be reviewers of my next paper/grant?”. One could propose that the authors of the letter be able to remain anonymous to the public and the original authors, but not to the editors, i.e. a single-blind review. An argument against this is that there may be bias in either supporting or rejecting the letter based on the identity of the letter’s authors. For instance, editors might be more willing to consider a letter coming from a big lab. A system of fully anonymous review would resemble the system in place at PubPeer.
Anyway, I just wanted to mention some of the issues I’ve been thinking about that can impact the post-publication discussion of manuscripts published in journals, and author engagement. I’d love to hear your thoughts on how we can address these issues.
For further comments and discussion, follow the threads on Twitter:
I think this might be a matter of incentives. Authors have no incentive (of those used currently in academia) for revising a published paper. You suggest that journals force authors to do it; should they retract if author’s don’t engage? That’s an incentive 🙂
— Alejandro Montenegro (@aemonten) November 14, 2017
I’d love to hear your thoughts! Latest blog post, “Publishing is not the end: thoughts about discussions on published manuscripts and author engagement”. Thanks to @michaelhoffman and @slavovLab for (pre-publication) discussions. https://t.co/gzDhxSVbFT
— Alejandro Montenegro (@aemonten) November 17, 2017
I was interviewed, along with other fellow Chilean scientists, for a piece (in Spanish) on the publish-or-perish culture, the tyranny of the Impact Factor and the pressure on scientists to publish in top journals for career advancement, among other things. The article, written by Tania Opazo and entitled “The Tyranny of Scientific Publications”, appeared today in “La Tercera”.
Go check it out!
I was asked to write a letter of recommendation for someone. This is the first one I’ve ever written. Well, technically, I’ve written letters of recommendation before…but for myself; some professors I’ve asked for letters have asked me to provide a draft they could then edit. I know this is usually frowned upon by some people, but apparently it is more common than I imagined.
Anyway, let’s rephrase my statement from before: this is the first letter I’ve written for someone else. A former undergraduate student in our lab asked me to write a letter for her application to grad school in the US. It was great to be asked to do this, but agreeing to write one is a big responsibility. The idea is to describe the applicant’s strengths and weaknesses with regard to the program they are applying to; to describe whether they are a good fit for the program and mention why. Agreeing to write one, in my opinion, should be done with the idea that the writer actually supports the applicant’s plan: you want the person to be accepted into the program, and I guess that if you can’t honestly recommend the person, maybe you should simply decline to write the letter, or mention your reservations to the student and let them decide whether they still want you to write it.
I’m positive I’ll be writing more letters in the future, so I took this opportunity to study a little more about writing effective letters. I read a bunch of sample letters online and also read an addendum to the HHMI book “Making the Right Moves: A Practical Guide to Scientific Management for Postdocs and New Faculty”, entitled “Writing a Letter of Recommendation”. It was very useful and I recommend it. It includes a bunch of tips which were very helpful for knowing what to mention and what not to mention in the letter.
Basically, I started by asking the student about her plans and why she wanted to join the program. In the end, I simply asked for her letter of intent, which included all this information. Knowing her goals and motivation for the specific program was very useful for drafting the letter. Additionally, I asked her to send me her CV and asked if there was any particular aspect she wanted me to specifically discuss in the letter. In her case, she wanted me to center my discussion on her research experience. Note that I did not ask her to tell me which aspect I should compliment; simply which one she wanted me to focus on. I asked this because people usually get letters from different people highlighting different aspects of their CV/training.
What I wrote
In the letter, I basically introduced myself and described my relationship to the student. In my case, we shared a lab bench for over a year and I taught a few classes in which she was a student. I then mentioned how I ranked her among all undergrads I’ve met in a similar setting (i.e. in the lab). People usually do this by saying something like “In my opinion, candidate x is among the top 5 percent of the students I have known”. I then went on to describe the project she worked on while in the lab and her findings, highlighting not only the technical side of the project (her knowledge of lab techniques, the ones she had to implement and troubleshoot, etc.), but also aspects of her personality (personal attributes) that I considered were relevant for its development (i.e. can work independently, has a critical mind, is determined, etc). The idea is to be specific, to denote that you truly know the candidate. I then discussed writing and oral communication skills, as they relate to how she communicated her scientific findings.
I thought it would also be relevant to mention some shortcomings she had when she joined that lab that have now been improved, with specific examples as to how this has changed. As stated in the HHMI document I mentioned above, “You don’t just have to describe the candidate as he or she is right now—you can discuss the development the person has undergone”.
Then I discussed how good a fit her skills are to the specific program she applied to and gave my impression about her likelihood to be a successful student in that program.
I finished the letter summarizing my enthusiasm for the candidate and highlighting the skills I think can make her a good asset to the program. The last line was just my offer to help if further information about the candidate was required. In all, the letter was 2 pages long.
I think I did an acceptable job. When the time comes that I start reading letters from others, I’ll probably learn more tips on writing letters, and I’ll try to make them more effective and help students as much as I can.
I hope she gets in!
Together with Paulo Canessa and Luis Larrondo, we wrote an extensive review on circadian rhythms in fungi, which was published in Advances in Genetics. We focused on the well-characterized clock of the ascomycete Neurospora crassa, describing the molecular basis of its pacemaker, together with how it synchronizes with the environment and how it controls the rhythmic expression of thousands of genes. We mostly centered our discussion on recent research on all of these topics. We also described several studies reporting rhythms in other fungi and towards the end of the article, we focused on the clock of the pathogenic fungus Botrytis cinerea, for which we have recently described a functional circadian clock that plays a major role in determining the outcome of the Arabidopsis-Botrytis interaction.
Here’s the info:
Around the Fungal Clock: Recent Advances in the Molecular Study of Circadian Clocks in Neurospora and Other Fungi
Advances in Genetics
Available online 27 October 2015
In Press, Corrected Proof
Alejandro Montenegro-Montero, Paulo Canessa, Luis F. Larrondo
Night follows day and as a consequence, organisms have evolved molecular machineries that allow them to anticipate and respond to the many changes that accompany these transitions. Circadian clocks are precise yet plastic pacemakers that allow the temporal organization of a plethora of biological processes. Circadian clocks are widespread across the tree of life and while their exact molecular components differ among phyla, they tend to share common design principles. In this review, we discuss the circadian system of the filamentous fungus Neurospora crassa. Historically, this fungus has played a key role in the genetic and molecular dissection of circadian clocks, aiding in their detailed mechanistic understanding. Recent studies have provided new insights into the daily molecular dynamics that constitute the Neurospora circadian oscillator, some of which have questioned traditional paradigms describing timekeeping mechanisms in eukaryotes. In addition, recent reports support the idea of a dynamic network of transcription factors underlying the rhythmicity of thousands of genes in Neurospora, many of which oscillate only under specific conditions. Besides Neurospora, which harbors the best characterized circadian system among filamentous fungi, the recent characterization of the circadian system of the plant-pathogenic fungus Botrytis cinerea has provided additional insights into the physiological impact of the clock and potential additional functions of clock proteins in fungi. Finally, we speculate on the presence of FRQ or FRQ-like proteins in diverse fungal lineages.
The Case for Transcriptional Regulation and Coupling as Relevant Determinants of the Circadian Transcriptome and Proteome in Eukaryotes
My latest paper is out, a commentary, in which together with Luis Larrondo, we evaluate evidence regarding the relative contribution of the different steps of gene expression (transcriptional, post-transcriptional, translational, etc.) in determining daily mRNA and protein rhythms in eukaryotes. We argue that it’s too early to assign a predominant role for one specific stage in this process, as some papers have done, due to a variety of biological and particularly, technical, reasons. We further propose that RNAPII recruitment is rhythmic on a global scale, setting the stage for global nascent transcription, but that tissue-specific mechanisms ultimately locally specify the different processes under clock control.
Here’s the link and info:
J Biol Rhythms, published online before print October 7, 2015. doi: 10.1177/0748730415607321
Circadian clocks drive daily oscillations in a variety of biological processes through the coordinated orchestration of precise gene expression programs. Global expression profiling experiments have suggested that a significant fraction of the transcriptome and proteome is under circadian control, and such output rhythms have historically been assumed to rely on the rhythmic transcription of these genes. Recent genome-wide studies, however, have challenged this long-held view and pointed to a major contribution of posttranscriptional regulation in driving oscillations at the messenger RNA (mRNA) level, while others have highlighted extensive translational regulation by the clock, regardless of mRNA rhythms. There are various examples of genes that are uniformly transcribed throughout the day but that exhibit rhythmic mRNA levels, and of flat mRNAs with oscillating protein levels, and such observations have largely been considered to result from independent regulation at each step. These studies have thereby overlooked any connections, or coupling, that might exist between the different steps of gene expression and the impact that any of them could have on subsequent ones. Here, we argue that due to both biological and technical reasons, the jury is still out on the determination of the relative contributions of each of the different stages of gene expression in regulating output molecular rhythms. In addition, we propose that through a variety of coupling mechanisms, gene transcription (even when apparently arrhythmic) might play a much more relevant role in determining oscillations in gene expression than currently estimated, regulating rhythms at downstream steps. Furthermore, we posit that eukaryotic genomes regulate daily RNA polymerase II (RNAPII) recruitment and histone modifications genome-wide, setting the stage for global nascent transcription, but that tissue-specific mechanisms locally specify the different processes under clock control.
Embracing minimal guidelines for the reporting of RT-qPCR experiments: responsibility lies on both ends
In mid-2012, Stephen A. Bustin, Jo Vandesompele and I decided to send a letter to the editor of a glam magazine asking journals to require that authors provide at least minimal information for the critical evaluation and reproducibility of published RT-qPCR experiments. The amount of information reported for these experiments is inversely proportional to the IF of the journal: the higher the IF, the less information provided (see Nat Methods. 2013 Nov;10(11):1063-7). It was no surprise then, considering that they were the ones we targeted in the letter (although not explicitly), that glam journals (you know which…) refused to publish it.
I found the letter while searching for something else on my computer and decided to share it with you, just as it was written back in 2012. Its main message is as true today as it was back then.
Stephen A. Bustin (b)
Jo Vandesompele (c, d)
(a) Department of Molecular Genetics and Microbiology, Faculty of Biological Sciences, Pontificia Universidad Católica de Chile. Email: firstname.lastname@example.org. Tel: (+562) 6862348
(b) Queen Mary University of London, UK. Email: email@example.com. Tel: (+44) 2073777000
(c) Center for Medical Genetics, Ghent University, Belgium
(d) Biogazelle, Zwijnaarde, Belgium. Email: Joke.Vandesompele@UGent.be. Tel: (+32) 479353563.
To the Editor:
Reverse transcription real-time quantitative PCR (RT-qPCR) is currently the most widely used molecular method for the detection and quantification of RNA. RNA integrity and purity, primer sequences and their specificity, assay efficiency and identification of appropriate reference genes are just a few of the essential parameters that must be assessed and reported when using this quantitative technique. Most authors, however, fail to include them, either in the methods or the online supplementary sections, with some arguably not even having performed the appropriate controls. As has been previously reported, this can lead to flawed data and wasted efforts in trying to reproduce results that can be, in some cases, artifacts and thus not biologically relevant (1).
The MIQE guidelines (2,3) were proposed to enhance experimental accuracy and transparency and to enable the research community to assess reported data and reproduce published qPCR experiments. The guidelines should be considered a complete “checklist” to be used as a reference for the reporting of results. While the response to these guidelines has been largely positive, in both commercial and research settings, most currently published articles that include qPCR experiments fail to properly report experimental procedures.
Although it is the authors’ task to embrace minimal guidelines that allow for reproducibility and transparency of their studies, journal editors have a responsibility to demand such information. Even though some publishers have implemented the MIQE guidelines, or at least the bulk of their recommendations, many top-tier journals continue to publish research that grossly lacks the information required not only to reproduce the reported experiments, but also to properly evaluate the conclusions derived from them. Given that the number of retractions is on the increase, that the majority of retractions are caused by poor experimental protocols, and that, once published in the peer-reviewed literature, even a rebuttal does not affect a paper’s frequency of citation, we would like to issue a wake-up call to journal editors and publishers.
We urge journals to demand that all manuscripts include such minimum information, in the form of the MIQE or other guidelines, to ensure the accuracy and reproducibility of the reported results. The availability of online supplementary sections renders the often-used “space constraint” argument no longer valid.
1. Lanoix, D. et al. Quantitative PCR Pitfalls: The Case of the Human Placenta. Mol Biotechnol (2012).
2. Bustin, S.A. et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem 55, 611-622 (2009).
3. Bustin, S.A. et al. Primer sequence disclosure: a clarification of the MIQE guidelines. Clin Chem 57, 919-921 (2011).
A quick and simple post, considering it’s Jan 1st, I’m still tired from last night, and I just came back from, you guessed it, the lab.
Anyway, I wanted to know what the most recurring topics were on the top two glam journals during 2013, so I obtained the 2013 PubMed-indexed abstracts from Nature and Science using EBOT and then used Wordle to generate a word cloud.
Here are the results:
It’s pretty easy to do this for any other journal, or for any other PubMed query, using EBOT. If you want to do something similar for, say, your country or institution and you need help, let me know.
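For anyone who prefers scripting it, here is a minimal sketch of the word-counting half of the exercise using only Python’s standard library. The abstracts and the stopword list are illustrative placeholders: in practice you would download the actual 2013 abstracts via EBOT (or NCBI’s E-utilities) and feed the resulting frequencies to Wordle or any word-cloud tool.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "we", "is", "that",
             "for", "by", "with", "on", "these", "this", "are", "as"}

def word_frequencies(abstracts, top_n=10):
    """Return the top_n most frequent non-stopword words across abstracts."""
    counts = Counter()
    for text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)

# Two made-up abstract snippets standing in for the real PubMed download:
abstracts = [
    "Circadian clocks drive daily oscillations in gene expression.",
    "Gene expression rhythms are driven by circadian clocks.",
]
print(word_frequencies(abstracts, top_n=3))
```

Wordle essentially scales each word’s font size by this kind of frequency count, so the `most_common` output is the entire input a word-cloud renderer needs.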