Be positive! From witch hunts to the new reward culture

Disclaimer: This is a guest blog post by Sophien Kamoun, a highly respected group leader in Plant Biology at The Sainsbury Laboratory, Norwich. We decided to publish his post as a valuable contribution to the debate on ‘open science’. However, the opinion represented does not necessarily reflect the opinions of both of us.

I’m a proponent of open science. Science is continuously in flux. Our knowledge, theories and concepts are continuously evolving. The essence of science is to capture new information, integrate it into current models and produce more elaborate concepts. Therefore science cannot thrive without a vibrant culture of discussion and debate. Open science widens the net. Anyone can access the data and comment on it. A tweet by someone you don’t know could lead you to think differently about your science and help you to develop new concepts. We move from elitist old boys’ clubs to an open-door party. This is healthy for science.

As a native of Tunisia, I find that open science matches my personality well. I grew up in the typical Mediterranean culture of vibrant discussion, constant arguing and, yes, the occasional bickering. These traits are embedded in me. I know they can be irritating to others. But I believe they did help me develop into an engaged citizen and scientist. A paucity of critical thinking and engagement among the citizens of modern and, presumably, well-educated societies is one of the dramas of our age.

It is therefore natural for me to support all efforts at post-publication review. Platforms such as PubPeer aim to extend the discussion and analysis beyond publication in the peer-reviewed literature. They are perfect for this era of journal proliferation and internet communication. They also address well-documented flaws in pre-publication peer review.

But PubPeer has evolved, most certainly against the wishes of its anonymous founders, into a modern-day “witch hunt” platform. Many comments seem valid. But pointless and frivolous comments are being posted, and have certainly increased in frequency in recent weeks (at least in plant biology). How to address this? How to buffer or eliminate such posts while maintaining the original goal of PubPeer as a vibrant journal club platform? My suggestions are twofold.

First, PubPeer should encourage and promote the posting of positive reviews. Peer review is not about “Gotcha! I found a flaw!” It is primarily to endorse excellent science. It does shock me that the great majority of the posts I read only list negative comments. This is not what peer review or journal clubs are about. More often than not, we find positives in the literature we read and discuss. There is plenty of great science out there. Why shouldn’t we acknowledge it and promote it? Why are posters rushing to reveal “vertical lines” in a blot but failing to highlight a flawless figure? PubPeer and related platforms have a role to play here. They could help build the confidence and reputation of young scientists, strengthen their CVs – further shifting the obsession with impact factors and publishing in glam magazines to a focus on the quality of an individual’s work.

Second, PubPeer may consider recruiting an Editorial Board to help moderate the questionable posts. I expect many reputable scientists, junior and senior, to be willing to volunteer just as we do for scientific journals. An Editorial Board that reflects diversity in gender, geography, career stage, and research topic would improve transparency and credibility. It would also serve to temper the criticism and cynicism about PubPeer that is prevalent among many in the scientific community.

The reality is that post-publication peer review is here to stay. The recent episode that my colleagues and I faced was a timely teaching moment. It reminded us of the importance of record keeping, archiving old data, ensuring that pictures include visible labels, and so on. Several members of my lab told me that this sorry episode has prompted them to document and store their data more rigorously. Nobody wants to find out 10 years later that they cannot respond to an allegation about their paper. Mistakes do happen, so we should be prepared to respond and revise.

At my host institution, The Sainsbury Lab, which is currently led by Head of Lab Cyril Zipfel, recent episodes have further justified the misconduct training initiatives that were already undertaken way before the current brouhaha. We need to raise awareness of these important issues. Scrutiny and discussion of the science post-publication should become part of the culture. A shift to a new reward culture is happening. It’s not only where you published but also what you published. Quality indicators other than the journal impact factor are becoming recognized. It’s you, the next generation of scientific leaders, who can ensure that the cultural shift takes hold. And PubPeer and other post-publication peer review platforms have a role to play in this new reward culture.

/// See also our first guest post of today by anonymous Unregistered Submission about the same issue. ///


Don’t judge too fast!

Disclaimer: This is an anonymous blog post submission by Unregistered Submission. We decided to publish it as a valuable contribution to the debate on ‘open science’. However, the opinion represented does not necessarily reflect the opinions of both of us. To keep with the spirit of this post, we gave Sophien Kamoun a 24-hour heads-up before publishing. Some of his feedback was incorporated into this post by the anonymous author. Slightly modified paragraphs are highlighted in italic.

There you go: find a duplicated figure panel in an article, make a figure, write one sentence and post it on pubpeer (https://pubpeer.com/), the online journal club where scientific peers can anonymously place comments on scientific publications. Little effort for one person, but something that may have a huge effect on people far away from where I am living…

What followed was a tremendously fast response from the authors involved in this manuscript (http://tinyurl.com/lshhv4g) and, I think, a new world record in correcting a scientific paper (http://tinyurl.com/mx4dope). I absolutely respect and admire the professionalism with which the authors (Mireille van Damme, Cahid Cakir, Sophien Kamoun et al.) handled the probably very unpleasant situation. I posted my concerns regarding one specific figure of the respective article on Saturday evening; already by Sunday the original data had been provided on figshare by the authors to convince any skeptical colleague (http://dx.doi.org/10.6084/m9.figshare.1290790). Let me be clear, I never believed that the authors purposely published data to mislead the reader. Obviously, this was just a simple mistake, something that could happen to anyone actively involved in science.

Why did the authors rush so much to get the original data online so fast? My thought is that the authors wanted to avoid entering a harmful treadmill, in which other anonymous commenters start to dig further, trying to add additional “evidence” that the authors purposely misled the reader. In fact, this process started almost immediately after I added my concern, with one person adding some more fuel to the starting fire by talking about the authors purposely rotating another panel in the same figure. What if the authors had been unable to provide such a fast response? For example, if the data could not be found immediately or someone was on holiday for two weeks or more? Would the authors have had a fair chance to defend themselves against a growing group of anonymous commenters?

For the last couple of weeks I have been following the evolving stories around papers by Olivier Voinnet, David Baulcombe et al. Doubts about figures in a number of papers were posted on pubpeer in September 2014 (http://retractionwatch.com/?s=voinnet, https://pubpeer.com/search?q=O+Voinnet). This was followed by an explosion of posts referring to over 25 papers in January 2015. Worrisome? Yes, but I was a bit shocked by the way colleagues around me spoke with disgust about multiple scientists involved in any of these papers, and how on social media such as pubpeer and retractionwatch people carelessly provide their opinion and accuse scientists of potential involvement in figure manipulations and duplications, probably knowing little to nothing about the factual situation.

To my mind, the pubpeer website in its current form is too much a “hunt the scientist” website, a place where scientists can be suspected of publishing falsified data. Not really the “online community that uses the publication of scientific results as an opening for fruitful discussion among scientists” that it claims or wants to be (https://pubpeer.com/about). Why does a comment that I add on a Saturday evening have to appear online, in public and to the authors, on a Sunday? Why can’t the authors be informed far in advance of a comment being made public, giving them ample time to sort things out and reply? The way pubpeer currently works, or better, the way it allows some people to use it, resembles a modern-day witch-hunt.

I would like to stress that intentional figure manipulation is indeed extremely bad; in fact, it is fraud, and that is a very serious crime. This is precisely why we should be extremely careful when commenting on other scientists’ work. We should not allow ourselves to create a platform whose primary use currently seems to be as a public “scientific execution site”, potentially damaging the reputations of innocent scientists and of co-authors who may have nothing or very little to do with the whole situation. To accuse someone of committing a crime is a big thing, and I wonder whether this should be discussed so directly and openly in public, with maybe little chance for the authors to reply or defend themselves initially. Do we do the same with other types of crime? We don’t put anonymous notes in supermarkets with the names of customers who are suspected of theft, at least not in the country where I live. No, we go to a respectable authority and let them investigate what is actually going on. Shouldn’t pubpeer have a more stringent editorial filter? An online open journal club is something different from an online open crime-report site.

Don’t get me wrong, I am very much pro “open science” and the more discussion the better. Pubpeer is a good initiative, but it is currently not working optimally. Authors should have a fair chance to defend themselves, and one should not judge before all evidence is provided. Scientists, too, have the right to be “innocent until proven guilty”. In fact, how many of the suspected papers are actually truly worrisome? Yes, for a couple of papers it looks bad, but there also seem to be quite a few with marginal evidence for intentional figure manipulations (http://tinyurl.com/q34yomm, http://tinyurl.com/p3yh9mp, http://tinyurl.com/kyg8ba4, http://tinyurl.com/ojjktz5). Are all these authors and co-authors suspected of fraud or, alternatively, can we accurately trace all these cases back to one single person?

Pubpeer in its current form is surrounded by a negative and suggestive atmosphere, something you would not like your paper to be associated with. It is a site where comments frequently seem to be made by over-frustrated scientists. People like sensation: “big names struggling” is always a source of entertainment, whether those big names are famous movie stars, politicians or scientists. (Un)fortunately scientists are people too, and pubpeer (and sites like retractionwatch) shows that we are often little better than gossip-loving yellow-press readers in a supermarket.

That brings me to why I specifically brought up the following manuscript/figure on pubpeer. Last week, tweets appeared on social media joking about the scientists suspected of fraud (http://tinyurl.com/k8k27yc, sensation!). Did it bring a smile to my face? Honestly, yes, it was quite funny. However, would I like to be in the same position as any of the co-authors of the 25+ suspected papers, and am I 100% sure that all my papers are spotless? My answer would be “No”. Several scientists actively re-tweeted this joke, but isn’t that also a tiny little bit hypocritical? Or are these scientists very sure that their own published work is the gold standard?

I decided to look into a number of papers by the re-tweeting scientists present in my literature archive for troublesome data (I only looked at the main figures of the manuscripts, without the help of any software). A childish “gotcha game”? Maybe, but I guess that’s how the true wanna-be “science-detectives” on pubpeer work. I discovered one paper from the Kamoun lab with a “serious” issue. Did I suspect fraud? Not for a single moment. But I felt I should bring this up to show how vulnerable we all are as scientists at this moment. The Kamoun lab is without doubt one of the most active labs on social media within the plant sciences field, something I actually greatly appreciate and respect! I could thus expect that posting a comment on pubpeer would attract a lot of attention (well, it certainly did, http://tinyurl.com/mhbnzlm). Something I hope will make the community aware that, at this moment, we can all too easily be suspected of being a fraud.

Cynical and ironic jokes arise when questions remain unanswered. I understand that the Voinnet lab has had plenty of time to reply to any of the initial concerns posted. This is in stark contrast to the way the Kamoun lab handled the situation, by directly replying to all concerns raised. Nonetheless, with over 25 papers currently in doubt in the Voinnet case, I am aware that ironic jokes currently target a large group of innocent scientists who may not have a fair chance to reply at this very moment.

I would like to open a discussion in which standards for how to criticize a scientific paper are addressed. Should this always be done so openly and directly? Shouldn’t websites such as pubpeer have a better editing process for certain types of comments, especially when issues such as scientific integrity are at stake? At the very least, scientists should be given sufficient time before they are hounded by a group of “science-detectives”. Lastly, we as scientists should not judge too fast; suspecting or suggesting someone is a fraud is a big leap. It is time we start using pubpeer in a much more positive way, by not just posting negative or suggestive comments. To the Kamoun lab: I promise to make a start by now placing my honest, positive and fair opinion on several of your great manuscripts! Pubpeer should be used as an online journal club highlighting not only flaws but also the great science out there.

/// See also our second guest post of today by Sophien Kamoun about the same issue. ///

ASPB members and Plant Physiology/The Plant Cell authors for pre-prints

///Disclaimer: This is a copy of our request to the American Society of Plant Biologists to support pre-prints.///

ASPB members and Plant Physiology/The Plant Cell authors for pre-prints

Based on a recent discussion on twitter we discovered that the American Society of Plant Biologists (ASPB) currently discourages preprint deposition in publicly available archives such as http://biorxiv.org/, http://arxiv.org/ or https://peerj.com/preprints/. We, as engaged members representing a wide variety of plant biologists, would like to encourage ASPB to change its stance and allow deposition of a pre-print when submitting to The Plant Cell and Plant Physiology. Pre-print-compatible policies are quickly becoming the norm at other journals ASPB members publish in, including: Nature Publishing Group, PLOS, PeerJ, Cold Spring Harbor Press, Science, PNAS, eLife, Frontiers journals, EMBO, Development, and Evolution (http://en.wikipedia.org/wiki/List_of_academic_journals_by_preprint_policy). We do not want to be put in the position of having to choose between a pre-print and submission to these outstanding community journals.

Potential benefits to the plant science community include:

  • fast dissemination of important new discoveries in the field
  • fast dissemination of work in progress
  • public visibility of successive revised versions of a manuscript
  • feedback during the review process, potentially leading to improved manuscripts (see, for example, haldanessieve.org)
  • straightforward way to establish precedence

Thanks for considering our request:

If inclined, edit, sign, and distribute widely here.

Is being scooped the flip-side of a pre-print!?! It wasn’t for us with PLoS

Update 01/14, 11 PM (late-night update while single dad…): The previous title of this blog post read “Is being scooped the flip-side of a pre-print!?! It was for us.” However, last night’s email from PLoS, which arrived while I was talking with my co-first author in Israel, where it was already morning, made me change the title. To my surprise, and with loads of awe and respect, PLoS reconsidered their call to reject our manuscript based on the novelty issue caused by the publication of a competing manuscript (see details below). Now, instead of considering novelty, the editors gave us the opportunity to address the remaining reviewers’ comments within 60 days. AWESOME and OUTSTANDING. We will forge ahead and do so in the weeks to come and resubmit in due time. It is great to see PLoS taking this important issue seriously. I am looking forward to reading their updated editorial policies on their website soon, so that in future there won’t be any confusion about pre-prints and being scooped during the review process. Today PLoS improved the publishing landscape once again. That’s what I *love* about PLoS. Way to go!

Go Pre-Print Go!

Update: A better title might have been -> Posting a preprint before a paper is in press puts you at risk of being scooped.

Update #2, January 07, 2015, ~5 PM PST: PLoS got in touch and will have a ‘second’ look at the case. That is something I absolutely appreciate about PLoS: actually listening to its community. I hope the major outcome of all this is that PLoS openly declares its editorial standards on preprints and (independently) on being ‘scooped’ during the review process. This is especially important for papers that were submitted after the initial PLoS submission date, because no one really knows where papers get passed around.

Let me make this clear right away. I love open science, open access, the idea of pre-prints and doing good reproducible science. Ever since I heard about the concept of pre-prints I have put my articles on pre-print servers, as much as possible, for everyone to enjoy while the manuscript is under review (see here, here and here). This means people can actually read the science well before publication, often more than 6 months earlier. I also really like PLoS itself, having worked with them on open access topics several times. I am also known to be more critical with friends I like ;).

Here is the story so far in brief.

June 11, 2014 Submission of manuscript to a PLoS journal and deposition on biorxiv.

July 16, 2014 Rejection of manuscript after full review at this particular PLoS journal.

July 25, 2014 Re-submission of the manuscript to another PLoS journal, back to back with another paper from collaborators showing something similar in another plant species. That paper was accepted on December 4, 2014, also after major revisions.

September 10, 2014 Preliminary acceptance with major revisions within 60 days.

November 16, 2014 Re-submission of the majorly revised version. The submission letter indicated that an alternative study had been published in another journal (initial submission August 12, 2014, acceptance October 28, 2014; the authors declare on pubmed that they missed our pre-print study and will add a citation in the final print version (here)).

December 19, 2014 Rejection of our manuscript based on novelty and one specific experimental request, which we could have addressed within 2-3 weeks if given the chance (more details below if wanted).

December 20, 2014 Rebuttal of the decision, citing that we had a pre-print out and that the authors of the other, competing study omitted our pre-print at the time of submission of their paper.

January 07, 2015 Rejection of our rebuttal due to the following editorial practice. I will cite this here verbatim, because we had looked on the PLoS webpage for editorial guidance regarding pre-prints and competing publications during the review process. Without success!

‘From an editorial perspective, a work would not be considered novel if a similar paper is published before the work is fully accepted for publication. Although we fully understand this is a difficult situation for you and your co-authors, we could not neglect the fact that Lu and associates reported a similar story in a highly regarded, peer-reviewed scientific journal, which reduced both the significance and novelty of your work. We realized that the situation might be a bit tricky with the deposition of your manuscript onto an online pre-print server before the submission of the paper by Lu et al. While PLOS journals are “pre-print friendly”, we do not have a specific policy regarding scooping. Nonetheless, we can assure you that none of the authors in the published paper served as a reviewer for this manuscript and there is no conflict-of-interest issues during the handling of your manuscript by PLOS xxxxxx.’

I am fine with having my paper rejected because reviewers are not satisfied and it is not deemed novel enough at the initial stage of reviewing. That is simply part of the game. The problem here is the missing transparency. PLoS is ‘pre-print friendly’ (here and here), however it does not honor the submission date*. Of course, putting a manuscript on a pre-print server at the point of initial submission exposes you to the risk of being scooped by other folks scrambling to publish their data in time while you are in review and revising, especially in a culture that for now does not ‘honor’ and cite pre-prints. In this case, being ‘friendly’ doesn’t really support pre-prints. You can be scooped during the review process anyway.

Will I deposit my manuscripts on pre-print servers at the time of submission? YES, as much as possible!! I think these are exciting times for biology.

Will I submit to PLoS ‘higher tier’ journals at the same time? NO!! Not without a clear editorial policy on pre-prints and being scooped during the review process.

Update 01/14, 11 PM: Yes, with the new and hopefully clear-cut editorial policy on pre-prints and being scooped during the review process, I will once again submit my papers to PLoS and a pre-print server at the same time. Way to go!

Update after a short twitter discussion: It is not clear whether the authors of the competing paper knew about our pre-print. This is only speculative. The main points are: #1 they MIGHT have rushed to publication having heard of the pre-print or the manuscript some other way; #2 journals like PLoS need to get their editorial policies straight about pre-prints and getting scooped during the review process. Honoring the submission date will become more important with more pre-prints out.

*Even though only ONE specific additional second-round experiment was requested. This confirmatory experiment could have been addressed easily. I am willing to post the whole review process and communication, without names, up here if the folks at PLoS don’t mind, so you can judge for yourselves.