The true culture of STEM inclusivity

Disclaimer: This is a guest post by a good friend of mine, Shaila Kotadia. I fully endorse this post as it raises many important issues in academia.

In discussions of how to restructure the funding of STEM fields to make them sustainable, the word diversity often arises. Almost everyone states that this is a part of [insert name here]’s mission, but how actively is academia really supporting the notion? In particular, over the past several decades there has been little change in the makeup of faculty positions.

I recently attended a conference organized by a scholars program for undergraduates. This program has existed for over 20 years, is successful at leveling the playing field for students who are overlooked in, and often drop out of, STEM majors, and is now evaluating which parts of the program produce this positive outcome. The morning consisted of a series of excellent speakers who had conducted evaluations of programs targeted at underrepresented undergraduates to determine their effectiveness. One study, conducted by Mica Estrada at UC San Francisco, evaluated a national panel of minority science students on their sense of self-efficacy, their scientific identity, and how well their values aligned with those of scientists, as measures of scientific integration (Estrada et al., J Educ Psychol, 2011). This raised important questions in my mind. Are we selecting for a self-perpetuating personality type that thinks about and approaches problems very similarly regardless of background? Or are we molding these students, through formal and informal programming, to fit the model of an academician and losing their diverse perspectives in the process?

A town hall discussion concluded the event, in which we were challenged to share thoughts that called out the academic system and its support for underrepresented students. This is when the conversation got really interesting. Some commenters suggested that the culture of STEM is shifting with the influx of Ph.D.’s and the scarcity of associate professor positions, and that we need to adapt. Others suggested that the lack of diversity in these positions might simply reflect a lack of desire for them among individuals of all backgrounds. I felt the need to chime in. We encourage these students to pursue Ph.D.’s with the ultimate goal of retaining them in the STEM fields, with a heavy emphasis on faculty positions. But why would we do that to them? Faculty positions suck now, in my honest opinion. And as these students progress through the stages of academia with programmatic help, are we selecting for the same personality types, or molding the students into what faculty positions require? How is that helping anyone? We may end up increasing diversity “by the numbers,” but in doing so we lose the real benefit: diversity of thought. One interesting way to put it is that we need more cognitive diversity (Valantine and Collins, PNAS, 2015).

When I think of the diverse and underrepresented populations in STEM that I have worked with or mentored, their values do not align well with what academia values. Those who continue on to faculty positions may feel that they lose parts of their identity and must conform in order to be promoted in their careers (McGee and Kazembe, Race Ethnicity and Education, 2015). That will continually result in a lack of diversity, whether through homogenized thought or through people leaving the field, despite all of the early programmatic measures.

Additionally, while unconscious bias training, which is completely necessary and should be required, will likely decrease microaggressions and might eliminate gender bias in hiring (Carson, TechRepublic, 2015), it appears that we still select candidates from the same top schools, which tend to attract similar personalities (Clauset et al., Science Advances, 2015). Change in the overall culture will be gradual, and hiring will only become more diverse if people are willing to enact that cultural change over the long run.

I do acknowledge that top-notch research is conducted at highly ranked institutions, but I also want to acknowledge that ground-breaking research emerges across all institutions. One can even argue that the same schools postdocs are hired from cannot employ all of them, which means that these researchers are being attracted to other programs that might offer a more welcoming environment. There is no doubt that the people being hired are incredibly talented, and yet for each position there may be not one perfect match but several. Hence many brilliant and creative candidates may go home empty-handed simply because of chance.

So I propose a different approach. Let’s change what we value when hiring faculty. Let’s change the academic environment, not just the funding structure, training for various STEM career paths, or the number of permanent research positions. All of that is well and good, but I predict it won’t increase diversity, whether cognitive, gender, or racial, at the faculty level. What we need to change is the way we select who continues on to academic positions. Why not select faculty using the same “pluck” criterion, the desire to create change in a spirited or daring manner, and the same holistic approach, considering more than a candidate’s pedigree and publication record, that many colleges and universities use to admit undergraduates? You might be surprised by the results.

More reading: Gibbs and Griffin, CBE LSE, 2013
Warner and Clauset, Slate, 2015

///Edited for three references 28/11/2015///


Be positive! From witch hunts to the new reward culture

Disclaimer: This is a guest blog post by Sophien Kamoun, a highly respected group leader in Plant Biology at The Sainsbury Laboratory, Norwich. We decided to publish his post as a valuable contribution to the debate on ‘open science’. However, the opinion represented here does not necessarily reflect our own.

I’m a proponent of open science. Science is continuously in flux. Our knowledge, theories and concepts are continuously evolving. The essence of science is to capture new information, integrate it into current models and generate more elaborate concepts. Therefore science cannot thrive without a vibrant culture of discussion and debate. Open science widens the net. Anyone can access the data and comment on it. A tweet by someone you don’t know could lead you to think differently about your science and help you develop new concepts. We move from elitist old boys’ clubs to an open-door party. This is healthy for science.

As a native of Tunisia, I find that open science matches my personality well. I grew up in the typical Mediterranean culture of vibrant discussion, constant arguing and, yes, the occasional bickering. These traits are embedded in me. I know they can be irritating to others. But I believe they helped me develop into an engaged citizen and scientist. A paucity of critical thinking and engagement among the citizens of modern and, presumably, well-educated societies is one of the dramas of our age.

It is therefore natural for me to support all efforts at post-publication review. Platforms such as PubPeer aim to extend discussion and analysis beyond publication in the peer-reviewed literature. They are perfect for this era of journal proliferation and internet communication. They also address well-documented flaws in pre-publication peer review.

But PubPeer has evolved, most certainly against the wishes of its anonymous founders, into a modern-day “witch hunt” platform. Many comments seem valid. But pointless and frivolous comments are being posted, and they have certainly increased in frequency in recent weeks (at least in plant biology). How should we address this? How can we buffer or eliminate such posts while maintaining the original goal of PubPeer as a vibrant journal-club platform? My suggestions are twofold.

First, PubPeer should encourage and promote the posting of positive reviews. Peer review is not about “Gotcha! I found a flaw!” It exists primarily to endorse excellent science. It shocks me that the great majority of the posts I read list only negative comments. This is not what peer review or journal clubs are about. More often than not, we find positives in the literature we read and discuss. There is plenty of great science out there. Why shouldn’t we acknowledge and promote it? Why are posters rushing to reveal “vertical lines” in a blot but failing to highlight a flawless figure? PubPeer and related platforms have a role to play here. They could help build the confidence and reputation of young scientists and strengthen their CVs, further shifting the obsession with impact factors and publishing in glam magazines toward a focus on the quality of an individual’s work.

Second, PubPeer might consider recruiting an Editorial Board to help moderate questionable posts. I expect that many reputable scientists, junior and senior, would be willing to volunteer, just as we do for scientific journals. An Editorial Board that reflects diversity in gender, geography, career stage, and research topic would improve transparency and credibility. It would also serve to temper the criticism and cynicism about PubPeer that is prevalent among many in the scientific community.

The reality is that post-publication peer review is here to stay. The recent episode that my colleagues and I faced was a timely teaching moment. It reminded us of the importance of record keeping, archiving old data, ensuring that images include visible labels, and so on. Several members of my lab told me that this sorry episode has prompted them to document and store their data more rigorously. Nobody wants to find out 10 years later that they cannot respond to an allegation about their paper. Mistakes do happen, so we should be prepared to respond and revise.

At my host institution, The Sainsbury Laboratory, currently led by Head of Lab Cyril Zipfel, recent episodes have further justified the misconduct training initiatives that were undertaken well before the current brouhaha. We need to raise awareness of these important issues. Scrutiny and discussion of science post-publication should become part of the culture. A shift to a new reward culture is happening. It’s not only where you published but also what you published. Quality indicators other than the journal impact factor are becoming recognized. It’s you, the next generation of scientific leaders, who can ensure that this cultural shift takes hold. PubPeer and other post-publication peer review platforms have a role to play in this new reward culture.

/// See also our first guest post of today by anonymous Unregistered Submission about the same issue. ///

Don’t judge too fast!

Disclaimer: This is an anonymous blog post submission by Unregistered Submission. We decided to publish it as a valuable contribution to the debate on ‘open science’. However, the opinion represented here does not necessarily reflect our own. To keep with the spirit of this post, we gave Sophien Kamoun a 24-hour heads-up before publishing. Some of his feedback was incorporated into this post by the anonymous author. Slightly modified paragraphs are highlighted in italic.

There you go: find a duplicated figure panel in an article, make a figure, write one sentence and post it on pubpeer, the online journal club where scientific peers can anonymously comment on scientific publications. Little effort for one person, yet something that may have a huge effect on people far away from where I live…

What followed was a tremendously fast response from the authors involved in this manuscript and, I think, a new world record in correcting a scientific paper. I absolutely respect and admire the professionalism with which the authors (Mireille van Damme, Cahid Cakir, Sophien Kamoun et al.) handled what was probably a very unpleasant situation. I posted my concerns regarding one specific figure of the article on a Saturday evening; by Sunday the authors had already provided the original data on figshare to convince any skeptical colleague. Let me be clear: I never believed that the authors purposely published data to mislead the reader. Obviously, this was a simple mistake, something that could happen to anyone actively involved in science.

Why did the authors rush to get the original data online so fast? My guess is that they wanted to avoid entering a harmful treadmill in which other anonymous commenters start digging further, trying to add “evidence” that the authors purposely misled the reader. In fact, this process started almost immediately after I posted my concern, with one person adding fuel to the fire by claiming that the authors had purposely rotated another panel in the same figure. What if the authors had been unable to respond so quickly? For example, if the data could not be found immediately, or someone was on holiday for two weeks or more? Would the authors have had a fair chance to defend themselves against a growing group of anonymous commenters?

Over the last couple of weeks I have been following the evolving stories around papers by Olivier Voinnet, David Baulcombe et al. Doubts about figures in a number of papers were posted on pubpeer in September 2014. This was followed by an explosion of posts referring to over 25 papers in January 2015. Worrisome? Yes, but I was a bit shocked by the way colleagues around me spoke with disgust about the scientists involved in any of these papers, and by how, on sites such as pubpeer and retractionwatch, people carelessly offer their opinions and accuse scientists of potential involvement in figure manipulations and duplications, probably knowing little to nothing about the factual situation.

To my mind, the pubpeer website in its current form is too much a “hunt the scientist” website, a place where scientists can be suspected of publishing falsified data; not really the “online community that uses the publication of scientific results as an opening for fruitful discussion among scientists” that it claims or wants to be. Why does a comment that I post on a Saturday evening have to appear in public, and to the authors, by Sunday? Why can’t the authors be informed well in advance of a comment being made public, giving them ample time to sort things out and reply? The way pubpeer currently works, or rather the way it allows some people to use it, resembles a modern-day witch-hunt.

I would like to stress that intentional figure manipulation is indeed extremely bad; in fact it is fraud, and that is a very serious crime. That is exactly why we should be extremely careful when commenting on other scientists’ work. We should not allow ourselves to create a platform whose primary use currently seems to be as a public “scientific execution site”, potentially damaging the reputations of innocent scientists and of co-authors who may have had little or nothing to do with the situation. Accusing someone of committing a crime is a big thing, and I wonder whether it should be discussed so directly and openly in public, with perhaps little chance for the authors to reply or defend themselves initially. Do we do the same with other types of crime? We don’t put up anonymous notes in supermarkets naming customers suspected of theft, at least not in the country where I live. No, we go to a respectable authority and let them investigate what is actually going on. Shouldn’t pubpeer have a more stringent editorial filter? An online open journal club is something different from an online open crime-report site.

Don’t get me wrong, I am very much pro “open science”, and the more discussion the better. Pubpeer is a good initiative, but it is currently not working optimally. Authors should have a fair chance to defend themselves, and one should not judge before all the evidence is in. Scientists, too, have the right to be “innocent until proven guilty”. In fact, how many of the suspected papers are actually truly worrisome? Yes, for a couple of papers it looks bad, but there also seem to be quite a few with only marginal evidence for intentional figure manipulation. Are all these authors and co-authors suspected of fraud, or can all these cases be accurately traced to one single person?

Pubpeer in its current form is surrounded by a negative and suggestive atmosphere, something you would not like your paper to be associated with; a site where comments seem frequently to be made by over-frustrated scientists. People like sensation: “big names struggling” is always a source of entertainment, whether those big names are famous movie stars, politicians or scientists. (Un)fortunately, scientists are people too, and on pubpeer (and sites like retractionwatch) it shows that we are often little better than gossip-loving yellow-press readers in a supermarket.

That brings me to why I specifically brought up this manuscript and figure on pubpeer. Last week, tweets appeared on social media joking about the scientists suspected of fraud (sensation!). Did it bring a smile to my face? Honestly, yes, it was quite funny. But would I like to be in the same position as any of the co-authors of the 25+ suspected papers, and am I 100% sure that all my papers are spotless? My answer would be “No”. Several scientists actively re-tweeted this joke, but isn’t that also a tiny bit hypocritical? Or are these scientists very sure that their own published work is the gold standard?

I decided to look through a number of papers by the re-tweeting scientists present in my literature archive for troublesome data (looking only at main figures, without the help of any software). A childish “gotcha game”? Maybe, but I guess that’s how the wanna-be “science detectives” on pubpeer work. I discovered one paper from the Kamoun lab with a “serious” issue. Did I suspect fraud? Not for a single moment. But I felt I should bring it up to show how vulnerable we all are as scientists at this moment. The Kamoun lab is without doubt one of the most active labs on social media within the plant sciences, something I actually greatly appreciate and respect! I could thus expect that posting a comment on pubpeer would attract a lot of attention, and it certainly did. I hope this makes the community aware that, at this moment, any of us can all too easily be suspected of fraud.

Cynical and ironic jokes arise when questions remain unanswered. I understand that the Voinnet lab has had plenty of time to reply to the initial concerns posted, in stark contrast to the way the Kamoun lab handled its situation, by directly replying to all concerns raised. Nonetheless, with over 25 papers currently in doubt in the Voinnet case, I am aware that ironic jokes currently target a large group of innocent scientists who may not have a fair chance to reply at this very moment.

I would like to open a discussion about standards for how to criticize a scientific paper. Should this always be done so openly and directly? Shouldn’t websites such as pubpeer have a better editing process for certain types of comments, especially when issues such as scientific integrity are at stake? At the very least, scientists should be given sufficient time before being hounded by a group of “science detectives”. Lastly, we as scientists should not judge too fast; suspecting or suggesting that someone is a fraud is a big leap. It is time we start using pubpeer in a much more positive way, not just posting negative or suggestive comments. To the Kamoun lab: I promise to make a start by now posting my honest, positive and fair opinion on several of your great manuscripts! Pubpeer should be used as an online journal club highlighting not only flaws but also the great science out there.

/// See also our second guest post of today by Sophien Kamoun about the same issue. ///