Grants keep coming to Reed Biologists

As a new computational biologist at Reed College, I was excited about the prospect of continuing to do research while teaching innovative courses.  I’ve written about the research opportunities at Reed, and faculty across campus have received over two million dollars of grant funding in 2014/2015.

The Biology Department just secured two more research grants from the M.J. Murdock Charitable Trust to investigate neurogenesis in zebrafish (Dr. Kara Cerveny) and discover candidate driver genes in cancer (me!).

Small schools can also play a large role in undergraduate education programs. An NSF grant was recently awarded to Dr. Suzy Renn to organize a STEM workshop on undergraduate involvement in the NSF's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

All in all, 2016 seems like it will be another great research year.


Tenure

A couple of articles related to tenure came across my news feed recently. While they are not directly related to each other, I thought I'd mention them in the same post.

First, how many faculty in the United States are actually tenured? I was surprised to find that, according to the American Association of University Professors (AAUP), over 50% of faculty hold adjunct (non-tenure-track) positions. The AAUP calls these positions "contingent faculty" because, regardless of their full-time or part-time status, their school makes little to no long-term commitment to them in terms of job security. The increasing reliance of institutions on adjunct faculty affects not only the faculty themselves but also the students and the research at those institutions. An article from The Atlantic summarizes many of these points:

The Cost of an Adjunct | The Atlantic

Now, tenure itself may be a controversial topic – some say that the system encourages faculty to slack off after getting tenure, or to keep teaching outdated material long after they should have retired.  The tenure process is incredibly stressful, sometimes unclear, and notoriously unfair – and this is just scratching the surface.  But once tenure is obtained, faculty may end up doing more out-of-the-box, high-risk research and teaching that they wouldn’t have attempted otherwise.

Trying to Kill Tenure | Inside Higher Ed

Quantifying the gender bias in federally-funded STEM research

We all know that there is a gender disparity in STEM fields. Is it harder for women in these fields to obtain federal funding than for their male colleagues? In 2013, Helen Shen published an article in Nature summarizing the challenges women continue to face in science. The infographic below from the article describes the gap in NIH-funded research grants.

from Inequality quantified: Mind the gender gap by Helen Shen, Nature Vol. 495, Issue 7439, 2013.

At first glance, the funding gap looks appalling – only 30% of the NIH's grants are going to women! However, there's a missing ingredient here: the fraction of NIH grant proposals submitted by women. To get this information, let's go back to 2008 for a minute. Jennifer Pohlhaus and others at the NIH assessed the gender differences in application rates and success rates for 77% of the award applications submitted in 2008, including training grants, midcareer grants, independent research grants (e.g., R01), and senior grants. They found that the acceptance rates reflected the application rates for most NIH grants. However, men had a higher success rate once they had received their first NIH grant and become NIH investigators. So the funding gap in the infographic may not reflect lower success rates for women, but rather that fewer women are submitting grant applications. A visualization of the data from the NIH is available on their webpage.
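To see why the application rate matters, here's a minimal sketch in Python (with made-up numbers, not the actual NIH figures) showing how identical success rates for men and women still produce a 70/30 split in awards whenever 70% of the applications come from men:

```python
# Hypothetical numbers for illustration only -- not the actual NIH data.
applications = {"men": 7000, "women": 3000}  # 70% of applications come from men
success_rate = 0.20                          # identical success rate for both groups

# Awards per group = applications * success rate
awards = {group: round(n * success_rate) for group, n in applications.items()}
total_awards = sum(awards.values())

for group, n in awards.items():
    print(f"{group}: {n} awards ({n / total_awards:.0%} of all awards)")

# Output:
# men: 1400 awards (70% of all awards)
# women: 600 awards (30% of all awards)
```

In other words, when success rates are equal, the award share simply mirrors the application share – so the 30% figure alone can't distinguish bias in review from a gap in who applies.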

The Nature article (and many, many others) points to the fact that women tend to leave science early in their education and careers. Among the 2008 NIH grant applications, there were more female applicants than male applicants for three of the early-career/training awards (F31, K01, K23), and two other early-career awards (F30 and F32) showed no statistically significant difference between the number of male and female applicants. However, male applicants significantly outnumbered female applicants in all midcareer, independent research, and senior career programs.

An evaluation of gender bias is currently underway for six other federal agencies: NSF, DOD, DOE, USDA, HHS, and NASA. The Government Accountability Office (GAO), which is conducting the audit, will first release a report investigating whether the agencies evaluate proposals based on potentially biased measures. It will then release a second report identifying potential factors that lead to the disparity in funding between men and women. Once they're out, they should make for interesting reading…

Slaughter Announces GAO Audit on Gender Discrimination in Federal STEM Research Funding | Congresswoman Louise Slaughter

Funding models for Ph.D. programs

UC-Irvine is adopting a new funding model for Ph.D. programs in some departments. The idea is to provide increased funding for five years and then offer a two-year postdoctoral teaching position.

This article, from Slate, is about Ph.D. programs in the humanities, where it is generally much harder to secure a job than in the computational sciences. Still, it's an interesting perspective.

UC–Irvine's 5+2 program: A good idea, but the worst job title in academia.

Responsible research, even when you’re wrong

I follow Retraction Watch, a blog dedicated to reporting retractions in scientific publishing. I've learned that there are two types of retractions: those stemming from malicious, intentional acts such as destroying data or plagiarism, and those stemming from unintentional mistakes. This post is about the unintentional mistakes.

As I glance through new posts, I am always a bit apprehensive. Will I come across any familiar names? Will a post describe something directly related to my sub-field that invalidates my own findings? Will I find my own name there due to some code bug or mathematical error?

I am a computer scientist who has striven to publish datasets and code along with publications, and I rarely work with a dataset that is not publicly available. In some sense, this makes my work easier to justify – I can provide everything needed to reproduce my results. But still, we're all human, and unintentional mistakes happen.

Today, my apprehension was transformed into respect after reading this blog post about authors who retracted their own paper in light of additional experiments they conducted post-publication.

“[T]hese things can happen in every lab:” Mutant plant paper uprooted after authors correct their own findings | Retraction Watch

The authors submitted a retraction notice that included the experimental data showing that their results described a different mutation than the one they intended to study. Not only did they take action to retract the paper, but the last author, Dr. Hidetoshi Iida, notified researchers using the plant seeds that reportedly contained the mutant.

In a publish-or-perish world, it takes a lot of guts to do the right thing. This retraction, in fact, contributed to the progress of scientific knowledge. I applaud these authors, and I hope that other honest mistakes are corrected in similar ways.

How many authors is too many?

Nature recently published a quick blurb about a paper on fruit fly genetics that has set social media abuzz. Why? Because the paper, published in G3: Genes|Genomes|Genetics, lists over 1,000 authors. Further, more than 900 of these authors are undergraduates and members of the Genomics Education Partnership, an organization that has posted a record of the commentary about the author count. The author list, which spans the first three pages of the PDF, is shown below.

[Images: the paper's author list, spanning the first three pages of the PDF]

The paper has sparked a larger debate about the role of training and education in research, particularly when it comes to undergraduate involvement.  Alongside the paper, the authors also released a blog post about undergraduate-empowered research in the Genetics Society of America’s Genes to Genomes blog.  This is the first paper I’ve seen that lists a blog post as supporting information.

I can see arguments on both sides. On one hand, crowd-sourcing allows us to accomplish tasks impossible for a single person to execute. The computer scientist in me loves this aspect of the story. Here, "the crowd" is the sea of undergraduates that edited and annotated a DNA sequence (the Muller F element, or the "dot" chromosome) in fruit flies by analyzing and integrating different types of data. Unlike papers that use Mechanical Turk to collect data, where the crowd is typically made up of non-experts, this particular crowd learned a set of specialized skills that facilitated the research. Undergrads got their hands dirty with real data and gained valuable insights about how to conduct research. The educator in me finds the endeavor incredibly impactful for these scientists-in-training.

On the other hand, being buried in the author list makes one's contributions look meaningless. What does it mean to be a co-author on such a paper? If the Genomics Education Partnership consisted of only a few dozen undergraduates, would it be better? Some of these questions are discussed in the NeuroDojo blog post. The "I-want-to-get-tenure" academic in me cringes at the thought that, if I were in the middle of such an author list, good research might be down-weighted because the author contributions were unclear.

I think that the undergraduates from the Genomics Education Partnership did conduct research that contributed to the paper, and they should be credited in some way. In the age of crowdsourcing, perhaps there needs to be an intermediate category between authorship and acknowledgement that indicates a collective contribution from a group of people (e.g., students in a class or members of a consortium).

Responses to the sexist review by PLOS One

There has recently been a lot of attention on the journal PLOS One and its handling of a heavily gender-biased review received by an evolutionary biologist on a manuscript about gender differences in the Ph.D.-to-postdoc academic transition. PLOS One has taken a few actions, including asking the academic editor who handled the manuscript to step down from the editorial board and removing the offending reviewer from its database. Dr. Michael Eisen, one of the founders of PLOS, has provided interesting commentary on the subject.

What's just as troubling is that the reviewer clearly drew on a personal assessment of not only the authors' gender but also their "junior" academic status in the criticism. A blog that focuses on manuscript retractions has another summary of the issue. Dr. Fiona Ingleby, one of the authors of the manuscript, explains:

Megan and Fiona are pretty unambiguous names when it comes to guessing gender.  But in fact, the reviewer acknowledged that they had looked up our websites prior to reading the MS (they said so in their review). They used the personal assessment they made from this throughout their review – not just gender, but also patronising comments throughout that suggested the reviewer considered us rather junior.  – Fiona Ingleby

This has raised some major issues about the peer-review process, including whether a reviewer's identity should ever be revealed in a single-blind or double-blind review. Dr. Eisen addresses this in his blog post linked above. Dr. Zuleyka Zevallos wrote about removing bias from science publishing, here in the form of sexism. In response to Dr. Eisen's post, she explicitly addresses the accountability measures that institutions and publishers need to have in place.

It will be interesting to see how PLOS One and other publishers address this now-viral issue, especially through changes to the peer-review process. Some have noted that the PLOS One editor failed in his/her duty by returning this gender-biased review to the authors instead of disregarding it, and that the problem wasn't necessarily with the procedures themselves. But more and more journals are moving to different review styles, including Nature's experiment with double-blind peer review. If the PLOS One review had been double-blind, the reviewer might have been able to guess the authors' gender but would not have been able to verify it, let alone their current academic positions.