Funding models for Ph.D. programs

UC-Irvine is adopting a new funding model for Ph.D. programs in some departments.  The idea is to provide increased funding for five years and then offer a two-year postdoc teaching position.

This Slate article focuses on Ph.D. programs in the humanities, where it is generally much harder to secure a job than in the computational sciences.  Still, it offers an interesting perspective.

UC–Irvine’s 5+2 program: A good idea, but the worst job title in academia.


Responsible research, even when you’re wrong

I follow Retraction Watch, a blog dedicated to reporting retractions in scientific publishing.  I’ve learned that there are two types of retractions: those stemming from malicious, intentional acts such as destroying data or plagiarism, and those stemming from unintentional mistakes.  This post is about the unintentional mistakes.

As I glance through the new posts, I am always a bit apprehensive.  Will I come across any familiar names?  Will a retraction touch something directly related to my sub-field and invalidate my own findings?  Will I find my name there due to some code bug or mathematical error?

I am a computer scientist who has striven to publish datasets and code along with publications.  It is rare that I work with a dataset that is not publicly available.  In some sense, this makes my work easier to justify – I can provide everything needed to reproduce my results.  But still, we’re all human, and unintentional mistakes happen.

Today, my apprehension was transformed into respect after reading this blog post about authors who retracted their own paper in light of additional experiments they conducted post-publication.

“[T]hese things can happen in every lab:” Mutant plant paper uprooted after authors correct their own findings | Retraction Watch

The authors submitted a retraction notice that included the experimental data showing that their results described a different mutation than the one they intended to study.  Not only did they take action to retract the paper, but the last author, Dr. Hidetoshi Iida, also notified researchers using the plant seeds that reportedly contained the mutant.

In a publish-or-perish world, it takes a lot of guts to do the right thing.  This retraction, in fact, contributed to the progress of scientific knowledge.  I applaud these authors, and I hope that other honest mistakes are corrected in similar ways.

How many authors is too many?

Nature recently published a quick blurb about a paper on fruit fly genetics that has set social media abuzz.  Why?  Because the paper, published in G3: Genes Genomes Genetics, lists over 1,000 authors.  Further, more than 900 of these authors are undergraduates and members of the Genomics Education Partnership, an organization that has posted a record of the commentary about the number of authors.  The author list, which spans the first three pages of the PDF, is shown below.

    [Images: the author list, spanning the first three pages of the PDF]

The paper has sparked a larger debate about the role of training and education in research, particularly when it comes to undergraduate involvement.  Alongside the paper, the authors also released a blog post about undergraduate-empowered research in the Genetics Society of America’s Genes to Genomes blog.  This is the first paper I’ve seen that lists a blog post as supporting information.

I can see arguments on both sides. On one hand, crowd-sourcing allows us to accomplish tasks impossible for a single person to execute. The computer scientist in me loves this aspect of the story.  Here, “the crowd” is the sea of undergraduates who edited and annotated a DNA sequence (the Muller F element, or the “dot” chromosome) in fruit flies by analyzing and integrating different types of data.  Unlike papers that use Mechanical Turk to collect data, where the crowd is typically non-experts, this particular crowd learned a set of specialized skills that facilitated the research.  Undergrads dirtied their hands with real data and gained valuable insights into how to conduct research.  The educator in me finds the endeavor incredibly impactful for these scientists-in-training.

On the other hand, being buried in the author list makes one’s contributions look meaningless.  What does it mean to be a co-author on such a paper?  If the Genomics Education Partnership consisted of only a few dozen undergraduates, would it be better?  Some of these questions are discussed in the NeuroDojo blog post.  The “I-want-to-get-tenure” academic in me cringes at the thought that, were I in the middle of such an author list, good research might be down-weighted because my contribution was unclear.

I think that the undergraduates from the Genomics Education Partnership did conduct research that contributed to the paper, and they should be credited in some way.  It seems that, in the age of crowdsourcing, perhaps there needs to be an intermediate category between authorship and acknowledgement that indicates a collective contribution from a group of people (e.g., students in a class or members of a consortium).

Responses to the sexist review at PLOS One

There has recently been a lot of attention on the journal PLOS One and its handling of a heavily gender-biased review received by an evolutionary biologist on a manuscript about gender differences in the Ph.D.-to-postdoc academic transition.  PLOS One has taken a few actions, including asking the academic editor who handled the manuscript to step down from the editorial board and removing the offending reviewer from its reviewer database.  Dr. Michael Eisen, one of the founders of PLOS, has provided interesting commentary on the subject.

What’s just as troubling is that the reviewer’s criticism clearly drew on a personal assessment of not only the authors’ gender but also their “junior” academic status.  A blog that focuses on manuscript retractions has another summary of the issue, in which Dr. Fiona Ingleby, one of the authors of the manuscript, is quoted:

Megan and Fiona are pretty unambiguous names when it comes to guessing gender.  But in fact, the reviewer acknowledged that they had looked up our websites prior to reading the MS (they said so in their review). They used the personal assessment they made from this throughout their review – not just gender, but also patronising comments throughout that suggested the reviewer considered us rather junior.  – Fiona Ingleby

This has raised some major issues about the peer-review process, including whether a reviewer’s identity should ever be revealed in a single-blind or double-blind review.  Dr. Eisen addresses this in his blog post linked above.  Dr. Zuleyka Zevallos wrote about removing publishing bias – here, in the form of sexism – from science. In response to Dr. Eisen’s post, she explicitly addresses the accountability that institutions and publishers need to have in place.

It will be interesting to see how PLOS One and other publishers address this now-viral issue, especially through changes to the peer-review process.  Some have noted that the problem wasn’t necessarily procedural: the PLOS One editor failed in his or her duty by returning this gender-biased review to the authors instead of disregarding it.  But more and more journals are moving to different review styles, including Nature’s experiment with double-blind peer review.  If the PLOS One review had been double-blind, the reviewer might have guessed the gender of the authors but would not have been able to verify it, let alone verify their current academic positions.

Gendered language in reference letters

…”nice” never got me a research grant or professional position. –Marcia McNutt

I previously posted about an interactive visualization tool that shows language differences in reviews of male vs. female professors on ratemyprofessors.com, revealing a startling difference in word frequencies.  Those reviews may have been hastily written by students who did not think much about their word choices when describing their professors.

What about letters of reference?  These are often critical pieces of information for acquiring a job, securing grant funding, and long-term career success.  A colleague just forwarded me an editorial that appeared in Science, written by Marcia McNutt, Editor-in-Chief of the Science journals.  McNutt was recently tasked with reviewing small research grant proposals written by graduate students.  She found that over 10% of the proposals included reference letters with content inappropriate for a funding decision.  While the offending letter writers were both men and women, all of the affected applicants were women.  Since reference letters are meant to highlight an applicant’s qualifications, one would think they would be carefully vetted for gendered language.  Sadly, they are not.

The “Influence of Prestige” in Academia

I’ve been meaning to write a post about a recent article that has attracted a fair amount of attention in the past few weeks.  The article, Systematic inequality and hierarchy in faculty hiring networks by Aaron Clauset et al., appeared in Science Advances in February.

As a Slate article covering the study points out, the authors systematically analyze a rather troubling trend in faculty hiring, one where “faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality.”  I was very interested in this article, not only because I was on the job market this year but also because Aaron was my host when I interviewed at CU Boulder for a faculty position at their (very impressive) BioFrontiers Institute.

Consider a collection of institutions.  For each institution, we know where its graduates obtain faculty positions among the other institutions in the collection.  The authors define a “prestige hierarchy” as an ordered list of these institutions, and the “hierarchy strength” as the fraction of graduates who get faculty positions at institutions lower down on the list.  A large strength value means that graduates rarely move up the list when obtaining faculty positions; a strength of 1 means no graduate ever does.
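To make these definitions concrete, here is a minimal Python sketch that computes the hierarchy strength of a given ordering.  The institutions (A, B, C) and the placement counts are hypothetical toy data, not numbers from the paper:

    # placements[u][v] = number of faculty at v who earned their Ph.D. at u
    # (hypothetical toy data for illustration only)
    placements = {
        "A": {"B": 5, "C": 3},
        "B": {"A": 1, "C": 4},
        "C": {"A": 0, "B": 1},
    }

    def hierarchy_strength(order, placements):
        """Fraction of placements that land at institutions ranked lower
        (less prestigious) than the doctoral institution."""
        rank = {inst: i for i, inst in enumerate(order)}  # 0 = most prestigious
        down = total = 0
        for u, hires in placements.items():
            for v, n in hires.items():
                total += n
                if rank[v] > rank[u]:  # hired "below" the Ph.D. institution
                    down += n
        return down / total

    print(hierarchy_strength(["A", "B", "C"], placements))  # 12/14 ≈ 0.857

Under the ordering A, B, C, 12 of the 14 toy placements move down the hierarchy, giving a strength of about 0.86.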

Across disciplines, we find steep prestige hierarchies, in which only 9 to 14% of faculty are placed at institutions more prestigious than their doctorate…Furthermore, the extracted hierarchies are 19 to 33% stronger than expected from the observed inequality in faculty production rates alone…indicating a specific and significant preference for hiring faculty with prestigious doctorates. – Clauset et al.

The main point of this article can be summed up in a picture. It’s a network, actually – Aaron develops algorithms to analyze networks (including biological ones – hence his affiliation with BioFrontiers).  The nodes are universities, and the edge widths convey how many graduates of one university obtain faculty positions at the university the edge points to.  This network can be rearranged to form an ordering – a prestige hierarchy – that minimizes the total weight of “upward” edges, that is, placements at institutions ranked above the doctoral institution.  A brute-force sketch of this search appears after the figure caption below.

Fig. 1 Prestige hierarchies in faculty hiring networks. (Top) Placements for 267 computer science faculty among 10 universities, with placements from one particular university highlighted. Each arc (u,v) has a width proportional to the number of current faculty at university v who received their doctorate at university u (≠v). (Bottom) Prestige hierarchy on these institutions that minimizes the total weight of “upward” arcs, that is, arcs where v is more highly ranked than u.
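Continuing the toy example from above, a brute-force search over orderings (feasible only at this tiny scale; the paper uses more scalable methods for hundreds of institutions) finds the prestige hierarchy that minimizes the total weight of upward arcs:

    from itertools import permutations

    # Same hypothetical toy data as in the previous sketch.
    placements = {
        "A": {"B": 5, "C": 3},
        "B": {"A": 1, "C": 4},
        "C": {"A": 0, "B": 1},
    }

    def upward_weight(order, placements):
        """Total weight of placements at institutions ranked above
        (more prestigious than) the doctoral institution."""
        rank = {inst: i for i, inst in enumerate(order)}
        return sum(n for u, hires in placements.items()
                   for v, n in hires.items() if rank[v] < rank[u])

    best = min(permutations(placements), key=lambda o: upward_weight(o, placements))
    print(best)  # ('A', 'B', 'C') – the inferred hierarchy, most prestigious first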

One conclusion from the article is that doctoral prestige is an accurate estimator of faculty placement.  In other words, if I know where you did your Ph.D., I have a pretty good idea where you will become a faculty member.  I really screwed with the system by choosing to become a faculty member at a liberal arts school, but it is interesting to see this prestige hierarchy.