Gender & Racial Disparities in Big Cancer Data

As a researcher who works with large publicly available biological datasets, I was reminded of the potential biases in big data when I came across this blog post from the University of Michigan Health Lab:

How Genomic Sequencing May Be Widening Racial Disparities in Cancer Care. Nicole Fawcett, Aug 17, 2016.

Cancer is a notoriously heterogeneous disease: different patients with the same cancer type may harbor different sets of mutations. Further, many genes associated with cancer tend to be mutated at very low frequencies in tumors [1]. To gain enough statistical power to confidently identify these rare “driver” mutations, we need data from hundreds to thousands of tumor samples. Obtaining that many samples often means collecting tissue wherever and whenever possible.

The Cancer Genome Atlas (TCGA) is a massive data repository for dozens of cancers, containing data from hundreds to thousands of individuals for most cancer types. The post above describes a recent study that determined the racial breakdown of tumor samples in 10 of the 31 tumor types in TCGA. The authors found that while the samples were racially diverse (in some cases even mirroring the U.S. population), the numbers of African-American, Asian, and Hispanic samples were too small to identify group-specific mutations occurring at 10% frequency for any tumor type except breast cancer in African-Americans. On the other hand, there were enough Caucasian samples in every tumor type to identify mutations at 10% frequency (and at 5% frequency for 8 of the 10 tumor types assessed). Consequently, we identify more “rare” mutations that pertain to Caucasians simply because we have more data to support the findings. Further, only 3% of the total samples were Hispanic, while Hispanics comprise 16% of the U.S. population.
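To get a feel for why subgroup size matters so much, here is a toy power calculation of my own (a sketch, not the model used in the study): it treats calling a gene “significantly mutated” as a one-sided binomial test against an assumed 1% background mutation rate, with a Bonferroni-style significance threshold spread over roughly 20,000 genes. The cohort sizes are purely illustrative.

    from scipy.stats import binom

    def detection_power(n_samples, true_freq, background=0.01, alpha=0.05 / 20000):
        """Power of a one-sided binomial test to flag a gene as mutated above
        background when its true mutation frequency in the subgroup is true_freq.
        The background rate and alpha are illustrative assumptions, not values
        taken from the study."""
        # Smallest mutation count that would be called significant under H0.
        k = 0
        while binom.sf(k - 1, n_samples, background) > alpha:
            k += 1
        # Probability of reaching that count if the true frequency is true_freq.
        return binom.sf(k - 1, n_samples, true_freq)

    # Illustrative cohort sizes: a small subgroup vs. progressively larger ones.
    for n in (80, 200, 700):
        print(f"n = {n:4d}   power to detect a 10%-frequency mutation: "
              f"{detection_power(n, true_freq=0.10):.2f}")

Even under this crude model, power climbs steeply with cohort size, which is exactly the advantage the much larger Caucasian cohorts enjoy.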

This disparity is not limited to race. Gender representation in big cancer data has also been in the press. The under-representation of women in sex-nonspecific cancer studies over the past 15 years has been reviewed by Hoyt and Rubin (Cancer 2012), who noted that this gap may be widening.

Want to see the discrepancies for yourself?  The data is easy enough to obtain, but Enpicom has a fantastic interactive visualization of the entire TCGA data repository by patient gender, race, and age.
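If you would rather tally the numbers yourself, here is a minimal sketch. It assumes you have exported a TCGA clinical table as a tab-delimited file; the file name clinical.tsv and the column names gender and race are placeholders and will vary depending on where you download the data.

    import pandas as pd

    # Placeholder file and column names; adjust to match your TCGA clinical download.
    clinical = pd.read_csv("clinical.tsv", sep="\t")

    for column in ("gender", "race"):
        counts = clinical[column].value_counts(dropna=False)
        percentages = (100 * counts / len(clinical)).round(1)
        print(f"\n{column} breakdown across {len(clinical)} patients (%):")
        print(percentages)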


Consider glioma, for example: while the incidence rate of brain tumors is higher in women than in men [2], women comprise only 41.4% of the more than 1,100 samples.


Even more alarmingly, over 88% of the samples are Caucasian.

There is evidence of higher incidence rates of brain cancer in Caucasians compared with African-Americans and Hispanics, but surely that does not justify such over-representation in this dataset.

So, what should we do?

On one hand, we need to design data collection efforts carefully to ensure that different racial/ethnic groups are adequately represented, not simply to reflect their proportions in the U.S. population but to gain enough statistical power to confidently identify rare mutations in each group. On the other hand, “convenience sampling,” collecting tumors from wherever they are easiest to obtain, even if the resulting population is homogeneous, is what has enabled consortia to amass enough data in the first place. In fact, we understand the “rare mutation” problem as well as we do largely because of the mostly white patient data collected by TCGA and others.

The only clear answer is that we need more data.


[1] This is often called the “long tail” distribution of cancer gene mutations. For more information, see, for example, Lessons from the Cancer Genome, Garraway and Lander, Cell, 2013.

[2] All primary malignant and non-malignant brain and CNS tumors.  In fact, the incidence rate of malignant brain tumors is slightly higher in men.  Cancer statistics from the Central Brain Tumor Registry of the United States.


Yep, cancer is still complicated


If you haven’t read The Emperor of All Maladies: A Biography of Cancer by Siddhartha Mukherjee, I highly recommend it. And if you would rather watch, Ken Burns produced a documentary based on the book that recently aired on PBS. While we have come a long way in cancer research, it is alarming how little we still know. In the age of personalized medicine and a plethora of cancer datasets, you would think that cancer would be getting, at the very least, easier to understand. This New York Times opinion article gives a few examples where finding a druggable mutation is not as easy as one would hope.

Trying to Fool Cancer – NYTimes.com.

Ephemeralization

This WIRED article resonated with the New Media Seminar I’m taking at Virginia Tech.

Big Data: One Thing to Think About When Buying Your Apple Watch | WIRED.

I hadn’t heard of the term ephemeralization, coined by Buckminster Fuller, before: the promise of technology to do “more and more with less and less until eventually you can do everything with nothing.” Fuller cites Ford’s assembly line as one example of ephemeralization. Ali Rebaie, the author of the WIRED article, writes that the Big Data movement is another form of it. Our ability to analyze huge datasets has led to the design of more efficient technology. All in all, Fuller seems to fit right in with the other thinkers we have been reading in the seminar.

The vision of machine learning, from 1950

Reading: “Computing Machinery and Intelligence” by Alan Turing. Mind: A Quarterly Review of Psychology and Philosophy 59(236):433–460, October 1950. (A reprint is easy to find with a quick Google search.)

Computer science majors learn about the famous Turing Machine in any introductory Theory of Computation class. They might also get a cursory mention of the “Imitation Game,” the subject of this article (though with the recent movie, that may change). I am intrigued by many aspects of this article, but I will limit my observations to two items.

Part I: Could this article be published today?

The notion of the “Imitation Game,” and Turing’s exploration of its feasibility, is incredibly forward-thinking for his time, so much so that he admits to his audience that he doesn’t have much in the way of proof:

The reader will have anticipated that I have no very convincing arguments of a positive nature to support my views.  If I had I should not have taken such pains to point out the fallacies in contrary views.

The article was published in a philosophy journal, so Turing could let his arguments take idealistic positions that were not practical at the time (though many of them are closer to reality today). Yet he does not dwell on arguments establishing the feasibility of such a computer (or its program); instead, he lays out a framework for “teaching” machines to play the Imitation Game. In his descriptions I can easily see the foundations of fundamental computer science sub-disciplines such as artificial intelligence and machine learning. He truly was an innovative thinker for his time. I wonder whether a similarly forward-thinking article, offering little evidence for its idealistic scenarios, could be published today. Perhaps there is a Turing of 2015 trying to convince the scientific community of a technological capacity that will only be confirmed fifty years from now.

Part II: Scale

There are many numbers in Turing’s article relating to the amount of storage capacity required for a computer to successfully participate in the Imitation Game.  He didn’t seem to be too worried about storage requirements:

I believe that in about fifty years’ time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.

I was interested in seeing how accurate his estimates were. Keep in mind that 10 × 10^2 = 10^3; that is, each time the exponent increases by one, we multiply the quantity by 10. For example, consider the capacity of the Encyclopaedia Britannica:

  • 2×10^9 bits: capacity of the Encyclopaedia Britannica, 11th Ed. (Turing, 1950)
  • 8×10^9 bits: capacity of the Encyclopaedia Britannica, 2010 Ed. (the last edition to be printed)

We see that the size of the encyclopedia has quadrupled in the past 60 years.  Now, let’s look at Turing’s estimates of the capacities of both a future computer and the human brain.

  • 10^9 bits: capacity of a computer by 2000 (Turing, 1950)
  • 10^10–10^15 bits: estimated capacity of the human brain (Turing, 1950)
  • 3×10^10 bits: standard memory of a MacBook Pro, 2015 (4 GB of memory)
  • 4×10^12 bits: standard storage of a MacBook Pro, 2015 (500 GB of storage)
  • 8×10^12–8×10^13 bits: estimated capacity of the human brain (thanks, Slate, 2012)
  • 2×10^13 bits: a pretty cheap external hard drive (3 TB)
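
(For anyone who wants to check the unit conversions behind the hardware figures above, here is a tiny sketch; it simply converts the advertised byte capacities to bits.)

    BITS_PER_BYTE = 8

    # Same hardware figures as in the list above, converted from bytes to bits.
    capacities_in_bits = {
        "Turing's year-2000 computer (1950 estimate)": 1e9,  # given directly in bits
        "MacBook Pro memory, 2015 (4 GB)": 4e9 * BITS_PER_BYTE,
        "MacBook Pro storage, 2015 (500 GB)": 500e9 * BITS_PER_BYTE,
        "Cheap external hard drive (3 TB)": 3e12 * BITS_PER_BYTE,
    }

    for name, bits in sorted(capacities_in_bits.items(), key=lambda item: item[1]):
        print(f"{bits:9.1e} bits   {name}")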

Our current laptops hold more bits in memory than Turing estimated a computer would need to play the imitation game! Pretty amazing. Now consider the speed (in FLOPS, floating-point operations per second) of two of the world’s supercomputers:

  • 80×10^12 FLOPS: IBM’s Watson, designed to answer questions on Jeopardy! (80 teraFLOPS)
  • 33.86×10^15 FLOPS: Tianhe-2, the world’s fastest supercomputer according to TOP500 (33.86 petaFLOPS)

In 2011, USC researchers estimated that humanity’s total information storage capacity was about 295 exabytes, which translates to roughly 2.3×10^21 bits. That’s a number even I cannot comprehend.