Here are two IQ test questions for you:

  1. Fill in the blank in the sequence 1, 4, 9, 16, 25, __ , 49, 64, 81.
  2. What number comes next in the sequence 1, 1, 2, 3, 5, 8, 13, .. ?

Please stop and think about these questions before proceeding. Spoiler alert: the blog post reveals the answers.


Earlier this month I posted a new paper on the bioRxiv:

Jase Gehring, Jeff Park, Sisi Chen, Matt Thomson, and Lior Pachter, Highly Multiplexed Single-Cell RNA-seq for Defining Cell Population and Transcriptional Spaces, bioRxiv, 2018.

The paper offers some insights into the benefits of multiplex single-cell RNA-Seq, a molecular implementation of information multiplexing. The paper also reflects the benefits of a multiplex lab, and the project came about thanks to Jase Gehring, a multiplex molecular biologist/computational biologist in my lab.

mult·i·plex
/ˈməltəˌpleks/
adjective
– consisting of many elements in a complex relationship.
– involving simultaneous transmission of several messages along a single channel of communication.

Conceptually, Jase’s work presents a method for chemically labeling cells from multiple samples with DNA nucleotides so that samples can be pooled prior to single-cell RNA-Seq, yet cells can subsequently be associated with their samples of origin after sequencing. This is achieved by labeling all cells from a sample with DNA that is unique to that sample; in the figure below colors are used to represent the different DNA tags that are used for each sample:

[Figure: cells from each sample labeled with a sample-specific DNA tag, represented by colors]

This is analogous to the barcoding of transcripts in single-cell RNA-Seq, which allows transcripts from the same cell of origin to be associated with each other, yet in this framework there is an additional layer of barcoding of cells.
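To make the tag-to-sample association concrete, here is a minimal sketch (with invented tag names, barcodes, and counts, not the paper's pipeline) of the demultiplexing logic: each cell barcode accumulates counts of the sample tags, and a cell is assigned to the sample whose tag dominates.

```python
# A toy demultiplexer: assign each cell to a sample by its dominant tag.
# Tag names, barcodes, and counts are invented for illustration.
from collections import Counter

# cell barcode -> counts of each sample tag observed for that cell
tag_counts = {
    "AAACCTG": Counter({"tag_sampleA": 412, "tag_sampleB": 7}),
    "CCGTTAA": Counter({"tag_sampleB": 388, "tag_sampleA": 5}),
    "GGTACCA": Counter({"tag_sampleA": 301, "tag_sampleB": 295}),  # ambiguous
}

def assign_sample(counts, purity=0.8):
    """Assign a cell to a sample if one tag holds >= `purity` of its tag reads."""
    total = sum(counts.values())
    tag, n = counts.most_common(1)[0]
    return tag if n / total >= purity else "unassigned (possible doublet)"

for cell, counts in tag_counts.items():
    print(cell, "->", assign_sample(counts))
```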

The tagging mechanism is a click chemistry one-pot, two-step reaction in which cell samples are exposed to methyltetrazine-activated DNA (MTZ-DNA) oligos as well as the amine-reactive cross-linker NHS-trans-cyclooctene (NHS-TCO). The NHS-functionalized oligos are formed in situ by reaction of methyltetrazine with trans-cyclooctene (the inverse-electron demand Diels-Alder (IEDDA) reaction). Nucleophilic amines present on all proteins, but not nucleic acids, attack the in situ-formed NHS-DNA, chemoprecipitating the functionalized oligos directly onto the cells:

[Figure: the one-pot, two-step tagging reaction]

MTZ-DNAs are made by activating 5′-amine modified oligos with NHS-MTZ for the IEDDA reaction, and they are designed with a PCR primer, a cell tag (a unique “barcode” sequence) and a poly-A tract so that they can be captured by poly-T during single-cell RNA-Seq:

[Figure: tag oligo design (PCR primer, cell tag, poly-A tract)]

Such oligos can be readily ordered from IDT. We are careful to refer to the identifying sequences in these oligos as cell tags rather than barcodes, so as not to confuse them with the cell barcodes used in single-cell RNA-Seq to associate transcripts with cells.
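To illustrate the oligo design, here is a toy sketch of the tag layout; the primer and tag sequences are invented placeholders, not the sequences we ordered.

```python
# Toy model of a sample tag oligo: shared PCR primer + sample-specific cell
# tag + poly-A tract (captured by poly-T priming in single-cell RNA-Seq).
# All sequences below are invented placeholders.
PCR_PRIMER = "ACGTACGTACGTACGTACGT"  # shared handle for library amplification
POLY_A = "A" * 30                    # enables poly-T capture

cell_tags = {"sample1": "AAGGTTCC", "sample2": "GGCCAATT"}

oligos = {name: PCR_PRIMER + tag + POLY_A for name, tag in cell_tags.items()}
for name, seq in oligos.items():
    print(name, seq)
```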

The process of sample tagging for single-cell RNA-Seq is illustrated in the figure below. It shows how the tags, appearing as synthetic “transcripts” in cells, are captured during 3′ based microfluidic single-cell RNA-Seq and are subsequently deciphered by sequencing a tag library alongside the cDNA library:

[Figure: capture of sample tags during 3′-based microfluidic single-cell RNA-Seq]

The significance of multiplexing is manifold. First, by labeling cells prior to performing single-cell RNA-Seq, multiplexing allows for controlling a trade-off between the number of cells assayed per sample and the total number of samples analyzed. This allows for leveraging the large number of cells that can be assayed with current technologies to enable complex experimental designs based on many samples. In our paper we demonstrate this by performing an experiment consisting of single-cell RNA-Seq of neural stem cells (NSCs) exposed to 96 different combinations of growth factors. The experiment was conducted in collaboration with the Thomson lab, which is interested in performing large-scale perturbation experiments to understand cell fate decisions in response to developmental signals. We examined NSCs subjected to different concentrations of Scriptaid/Decitabine, epidermal growth factor/basic fibroblast growth factor, retinoic acid, and bone morphogenetic protein 4. In other words, our experiment corresponded to a 4x4x6 table of conditions, and for each condition we performed a single-cell RNA-Seq experiment (in multiplex).

This is one of the largest (in terms of samples) single-cell RNA-Seq experiments to date: a 100-fold decrease in the number of cells we collected per sample allowed us to perform an experiment with 100x more samples. Without multiplexing, an experiment that cost us ~$7,000 would cost a few hundred thousand dollars, well outside the scope of what is possible in a typical lab. We certainly would not have been able to perform the experiment without multiplexing. Although the cost tradeoff is impactful, there are many other important implications of multiplexing as well:

  • Whereas simplex single-cell RNA-Seq is descriptive, focusing on what is in a single sample, multiplex single-cell RNA-Seq allows for asking how: How do cell states change in response to perturbations? How does disease affect cell state and type?
  • Simplex single-cell RNA-Seq leads to systematic arguments about clustering: when do cells that cluster together constitute a “cell type”? How many clusters are real? How should clustering be performed? Multiplex single-cell RNA-Seq provides an approach to assigning significance to clusters via their association with samples. In our paper, we specifically utilized sample identification to determine the parameters/thresholds for the clustering algorithm:
    [Figure: t-SNE colored by sample (left) and by de novo cluster (right)]
    On the left-hand side is a t-SNE plot labeled by the different samples, and on the right-hand side are the de novo clusters. The experiment allowed us to confirm the functional significance of a cluster as a cell state resulting from a specific range of perturbation conditions.
  • Multiplexing reduces batch effects, and also makes possible the procurement of more replicates in experiments, an important aspect of single-cell RNA-Seq as noted by Hicks et al. 2017.
  • Multiplexing has numerous other benefits, e.g. allowing for the detection of doublets and their removal prior to analysis. This useful observation of Stoeckius et al. makes possible higher-throughput single-cell RNA-Seq. We also found an intriguing relationship between tag abundance and cell size. Both of these phenomena are illustrated in one supplementary figure of our paper that I’m particularly fond of:

[Figure: supplementary figure showing doublet detection and the relationship between tag abundance and cell size]

It shows a multiplexing experiment in which 8 different samples have been pooled together. Two of these samples are human-only samples, and two are mouse-only. The remaining four are samples in which human and mouse cells have been mixed together (with 2, 3, 4, and 5 tags used for each sample, respectively). The t-SNE plot is made from the tag counts, which is why the samples are neatly separated into 8 clusters. However, in panel b the cells are colored by their cDNA content (human, mouse, or both). The pure samples are readily identifiable, as are the mixed samples. Cell doublets (purple) can be easily identified and therefore removed from analysis. The relationship between cell size and tag abundance is shown in panel d: for a given sample with both human and mouse cells (bottom row), human cells give consistently higher sample tag counts. Along with all of this, the figure shows we are able to label a sample with 5 tags, which means that using only 20 oligos (the number we worked with for all of our experiments) it is possible to label {20 \choose 5} = 15,504 samples.

  • Thinking about hundreds (and soon thousands) of single-cell experiments is going to be complicated. The cell-gene matrix that is the fundamental object of study in single-cell RNA-Seq extends to a cell-gene-sample tensor. While more complicated, there is an opportunity for novel analysis paradigms to be developed. A hint of this is evident in our visualization of the samples by projecting the sample-cluster matrix. Specifically, the matrix below shows which clusters are represented within each sample, and the matrix is quantitative in the sense that the magnitude of each entry represents the relative abundance of cells in a sample occupying a given cluster:
    [Figure: the sample-cluster matrix]
    A three-dimensional PCA of this matrix reveals interesting structure in the experiment. Here each point is an entire sample, not a cell, and one can see how changes in factors move samples in “experiment space” (a sketch of this projection follows below):
    [Figure: three-dimensional PCA of the sample-cluster matrix]
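For readers who want to experiment with this representation, here is a minimal sketch of the projection just described, with random data standing in for our sample-cluster matrix:

```python
# Project samples (not cells) into "experiment space": each sample is a
# vector of relative cluster abundances, reduced to 3D with PCA.
# The matrix here is random; in the paper it is 96 samples x clusters.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(96, 10)).astype(float)  # samples x clusters
abundances = counts / counts.sum(axis=1, keepdims=True)

coords = PCA(n_components=3).fit_transform(abundances)
print(coords.shape)  # (96, 3): each row places one whole sample in 3D
```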

As experiments become even more complicated, and single-cell assays become increasingly multimodal (including not only RNA-Seq but also protein measurements, methylation data, etc.) development of a coherent mathematical framework for single-cell genomics will be central to interpreting the data. As Dueck et al. 2015 point out, such analysis is likely to not only be mathematically interesting, but also functionally important.

We aren’t the only group thinking about sample multiplexing for single-cell RNA-Seq. The “demuxlet” method by Kang et al., 2017 is an in silico approach based on multiplexing from genomic variation. Kang et al. show that if pooled samples are genetically heterogeneous, genotype data can be used to separate samples, providing an effective solution for multiplexing single-cell RNA-Seq in large human studies. However demuxlet has limitations; for example, it cannot be used for samples from a homogeneous genetic background. Two papers at the end of last year develop an epitope labeling strategy for multiplexing: Stoeckius et al. 2017 and Peterson et al. 2017. While epitope labeling provides additional information that can be of interest, our method is more universal in that it can be used to multiplex any kind of samples, even from different organisms (a point we make with the species mixing multiplex experiment I described above). The approaches are also not exclusive: epitope labeling could be coupled to a live cell DNA tagging multiplex experiment, allowing for the same epitopes to be assayed together in different samples. Finally, our click chemistry approach is fast, cheap and convenient, immediately providing multiplex capability for thousands, or even hundreds of thousands, of samples.

One interesting aspect of Jase’s multiplexing paper is that the project it describes was itself a multiplexing experiment of sorts. The origins of the experiment date to 2005 when I was awarded tenure in the mathematics department at UC Berkeley. As is customary after tenure trauma, I went on sabbatical for a year, and I used that time to ponder career related questions that one is typically too busy for. Questions I remember thinking about: Why exactly did I become a computational biologist? Was a mathematics department the ideal home for me? Should I be more deeply engaged with biologists? Were the computational biology papers I’d been writing meaningful? What is computational biology anyway?

In 2008, partly as a result of my sabbatical rumination but mostly thanks to the encouragement and support of Jasper Rine, I changed the structure of my appointment and joined the UC Berkeley Molecular and Cell Biology (MCB) department (50%). A year later, I responded to a call by then Dean Mark Schlissel and requested wet lab space in what was to become the Li Ka Shing Center at UC Berkeley. This was not a rash decision. After working with Cole Trapnell on RNA-Seq I’d come to the conclusion that a small wet lab would be ideal for our group to better learn the details of the technologies we were working on, and I felt that practicing them ourselves would ultimately be the best way to arrive at meaningful (computational) methods contributions. I’d also visited David Haussler’s wet lab, where I met Jason Underwood, who was working on FragSeq at the time. I was impressed with his work, and what I saw were important benefits of real contact between wet and dry, experiment and computation.

In 2011 I was delighted to move into my new wet lab. The decision to give me a few benches was a bold and unexpected one, spearheaded by Mark Schlissel, but also supported by a committee he formed to decide on the makeup of the building. I am especially grateful to John Ngai, Art Reingold and Randy Schekman for their help. However I was in a strange position starting a wet lab as a tenured professor. On the one hand the security of tenure provided some reassurance that a failure in the wet lab would not immediately translate to a failure of career. On the other hand, I had no startup funds to buy all the basic infrastructure necessary to run a lab. CIRM, Mark Schlissel, and later other senior faculty in Molecular & Cell Biology at UC Berkeley, stepped in to provide me with the basics: a −80 and a −20 freezer, access to a shared cold room, a Bioanalyzer (to be shared with others in the building), and a thermocycler. I bought some other basic equipment, but the most important piece was the recruitment of my first MCB graduate student: Shannon Hateley. Shannon and I agreed that she would set up the lab and also be lab manager, while I would supervise purchasing and other organizational lab matters. I obtained informed consent from Shannon prior to her joining my lab, for what would be a monumental effort requested of her. We also agreed she would be co-advised by another molecular biologist “just in case”.

With Shannon’s work and then that of my second molecular biology student, Lorian Schaeffer, the lab officially became multiplexed. Jase, who initiated and developed not only the molecular biology but also the computational biology of Gehring et al. 2018, is the latest experimentalist to multiplex in our group. However some of the mathematicians now multiplex as well. This has been a boon to the research of the group and I see Jase’s paper as fruit that has grown from the diversity in the lab. Moving forward, I see increasing use of mathematical ideas in the development of novel molecular biology. For example, current single-cell RNA-Seq multiplexing is a form of information multiplexing that is trivial in comparison to the multiplexing ideas from information theory; the achievements are in the molecular implementations, but in the future I foresee much more of a blur between wet and dry and increasingly sophisticated mathematical ideas being implemented with molecular biology.

[Photo: Hedy Lamarr publicity photo for The Heavenly Body, 1944]

Hedy Lamarr, the mother of multiplexing.

Some anti-Semitism is justified

Whenever you interview fat people, you feel bad, because you know you’re not going to hire them

Japan should be bombed for dragging its feet on supporting the Human Genome Project

All our social policies are based on the fact that [Africans] intelligence is the same as ours – whereas all the testing says not really

I think having all these women around makes it more fun for the men but they’re probably less effective

I’m not a racist in a conventional way

There is a biochemical link between exposure to sunlight and sexual urges.. that’s why you have Latin lovers

[The] historic curse of the Irish.. is not alcohol, it’s not stupidity.. it’s ignorance

People say it would be terrible if we made all girls pretty. I think [doing so by genetic selection] would be great

By choice [Rosalind Franklin] did not emphasize her feminine qualities.. There was never lipstick to contrast with her straight black hair, while at the age of thirty-one her dresses showed all the imagination of English blue-stocking adolescents. So it was quite easy to imagine her the product of an unsatisfied mother who unduly stressed the desirability of professional careers that could save bright girls from marriages to dull men.. Clearly Rosy had to go or be put in her place. The former was obviously preferable because given her belligerent moods, it would be very difficult for Maurice [Wilkins] to maintain a dominant position that would allow him to think unhindered about DNA.. The thought could not be avoided that the best home for a feminist was another person’s lab

The one aspect of the Jewish brain that is not first class is that Jews are said to be bad in thinking in three dimensions.. it is true

Women are supposedly bad at three dimensions

[Rosalind Franklin] couldn’t think in three dimensions very well

[Rosalind Franklin] had Aspergers

People ask about [Rosalind Franklin] and I always say ‘autism’

[Francis Crick] may have been a bit autistic

I think now we’re in a terrible situation where we should pay the rich people to have children.. if we don’t encourage procreation of wealthier citizens, IQ levels will most definitely fall.

Men are a bit strange and their strangest quality is their ability to understand mathematics

[Rosalind] Franklin couldn’t do maths

Indians in [my] experience [are] servile.. because of selection under the caste system

Women at Oxford and Cambridge are better than Harvard and Yale because they know their job is to look pretty and get a rich husband

People who have to deal with black employees find [that they are equal] not true

[As a female scientist] you won’t be taken seriously if you have children

Fat people are more sexual

East Asian students [tend] to be conformist, because of selection for conformity in ancient Chinese society

[Linus Pauling] was probably always half-insane

Anyone who would hire an ecologist is out of his mind

[Rosalind Franklin] was a loser

The wider your face, the more likely you are [to be violent].. Senator Jim Webb has the broadest face I’ve ever seen on any man

We already accept that most couples don’t want a Down child. You would have to be crazy to say you wanted one, because that child has no future.

Disabled individuals are genetic losers

[With IVF] all hell will break loose, politically and morally, all over the world

If we knew our son would develop schizophrenia, we wouldn’t have had him

My former colleagues are pinkos and shits

We should perform genome-wide association studies of women who have given up their children for adoption in order to find the ‘loveless gene’

[X University]- it used to be such a wonderful place. And then they started admitting women!

Catholics are more likely to forgive than Jews

If you could find the gene which determines sexuality and a woman decides she doesn’t want a homosexual child, well, let her

 


 

I recently published a paper on the bioRxiv together with Vasilis Ntranos, Lynn Yi and Páll Melsted on Identification of transcriptional signatures for cell types from single-cell RNA-Seq. The contributions of the paper can be summed up as:

  1. The simple technique of logistic regression, by taking advantage of the large number of cells assayed in single-cell RNA-Seq experiments, is much more effective than current approaches at identifying marker genes for clusters of cells.
  2. The simplest single-cell RNA-Seq data, namely 3′ single-end reads produced by technologies such as Drop-Seq or 10X, can distinguish isoforms of genes.
  3. The simple idea of GDE provides a unified perspective on DGE, DTU and DTE.

These simple, simple and simple ideas are so obvious that of course anyone could have discovered them, and one might be tempted to go so far as to say that even if people didn’t explicitly write them down, they were basically already known. After all, logistic regression was published by David Cox in 1958, and who didn’t know that there are many 3′ unannotated UTRs in the human genome? As for DGE, DTU and DTE (and DTE->G and DTE+G), I mean who doesn’t get these basic concepts? Indeed, after reading our paper someone remarked that one of the key results “was already known”, presumably because the successful application of logistic regression as a gene differential expression method for single-cell RNA-Seq follows from the fact that Šidák aggregation fails for differential gene expression in bulk RNA-Seq.

The “was already known” comment reminded me of a recent blog post about the dirty secret of mathematics. In the post, the author begins with the following math problem: Without taking your pencil off the paper/screen, can you draw four straight lines that go through the middle of all of the dots?

[Figure: the nine dots puzzle]

The problem may not yield immediately (try it!) but the solution is obvious once presented. This is a case of the solution requiring a bit of out-of-the-box thinking, leading to a perspective on the problem that is obvious in retrospect. In the Ntranos, Yi et al. paper, the change in perspective was the realization that “Instead of the traditional approach of using the cell labels as covariates for gene expression, logistic regression incorporates transcript quantifications as covariates for cell labels”. It’s no surprise the “was already known” reaction reared its head in this case. It’s easy to convince oneself, after the fact, that the “obvious” idea was in one’s head all along.

The egg of Columbus is an apocryphal tale about ideas that seem trivial after the fact. The story originates from the book “History of the New World” by Girolamo Benzoni, who wrote that Columbus, upon being told that his journey to the West Indies was unremarkable and that Spain “would not have been devoid of a man who would have attempted the same” had he not undertaken the journey, replied

“Gentlemen, I will lay a wager with any of you, that you will not make this egg stand up as I will, naked and without anything at all.” They all tried, and no one succeeded in making it stand up. When the egg came round to the hands of Columbus, by beating it down on the table he fixed it, having thus crushed a little of one end.

[Painting: Columbus Breaking the Egg, by William Hogarth]

The story makes a good point. Discovery of the Caribbean in the 6th millennium BC was certainly not a trivial accomplishment even if it was obvious after the fact. The egg trick, which Columbus would have learned from the Amerindians who first brought chickens to the Americas, is a good metaphor for the discovery.

There are many Amerindian eggs in mathematics, which has its own apocryphal story to make the point: A professor proving a theorem during a lecture pauses to remark that “it is obvious that…”, upon which she is interrupted by a student asking if that’s truly the case. The professor runs out of the classroom to a nearby office, returning after several minutes with a notepad filled with equations to exclaim “Why yes, it is obvious!” But even first-rate mathematicians can struggle to accept Amerindian eggs as worthy contributions, frequently succumbing to the temptation of dismissing others’ work as obvious. One of my former graduate school mentors was G.W. Peck, a math professor who created a pseudonym for the express purpose of publishing his Amerindian eggs in a way that would reduce unintended embarrassment for those whose work he was improving on in “trivial ways”. G.W. Peck has an impressive publication record.

Bioinformatics is not very different from mathematics; the literature is populated with many Amerindian eggs. My favorite example is the Smith-Waterman algorithm, an algorithm for local alignment published by Temple Smith and Michael Waterman in 1981. The Smith-Waterman algorithm is a simple modification of the Needleman-Wunsch algorithm:

[Table: the differences between the Needleman-Wunsch and Smith-Waterman algorithms]

The table above shows the differences. That’s it! This table made for a (highly cited) paper. Just initialize the Needleman-Wunsch algorithm with zeroes instead of a gap penalty, set negative scores to 0, trace back from the highest score. In fact, it’s such a minor modification that when I first learned the details of the algorithm I thought “This is obvious! After all, it’s just the Needleman-Wunsch algorithm. Why does it even have a name?! Smith and Waterman got a highly cited paper?! For this?!” My skepticism lasted only as long as it took me to discover and read Peter Sellers’ 1980 paper attempting to solve the same problem. It’s a lot more complicated, relying on the idea of “inductive steps”, and requires untangling mysterious diagrams such as:

[Figure: a diagram from Sellers 1980]
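To make the “that’s it” concrete, here is a minimal sketch of the Smith-Waterman scoring recurrence (scoring parameters arbitrary); the comments mark exactly where it departs from Needleman-Wunsch:

```python
# Smith-Waterman local alignment score. The differences from
# Needleman-Wunsch: zero initialization, clamping scores at 0, and
# starting the traceback from the maximum cell instead of the corner.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]  # zeros, not gap penalties
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)  # clamp at 0
            best = max(best, H[i][j])  # traceback would start here, at the max
    return best

print(smith_waterman_score("GGTTGACTA", "TGTTACGG"))
```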

The Smith-Waterman solution was clever, simple and obvious (after the fact). Such ideas are a hallmark of Michael Waterman’s distinguished career. Consider the Lander-Waterman model, which is a formula for the expected number of contigs in a shotgun sequencing experiment:

E(contigs) = Ne^{-R}.

Here N is the number of reads sequenced and R=NL/G is the “redundancy” (reads * fragment length / genome length). At first glance the Lander-Waterman “model” is just a formula arising from the Poisson distribution! It was obvious… immediately after they published it. The Pevzner-Tang-Waterman approach to DNA assembly is another good example. It is no coincidence that all of these foundational, important and impactful ideas have Waterman in their name.
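As a sanity check of the formula, a few lines suffice (parameter values chosen purely for illustration):

```python
# Expected number of contigs under the Lander-Waterman model,
# E(contigs) = N * exp(-R) with R = N*L/G. Values below are illustrative.
import math

N = 1_000_000     # number of reads
L = 100           # fragment length (bp)
G = 10_000_000    # genome length (bp)

R = N * L / G                # redundancy (coverage): 10x
contigs = N * math.exp(-R)   # expected contigs: ~45.4
print(R, round(contigs, 1))
```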

Looking back at my own career, some of the most satisfying projects have been Amerindian eggs, projects where I was lucky to participate in collaborations leading to ideas that were obvious (after the fact). Nowadays I know I’ve hit the mark when I receive the most authentic of compliments: “your work is trivial!” or “was widely known in the field”, as I did recently after blogging about plagiarism of key ideas from kallisto. However I’m still waiting to hear the ultimate compliment: “everything you do is obvious and was already known!”


The development of microarray technology two decades ago heralded genome-wide comparative studies of gene expression in humans, but it was the widespread adoption of RNA-Seq that has led to differential expression analysis becoming a staple of molecular biology studies. RNA-Seq provides measurements of transcript abundance, making possible not only gene-level analyses, but also differential analysis of isoforms of genes. As such, its use has necessitated refinements of the term “differential expression”, and new terms such as “differential transcript expression” have emerged alongside “differential gene expression”. A difficulty with these concepts is that they are used to describe biology, statistical hypotheses, and sometimes types of methods. The aims of this post are to provide a unifying framework for thinking about the various concepts, to clarify their meaning, and to describe connections between them.

To illustrate the different concepts associated with differential expression, I’ll use the following example, consisting of a comparison of a single two-isoform gene in two conditions (the figure is Supplementary Figure 1 in Ntranos, Yi et al. Identification of transcriptional signatures for cell types from single-cell RNA-Seq, 2018):

[Figure: Supplementary Figure 1 from Ntranos, Yi et al. 2018]

The isoforms are labeled primary and secondary, and the two conditions are called “A” and “B”. The black dots labeled conditions A and B have x-coordinates x_A and x_B corresponding to the abundances of the primary isoform in the respective conditions, and y-coordinates y_A and y_B corresponding to the abundances of the secondary isoform. In data from an experiment the black dots will represent the mean level of expression of the constituent isoforms as derived from replicates, and there will be uncertainty as to their exact location. In this example I’ll assume they represent the true abundances.

Biology

Below is a list of terms used to characterize changes in expression:

Differential transcript expression (DTE) is a change in one of the isoforms. In the figure, this is represented (conceptually) by the two red lines along the x- and y-axes respectively. Algebraically, one might compute the change in the primary isoform by x_B-x_A and the change in the secondary isoform by y_B-y_A. However the term DTE is used to denote not only the extent of change, but also the event that a single isoform of a gene changes between conditions, i.e. when the two points lie on a horizontal or vertical line. DTE can be understood to occur as a result of transcriptional regulation if an isoform has a unique transcription start site, or post-transcriptional regulation if it is determined by a unique splicing event.

Differential gene expression (DGE) is a change in the overall output of the gene. Change in the overall output of a gene is change in the direction of the line y=x, and the extent of change can be understood geometrically to be the distance between the projections of the two points onto the line y=x (blue line labeled DGE). The distance will depend on the metric used. For example, the change in expression could be defined to be the total expression in condition B (x_B+y_B) minus the total expression in condition A (x_A+y_A), whose magnitude is |x_B-x_A+y_B-y_A|. This is just the length of the blue line labeled “DGE” given by the L_1 norm. Alternatively, one could consider “DGE” to be the length of the blue line in the L_2 norm. As with DTE, DGE can also refer to a specific type of change in gene expression between conditions, one in which every isoform changes (relatively) by the same amount so that the line joining the two points has a slope of 1 (i.e. is angled at 45°). DGE can be understood to be the result of transcriptional regulation, driving overall gene expression up or down.

Differential transcript usage (DTU) is the change in relative expression between the primary and secondary isoforms. This can be interpreted geometrically as the angle between the two points, or alternatively as the length (as given by some norm) of the green line labeled DTU. As with DTE and DGE, DTU is also a term used to describe a certain kind of difference in expression between two conditions, one in which the line joining the two points has a slope of -1. DTU events are most likely controlled by post-transcriptional regulation.

Gene differential expression (GDE) is represented by the red line. It is the amount of change in expression along the direction of the line joining the two points. GDE is a notion that, for reasons explained below, is not typically tested for, and there are few methods that consider it. However GDE is biologically meaningful, in that it generalizes the notions of DGE, DTU and DTE, allowing for change in any direction. A gene that exhibits some change in expression between conditions is GDE regardless of the direction of change. GDE can represent complex changes in expression driven by a combination of transcriptional and post-transcriptional regulation. Note that DGE, DTU and DTE are all special cases of GDE.

If the L_2 norm is used to measure length and DTE_1, DTE_2 denote DTE in the primary and secondary isoforms respectively, then it is clear that DGE, DTU, DTE and GDE satisfy the relationship

GDE^2 = DGE^2 + DTU^2 = DTE_1^2 + DTE_2^2.
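A quick numeric check of this identity, with arbitrary abundances:

```python
# Verify GDE^2 = DGE^2 + DTU^2 = DTE_1^2 + DTE_2^2 in the L2 norm.
import math

xA, yA = 3.0, 1.0  # condition A: (primary, secondary) abundances
xB, yB = 5.0, 4.0  # condition B

dte1, dte2 = xB - xA, yB - yA
gde = math.hypot(dte1, dte2)           # length of the change vector
dge = abs(dte1 + dte2) / math.sqrt(2)  # projection onto the line y = x
dtu = abs(dte1 - dte2) / math.sqrt(2)  # projection onto the line y = -x

print(gde**2, dge**2 + dtu**2, dte1**2 + dte2**2)  # all ~13.0
```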

 

Statistics

The terms DTE, DGE, DTU and GDE have an intuitive biological meaning, but they are also used in genomics as descriptors of certain null hypotheses for statistical testing of differential expression.

The differential transcript expression (DTE) null hypothesis for an isoform is that it did not change between conditions, i.e. x_A=x_B for the primary isoform, or y_A=y_B for the secondary isoform. In other words, in this example there are two DTE null hypotheses one could consider.

The differential gene expression (DGE) null hypothesis is that there is no change in overall expression of the gene, i.e. x_A+y_A = x_B+y_B.

The differential transcript usage (DTU) null hypothesis is that there is no change in the difference in expression of isoforms, i.e. x_A-y_A = x_B - y_B.

The gene differential expression (GDE) null hypothesis is that there is no change in expression in any direction, i.e. for all constants a,b, ax_A+by_A = ax_B+by_B.

The union differential transcript expression (UDTE) null hypothesis is that there is no change in expression of any isoform. That is, that x_A = x_B and y_A = y_B (this null hypothesis is sometimes called DTE+G). The terminology is motivated by \neg \cup_i DTE_i = \cap_i \neg DTE_i.

Note that UDTE \Leftrightarrow GDE, because if we assume GDE and set a=1, b=0 we obtain DTE for the primary isoform, and setting a=0, b=1 we obtain DTE for the secondary isoform. To be clear, by GDE or DTE in this case we mean the GDE (respectively DTE) null hypothesis. Furthermore, we have that

UDTE, GDE \Rightarrow DTE, DGE, DTU.

This is clear because if x_A=x_B and y_A=y_B then both DTE null hypotheses are satisfied by definition, and both DGE and DTU are trivially satisfied. However no other implications hold, i.e. DTE \not\Rightarrow DGE, DTU, similarly DGE \not\Rightarrow DTE, DTU, and DTU \not\Rightarrow DGE, DTE.

Methods

The terms DGE, DTE, DTU and GDE are also used to describe methods for differential analysis.

A differential gene expression method is one whose goal is to identify changes in overall gene expression. Because DGE depends on the projection of the points (representing gene abundances) onto the line y=x, DGE methods typically take as input gene counts or abundances computed by summing transcript abundances: x_A+y_A and x_B+y_B. Examples of early DGE methods for RNA-Seq were DESeq (now DESeq2) and edgeR. One problem with DGE methods is that estimating gene abundance by adding up counts of the constituent isoforms is problematic; this issue was discussed extensively in Trapnell et al. 2013. On the other hand, if the biology of a gene is DGE, i.e. changes in expression are the same (relatively) in all isoforms, then DGE methods will be optimal, and the issue of summed counts not representing gene abundances accurately is moot.

A differential transcript expression method is one whose goal is to identify individual transcripts that have undergone DTE. Early methods for DTE were Cufflinks (now Cuffdiff2) and MISO, and more recently sleuth, which improves DTE accuracy by modeling uncertainty in transcript quantifications. A key issue with DTE is that there are many more transcripts than genes, so that rejecting DTE null hypotheses is harder than rejecting DGE null hypotheses. On the other hand, DTE provides differential analysis at the highest resolution possible, pinpointing specific isoforms that change and opening a window to study post-transcriptional regulation. A number of recent examples highlight the importance of DTE in biomedicine (see, e.g., Vitting-Seerup and Sandelin 2017). Unfortunately DTE results do not always translate to testable hypotheses, as it is difficult to knock out individual isoforms of genes.

A differential transcript usage method is one whose goal is to identify genes whose overall expression is constant, but where isoform switching leads to changes in relative isoform abundances. Cufflinks implemented a DTU test using the Jensen-Shannon divergence, and more recently RATs is a method specialized for DTU.

As discussed in the previous section, none of the null hypotheses DGE, DTE and DTU implies any other, so users have to choose, prior to performing an analysis, which type of test they will perform. There are differing opinions on the “right” approach to choosing between DGE, DTU and DTE. Soneson et al. 2016 suggest that while DTE and DTU may be appropriate in certain niche applications, generally it’s better to choose DGE, and they therefore advise not to bother with transcript-level analysis. In Trapnell et al. 2010, an argument was made for focusing on DTE and DTU, with the conclusion of the paper speculating that “differential RNA level isoform regulation…suggests functional specialization of the isoforms in many genes.” Van den Berge et al. 2017 advocate for a middle ground: performing a gene-level analysis but saving some “FDR budget” for identifying DTE in genes for which the UDTE null hypothesis has been rejected.

There are two alternatives that have been proposed to get around the difficulty of having to choose, prior to analysis, whether to perform DGE, DTU or DTE:

A differential transcript expression aggregation (DTE->G) method is one that first performs DTE on all isoforms of every gene, and then aggregates the resulting p-values (by gene) to obtain gene-level p-values. The “aggregation” relies on the observation that under the null hypothesis, p-values are uniformly distributed. There are a number of different tests (e.g. Fisher’s method) for testing whether (independent) p-values are uniformly distributed. Applying such tests to isoform p-values per gene provides gene-level p-values and the ability to reject UDTE; a minimal sketch of the aggregation step follows below. A DTE->G method was tested in Soneson et al. 2016 (based on Šidák aggregation) and the stageR method (Van den Berge et al. 2017) uses the same method as a first step. Unfortunately, naïve DTE->G methods perform poorly when genes change by DGE, as shown in Yi et al. 2017. The same paper shows that Lancaster aggregation is a DTE->G method that achieves the best of both the DGE and DTU worlds. One major drawback of DTE->G methods is that they are non-constructive, i.e. the rejection of UDTE by a DTE->G method provides no information about which transcripts were differential and how. The stageR method averts this problem but requires sacrificing some power to reject UDTE in favor of the interpretability provided by subsequent DTE.
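Here is that aggregation step in miniature, using Fisher's method as implemented in scipy (the isoform p-values are invented):

```python
# DTE->G aggregation: combine per-isoform DTE p-values for one gene into a
# single gene-level p-value. Fisher's method shown; Yi et al. 2017 argue
# for Lancaster aggregation instead.
from scipy.stats import combine_pvalues

isoform_pvalues = [0.04, 0.50, 0.73]  # invented DTE p-values for one gene
stat, gene_p = combine_pvalues(isoform_pvalues, method="fisher")
print(gene_p)  # gene-level p-value for rejecting UDTE
```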

A gene differential expression method is a method for gene-level analysis that tests for a difference between conditions along the direction of change identified from the data. For a GDE method to be successful, it must be able to identify the direction of change, and that is not possible with bulk RNA-Seq data. This is because of the one in ten rule, which states that approximately one predictive variable can be estimated for every ten events. In bulk RNA-Seq, the number of replicates in standard experiments is three, while the number of isoforms in multi-isoform genes is at least two, and sometimes much more than that.

In Ntranos, Yi et al. 2018, it is shown that single-cell RNA-Seq provides enough “replicates”, in the form of cells, that logistic regression can be used to predict condition based on expression, effectively identifying the direction of change. As such, it provides an alternative to DTE->G for rejecting UDTE. The Ntranos and Yi GDE method is extremely powerful: by identifying the direction of change it is a DGE method when the change is DGE, a DTU method when the change is DTU, and a DTE method when the change is DTE. Interpretability is provided in the prediction step: it is the estimated direction of change.
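A minimal sketch of the idea, with simulated abundances standing in for real quantifications:

```python
# Logistic regression with transcript abundances as covariates and the
# cell's condition as the response; the fitted weights estimate the
# direction of change. Data are simulated for a two-isoform gene.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500  # cells per condition: the "replicates" supplied by single cells
A = rng.normal([3.0, 1.0], 0.5, size=(n, 2))  # (primary, secondary) in A
B = rng.normal([5.0, 4.0], 0.5, size=(n, 2))  # shifted in condition B

X = np.vstack([A, B])
y = np.array([0] * n + [1] * n)  # condition labels

model = LogisticRegression().fit(X, y)
direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
print(direction)  # estimated direction of change, close to (2, 3)/sqrt(13)
```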

Remarks

The discussion in this post is based on an example consisting of a gene with two isoforms; the concepts discussed are easy to generalize to genes with more than two transcripts. I have not discussed differential exon usage (DEU), the focus of the DEXSeq method, because of the complexities arising in genes that don’t have well-defined shared exons. Nevertheless, the DEXSeq approach to rejecting UDTE is similar to DTE->G, with DTE replaced by DEU. There are many programs for DTE, DTU and (especially) DGE that I haven’t mentioned; the ones cited are intended merely to serve as illustrative examples. This is not a comprehensive review of RNA-Seq differential expression methods.

Acknowledgments

The blog post was motivated by questions of Charlotte Soneson and Mark Robinson arising from an initial draft of the Ntranos, Yi et al. 2018 paper. The exposition was developed with Vasilis Ntranos and Lynn Yi. Valentine Svensson provided valuable comments and feedback.

I have been fascinated with mini computers for some time, and have wondered when they will become suitable for bioinformatics. The 4273π project, an online course distributed as a 32GB SD card image for the Raspberry Pi, has been around for a few years and has demonstrated the utility of mini computers for training. The course is a proof of principle that bioinformatics software can work on a mini computer; the distributed software includes some comparative genomics and phylogenetics programs. However there is not much one can do with 1GB of RAM. The data in 4273π are small FASTA files, and while the Raspberry Pi is powerful enough to allow for experimentation and exploration of such datasets, even the new Raspberry Pi 3, with ten times the performance of the original 2012 model, still has only 1GB of RAM and is not powerful enough for handling the current primary data type of genomics: high-throughput sequencing data.

Enter the Rock64.

The Rock64 is a new single-board computer from Pine64 that competes with the Raspberry Pi 3:

[Table: Rock64 vs. Raspberry Pi 3 specifications]

The Rock64 is evidence of the rapid and impressive development of single-board computers over the past few years, and Pine64 crosses a major threshold by offering a model with 4GB of RAM. The machine is also cheap. A 4GB Rock64, which is a 64-bit, quad-core 1.5GHz machine, costs $44.95 (the 1GB model is just $24.95). An enclosure is $7.95, a power supply $6.99, and a 64GB SSD drive is only $31.95 (the 16GB drive is $15.95). When my student Jase Gehring found out the specs of the machine last summer, he immediately realized that it was powerful enough to run kallisto for RNA-Seq analyses, and we preordered a handful of the boards for the lab. These arrived in the fall and we have been testing the machines for a while. One of them is hooked up to a monitor, and together with a bluetooth mouse and keyboard is serving as a general desktop computer in the wet lab. They are extraordinarily versatile mini computers that, in my opinion, portend a future of mobile, low-cost, and lightweight computing for clinical and field genomics applications.

Unfortunately ARM is not an architecture known to most computational biologists, and my initial enthusiasm for the Rock64 was dampened when I found out that most genomics software does not work on the ARM architecture. However I managed to install R, and Páll Melsted compiled kallisto on the Rock64 for the new release of version 0.44 (the release introduces an ARM binary, along with pseudobam for visualization of pseudoalignments). With these programs in place on Gibraltar (our first Rock64, with 4GB of RAM, a 64GB SSD drive, and a quad-core 1.5GHz processor), there was ample processing power to quantify RNA-Seq datasets.

[Photo: Gibraltar, our Rock64]

For example, I was able to build the Saccharomyces cerevisiae release 81 transcriptome index in one minute. A complete quantification of 6 samples from Ellahi, Thurtle and Rine, 2015 using two cores (with 30 bootstraps per sample) took 21 minutes. The quantification consisted of processing 47,744,312 paired-end reads. Amazingly, the Rock64 can quantify human RNA-Seq, which requires pseudoalignment of reads to a much larger transcriptome than yeast's. A human sample of 15,117,833 paired-end reads (SRR493366) took less than 11 minutes to quantify using a single core. These results show that the Rock64 is not a toy; it can be used for the analysis of high-throughput sequencing data from substantial biological experiments.

[Figure: quantification of a human RNA-Seq sample on the Rock64]
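For anyone who wants to try this kind of run themselves, here is a sketch of the two kallisto commands, driven from Python; the index and FASTQ file names are placeholders:

```python
# Build a transcriptome index and quantify one paired-end sample with
# kallisto (flags -i, -o, -b, -t are standard). File names are placeholders.
import subprocess

subprocess.run(["kallisto", "index", "-i", "transcripts.idx",
                "transcripts.fa.gz"], check=True)

subprocess.run(["kallisto", "quant", "-i", "transcripts.idx", "-o", "out",
                "-b", "30",   # 30 bootstraps per sample, as in the yeast runs
                "-t", "2",    # two cores
                "reads_1.fastq.gz", "reads_2.fastq.gz"], check=True)
```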

It’s mind-boggling to be able to quantify RNA-Seq on such a machine. When we developed kallisto we knew that the two orders of magnitude speedup was a game-changer, but I never thought we would literally be able to run it on what is not much more than a phone. We’re not going to switch over all of our RNA-Seq analyses to the Rock64s quite yet, but cluster assemblies such as the Pico5S have piqued my interest.

[Photo: the Pico5S cluster]

I imagine that it won’t be long before mini computers are even more powerful, and provide ultra low-cost portable alternatives to current server and cloud computing solutions. Having said that, I still miss my Commodore 64. Fortunately the mini revolution isn’t leaving me behind: a mini version of the C64 is slated for release early this year.

On April 11th 2016, I contacted the Office for Prevention of Harassment and Discrimination at UC Berkeley to report that Professor Terry Speed had sexually harassed a postdoctoral researcher in the UC Berkeley statistics department in the period 2000–2002. Two specific allegations were subsequently investigated:

Allegation One: Respondent, a professor in the Statistics Department, sexually harassed Complainant One, a post-doctoral student in the same department, from 2000-2002 by making sexual advances toward her, asking her for dates, telling her he had a “crush” on her, giving her hugs, and communicating with her, including by email, in an intimate or romantic manner, when such behavior was not welcome.

Allegation Two: Respondent, a professor in the Statistics Department, created a hostile work environment for Complainant Two, an Assistant Professor in the Mathematics Department, in 2002, through Respondent’s persistent discussions and emails regarding his romantic interest in Complainant One and by pressuring Complainant Two to persuade Complainant One to interact with Respondent.

The investigation took 14 months to complete, and the result was a 47-page report along with 89 pages of supporting evidence based on interviews, hundreds of pages of emails that I disclosed at the outset of the investigation, and letters and emails provided by Respondent. The report concludes as follows:

CONCLUSION
For the reasons stated above, I conclude that the preponderance of the evidence substantiates that Respondent violated the 1992 Sexual Harassment Policy in that he engaged in unwelcome conduct of a sexual nature that created a hostile environment for Complainant One and Complainant Two, and conditioned an academic or personnel decision on Complainant One’s submission to his conduct. This report is being submitted to the Vice Provost for Faculty for review under the Faculty Code of Conduct.

I have waited since June of last year to hear from the Vice Provost for Faculty at UC Berkeley what action the university will take in light of the findings; however, despite multiple requests for information, the university has yet to respond as to whether it will enact any sanctions on Respondent.

My close-up encounter with sexual harassment was devastating. I never expected, when I arrived in Berkeley in 1999, that Terry Speed, a senior professor in my field whom I admired and thought of as a mentor, would end up as Respondent and myself as Complainant Two. However much more serious and significant than my ordeal were the devastating consequences his sexual harassment had on the life and well-being of Complainant One. The sexual harassment that took place was not an isolated event. Despite repeated verbal and written requests by Complainant One that Speed stop, his sexual harassment continued unabated for months. The case was not reported at the time the sexual harassment happened because of the structure of Title IX. Complainant One knew that Speed would be informed if a complaint was made, and Complainant One was terrified of reprisal. Her fear was not hypothetical; after months of asking Speed to stop sexually harassing her, he communicated to her that, unless she was willing to reconcile with him as he wished, she could not count on his recommendation.

Speed has been an advocate for women in academia in recent years. However no amount of advocacy on behalf of women can cancel out the physical and mental harm caused by prolonged sexual harassment. Speed’s self-proclamation that he is a “male feminist” rings hollow.

Update on June 6, 2018: Terry Speed is no longer listed as Professor Emeritus at UC Berkeley.

Update on June 22, 2018: This is the “notice of outcome” I received from UC Berkeley regarding the case:

[Image: the notice of outcome letter from UC Berkeley]

The GTEx consortium has just published a collection of papers in a special issue of Nature that together provide an unprecedented view of the human transcriptome across dozens of tissues. The work is based on a large-scale RNA-Seq experiment of postmortem tissue from hundreds of human donors, illustrated in Figure 1 of the overview by Ward and Gilad 2017:

[Figure: overview of the GTEx project (Figure 1 from Ward and Gilad 2017)]

The data provide a powerful new opportunity for several analyses, highlighted (at least for me) by the discovery of 673 trans-eQTLs at 10% genome-wide FDR. Undoubtedly more discoveries will be published when the sequencing data, available via dbGaP, are analyzed in future studies. As a result, the GTEx project is likely to garner many citations, both for specific results, but also drive-by citations that highlight the scope and innovation of the project. Hopefully, these citations will include the key GTEx paper:

Carithers, Latarsha J, Ardlie, Kristin, Barcus, Mary, Branton, Philip A, Britton, Angela, Buia, Stephen A, Compton, Carolyn C, DeLuca, David S, Peter-Demchok, Joanne, Gelfand, Ellen T, Guan, Ping, Korzeniewski, Greg E, Lockhart, Nicole C, Rabiner, Chana A, Rao, Abhi K, Robinson, Karna L, Roche, Nancy V, Sawyer, Sherilyn J, Segrè, Ayellet V, Shive, Charles E, Smith, Anna M, Sobin, Leslie H, Undale, Anita H, Valentino, Kimberly M, Vaught, Jim, Young, Taylor R, Moore, Helen M, on behalf of the GTEx consortium, A Novel Approach to High-Quality Postmortem Tissue Procurement: The GTEx Project, Biopreservation and Biobanking 13(5), 2015, p 311–319.

The paper by Latarsha Carithers et al. provides an overview of the consent and laboratory procedures that GTEx developed and applied to obtain tissues from hundreds of deceased donors. The monumental effort is, to my knowledge, unprecedented in scale and scope, and it relied on the kindness and generosity of hundreds of family members and next-of-kin of donors, who consented to donate their loved ones to science.

To develop effective and appropriate consent procedures, the GTEx project organized a sub-study to determine how best to approach family members, interact with them, and explain the project. Ultimately consent was obtained either in person or over the phone, and one can only imagine the courage of families agreeing to donate, especially during times of grief and for a project whose goals could only be explained in terms of the long-term benefits of basic science.

The consent procedures for GTEx were complicated by a need to rapidly place tissue in preservative postmortem. RNA degrades rapidly after the time of death, and there is a window of only a few hours before expression can no longer be effectively measured. The RNA Integrity Number (RIN) measures the extent of degradation of RNA. It used to be measured with gel electrophoresis by examining the ratio of 28S:18S rRNA; more recently RIN is computed using more sophisticated analyses with, e.g. the Agilent bioanalyzer (see Schroeder et al. 2006 for details). GTEx conducted extensive studies to determine the correspondence between postmortem interval (time taken to preserve tissue) and RIN, and also examined the RIN necessary for effective RNA-Seq library construction.

[Figure: the effect of ischemic time on RIN values (Figure 6 from Carithers et al. 2015)]

These studies were used to deploy standard operating procedures across multiple source sites (an obvious necessity given the number of donors needed). All of this research was not only crucial for GTEx, but will be extremely valuable for studies relying on postmortem RNA-Seq in the future.

The collection of specimens from each source site required training of individuals at that site, and one of GTEx’s achievements is the gathering of knowledge of how to orchestrate such a complex distributed sample collection and preparation enterprise. The workflow shown below (Figure 2 from Carithers et al. 2015) hints at the complexities involved (e.g. the need for separate treatment of brain due to the requirement of proper sectioning).

[Figure: GTEx sample collection workflow (Figure 2 from Carithers et al. 2015)]

A meeting discussing the findings of Carithers et al. was held on May 20-21 2015 and I encourage all users of GTEx data/results to view the recording of it (Day 1, Day 2).

It is truly moving and humbling to consider the generosity of the hundreds of families, in many cases of donors in their twenties or thirties, who enabled GTEx. The scale of their contribution, and the suffering that preceded collection of the data, cannot be captured in cartoons that describe the experiment. The families can also never be fully acknowledged, no matter how many times they are thanked in acknowledgment sections. But at a minimum, I think that reading Carithers et al. 2015 is the least one can do to honor them, and those who turned their good-will into science.

Acknowledgment: the idea for this blog post originated during a conversation with Roderic Guigó.

In a previous post I wrote about How not to perform a differential expression analysis. In response to my post, Rob Patro, Geet Duggal, Michael I Love, Rafael Irizarry and Carl Kingsford wrote a detailed response. Below is my point-by-point rebuttal to their response (the figures and results in this blog post can be generated using the scripts in the Bits of DNA GitHub repository):

1. In Figure 1 of their response, Patro et al. show an MA plot and state that “if it were true that these methods are ‘very very’ similar one would see most log-ratios close to 0 (within the red lines).” This is true. Below is the MA plot for kallisto with default parameters and Salmon with the --gcBias flag:

[Figure: MA plot comparing kallisto (default) and Salmon (--gcBias) on ERR188140]

96.6% of the points lie within the red lines. Since this constitutes most of the points, it seems reasonable to conclude that the methods are indeed very very similar. When both programs are run in default mode, as I did in my blog post, 98.9% of the points lie within the red lines. Thus, using the criterion of Patro et al., the programs have very very similar, or near identical, output. These numbers are conservative, computed by omitting transcripts where both kallisto and Salmon determine that a transcript has zero abundance.
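The within-the-lines fraction can be computed directly from the two programs' counts; below is a sketch with stand-in numbers, assuming the red lines are drawn at log-ratios of ±1:

```python
# Fraction of transcripts whose log2 ratio of counts between the two
# programs lies within +/- 1 (the assumed position of the red lines),
# after omitting transcripts where both programs report zero.
# The counts here are stand-ins for the real quantifications.
import numpy as np

kallisto = np.array([0.0, 10.0, 250.0, 33.0, 7.0])
salmon = np.array([0.0, 11.0, 240.0, 30.0, 6.5])

keep = ~((kallisto == 0) & (salmon == 0))  # drop joint zeros
M = np.log2(salmon[keep] + 1) - np.log2(kallisto[keep] + 1)  # pseudocount of 1
print((np.abs(M) < 1).mean())  # fraction within the red lines
```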

2. Furthermore, Patro et al. explain that their MA plot in Figure 1 “demonstrate[s] how deceiving count scatter plots can be in this particular context.” There is, superficially, some merit to this claim. The MA plot above looks like a smudge of points and seems at odds with the fact that 96.6% of the points lie within the red lines. However the plot displays 198,457 points corresponding to 198,457 quantified transcripts, and as a result many points obscure each other. The alpha parameter in ggplot2 sets the opacity/transparency of points, and should be used in such a case to reveal the density of points (see, e.g., Supplementary Figure 19 of Love et al. 2016). Below is a plot of the exact same points with alpha=0.01:

[Figure: the same MA plot with alpha=0.01]

An R animation that interpolates between the two MA plots above shows the same points, with varying opacity parameters (alpha=1 -> 0.01) and helps to demonstrate how deceiving MA plots can be in this particular context:

[Animation: the MA plot with alpha interpolating from 1 to 0.01]

3. The Patro et al. response fails to distinguish between two different comparisons I made in my blog post: (1) comparisons of default kallisto to default Salmon, and (2) default kallisto to Salmon with the --gcBias option. Comparison of the programs with default options is important because with those options their output is near identical, and, as I explain in my blog post, this is not some cosmic coincidence but a result of Salmon directly implementing the key ideas of pseudoalignment. The Patro et al. 2017 paper is also not just about GC bias correction, as the authors claim in their response, but rather is also “the Salmon paper”, a descriptor that Patro et al. use 24 times in their response. Furthermore, when Patro et al. are asked how to run Salmon they recommend running it with default options (see e.g. the epilogue below, or the way Patro et al. ran Salmon for the analysis of the Beaulieu-Jones and Greene data described in #5), so that a comparison of the programs in default mode is of direct relevance to users.

In regards to the GC bias correction, Patro et al. 2017 claim in their abstract that “[GC bias correction] substantially improves the accuracy of abundance estimates and the sensitivity of subsequent differential expression analysis”. This is a general statement, not one about the sort of niche use-cases they describe in their response. The question then is whether Patro et al. 2017 provides support for this general statement, and my argument has always been that it does not.

4. Patro et al. criticize my use of the ERR188140 sample to demonstrate how similar Salmon is to kallisto. They write that “the blog post author selected a single sample…” (boldface theirs) to claim that Salmon and kallisto produce output with “very very strong similarity (≃)”, and raise the possibility that it was cherry picked, noting that “this particular sample has less GC-content bias” and marking it in a plot. I used ERR188140 because it was our sample of choice for many of the demonstrative analyses in the Bray et al. 2016 paper (see the kallisto paper analysis GitHub repository, where the sample has been mentioned since February 2016), and for that paper we had already generated the RSEM quantifications (and the alignments required for running the program), thus saving time in making the PCA analysis for my blog post. ERR188140 was chosen for Bray et al. 2016 because it was the most deeply sequenced sample in the GEUVADIS dataset.

5. Contrary to the claim by Patro et al. in their response that I examined only one dataset, I also included in my post links with references to specific figures from four other papers that independently found that kallisto is near identical to Salmon. The fairest example for consideration is the additional analysis I mentioned of Beaulieu-Jones and Greene, and separately Patro, of the RNA-Seq dataset from Boj et al. 2015. With that analysis, there can be no claims of cherry-picking. The dataset was chosen by the authors of Beaulieu-Jones and Greene 2017, kallisto quantifications were produced by Beaulieu-Jones and Greene, and Salmon quantifications were prepared by Patro. Presumably the main author of the Salmon program ran Salmon with the best settings possible for the experiment. That the programs were run by different individuals is underscored by the fact that the quantifications are not even based on identical annotations. They used different versions of RefSeq: Beaulieu-Jones and Greene quantified with 35,026 transcripts and Patro, who quantified later, used an annotation with 35,882 transcripts. There are eight samples in the analysis and the MA plots, made by restricting the analysis to the transcripts in common, all look alike. As an example, the MA plot for SRR1654626 is:

animboj

The fraction of points within the red lines, calculated as before by omitting points at (0,0), is 98.6%. The Patro analysis of the Beaulieu-Jones and Greene data was performed on March 8, 2017 with version 0.8.1 of Salmon, well after the --gcBias option was implemented, the Salmon version 3 preprint (describing the GC correction) was published, and the paper was submitted. The dates are verifiable in the GitHub repository with the Salmon results.

6. In arguing that kallisto and Salmon are different, Patro et al. provide an interesting formula for the correlation of two random variables X and X+Y, where X and Y are independent, but its use in this context is a sleight of hand. The formula, which is a simple exercise to derive from the definition of correlation, is

cor(X,X+Y)=\sqrt{\frac{1}{1+Var(Y)/Var(X)}}.
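For completeness, the derivation from the definition of correlation, using the independence of X and Y (so that Cov(X,X+Y)=Var(X)), is

cor(X,X+Y) = \frac{Cov(X,X+Y)}{\sqrt{Var(X)Var(X+Y)}} = \frac{Var(X)}{\sqrt{Var(X)(Var(X)+Var(Y))}} = \sqrt{\frac{1}{1+Var(Y)/Var(X)}}.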

It follows by Taylor series expansion that this is approximately

cor(X,X+Y) \approx 1-\frac{1}{2}\frac{Var(Y)}{Var(X)}.

If sd(X) is about 3.4 and sd(Y) is about 0.5 (Patro et al.'s numbers), then by inspection cor(X,X+Y) will be 0.99. In sample SRR1654626 shown above, when ignoring transcripts where both programs output 0, sd(X)=3.5 and sd(Y)=0.43, which are fairly close to Patro et al.'s numbers. However Patro et al. proceed with a non sequitur, writing that “this means that a substantial difference of 25% between reported counts is typical”. While the correlation formula makes no distributional assumptions, the 25% difference seems to be based on an assumption that Y is normally distributed. Specifically, if Y is normally distributed with mean 0 and standard deviation 0.5 then |Y| is half-normally distributed and a typical percent difference based on the median is

(2^{0.5 \cdot \sqrt{2}\cdot \mbox{erf}^{-1}(0.5)}-1)\cdot 100 = 26.3\% \approx 25\%.
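This calculation is easy to check numerically. The R sketch below simulates Patro et al.'s assumed setting and recovers both the 0.99 correlation and the ~26% “typical” difference (the normality of Y is their assumption, not an observed fact):

set.seed(1)
x <- rnorm(1e6, sd = 3.4)        # Patro et al.'s sd(X)
y <- rnorm(1e6, sd = 0.5)        # Patro et al.'s sd(Y), assumed normal
cor(x, x + y)                    # approximately 0.99
median(abs(y))                   # 0.5 * qnorm(0.75), about 0.337
(2^median(abs(y)) - 1) * 100     # about 26.3%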

However the differences between the kallisto and Salmon quantifications are far from normally distributed. The plot below shows the distribution of the differences between the log2 counts of kallisto and Salmon (again excluding cases where both programs output 0):

diffhist

The blue vertical line is positioned at the median, which is 0.001433093. This means that the typical difference between reported counts is not 25% but rather 0.1%.
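The conversion from the median of the log2 differences to a typical percent difference is the same calculation as above, applied to the empirical median instead of the half-normal one:

(2^0.001433093 - 1) * 100    # about 0.0993%, i.e. a typical difference of ~0.1%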

7. In their response, Patro et al. highlight the recent Zhang et al. 2017 paper that benchmarked a number of RNA-Seq programs, including kallisto and Salmon. Patro et al. comment on a high correlation between a mode of Salmon that quantifies based on transcriptome alignments and RSEM. First, the correlations reported by Zhang et al. are Pearson correlations, not the Spearman correlations that I focused on in my blog post. Second, the alignment mode of Salmon has nothing to do with pseudoalignment: read alignments (in the case of Zhang et al. 2017 produced with STAR) are quantified directly, in the same workflow as that of RSEM. Investigation of the similarities between alignment-mode Salmon and RSEM that led to the high correlation is beyond the scope of this post. Finally, in discussing the similarities between programs the authors (Zhang et al.) write “Salmon, Sailfish and Kallisto, cluster tightly together with R^2 > 0.96.”

8. In regards to the EM algorithm, Patro et al. acknowledge that Salmon uses kallisto's termination criteria and have updated their code to reflect this fact. I thank them for doing so; however, this portion of their response is bizarre:

“What if Salmon executed more iterations of its offline phase and outperformed kallisto? Then its improvement could be attributed to the extra iterations instead of the different model, bias correction, or online phase. By using the same termination criteria for the offline phase of Salmon, we eliminate a confounding variable in the analysis.”

If Salmon could perform better by executing more iterations of the EM algorithm it should certainly do so. This is because parameters hard-wired in the code should be set in a way that provides users with the best possible performance.

9. At one point in their response Patro et al. write that “It is expected that Salmon, without the GC bias correction feature, will be similar to kallisto”, essentially conceding that default Salmon \simeq default kallisto, a main point of my blog post. However Patro et al. continue to insist that Salmon with GC bias correction significantly improves on kallisto. Patro et al. have repeated a key experiment (the GEUVADIS-based simulation) in their paper, replacing the t-test with a workflow they describe as “the pipeline suggested by the post's author”. To be clear, this is the workflow preferred by Patro et al.

As explained in my post on How not to perform a differential expression analysis, the reason that Love et al. recommend a DESeq2 workflow instead of a t-test for differential expression is the importance of regularizing variance estimates. This is made clear by repeating Patro et al.'s GEUVADIS experiment with a typical three replicates per condition instead of eight:

DESEq2ttest3x3new

With a t-test on transcripts, Salmon cannot even achieve an FDR of less than 0.05.
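For reference, a minimal sketch of the Love et al. recommended workflow (tximport followed by DESeq2) is shown below. The file paths, the sample table and the tx2gene mapping are placeholders; this is a generic sketch of the pipeline, not the exact script behind the figures in this post:

library(tximport)
library(DESeq2)
# files: named vector of paths to kallisto abundance.h5 (or salmon quant.sf) files -- placeholders
# tx2gene: data frame mapping transcript IDs to gene IDs
# samples: data frame with one row per sample and a 'condition' column
txi <- tximport(files, type = "kallisto", tx2gene = tx2gene)  # type = "salmon" for Salmon output
dds <- DESeqDataSetFromTximport(txi, colData = samples, design = ~ condition)
dds <- DESeq(dds)    # regularized dispersion estimates, unlike a per-transcript t-test
res <- results(dds)  # log fold changes and adjusted p-values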

10. Patro et al. find that switching to their recommended workflow (i.e. replacing the t-test with their own DESeq2) alters the difference between kallisto and Salmon at an FDR of 0.01 from 353% to 32%. Patro et al. describe this difference, in boldface, as “The results remain similar to the original published results when run using the accuser's suggested pipeline.” Note that Patro et al. refer to a typical difference of 0.1% between counts generated by kallisto and Salmon as “not very very similar” (point #6) while insisting that 353% and 32% are similar.

11. The reanalysis of the GEUVADIS differential expression experiment by Patro et al. also fails to address one of the most important critiques in my blog post, namely that a typical experimental design will not deliberately confound bias with condition. The plot below shows the difference between kallisto and Salmon in a typical experiment (3 replicates in each condition) followed by Love et al.'s recommended workflow (tximport -> DESeq2):

DESeq2salmonkallisto3x3

There is no apparent difference between kallisto and Salmon. Note that the samples in this experiment have the same GC bias as in Patro et al. 2017, the only difference being that samples are chosen randomly in a way that they are not confounded by batch. The same lack of any observed difference between default kallisto and Salmon with the --gcBias option holds in an 8×8 analysis:

8x8nonconfoundedkalsal

There is no apparent difference between kallisto and Salmon, even though the simulation includes the same GC bias levels as in Patro et al. 2017 (just not confounded with condition).

12. It is interesting to compare the 8×8 unconfounded experiment with the 8×8 confounded experiment.

8x8confoundedDESeq2kalsal

While Salmon does improve on kallisto (although as discussed in point #10 the improvement is not 353% but rather 32% at an FDR of 0.01), the improvement in accuracy when performing an unconfounded experiment highlights why confounded experiments should not be performed in the first place.

13. Patro et al. claim that despite best intentions, “confounding of technical artifacts such as GC dependence with the biological comparison of interest does occur” and cite Gilad and Mizrahi-Man 2015. However the message of the Gilad and Mizrahi-Man paper is not that we must do our best to analyze confounded experiments. Rather, it is that with confounded experiments one may learn nothing at all. What they say is “In summary, we believe that our reanalysis indicates that the conclusions of the Mouse ENCODE Consortium papers pertaining to the clustering of the comparative gene expression data are unwarranted.” In other words, confounding of batch effect with variables of interest can render experiments worthless.

14. In response to my claim that GC bias has been reduced during the past 5 years, Patro et al. state:

A more informed assumption is that GC bias in sequencing data originates with PCR amplification and depends on thermocycler ramp speed (see, for example, Aird (2011) or t’ Hoen (2013)), and not from sequencing machines or reverse transcription protocols which may have improved in the past 5 years.

This statement is curious in that it seems to assume that, unlike sequencing machines or reverse transcription protocols, PCR amplification and thermocycler technology could not have improved in the past 5 years. As an example to the contrary, consider that just months before the publication of the GEUVADIS data, New England Biolabs released a new polymerase that was claimed to address this very issue. GC bias is a ubiquitous issue in molecular biology and of course there are ongoing efforts to address it in the wet lab. Furthermore, continued research and benchmarking aimed at reducing GC bias (see e.g. Thorner et al. 2014) have led to marked improvements in library quality and standardization of experiments across labs. Anyone who performs bulk RNA-Seq, as we do in my lab, knows that RNA-Seq is no longer an ad hoc experiment.

15. Patro et al. write that

The point of the simulation was to demonstrate that, while modeling fragment sequence bias reduces gross mis-estimation (false reports of isoform switching across labs in real data — see for example Salmon Supplementary Figure 5 showing GEUVADIS data), the bias modeling does not lead to overall loss of signal. Consider that one could reduce false positives simply by attenuating signal or adding noise to all transcript abundances.

However none of the simulations or results in Patro et al. 2017 address the question of whether bias modeling leads to overall loss of signal. To answer it would require examining the true and false positives in a comparison of default Salmon and Salmon with --gcBias. Not only did Patro et al. not do the relevant intra-program comparisons, they did inter-program comparisons instead, which bear no relevance to the point they now claim they were making.

16. I want to make very clear that I believe that GC bias correction during RNA-Seq quantification is valuable, and I agree with Patro et al. that it can be important for meta-analyses, especially of the kind that take place by large genome consortia. One of the interesting results in Patro et al. is the SEQC analysis (Supplementary Figure 4), which shows that Salmon is more consistent in intra-center quantification in one sample (HBRR). However in a second experiment (UHRR) the programs are near identical in their quantification differences within and between centers, and based on the results shown above I don't believe that Patro et al. 2017 achieves its stated aim of showing that GC correction has an effect on typical differential expression experiments that utilize typical downstream analyses.

17. I showed the results of running kallisto in default mode and Salmon with GC bias correction on a well-studied dataset from Trapnell et al. 2013. Patro et al. claim they were unable to reproduce my results, but that is because they performed a transcript-level analysis despite the fact that I made it very clear in my post that I performed a gene-level analysis. I chose to show results at the gene level to draw a contrast with Figure 3c of Trapnell et al. 2013. The results of Patro et al. at the transcript level show that even then the extent of overlap is remarkable. These results are consistent with the simulation results (see point #11).

18. The Salmon authors double down on their runtime analysis by claiming that “The running time discussion presented in the Salmon paper is accurate.” This is difficult to reconcile with two facts:

(a) According to Patro et al.'s rebuttal, “kallisto is faster when using a small number of threads”, yet this was not presented in Patro et al. 2017.

(b) According to Patro et al. (see, e.g., the Salmon program GitHub), when running kallisto or Salmon with 30 threads what is being benchmarked is disk I/O and not the runtime of the programs.

If Patro et al. agree that to benchmark the speed of a program one must use a small number of threads, and Patro et al. agree that with a small number of threads kallisto is faster, then the only possible conclusion is that the running time discussion presented in the Salmon paper is not accurate.

19. The Patro et al. response has an entire section (3.1) devoted to explaining why quasimapping (used by Salmon) is distinct from pseudoalignment (introduced in the kallisto paper). Patro et al. describe quasimapping as “a different algorithm, different data structure, and computes different results.” Furthermore, in a blog post, Patro explained that RapMap (on which Salmon is based) implements both quasimapping and pseudoalignment, and that these are distinct concepts. He writes specifically that in contrast to the first algorithm provided by RapMap (pseudoalignment), “the second algorithm provided by RapMap — quasi-mapping — is a novel one”.

One of the reviewers of the Salmon paper recently published his review, which begins with the sentence “The authors present salmon, a new RNAseq quantification tool that uses pseudoalignment…” This directly contradicts the assertion of Patro et al. that quasimapping is “a different algorithm, different data structure”, or that quasimapping is novel. In my blog post I provided a detailed walk-through that affirms that the reviewer is right: the quasimapping underlying Salmon acts in identical ways on the k-mers in reads. Moreover, the results above show that Salmon, using quasimapping, does not “compute different results”. Unsurprisingly, its output is near identical to kallisto.
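To make the comparison concrete, here is a toy R sketch of the k-mer set-intersection idea at the heart of pseudoalignment. It is for illustration only: real implementations use a transcriptome de Bruijn graph (kallisto) or a suffix array (RapMap/Salmon) rather than a plain hash table, but the quantity computed for a read, its set of compatible transcripts, is the same in the cases discussed here.

# enumerate the k-mers of a sequence
kmers <- function(s, k) substring(s, 1:(nchar(s) - k + 1), k:nchar(s))
# toy index: map every k-mer of every transcript to the transcripts containing it
build_index <- function(transcripts, k) {
  idx <- list()
  for (name in names(transcripts))
    for (km in unique(kmers(transcripts[[name]], k)))
      idx[[km]] <- union(idx[[km]], name)
  idx
}
# pseudoalign a read: intersect the transcript sets of its k-mers
pseudoalign <- function(read, idx, k) Reduce(intersect, idx[kmers(read, k)])

txs <- list(t1 = "ACGTACGTGA", t2 = "ACGTACGTCC")
idx <- build_index(txs, k = 5)
pseudoalign("ACGTACGT", idx, k = 5)   # returns c("t1", "t2"): compatible with both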

20. Patro et al. write that “The title of the Sailfish paper contains the words ‘alignment-free’, which indicates that it was Sailfish that first presented the key idea of abandoning alignment.” The term alignment-free has a long history in genomics and is used to describe methods in which the information inherent in a complete read is discarded in favor of the direct use of its substrings. Sailfish is indeed an alignment-free method because it shreds reads into constituent k-mers, and those are then operated on without regard to which read they originated from. The paper is aptly titled. The concept of pseudoalignment is distinct in that complete reads are associated with targets, even if base-pair alignments are not described.

21. Patro et al. write that “Salmon, including many of its main ideas, was widely known in the field prior to the kallisto preprint.” and mention that Zhang et al. 2015 included a brief description of Salmon. Zhang et al. 2015 was published on June 5, 2015, a month after the kallisto preprint appeared, and its description of Salmon, though brief, was the first available for the program. Nowhere else, prior to the Zhang et al. publication, was there any description of what Salmon does or how it works, even at a high level.

Notably, the paragraph on Salmon of Zhang et al. shows that Salmon, in its initial form, had nothing to do with pseudoalignment:

“Salmon is based on a novel lightweight alignment model that uses chains of maximal exact matches between sequencing fragments and reference transcripts to determine the potential origin of RNA‐seq reads.”

This is consistent with the PCA plot of my blog post which shows that initial versions of Salmon were very different from kallisto, and that Salmon \simeq kallisto only after Salmon switched to the use of pseudoalignment.

22. My blog post elicited an intense discussion, in the comments and on social media, of whether Patro et al. adequately attributed key ideas of Salmon to kallisto. They did not.

Patro et al. reference numerous citations to kallisto in Patro et al. 2017, which I've reproduced below:

citations

Only two of these references attribute any aspect of Salmon to kallisto. One of them, the Salmon bootstrap, is described as “inspired by kallisto” (in fact it is identical to that of kallisto). There is only one citation in Salmon to the key idea that has made it near identical to kallisto, namely the use of pseudoalignment, and that is to the RapMap paper from the Patro group (Srivastava et al. 2016).

Despite boasting of a commitment to open source principles and embracing preprints, Patro et al. conveniently ignore the RapMap preprints (Srivastava et al. 2015). Despite many mentions of kallisto, none of the four versions of the preprint acknowledge the direct use of the ideas in Bray et al. 2016 in any way, shape or form. The intent of Srivastava et al. is very clear. In the journal version the authors still do not acknowledge that “quasi mapping” is just pseudoalignment implemented with a suffix array, instead using words such as “inspired” and “motivated” to obfuscate the truth. Wording matters.

Epilogue

Discussion of the Zhang et al. 2017 paper by Patro et al., along with a tweet by Lappalainen about programs not giving identical results, led me to look more deeply into the Zhang et al. 2017 paper.

The exploration turned out to be interesting. On the one hand, some figures in Zhang et al. 2017 contradict Lappalainen's claim that “none of the methods seem to give identical results…”. For example, Figure S4 from the paper shows quantifications for four genes where kallisto and Salmon produce near identical results.

Zhang_S1

On the other hand, Figure 7 from the paper is an example from a simulation on a single gene where kallisto performed very differently from Salmon:

Zhang_F7

I contacted the authors to find out how they ran kallisto and Salmon. It turns out that for all the results in the paper with the exception of Figure 7, the programs were run as follows:

kallisto quant -i $KAL_INDEX --fr-stranded --plaintext $DATADIR/${f}_1.fq $DATADIR/${f}_2.fq -t 8 -o ./kallisto/

salmon quant -i $SALMON_INDEX -l ISF -1 $DATADIR/${f}_1.fq -2 $DATADIR/${f}_2.fq -p 8 -o salmon_em --incompatPrior 0

We then exchanged some further emails, after which they sent the data (reads) for the figure. We ran kallisto on our end and found results discordant with what was reported in the paper, they re-ran kallisto on their end, and after these exchanges we converged to an updated (and corrected) figure which shows Sailfish \simeq kallisto but not Salmon \simeq kallisto. The updated figure, shown below, was made by Zhang et al. using the default mode of kallisto version 0.43.1:

total_facet_3

Note that kallisto is near identical in performance to Sailfish which, as I explained in my blog post about Salmon, has also converged to kallisto. However Salmon is different.

It turned out that for this one figure, Salmon was run with a non-standard set of options, specifically with the additional option --numPreAuxModelSamples 0 (although notably Patro did not recommend using the --gcBias option). The recommendation to run with this option was made by Patro to Zhang et al. after they contacted him early in January 2017 to ask for the best way to run Salmon for the experiment. What the flag does is turn off the online phase of Salmon (hence the 0 in --numPreAuxModelSamples 0) that is used to initially estimate the fragment length distribution. There is a good rationale for using the flag, namely that the very small number of reads in the simulation makes it impossible to accurately learn auxiliary parameters as one might with a full dataset.

However, on January 13th Patro changed the behavior of the option in a way that allowed Salmon to optimize for the specific experiment at hand. The default fragment length distribution in Salmon had been set to the same as that in Cufflinks (mean 200, standard deviation 80). These settings match typical experimental data, and were chosen by Cole Trapnell and myself after examining numerous biological datasets. Setting --numPreAuxModelSamples to 0 forced Salmon to use those parameters, but on January 13th Patro changed the defaults in Salmon to mean = 250 and standard deviation = 25. The numbers 250 and 25 are precisely the defaults for the polyester program that simulates reads, and polyester (with default parameters) is what Zhang et al. 2017 used to simulate reads for Figure 7.
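For context, the 250/25 defaults referred to can be made explicit in a polyester call. The sketch below assumes the Bioconductor polyester package's simulate_experiment interface, with placeholder inputs; the fraglen and fragsd arguments are written out even though these values are the documented defaults:

library(polyester)
# fraglen = 250 and fragsd = 25 are polyester's documented defaults, written out
# here to make the match with Salmon's post-January-13th 250/25 defaults explicit
simulate_experiment("transcripts.fa",                 # placeholder transcript FASTA
                    reads_per_transcript = 300,
                    num_reps = c(1, 1),
                    fold_changes = matrix(1, nrow = 1, ncol = 2),
                    fraglen = 250, fragsd = 25,
                    outdir = "simulated_reads")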

Zhang et al. also contacted me on January 9th and I did not reply to their email. I had just moved institutions (from UC Berkeley to Caltech) on January 1st, and did not have the time to investigate in detail the issues they raised. I thank them for being forthcoming and helpful in reviewing Figure 7 post-publication.

Returning to Lappalainen's comment, it is true that Salmon results are different from kallisto in Figure 7, and one reason may be that Patro hard-wired parameters for a flag that was used to match the parameters of the simulation. With the exception of that figure, throughout the paper Salmon \simeq kallisto, providing yet another example of an independent publication confirming the claims of my blog post.

Two years ago I wrote a blog post on being wrong. It’s not fun to admit being wrong, but sometimes it’s necessary. I have to admit to being wrong again.

In May 2015 my coauthors and I released software called kallisto for RNA-Seq and we published a preprint concurrently. Several months before that, in February 2015, when initial results with kallisto showed that its accuracy was competitive with state-of-the-art programs but that it could quantify a hundred or more times faster, I went to seek advice from a licensing officer at UC Berkeley about licensing options. Even though most software in bioinformatics is freely available to both academia and industry, I felt, for reasons I outlined in a previous blog post, that it was right that commercial users should pay a fee to use kallisto. I believed then, and still do now, that it's right that institutions that support software development should benefit from its commercial use (UC Berkeley receives 2/3 of the royalties for commercially licensed products), that students are entitled to remuneration for software engineering work that does not directly support their own research goals, and that funds are needed to support specialized personnel who can maintain/improve code and service user requests.

After some discussion with UC Berkeley staff, who were helpful every step of the way, I finally licensed kallisto using standard language from a UC Berkeley license, which provided free access to academics and non-profit institutions while requiring companies to contact UC Berkeley for a commercial license. I wouldn’t call the decision an experiment. I truly believed it was the right thing to do and convinced my coauthors on the project to go along with the decision. Unfortunately,

I was wrong.

Shortly after kallisto was released Titus Brown wrote a blog post titled “On licensing in bioinformatics software: use the BSD, Luke“. One critique he made against the type of licensing arrangement I secured can be paraphrased as “such licensing necessitates conversations with lawyers”. I scoffed at this comment at the time, thinking to myself ok, so what if lawyers need to be involved to do the right thing? I also scoffed at a tag he associated with his post: “lior-is-wrong”.

Prior to licensing kallisto, with the exception of one program, my software was released freely to academia and industry. The exception was the AVID alignment program, a project that launched shortly after I arrived at UC Berkeley as a postdoc, and whose licensing terms were decided in large part by collaborators at LBNL. They had first licensed visualization software (for AVID alignments) called VISTA the same way, and I think they did that because they came from the protein folding community where such licensing is common. In any case, there hadn't been much lawyering going on (as far as I was aware) with the AVID/VISTA projects, and I also didn't think too much about the broader issues surrounding licensing choices. Free open source licensing also seemed good, and throughout the years I always let my students decide what license they wanted for the software they'd written. With kallisto standing to have a huge impact on companies, enabling them to directly profit (in $) from its speed, I decided it was timely to think about the pros and cons of different licenses again (it was later shown that kallisto does, indeed, greatly reduce costs for RNA-Seq analysis). This is the process that led up to the kallisto licensing, and the associated blog post I wrote.

However Titus Brown was right. Despite what I perceived to be best intentions from the UC Berkeley licensing staff and their counterparts at companies, what should have been simple licensing agreements, signed and completed in a day, sometimes bogged down in lawyer-infused negotiations. There were questions about indemnity clauses (prior to licensing kallisto I didn't know what those were), payment terms, etc. etc. etc. Some licenses were signed, but in some cases companies where I knew researchers wanted to use kallisto withdrew their requests due to disagreements over terms.

It did not take long for me to realize the mess that licensing involved. At the same time, I began to be buffeted by comments from colleagues and friends who argued zealously against the kallisto license in public. One colleague, I learned, refuses to read papers of software that is not licensed completely freely. I found out that Galaxy would not support kallisto, a decision I understood but that saddened me. So last year I went back to the licensing office and asked them to scrap the whole thing, and allow me to change the kallisto license to BSD. I wanted the licensing change to be immediate, and I was even happy to pay back (out of my own pocket) fees that may have been paid, but the change took many months to implement. I apologize for the delay. Had I known that even changing the license would be difficult, I would have done things differently from the outset. I did not make news of the pending license change public, because at first I was not sure that UC Berkeley would approve it at all. In any case, I’m now happy to say that kallisto is licensed with the permissive BSD 2-clause license.
