I recently came across a wonderful website of Karl Broman via his homepage. He hosts a list of the top ten worst graphs, in which he describes examples of badly constructed/displayed graphs from statistical and computational biology papers. He calls out mistakes of others, not in order to humiliate the authors, but rather to educate via example. It’s a list every computational biologist should peruse and learn from. The page also has an excellent list of recommended references, including Tufte’s “The visual display of quantitative information”, which inspired the Minard plots for transcript abundance shown in Figure 2b and Appendix B of the supplement of my Cufflinks paper.
Interestingly, in addition to the list of graphs, Karl includes a list of tables but with only a single entry (the problem with the table is discussed here). Why should a list of worst graphs have 10 items but a list of tables only one? This is a blatant example of protectionism of tables at the expense of the oft abused graphs. With this post I’d like to remedy this unfair situation and begin the process of assembling a list of top ten worst tables: the tabular hall of shame.
I begin by offering Supplementary Table S6 from the paper “Evidence for Abundant and Purifying Selection in Humans for Recently Acquired Regulatory Functions” by Luke Ward and Manolis Kellis, Science 337 (2012) 1675–1678:
The table is in support of a main result of the paper, namely that in addition to the 5% of the human genome conserved across mammalian genomes, at least another 4% is subject to lineage-specific constraint. This result is based on adding up the estimates of constrained nucleotides from Table S6 (using column 10, the derived allele frequency measure). These estimates were calculated using a statistic that is computed as follows: for each one of ten bins determined according to estimated background selection strength, and for every ENCODE feature F, the average derived allele frequency value $D_F$ was rescaled to

$PUC_F = \frac{D_{NCNE} - D_F}{D_{NCNE} - D_{CNDC}}$,

where $D_{CNDC}$ and $D_{NCNE}$ are the bin-specific average derived allele frequencies of conserved non-degenerate coding regions and non-conserved non-ENCODE regions respectively.
One problem with the statistic is that the non-conserved regions contain nucleotides not conserved in all mammals, which is not the same as nucleotides not conserved in any mammal. The latter would be needed in order to identify human-specific constraint. Second, the statistic $PUC_F$ is used as a proxy for the proportion under constraint even though, as defined, it could be less than zero or greater than one (it exceeds one whenever $D_F < D_{CNDC}$, and is negative whenever $D_F > D_{NCNE}$). Indeed, in Table S6 there are four values among the confidence intervals for the estimated proportions using derived allele frequency that include values less than 0% or above 100%, for example:
Ward and Kellis are therefore proposing that some features might have a negative number of nucleotides under constraint. Moreover, while it is possible that $PUC_F$ might correlate with the true proportion of nucleotides under constraint, no argument for this is provided in the paper. Thus, while Ward and Kellis claim to have estimated the proportion of nucleotides under constraint, they have only computed a statistic named “proportion under constraint”.
The bottom line is that a table containing percentages should not have negative entries, and if it does, reader beware!
On his worst graphs website Karl provides one example of what he calls “a really bad table” and here I have offered a second (amazingly, Table S5 from the Ward-Kellis paper is also a candidate for the list; Nicolas Bray and I review its sophistry in an arXiv note, but I think every paper should be represented only once on the list). I ask you, the reader, to help me round out the list by submitting more examples in the comments or by email directly to me. Tables from my papers are fair game (notably example #10 of a bad graph on Karl’s list is from one of his own papers). Please help!
I recently completed a term as director of our Center for Computational Biology and one of the things I am proud of having accomplished is helping Brian McClendon, program administrator for the Center, launch a new Ph.D. program in computational biology at UC Berkeley.
The Ph.D. program includes a requirement for taking a new course, “Classics in Computational Biology” (CMPBIO 201), that introduces students to research projects and approaches in computational biology via a survey of classic papers. I taught the class last week and the classic paper I chose was “Comparison of biosequences” by Temple Smith and Michael Waterman, Advances in Applied Mathematics 2 (1981), p 482–489. I gave a preparatory lecture this past week in which I discussed the Needleman-Wunsch algorithm, followed by leading a class discussion about the Smith-Waterman paper and related topics. This post is an approximate transcription of my lecture on the Needleman-Wunsch algorithm:
The Needleman-Wunsch algorithm was published in “A general method applicable to the search for similarities in the amino acid sequence of two proteins“, Needleman and Wunsch, Journal of Molecular Biology 48 (1970), p 443–453. As with neighbor-joining and UniFrac, which I discussed in my previous blog post, the Needleman-Wunsch algorithm was published without an explanation of its meaning, or even a precise description of the dynamic programming procedure it is based on. In their paper 11 years later, Smith and Waterman write
“In 1970 Needleman and Wunsch [1] introduced their homology (similarity) algorithm. From a mathematical viewpoint, their work lacks rigor and clarity. But their algorithm has become widely used by the biological community for sequence comparisons.”
They go on to state that “The statement in Needleman-Wunsch [1] is less general, unclearly stated, and does not have a proof” (in comparison to their own main Theorem 1). The Theorem Smith and Waterman are referring to explicitly describes the dynamic programming recursion only implicit in Needleman-Wunsch, and then explains why it produces the “optimal” alignment. In hindsight, the Needleman-Wunsch algorithm can be described formally as follows:
Given two sequences $a = a_1a_2\cdots a_n$ and $b = b_1b_2\cdots b_{n'}$ of lengths $n$ and $n'$ respectively, an alignment is a sequence of pairs:

$(a_{i_1},b_{j_1}),\,(a_{i_2},b_{j_2}),\ldots \quad \textrm{with} \quad i_1 < i_2 < \cdots \ \textrm{and} \ j_1 < j_2 < \cdots .$

Given an alignment $A$, we let

$M = \#\{k : a_{i_k} = b_{j_k}\}, \quad X = \#\{k : a_{i_k} \neq b_{j_k}\}$

and

$S = (n - M - X) + (n' - M - X).$

In other words, M counts the number of matching characters in the alignment, X counts the number of mismatches and S the number of “spaces”, i.e. characters not aligned to another character in the other sequence. Since every character must be either matched, mismatched, or not aligned, we have that

$2M + 2X + S = n + n'.$
Given scores (parameters) consisting of real numbers $m$, $x$ and $s$, the Needleman-Wunsch algorithm finds an alignment that maximizes the function

$mM + xX + sS.$

Note that by the linear relationship between $M$, $X$ and $S$ described above, there are really only two free parameters, and without loss of generality we can assume they are s and x. Furthermore, up to an additive constant that does not depend on the alignment, only S and X are relevant for computing the score of the alignment; when the scores are provided statistical meaning, one can say that S and X are sufficient statistics. We call them the summary of the alignment.
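To spell out the reduction to two parameters, substitute the relation $2M + 2X + S = n + n'$ into the score:

$mM + xX + sS = \frac{m}{2}\left(2M + 2X + S\right) + (x - m)X + \left(s - \frac{m}{2}\right)S = \frac{m}{2}(n + n') + (x - m)X + \left(s - \frac{m}{2}\right)S.$

The first term is the same for every alignment, so maximizing the score amounts to maximizing a function of the summary $(S,X)$ alone, with effective parameters $x - m$ and $s - \frac{m}{2}$.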
The specific procedure of the Needleman-Wunsch algorithm is to compute a matrix S recursively:

$S_{i,j} = \max\left\{\, S_{i-1,j-1} + m\cdot\mathbf{1}_{a_i = b_j} + x\cdot\left(1 - \mathbf{1}_{a_i = b_j}\right),\ \ S_{i-1,j} + s,\ \ S_{i,j-1} + s \,\right\},$

where $\mathbf{1}_{a_i = b_j}$ is the indicator function that is 1 if the characters are the same, and zero otherwise, and an initialization step consists of setting $S_{0,0} = 0$, $S_{i,0} = s\cdot i$ and $S_{0,j} = s\cdot j$ for all i,j. There are numerous generalizations and improvements to the basic algorithm, both in extending the scoring function to include more parameters (although most generalizations retain the linearity assumption), and algorithms for trading off time for space improvements (e.g. divide-and-conquer can be used to improve the space requirements from $O(n\cdot n')$ to $O(n + n')$).
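As a small sanity check, the recursion can be run by hand on a toy example (the parameters here are a choice for illustration only): take $a = \texttt{AC}$, $b = \texttt{C}$, $m = 1$ and $x = s = -1$. The initialization gives $S_{0,0} = 0$, $S_{1,0} = -1$, $S_{2,0} = -2$ and $S_{0,1} = -1$; the recursion then gives $S_{1,1} = \max(0 - 1,\, -1 - 1,\, -1 - 1) = -1$ and $S_{2,1} = \max(-1 + 1,\, -1 - 1,\, -2 - 1) = 0$, so the optimal score is 0, attained by matching the two C’s and leaving the A unaligned ($M = 1$, $X = 0$, $S = 1$).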
An important advance in understanding the Needleman-Wunsch algorithm was the realization that the “scores” m, x and s can be interpreted as logarithms of probabilities if the signs of the parameters are all the same, or as log-odds ratios if the signs are mixed. This is discussed in detail in the book “Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids” by Richard Durbin, Sean Eddy, Anders Krogh and Graeme Mitchison.
One insight from the statistical point of view is that if max and plus in the recursion are replaced with plus and times, and the logarithms of probabilities are replaced by the probabilities themselves, then the Needleman-Wunsch recursion computes the total probability of the pair of sequences marginalized over alignments (i.e., it is the forward algorithm for a suitably defined hidden Markov model). Together with the backward algorithm, which consists of running the forward algorithm on the reversed sequences, this provides an efficient way to compute for every pair of nucleotides the posterior probability that they are aligned. In other words, there is a meaningful interpretation and useful application for the Needleman-Wunsch algorithm with the semi-ring $(\mathbb{R}\cup\{-\infty\},\max,+)$ replaced with $(\mathbb{R}_{\geq 0},+,\times)$.
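Written out under this substitution (a sketch with a generic parametrization, not the exact parametrization of any particular pair HMM), the recursion becomes

$F_{i,j} = p_{a_i b_j}\,F_{i-1,j-1} + q\left(F_{i-1,j} + F_{i,j-1}\right), \qquad F_{0,0} = 1,$

with boundary values $F_{i,0} = q^i$ and $F_{0,j} = q^j$, where $p_{a_i b_j} = e^{m}$ if $a_i = b_j$ and $e^{x}$ otherwise, and $q = e^{s}$. The entry $F_{n,n'}$ is then the total (unnormalized) probability of the two sequences summed over all alignments, which is exactly what the forward algorithm computes.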
In work in collaboration with Bernd Sturmfels, we realized that it also makes sense to run the Needleman-Wunsch algorithm with the polytope algebra $(\mathcal{P}_d, \oplus, \odot)$. Here $\mathcal{P}_d$ is the set of convex polytopes in $\mathbb{R}^d$ for some d, polytopes are “added” using the geometric operation of the convex hull of the union of polytopes, and they are “multiplied” by taking the Minkowski sum. The result allows one to find not only a single optimal alignment, but all optimal alignments. The details are discussed in a pair of papers:
L. Pachter and B. Sturmfels, Parametric inference for biological sequence analysis, Proceedings of the National Academy of Sciences, Volume 101, Number 46 (2004), p 16138–16143.
C. Dewey, P. Huggins, K. Woods, B. Sturmfels and L. Pachter, Parametric alignment of Drosophila genomes, PLoS Computational Biology, Volume 2, Number 6 (2006), p e73.
The first explains the mathematical foundations of the approach and is where we coined the term polytope propagation algorithm for the Needleman-Wunsch algorithm with the polytope algebra. The second paper illustrates an application to a whole-genome alignment of two Drosophilids and provides an alternative to polytope propagation (the beneath-beyond algorithm) that is much faster. The type of analysis performed in the paper is also called parametric alignment, a problem with a long history whose popularity was launched by Dan Gusfield, who provided the first software program for parametric alignment, called XPARAL.
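Written out, the two polytope operations are

$P \oplus Q = \mathrm{conv}(P \cup Q) \qquad \textrm{and} \qquad P \odot Q = P + Q = \{\,p + q : p \in P,\ q \in Q\,\},$

and each individual score is replaced by the point recording its summary, so that (roughly speaking) the quantity propagated through each cell of the recursion is the convex hull of the summaries of all partial alignments ending at that cell.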
The polytope propagation algorithm (Needleman-Wunsch with the polytope algebra) is illustrated in the figure below:
The convex polygon in the bottom right panel contains points corresponding to all possible summaries for the alignments of the two sequences. The optimal summaries are vertices of the polygon. For each vertex, the parameters for which the Needleman-Wunsch algorithm will produce a summary corresponding to that vertex form a cone. The cones together form a fan which is shown inside the polygon.
The running time of polytope propagation depends on the number of vertices in the final polytope. In our papers we provide bounds showing that polytope propagation is extremely efficient. In particular, for d parameters, the number of vertices is $O(n^{\frac{d(d-1)}{d+1}})$, where n is the length of the sequences. Cynthia Vinzant, formerly in our math department at UC Berkeley, has written a nice paper on lower bounds.
The fact that the Needleman-Wunsch algorithm can be run with three semi-rings means that it is particularly well suited to implementation in C++, thanks to templates that allow the variables in the recursion to be abstracted. This idea (and code) is thanks to my former student Colin Dewey:
#include <string>
#include <vector>

using std::string;
using std::vector;

template<typename SemiRing>
void
alignGlobalLastRow(const string& seq1,
                   const string& seq2,
                   const typename SemiRing::Element match,
                   const typename SemiRing::Element mismatch,
                   const typename SemiRing::Element gap,
                   vector<typename SemiRing::Element>& row)
{
    typedef typename SemiRing::Element Element;
    const Element one = SemiRing::multiplicativeIdentity;

    // Initialize row: row[j] holds the value for cell (i, j) of the current row i
    row.resize(seq2.size() + 1);
    row[0] = one;
    // Calculate first row: only gaps are possible against the empty prefix of seq1
    for (size_t j = 1; j <= seq2.size(); ++j)
        row[j] = gap * row[j - 1];
    // Calculate remaining rows, keeping only a single row in memory
    Element up, diag;
    for (size_t i = 1; i <= seq1.size(); ++i) {
        diag = row[0];
        row[0] = row[0] * gap;
        for (size_t j = 1; j <= seq2.size(); ++j) {
            up = row[j];
            // "+" is the semi-ring addition (e.g. max) and "*" the multiplication (e.g. +)
            if (seq1[i - 1] == seq2[j - 1]) {
                row[j] = match * diag + gap * (up + row[j - 1]);
            } else {
                row[j] = mismatch * diag + gap * (up + row[j - 1]);
            }
            diag = up;
        }
    }
}
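To make the template concrete, here is a minimal sketch of a semi-ring class that can be plugged into the function above and compiled together with it (the class and its member names are illustrative choices of my own; all the template assumes is a nested Element type with + and * and a multiplicativeIdentity member). With the max-plus semi-ring below, the call performs ordinary Needleman-Wunsch with scores m = 1 and x = s = -1 and prints the optimal global alignment score; replacing max and + in Element by ordinary + and ×, with identity 1, turns the very same call into the forward algorithm.

#include <algorithm>
#include <iostream>
#include <limits>

// A max-plus ("tropical") semi-ring: semi-ring addition is max, multiplication is +.
struct MaxPlus {
    struct Element {
        double value;
        Element(double v = -std::numeric_limits<double>::infinity()) : value(v) {}
        Element operator+(const Element& other) const {   // semi-ring addition
            return Element(std::max(value, other.value));
        }
        Element operator*(const Element& other) const {   // semi-ring multiplication
            return Element(value + other.value);
        }
    };
    static const Element multiplicativeIdentity;
};
const MaxPlus::Element MaxPlus::multiplicativeIdentity = MaxPlus::Element(0.0);

int main() {
    vector<MaxPlus::Element> row;
    // match m = 1, mismatch x = -1, space s = -1
    alignGlobalLastRow<MaxPlus>("ACGT", "AGT",
                                MaxPlus::Element(1.0),
                                MaxPlus::Element(-1.0),
                                MaxPlus::Element(-1.0),
                                row);
    std::cout << row.back().value << std::endl;   // prints 2 for these toy sequences
    return 0;
}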
To recap, the three semi-rings with which it makes sense to run this algorithm are:

(1) the max-plus (Viterbi) semi-ring $(\mathbb{R}\cup\{-\infty\},\max,+)$,

(2) the sum-product semi-ring $(\mathbb{R}_{\geq 0},+,\times)$,

(3) the polytope algebra $(\mathcal{P}_d,\oplus,\odot)$.

The interpretation of the output of the algorithm with the three semi-rings is:

(1) $S_{i,j}$ is the score (probability) of a maximum alignment between the ith prefix of one sequence and the jth prefix of the other.

(2) $S_{i,j}$ is the total score (probability) of all alignments between the ith prefix of one sequence and the jth prefix of the other.

(3) $S_{i,j}$ is a polytope whose faces correspond to summaries that are optimal for some set of parameters.
The semi-rings in (2) and (3) lead to particularly interesting and useful applications of the Needleman-Wunsch algorithm. An example of the use of (2) is posterior decoding based alignment, where nucleotides are aligned so as to maximize the sum of the posterior probabilities computed with (2). In the paper “Alignment Metric Accuracy” (with Ariel Schwartz and Gene Myers) we provide an interpretation in terms of an interesting metric on sequence alignments, and in
A.S. Schwartz and L. Pachter, Multiple alignment by sequence annealing, Bioinformatics 23 (2007), e24–e29
we show how to adapt posterior decoding into a (greedy) multiple alignment algorithm. The video below shows how it works, with each step consisting of aligning a single pair of nucleotides from some pair of sequences, using posterior probabilities computed with the Needleman-Wunsch algorithm and semi-ring (2):
We later extended this idea in work together with my former student Robert Bradley in what became FSA, published in the paper “Fast Statistical Alignment“, R. Bradley et al., PLoS Computational Biology 5 (2009), e1000392.
An interesting example of the application of Needleman-Wunsch with semi-ring (3), i.e. polytope propagation, is provided in the recent paper “The RNA Newton polytope and learnability of energy parameters” by Elmirasadat Forouzmand and Hamidreza Chitsaz, Bioinformatics 29 (2013), i300–i307. Using our polytope propagation algorithm, they investigate whether simple models for RNA folding can produce, for some set of parameters, the folds of sequences with solved structures (the answer is sometimes, but not always).
It is remarkable that forty-three years after the publication of the Needleman-Wunsch paper, and thirty-two years after the publication of the Smith-Waterman paper, the algorithms remain pertinent, widely used, and continue to reveal new secrets. True classics.
Where are the authors now?
Saul B. Needleman writes about numismatics.
Christian Wunsch is a clinical pathologist in the University of Miami health system.
Temple Smith is Professor Emeritus of Biomedical Engineering at Boston University.
Michael Waterman is Professor of Biological Sciences, Mathematics and Computer Science at the University of Southern California.
Don’t believe the anti-hype. They are saying that RNA-Seq promises the discovery of new expression events, but it doesn’t deliver:
Is this true? There have been a few papers comparing microarrays to RNA-Seq technology (including one of my own) that I’ll discuss below, but first a break-down of the Affymetrix “evidence”. The first is this figure (the poor quality of the images is exactly what Affymetrix provides, and not due to a reduction in quality on this site; they are slightly enlarged when clicked on):
The content of this figure is an illustration of the gene LMNB1 (Lamin protein of type B), used to argue that microarrays can provide transcript-level resolution whereas RNA-Seq can’t!! Really? Affymetrix is saying that RNA-Seq users would likely use the RefSeq annotation, which has only three isoforms. But this is a ridiculous claim. It is well known that RefSeq is a conservative annotation and certainly RNA-Seq users have the same access to the multiple databases Affymetrix used to build their annotation (presumably, e.g. Ensembl). It therefore seems that what Affymetrix is saying with this figure is that RNA-Seq users are dumb.
The next figure shows the variability in abundance estimates as a function of expression level for RNA-Seq and the HTA 2.0, with the intended message being that microarrays are less noisy:
But there is a subtle trick going on here, and it’s in the units. The x-axis is showing RPM, which is an abbreviation for Reads Per Million. This is not a commonly used unit, and there is a reason. First, it’s helpful to review what is used. In his landmark paper on RNA-Seq, Ali Mortazavi introduced the unit RPKM (note the extra K), which stands for reads per kilobase of transcript per million mapped reads. Why the extra kilobase term? In my review on RNA-Seq quantification I explain that RPKM is proportional to a maximum likelihood estimate of transcript abundance (obtained from a simple RNA-Seq model). The complete derivation is on page 6 ending in Equation 13; I include a summary here:
The maximum likelihood (ML) abundances are given by

$\hat{\rho}_t = \frac{\hat{\alpha}_t/\tilde{l}_t}{\sum_r \hat{\alpha}_r/\tilde{l}_r} = \frac{X_t/(\tilde{l}_t N)}{\sum_r X_r/(\tilde{l}_r N)} \propto \frac{X_t}{\tilde{l}_t \cdot N}\cdot 10^3 \cdot 10^6.$

In these equations $\tilde{l}_t$ is the length of transcript t (if reads are long it is necessary to modify the length due to edge effects, hence the tilde in the paper), the $\hat{\alpha}_t = X_t/N$ are the maximum likelihood estimates for the probabilities of selecting reads from transcripts (unnormalized by their length) and finally $X_t$ is the number of reads mapping to transcript t while N is the total number of mapped reads. The point is that RPKM (the rightmost formula for abundance) is really a unit for describing the maximum likelihood relative abundances ($\hat{\rho}_t$) scaled by some factors.
RPKM as a unit has two problems. The first is that in current RNA-Seq experiments reads are paired so that the actual units being counted (in $X_t$) are fragments. For this reason we modified RPKM to FPKM in the Cufflinks paper (the “F” replaced “R” for fragment instead of read). A more serious problem, noted by Bo Li and Colin Dewey in their paper on RSEM, is that while FPKM is proportional to ML estimates of abundance, the proportionality constants may vary between experiments. For this reason they proposed TPM (transcripts per million) which is also proportional to the ML abundance estimates but with a proportionality constant (a million) that is the same between experiments. All of these units are used simply to avoid writing down the $\hat{\rho}_t$, which are in many cases tiny numbers since they must all sum to 1.
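To make the unit bookkeeping concrete, here is a small C++ sketch of the conversions just described (the function and variable names are my own, and this is an illustration of the formulas above rather than code from Cufflinks or RSEM):

#include <iostream>
#include <vector>

// counts[t]  = number of reads (or fragments) mapping to transcript t (X_t)
// lengths[t] = effective length of transcript t in bases (l~_t)
void abundanceUnits(const std::vector<double>& counts,
                    const std::vector<double>& lengths,
                    std::vector<double>& rpkm,
                    std::vector<double>& tpm)
{
    double N = 0.0;                       // total number of mapped reads
    for (size_t t = 0; t < counts.size(); ++t) N += counts[t];

    double denom = 0.0;                   // sum over r of X_r / l~_r, normalizes TPM
    for (size_t t = 0; t < counts.size(); ++t) denom += counts[t] / lengths[t];

    rpkm.resize(counts.size());
    tpm.resize(counts.size());
    for (size_t t = 0; t < counts.size(); ++t) {
        // RPKM: reads per kilobase of transcript per million mapped reads
        rpkm[t] = 1.0e9 * counts[t] / (lengths[t] * N);
        // TPM: the ML relative abundance rho_t scaled by 10^6, comparable across experiments
        tpm[t] = 1.0e6 * (counts[t] / lengths[t]) / denom;
    }
}

int main() {
    // Two transcripts at equal molar abundance but different lengths: the shorter
    // one receives proportionally fewer reads, so raw counts (and RPM) differ while
    // RPKM and TPM agree.
    std::vector<double> counts = {1000.0, 2000.0};
    std::vector<double> lengths = {500.0, 1000.0};
    std::vector<double> rpkm, tpm;
    abundanceUnits(counts, lengths, rpkm, tpm);
    for (size_t t = 0; t < counts.size(); ++t)
        std::cout << "t" << t << ": RPKM=" << rpkm[t] << " TPM=" << tpm[t] << "\n";
    return 0;
}

The example in main previews the length effect discussed next: counts alone (and hence RPM) penalize short transcripts, while the length-normalized units do not.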
Returning to the Affymetrix figure, we see the strange RPM units. In essence, this is the rightmost term in the equation above, with the length terms removed from the denominators. Therefore RPM is proportional to the $\hat{\alpha}_t$. If a transcript is short, even if it is equally abundant to a longer transcript, it will produce fewer RNA-Seq reads and therefore its $\hat{\alpha}_t$ will be (possibly considerably) smaller. The effect of displaying RPM for RNA-Seq vs. expression level for the HTA 2.0 arrays is therefore to mix apples and oranges. Since what is being displayed is a coefficient of variation, there is a bias caused by the relationship between length and expression (e.g. many highly expressed housekeeping genes are short).
To be fair to Affymetrix, the conversion between the $\hat{\rho}_t$ and the $\hat{\alpha}_t$ can be confusing (it’s explained in Lemma 14 in the Supplement of the Cufflinks paper). So maybe the discordant x-axes were unintentional…but then there is the third figure:
Here it’s a bit hard to tell what is going on because not all the information needed to decipher the figure is provided. For example, it’s not clear how the “expression of exons” was computed or measured for the RNA-Seq experiment. I suspect that, as with the previous figure, read numbers were not normalized by the length of exons, and moreover spliced reads (and other possibly informative reads from transcripts) were ignored. In other words, I don’t really believe the result.
Having said this, it is true that expression arrays can have an advantage in measuring exon expression, because an array measurement is absolute (as opposed to the relative quantification that is all that is possible with RNA-Seq). Array signal is based on hybridization, and it is probably a reasonable assumption that some minimum amount of RNA triggers a signal, and that this amount is independent of the remainder of the RNA in an experiment. So arrays can (and in many cases probably do) have advantages over RNA-Seq.
There are a few papers that have looked into this, for example the paper “A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae” by Nookaew et al., Nucleic Acids Research 40 (2012), who find high reproducibility in RNA-Seq and consistency between arrays and RNA-Seq. Xu et al., in “Human transcriptome array for high-throughput clinical studies“, PNAS 108 (2011), 3707–3712, are more critical, agreeing with Affymetrix that arrays are more sensitive at the exon level. For disease studies, they recommend using RNA-Seq to identify transcripts relevant to the disease, and then screening for those transcripts on patients using arrays.
For the Cuffdiff2 paper describing our new statistical procedures for differential analysis of transcripts and genes, the Rinn lab performed deep RNA-Seq and array expression measurement on the same samples from a HOXA1 knockdown (the experiments included multiple replicates of both the RNA-Seq and the arrays). To my knowledge, it is the deepest and most comprehensive data currently available for comparing arrays and RNA-Seq. Admittedly, the arrays used were not Affymetrix but Agilent SurePrint G3, and the RNA-Seq coverage was deep; however, we had two main findings very different from the Affymetrix claims. First, we found overall strong correlation between array expression values and RNA-Seq abundance estimates. The correlation remained strong for large regimes of expression even with very few reads (tested by sequencing fewer reads from a MiSeq). Second, we found that arrays were missing differentially expressed transcripts, especially at low abundance levels. In other words, we found RNA-Seq to have higher resolution. The following figure from our paper made the case (note the overall Spearman correlation was 0.86):
There are definitely continued applications for arrays, both in high-throughput screening applications (as suggested in the Xu et al. paper) and in the development of novel assays. For example Mercer et al., “Targeted RNA sequencing reveals the deep complexity of the human transcriptome“, Nature Biotechnology 30 (2012), 99–104, show how to couple capture (with arrays) with RNA-Seq to provide ultra deep sequencing in subsets of the transcriptome. So it’s not yet the time to write off arrays. But RNA-Seq has many applications of its own, for example the ability to better detect allele-specific expression, the opportunity to identify RNA-DNA differences (and thereby study RNA editing), and the ability to study expression in non-model organisms where genome sequences are incomplete and annotations poor. For all these reasons I’m betting on RNA-Seq.
I learned today about a cute Fibonacci fact from my friend Federico Ardila:
1/89 = 0.011235…
1/9899 = 0.0001010203050813213455…
1/998999 = 0.000001001002003005008013021034055089144233377610...
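One way to see where these come from is the Fibonacci generating function: since

$\sum_{k \geq 1} F_k x^k = \frac{x}{1 - x - x^2},$

setting $x = 10^{-1}, 10^{-2}, 10^{-3}$ and dividing by $10, 100, 1000$ respectively gives

$\sum_{k\geq 1} \frac{F_k}{10^{k+1}} = \frac{1}{89}, \qquad \sum_{k\geq 1} \frac{F_k}{100^{k+1}} = \frac{1}{9899}, \qquad \sum_{k \geq 1} \frac{F_k}{1000^{k+1}} = \frac{1}{998999},$

with the later Fibonacci numbers eventually overlapping and carrying, which is why only the first several are visible in each decimal expansion.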
The pattern is explained in a note in the College Mathematics Journal (2003). Of course Fibonacci numbers are ubiquitous in biology, but in thinking about this pattern I was reminded of a lesser known connection between Fibonacci numbers and biology in the context of the combinatorics of splicing:
A stretch of DNA sequence with $k$ acceptor sites and $l$ donor sites can produce at most $F_{k+l+1}$ distinct spliced transcripts, where the numbers $F_1 = F_2 = 1$, $F_n = F_{n-1} + F_{n-2}$ are the Fibonacci numbers.
The derivation is straightforward using induction. To simplify notation we denote acceptor sites with an open parenthesis “(” and donor sites with a closed parenthesis “)”, so that a transcript corresponds to a choice of sites of the form ()()⋯(), i.e. an alternating subsequence that begins with an acceptor and ends with a donor, with each consecutive () in the choice delimiting an exon; the empty string is considered to be a valid transcript. We use the notation $|S|$ for the length of a string S of open and closed parentheses, and denote the maximum number of transcripts that can be spliced from S by $T(S)$. The claim is that $T(S) \leq F_{|S|+1}$, and we assume this is true by induction for $|S| < n$ (the base case is trivial). Let S be a string with $|S| = n$. Observe that S must have an open parenthesis somewhere that is followed immediately by a closed parenthesis; otherwise every donor site precedes every acceptor site, no exon can be formed, and $T(S) = 1$. We can therefore write

$S = S_1\,()\,S_2,$

where $|S_1| + |S_2| + 2 = n$. Let $S_1^+$ denote $S_1$ with a single donor site appended, and let $S_2^+$ denote $S_2$ with a single acceptor site prepended. Now notice that

$T(S) = T(S_1)\cdot T(S_2) + T(S_1^+)\cdot T(S_2^+).$

This can be seen by breaking down the terms as follows: one can take any transcript in $S_1$ and follow it by any transcript in $S_2$, using neither the interior pair () nor any exon that reaches from $S_1$ into $S_2$; these account for the first term. Every remaining transcript contains exactly one “bridge” exon whose acceptor lies in $S_1$ or is the interior open parenthesis, and whose donor lies in $S_2$ or is the interior closed parenthesis. Cutting such a transcript at the interior pair identifies it with a pair consisting of a transcript of $S_1^+$ (the sites used in $S_1$, with the appended donor closing off the bridge acceptor whenever that acceptor lies in $S_1$) and a transcript of $S_2^+$ (the sites used in $S_2$, with the prepended acceptor opening the bridge exon whenever its donor lies in $S_2$); these pairs account for the second term. Finally, using the induction hypothesis on the four shorter strings together with the identity $F_{a+b} = F_aF_{b+1} + F_{a-1}F_b$, we have

$T(S) \leq F_{|S_1|+1}F_{|S_2|+1} + F_{|S_1|+2}F_{|S_2|+2} = F_{(|S_1|+2)+(|S_2|+1)} = F_{n+1}.$
The bound is attained for certain configurations, such as $()()\cdots()$ with n acceptor and n donor sites.
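For readers who want to check the combinatorics computationally, here is a small C++ sketch (the function names are my own, and the splicing model is the simple one used in the argument above: a transcript is an alternating acceptor-donor subsequence) that counts the transcripts of a parenthesis string with a two-variable dynamic program and compares the count to the Fibonacci bound:

#include <iostream>
#include <string>
#include <vector>

// Count subsequences of the form ()()...() (including the empty one) in a string
// of acceptor "(" and donor ")" sites, i.e. the transcripts of the model above.
long long countTranscripts(const std::string& sites) {
    long long complete = 1; // patterns ending in a donor (or the empty pattern)
    long long open = 0;     // patterns ending in an unmatched acceptor
    for (char c : sites) {
        if (c == '(')
            open += complete;      // extend any complete pattern with this acceptor
        else
            complete += open;      // close any open pattern with this donor
    }
    return complete;
}

int main() {
    // Fibonacci numbers with F_1 = F_2 = 1
    std::vector<long long> F = {0, 1, 1};
    for (int i = 3; i <= 30; ++i) F.push_back(F[i - 1] + F[i - 2]);

    const std::string examples[] = {"()", "()()", "()()()", "(())", "))((", "(()())"};
    for (const std::string& s : examples) {
        std::cout << s << ": " << countTranscripts(s)
                  << " transcripts (bound F_" << s.size() + 1
                  << " = " << F[s.size() + 1] << ")\n";
    }
    return 0;
}

For the strings ()()⋯() the two counters literally run the Fibonacci recurrence, so the bound is met with equality, while ))(( shows that it can also be far from tight.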
The combinatorics is elementary and it only establishes what is already intuitive and obvious: splicing combinatorics dictates that there are a lot of transcripts (exponentially many in the number of acceptor and donor sites) that can, in principle, be spliced together, even from short DNA sequences. The question, then, is why do most genes have so few isoforms? Or do they?