
Five years ago on this day, Nicolas Bray and I wrote a blog post on The network nonsense of Manolis Kellis in which we described the paper Feizi et al. 2013 from the Kellis lab as dishonest and fraudulent. Specifically, we explained that:

**“Feizi et al. have written a paper that appears to be about inference of edges in networks based on a theoretically justifiable model but**

1. **the method used to obtain the results in the paper is completely different than the idealized version sold in the main text of the paper and**
2. **the method actually used has parameters that need to be set, yet no approach to setting them is provided. Even worse,**
3. **the authors appear to have deliberately tried to hide the existence of the parameters. It looks like**
4. **the reason for covering up the existence of parameters is that the parameters were tuned to obtain the results. Moreover,**
5. **the results are not reproducible. The provided data and software is not enough to replicate even a single figure in the paper. This is disturbing because**
6. **the performance of the method on the simplest of all examples, a correlation matrix arising from a Gaussian graphical model, is poor.”**

A second point we made is that the justification for the method, which the authors called “network deconvolution”, was nonsense. For example, the authors wrote that the model assumes that networks are **“linear time-invariant flow-preserving operators.”** Perhaps I take things too literally when I read papers, but I have to admit that five years later I still don’t understand the sentence. However, just because a method is ad hoc, heuristic, or perhaps poorly explained doesn’t mean it won’t work well in practice. In the blog post we compared network deconvolution to regularized partial correlation on simulated data, and found that network deconvolution performed poorly. But in a responding comment, Kellis noted that “in our experience, partial correlation performed very poorly in practice.” He added that “We have heard very positive feedback from many other scientists using our software successfully in diverse applications.”
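For readers who want to see what the partial-correlation baseline actually computes: for data from a Gaussian graphical model, partial correlations are obtained by normalizing the precision (inverse covariance) matrix, which suppresses indirect dependencies. Here is a minimal sketch (my own toy illustration, unregularized for simplicity, and not the code from the original post):

```python
import numpy as np

def partial_correlations(samples):
    """Partial correlation matrix from data (rows = observations,
    columns = variables): normalize the precision (inverse covariance)
    matrix via pcor_ij = -P_ij / sqrt(P_ii * P_jj)."""
    precision = np.linalg.inv(np.cov(samples, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# toy chain model X -> Y -> Z: X and Z are marginally correlated but
# conditionally independent given Y, so their partial correlation is near zero
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
pcor = partial_correlations(np.column_stack([x, y, z]))
print(abs(pcor[0, 2]) < 0.1)  # True: the indirect X-Z edge is suppressed
```

On the toy chain X → Y → Z, the marginal correlation between X and Z is large, but the partial correlation is near zero, which is exactly the direct-versus-indirect distinction that network deconvolution claims to make.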

Fortunately we can now evaluate Kellis’ claims in light of an independent analysis in Wang, Pourshafeie, Zitnik et al. 2018, a paper from the groups of Serafim Batzoglou and Jure Leskovec (in collaboration with Carlos Bustamante) at Stanford University. There are three main results presented in Wang, Pourshafeie, Zitnik et al. 2018 that summarize the benchmarking of network deconvolution and other methods, and I reproduce figures showing the results below. The first shows the performance of network deconvolution and some other network denoising methods on a problem of butterfly species identification (network deconvolution is abbreviated ND and is shown in green). RAW (in blue) is the original unprocessed network. **Network deconvolution is much worse than RAW**:

The second illustrates the performance of network denoising methods on Hi-C data. The performance metric in this case is normalized mutual information (NMI) which Wang, Pourshafeie, Zitnik et al. described as “a fair representation of overall performance”. **Network deconvolution (ND, dark green) is again worse than RAW (dark blue)**:

Finally, in an analysis of gene function from tissue-specific gene interaction networks, ND (blue) does perform better than RAW (pink) although barely. In four cases out of eight shown it is the worst of the four methods benchmarked:

Network deconvolution was claimed to be applicable to any network when it was published. At the time, Feizi stated that “We applied it to gene networks, protein folding, and co-authorship social networks, but our method is general and applicable to many other network science problems.” A promising claim, but in reality it is difficult to beat the nonsense law: **Nonsense methods tend to produce nonsense results**.

The Feizi et al. 2013 paper now has 178 citations, most of them drive-by citations. Interestingly, this number, 178, is **exactly** the number of citations of the Barzel et al. 2013 network nonsense paper, which was published in the same issue of *Nature Biotechnology*. Presumably this reflects the fact that authors citing one paper feel obliged to cite the other. This pair of papers was thus an impact factor win for the journal. For the first authors, the network deconvolution and network silencing papers are their most highly cited first-author papers, respectively. Barzel is an assistant professor at Bar-Ilan University, where he links to an article about his network nonsense on his “media page”. Feizi is an assistant professor at the University of Maryland, where he lists Feizi et al. 2013 among his “selected publications”. Kellis teaches “network deconvolution” and its associated nonsense in his computational biology course at MIT. And why not? These days truth seems to matter less and less in every domain. A statement doesn’t have to be true, it just has to work well on YouTube, Twitter, Facebook, or some webpage, and as long as some people believe it long enough, say until the next grant cycle, promotion evaluation, or election, then what harm is done? A win-win for everyone. Except science.

Reproducibility has become a major issue in scientific publication: it is under scrutiny by many, making headlines in the news, is on the minds of journals, and there are guidelines for achieving it. Reproducibility is certainly important for scientific research, but I think that lost in the debate is the notion of *usability*. Reproducibility only ensures that the results of a paper can be recreated by others; usability ensures that researchers can build on scientific work, and explore hypotheses and ideas unforeseen by the authors of the original papers. Here I describe a case study in reproducibility and usability that emerged from a previous post I wrote about the paper

- Soheil Feizi, Daniel Marbach, Muriel Médard & Manolis Kellis, Network deconvolution as a general method to distinguish direct dependencies in networks, *Nature Biotechnology* 31(8), 2013, p 726–733.

Feizi *et al.* describe a method called network deconvolution, which they claim improves the inference results for 8 out of 9 network inference methods, out of the 35 that were tested in the DREAM5 challenge. In DREAM5, participants were asked to examine four chip-based *gene × expression* matrices, and were also provided a list of transcription factors for each. They were then asked to provide ranked lists of transcription factor – gene interactions for each of the four datasets. The four datasets consisted of one computer simulation (the “*in silico*” dataset) and expression measurements in *E. coli*, *S. cerevisiae* and *S. aureus*. The consortium received submissions from 29 different groups and ran 6 other “off-the-shelf” methods, while also developing its own “community” method (for a total of 36 = 29 + 6 + 1). The community method consisted of applying the Borda count to the 35 methods being tested, to produce a new consensus, or community, network. Nicolas Bray and I tried to replicate the results of Feizi *et al.* so that we could test for ourselves the performance of network deconvolution with different parameters and on other DREAM5 methods (Feizi *et al.* tested only 9 methods; there were 36 in total). But despite contacting the authors for help we were unable to do so. In desperation, I even offered $100 for someone to replicate all of the figures in the paper. Perhaps as a result of my blogging efforts, or possibly due to a spontaneous change of heart, the authors finally released *some* of the code and data needed to reproduce *some* of the figures in their paper. In particular, I am pleased to say that the released material is sufficient to almost replicate Figure 2 of their paper, which describes their results on a portion of the DREAM5 data.
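The Borda-count construction is simple enough to sketch in a few lines. The following is my own toy illustration (invented edge names, not the consortium's code): each method awards points to an edge according to the edge's position in that method's ranked list, and the consensus re-ranks edges by total points.

```python
def borda_community(ranked_lists):
    """ranked_lists: list of edge lists, each sorted best-first.
    Returns edges sorted by total Borda score (highest first)."""
    scores = {}
    for ranking in ranked_lists:
        n = len(ranking)
        for position, edge in enumerate(ranking):
            # an edge ranked first among n gets n points, last gets 1 point
            scores[edge] = scores.get(edge, 0) + (n - position)
    return sorted(scores, key=scores.get, reverse=True)

# toy example: three methods ranking four candidate TF-gene edges
m1 = ["e1", "e2", "e3", "e4"]
m2 = ["e2", "e1", "e4", "e3"]
m3 = ["e1", "e3", "e2", "e4"]
print(borda_community([m1, m2, m3]))  # ['e1', 'e2', 'e3', 'e4']
```

With the real DREAM5 submissions, the inputs would be the ranked edge lists of each of the 35 methods rather than these three toy rankings.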
I say almost because the results for one of the networks are off, but to the authors' credit it does appear that the distributed data and code are close to what was used to produce the figure in the paper (note: there is still not enough disclosure to replicate *all* of the figures of the paper, including the suspicious Figure S4 before and after revision of the supplement, and I am therefore not yet willing to concede the $100). What Feizi *et al.* did accomplish was to make their method *usable*. That is to say, with the distributed code and data I was able to test the method with different parameters and on new datasets. In other words, **Feizi et al. is still not completely reproducible, but it is usable.** In this post, I'll demonstrate why usability is important, and make the case that it is too frequently overlooked or confused with reproducibility. With usable network deconvolution code in hand, I was finally able to test some of the claims of Feizi *et al.* First, I identify the relationship between the DREAM methods and the methods Feizi *et al.* applied network deconvolution to. In the figure below, I have reproduced Figure 2 from Feizi *et al.* together with Figure 2 from Marbach *et al.*:

Figure 2 from Feizi *et al.* aligned to Figure 2 from Marbach *et al.*

The mapping is more complex than appears at first sight. For example, in the case of Spearman correlation (method Corr #2 in Marbach *et al.*, method #5 in Feizi *et al.*), Feizi *et al.* ran network deconvolution on the method after taking absolute values. This makes no sense, as throwing away the sign is to throw away a significant amount of information, not to mention that it destroys any hope of connecting the approach to the intuition of inferring direct interactions from observed ones via the idealized “model” described in the paper. On the other hand, Marbach *et al.* evaluated Spearman correlation *with* sign. Without taking the absolute value before evaluation, negative edges, i.e. strong (negative) interactions, are ignored. This is the reason for the very poor performance of Spearman correlation and the reason for the discrepancy in bar heights between Marbach *et al.* and Feizi *et al.* for that method. The caption of Figure 2 in Feizi *et al.* begins “Network deconvolution applied to the inferred networks of top-scoring methods [1] from DREAM5…” This is obviously not true. Moreover, one network they did not test on was the community network of Marbach *et al.*, which was the best method and the point of the whole paper. However, the methods they did test on were ranked 2, 3, 4, 6, 8, 12, 14, 16, 28 (out of 36 methods). The 10th “community” method of Feizi *et al.* is actually the result of applying the community approach to the ND output from all the methods, so it is not in and of itself a test of ND. Of the nine tested methods, arguably only a handful were “top” methods. I do think it's sensible to consider “top” to be the best methods for each category (although Correlation is so poor I would discard it altogether). That leaves four top methods.
So instead of creating the illusion that network deconvolution improves 9/10 top-scoring methods, what Feizi *et al.* should have reported is that **3 out of 4 of the top methods that were tested were improved by network deconvolution**. That is the result of running network deconvolution with the default parameters. I was curious what happens when using the parameters that Feizi *et al.* applied to the protein interaction data (alpha=1, beta=0.99). Fortunately, because they have made the code usable, I was able to test this. The overall result, as well as the scores on the individual datasets, are shown below:

The Feizi *et al.* results on gene regulatory networks using parameters different from the default.

The results are very different. Despite the claims of Feizi *et al.* that network deconvolution is robust to the choice of parameters, **now only 1 out of 4 of the top methods are improved by network deconvolution**. Strikingly, the top three methods tested have their quality degraded. In fact, the top method in two out of the three datasets tested is made worse by network deconvolution. **Network deconvolution is certainly not robust to parameter choice.** What was surprising to me was the *improved* performance of network deconvolution on the *S. cerevisiae* dataset, especially for the mutual information and correlation methods. In fact, the improvement of network deconvolution over these methods appears extraordinary. At this point I started to wonder what the improvements really mean, i.e. what is the “score” that is being measured. The y-axis, called the “score” by Feizi *et al.* and Marbach *et al.*, seemed to be changing drastically between runs. I wondered… what exactly is the score? What do the improvements mean? It turns out that “score” is defined as follows:

$$\mathrm{score} \;=\; \frac{1}{2}\left(-\frac{1}{3}\sum_{i=1}^{3}\log_{10} p^{(i)}_{\mathrm{AUROC}} \;-\; \frac{1}{3}\sum_{i=1}^{3}\log_{10} p^{(i)}_{\mathrm{AUPR}}\right),$$

where $p^{(i)}_{\mathrm{AUROC}}$ and $p^{(i)}_{\mathrm{AUPR}}$ are p-values measuring the significance of a method's AUROC and AUPR on the $i$th dataset.

This formula requires some untangling: First of all, AUROC is shorthand for area under the ROC (receiver operator curve), and AUPR for area under the PR (precision-recall) curve. For context, ROC is a standard concept in engineering/statistics. Precision and recall are used frequently, but the PR curve is used much less than the ROC. Both are measures for judging the quality of a binary classifier. In the DREAM5 setting, this means the following: there is a gold standard of “positives”, namely a set of edges in a network that should be predicted by a method, and the remainder of the edges are considered “negatives”, i.e. they should not be predicted. A method generates a list of edges, sorted (ranked) in some way. As one proceeds through the list, one can measure the fraction of positives and false positives predicted. The ROC and PR curves measure the performance. A ROC is simply a plot showing the true positive rate for a method as a function of the false positive rate. Suppose that there are *m* positives in the gold standard out of a total of *n* edges. If one examines the top *k* predictions of a method, then among them there will be *t* “true” positives as well as *k*−*t* “false” positives. This will result in a single point on the ROC, namely the point ((*k*−*t*)/(*n*−*m*), *t*/*m*). This can be confusing at first glance for a number of reasons. First, the points do not necessarily form a function, e.g. there can be points with the same *x*-coordinate. Second, as one varies *k* one obtains a set of points, not a curve. The ROC is a *curve*, and it is obtained by taking the envelope of all of the points for *k* = 0, …, *n*. The following intuition is helpful in understanding ROC:

- The *x*-coordinate in the ROC is the false positive rate. If one doesn't make any predictions of edges at all, then the false positive rate is 0 (in the notation above, *k*=0, *t*=0). On the other hand, if all edges are considered to be “true”, then the false positive rate is 1 and the corresponding point on the ROC is (1,1), which corresponds to *k*=*n*, *t*=*m*.
- If a method has no predictive power, i.e. the ranking of the edges tells you nothing about which edges really are true, then the ROC is the line *y*=*x*. This is because lack of predictive power means that truncating the list at any *k* results in the same proportion of true positives above and below the *k*th edge. A simple calculation shows that this corresponds to the point (*k*/*n*, *k*/*n*) on the ROC curve.
- ROC curves can be summarized by a single number that has meaning: the area under the ROC (AUROC). The observation above means that a method without any predictive power will have an AUROC of 1/2. Similarly, a “perfect” method, where the true edges are all ranked at the top, will have an AUROC of 1. AUROC is widely used to summarize the content of a ROC curve because it has an intuitive meaning: the AUROC is the probability that if a positive and a negative edge are each picked at random from the list of edges, the positive will rank higher than the negative.
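The definitions above are easy to make concrete. Below is a toy sketch (my own illustration with invented edge names, not DREAM5 code) that computes the ROC points ((*k*−*t*)/(*n*−*m*), *t*/*m*) for every cutoff *k*, the corresponding PR points (recall, precision) = (*t*/*m*, *t*/*k*), and checks the rank interpretation of AUROC by direct pairwise comparison:

```python
import itertools

def roc_points(ranked_edges, positives):
    """ROC point ((k - t)/(n - m), t/m) for every cutoff k."""
    n, m = len(ranked_edges), len(positives)
    points, t = [(0.0, 0.0)], 0
    for k, edge in enumerate(ranked_edges, start=1):
        t += edge in positives
        points.append(((k - t) / (n - m), t / m))
    return points

def pr_points(ranked_edges, positives):
    """PR point (recall, precision) = (t/m, t/k) for every cutoff k."""
    m, t, points = len(positives), 0, []
    for k, edge in enumerate(ranked_edges, start=1):
        t += edge in positives
        points.append((t / m, t / k))
    return points

def auroc_pairwise(ranked_edges, positives):
    """AUROC as the probability that a randomly chosen positive edge
    outranks a randomly chosen negative edge."""
    rank = {e: i for i, e in enumerate(ranked_edges)}  # 0 = top of the list
    pos = [e for e in ranked_edges if e in positives]
    neg = [e for e in ranked_edges if e not in positives]
    wins = sum(rank[p] < rank[q] for p, q in itertools.product(pos, neg))
    return wins / (len(pos) * len(neg))

gold = {"e1", "e2"}  # two positives among four candidate edges
print(roc_points(["e1", "e3", "e2", "e4"], gold))
# [(0.0, 0.0), (0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
print(pr_points(["e1", "e3", "e2", "e4"], gold))
# [(0.5, 1.0), (0.5, 0.5), (1.0, 0.666...), (1.0, 0.5)]
print(auroc_pairwise(["e1", "e2", "e3", "e4"], gold))  # 1.0: perfect ranking
print(auroc_pairwise(["e3", "e1", "e4", "e2"], gold))  # 0.25: worse than random
```

Note that in the second ranking, only one of the four positive-negative pairs has the positive ranked higher, hence AUROC = 0.25.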

An alternative to ROC is the precision-recall curve. Precision, in the notation above, is the value *t*/*k*, i.e., the number of true positives divided by the number of true positives plus false positives. Recall is the same as sensitivity, or true positive rate: it is *t*/*m*. In other words, the PR curve contains the points (*t*/*m*, *t*/*k*), as recall is usually plotted on the *x*-axis. The area under the precision-recall curve (AUPR) has an intuitive meaning just like AUROC. It is the average of precision across all recall values, or alternatively, the probability that if a “positive” edge is selected from the ranked list of the method, then an edge above it on the list will be “positive”. Neither precision-recall curves nor AUPR are widely used. There is one problem with AUPR, which is that its value is dependent on the number of positive examples in the dataset. For this reason, it doesn't make sense to average AUPR across datasets (while it does make sense for AUROC). For all of these reasons, I'm slightly uncomfortable with AUPR, but that is not the main issue in the DREAM5 analysis. I have included an example of ROC and PR curves below. I generated them for the method “GENIE3” tested by Feizi *et al.* This was the method with the best overall score. The figure below is for the *S. cerevisiae* dataset:

The ROC and a PR curves before (top) and after (bottom) applying network deconvolution to the GENIE3 network.

The red curve in the ROC plots is what one would see for a method without any predictive power (point #2 above). In this case, what the plot shows is that GENIE3 is effectively ranking the edges of the network randomly. The PR curve shows that at all recall levels there is very little precision. **The difference between GENIE3 before and after network deconvolution is so small that it is indistinguishable in the plots.** I had to create separate plots before and after network deconvolution because the curves literally overlapped and were not visible together. **The conclusion from plots such as these should not be that there is statistical significance (in the difference between methods with/without network deconvolution, or in comparison to random), but rather that there is a negligible effect.** There is a final ingredient that is needed to constitute the “score”. Instead of just averaging AUROC and AUPR, both are first converted into *p*-values that measure the statistical significance of the method being different from random. The way this was done was to create random networks from the edges of the 35 methods, and then to measure their quality (by AUROC or AUPR) to obtain a distribution. The *p*-value for a given method was then taken to be the area under the probability density function to the right of the method's value. The graph below shows the pdf for AUROC from the *S. cerevisiae* DREAM5 data that was used by Feizi *et al.* to generate the scores:

Distribution of AUROC for random methods generated from the *S. cerevisiae* submissions in Marbach *et al.*
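The empirical p-value construction just described amounts to measuring a right-tail area under a null distribution of scores. A minimal sketch (my own toy code; the null sample below is invented, standing in for the actual distribution of random-method AUROCs):

```python
def empirical_p(value, null_sample):
    """Right-tail empirical p-value: fraction of null values >= observed."""
    return sum(x >= value for x in null_sample) / len(null_sample)

# made-up null: random-method AUROCs clustered tightly around 0.51
null = [0.509, 0.510, 0.510, 0.511, 0.511, 0.512, 0.512, 0.513]
print(empirical_p(0.5125, null))  # 0.125: only 1 of 8 null values is >= 0.5125
```

In practice one would use a density fit to many random networks rather than eight invented numbers, but the principle is the same: the p-value is entirely determined by how far the observed AUROC sits in the tail of the null.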

In other words, almost all random methods had an AUROC of around 0.51, so **any slight deviation from that was magnified in the computation of the p-value, and then by taking the (negative) logarithm of that number a very high “score” was produced**. The scores were then taken to be the average of the AUROC and AUPR scores. I can understand why Feizi *et al.* might be curious whether the difference between a method's performance (before and after network deconvolution) is significantly different from random, but to replace magnitude of effect with statistical significance in this setting, with such small effect sizes, is to completely mask the fact that the methods are hardly distinguishable from random in the first place. To make concrete the implication of reporting the statistical significance instead of the effect size, I examined the “significant” improvement of network deconvolution on the *S. cerevisiae* and other datasets when run with the protein parameters rather than the default (second figure above). Below I show the AUROC and AUPR plots for the dataset.

The Feizi *et al.* results before and after network deconvolution using alpha=1, beta=0.99 (shown with AUROC).

The Feizi *et al.* results before and after network deconvolution using alpha=1, beta=0.99 (shown with AUPR).
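To see how a tightly concentrated null turns tiny AUROC differences into large “scores”, consider a toy calculation (my own sketch; the normal null with mean 0.51 and standard deviation 0.001 is an invented stand-in for the empirical distribution of random-method AUROCs):

```python
import math

MU, SIGMA = 0.51, 0.001  # illustrative null: random-method AUROCs near 0.51

def score_from_auroc(auroc):
    """-log10 of a one-sided p-value under the toy normal null."""
    z = (auroc - MU) / SIGMA
    p = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal probability
    return -math.log10(p)

# an AUROC difference of one part in a thousand...
print(round(score_from_auroc(0.515), 1))  # ~6.5
print(round(score_from_auroc(0.516), 1))  # ~9.0
```

Under this toy null, moving the AUROC from 0.515 to 0.516, a difference of one part in a thousand, raises the −log10 p “score” by about 2.5, even though neither AUROC is meaningfully better than random guessing.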

My conclusion was that **the use of “score” was basically a red herring**. **What looked like major differences between methods disappears into tiny effects in the experimental datasets, and even the in silico variations are greatly diminished**. Differences in AUROC of one part in 1,000 hardly seem a reasonable basis for concluding that network deconvolution works. Biologically, the result in both cases is that the methods cannot reliably predict edges in the network. With usable network deconvolution code at hand, I was curious about one final question. The main result of the DREAM5 paper

- D. Marbach *et al.*, Wisdom of Crowds for Robust Gene Network Inference, *Nature Methods* 9 (2012), p 796–804.

was that the community method was best. So I wondered whether network deconvolution would improve it. In particular, the community result shown in Feizi *et al.* was not a test of network deconvolution; it was simply a construction of the community from the 9 methods tested (two communities were constructed, one before and one after network deconvolution). To perform the test, I went to examine the DREAM5 data, available as supplementary material with the paper. I was extremely impressed with its reproducibility. The participant submissions are all available, together with scripts that can be used to quickly obtain the results of the paper. However, the data is not very usable. For example, what is provided is the top 100,000 edges that each method produced. But if one wants to use the full prediction of a method, it is not available. The implication of this in the context of network deconvolution is that it is not possible to test network deconvolution on the DREAM5 data without thresholding. Furthermore, in order to evaluate edges, the absolute value was applied to all the edge weights. Again, this makes the data much less useful for further experiments one may wish to conduct. In other words, DREAM5 is reproducible but not very usable. But since Feizi *et al.* suggest that network deconvolution can literally be run on anything with “indirect effect”, I decided to give it a spin. I did have to threshold the input (although fortunately, Feizi *et al.* have assured us that this is a fine way to run network deconvolution), so the experiment is entirely reasonable in terms of their paper. The figure is below (produced with the default network deconvolution parameters), but before looking at it, please accept my apology for making it. I really think it's the most incoherent, illogical, meaningless and misleading figure I've ever made. But it does abide by the spirit of network deconvolution:

The DREAM5 results before and after network deconvolution.

Alas, **network deconvolution decreases the quality of the best method, namely the community method**.

**The wise crowds have been dumbed down.** In fact, 19/36 methods become worse, 4 stay the same, and only 13 improve. Moreover, network deconvolution decreases the quality of the top method in each dataset. The only methods with consistent improvements when network deconvolution is applied are the mutual information and correlation methods, poor performers overall, that Feizi *et al.* ended up focusing on. I will acknowledge that one complaint (of the many possible) about my plot is that the overall results are dominated by the *in silico* dataset. True, and I've tried to emphasize that by setting the y-axis to be the same in each dataset (unlike Feizi *et al.*). But I think it's clear that no matter how the datasets are combined into an overall score, the result is that network deconvolution is **not** consistently improving methods. All of the analyses I've done were made possible thanks to the improved usability of network deconvolution. It is unfortunate that the result of the analyses is that network deconvolution should not be used. Still, I think this example makes a good case for the fact that **reproducibility is essential, but usability is more important.**
