
Reproducibility has become a major issue in scientific publication: it is under scrutiny by many, it makes headlines in the news, it is on the minds of journals, and there are guidelines for achieving it. Reproducibility is certainly important for scientific research, but I think that lost in the debate is the notion of *usability*. Reproducibility only ensures that the results of a paper can be recreated by others, but usability ensures that researchers can build on scientific work, and explore hypotheses and ideas unforeseen by the authors of the original papers. Here I describe a case study in reproducibility and usability that emerged from a previous post I wrote about the paper

- Soheil Feizi, Daniel Marbach, Muriel Médard & Manolis Kellis, Network deconvolution as a general method to distinguish direct dependencies in networks, *Nature Biotechnology* **31**(8), 2013, p 726–733.

Feizi *et al.* describe a method called network deconvolution that they claim improves the inference results for 8 out of 9 network inference methods, out of the 35 that were tested in the DREAM5 challenge. In DREAM5, participants were asked to examine four chip-based gene expression matrices (genes × experiments), and were also provided a list of transcription factors for each. They were then asked to provide ranked lists of transcription factor–gene interactions for each of the four datasets. The four datasets consisted of one computer simulation (the "*in silico*" dataset) and expression measurements in *E. coli*, *S. cerevisiae* and *S. aureus*. The consortium received submissions from 29 different groups and ran 6 other "off-the-shelf" methods, while also developing its own "community" method (for a total of 36 = 29 + 6 + 1). The community method consisted of applying the Borda count to the 35 methods being tested, to produce a new consensus, or community, network. Nicolas Bray and I tried to replicate the results of Feizi *et al.* so that we could test for ourselves the performance of network deconvolution with different parameters and on other DREAM5 methods (Feizi *et al.* tested only 9 methods; there were 36 in total). But despite contacting the authors for help we were unable to do so. In desperation, I even offered $100 for someone to replicate all of the figures in the paper. Perhaps as a result of my blogging efforts, or possibly due to a spontaneous change of heart, the authors finally released *some* of the code and data needed to reproduce *some* of the figures in their paper. In particular, I am pleased to say that the released material is sufficient to almost replicate Figure 2 of their paper, which describes their results on a portion of the DREAM5 data.
I say almost because the results for one of the networks are off, but to the authors' credit it does appear that the distributed data and code are close to what was used to produce the figure in the paper (note: there is still not enough disclosure to replicate *all* of the figures of the paper, including the suspicious Figure S4 before and after revision of the supplement, and I am therefore not yet willing to concede the $100). What Feizi *et al.* did accomplish was to make their method *usable*. That is to say, with the distributed code and data I was able to test the method with different parameters and on new datasets. In other words, **Feizi et al. is still not completely reproducible, but it is usable.** In this post, I'll demonstrate why usability is important, and make the case that it is too frequently overlooked or confused with reproducibility.

With usable network deconvolution code in hand, I was finally able to test some of the claims of Feizi *et al.* First, I identified the relationship between the DREAM methods and the methods Feizi *et al.* applied network deconvolution to. In the figure below, I have reproduced Figure 2 from Feizi *et al.* together with Figure 2 from Marbach *et al.*:

Figure 2 from Feizi *et al.* aligned to Figure 2 from Marbach *et al.*

The mapping is more complex than it appears at first sight. For example, in the case of Spearman correlation (method Corr #2 in Marbach *et al.*, method #5 in Feizi *et al.*), Feizi *et al.* ran network deconvolution on the method after taking absolute values. This makes no sense: throwing away the sign throws away a significant amount of information, not to mention that it destroys any hope of connecting the approach to the intuition of inferring direct interactions from the observed ones via the idealized "model" described in the paper. On the other hand, Marbach *et al.* evaluated Spearman correlation *with* sign. Without taking absolute values before evaluation, strong negative interactions are ranked last and effectively ignored. This is the reason for the very poor performance of Spearman correlation and for the discrepancy in bar heights between Marbach *et al.* and Feizi *et al.* for that method. The caption of Figure 2 in Feizi *et al.* begins "Network deconvolution applied to the inferred networks of top-scoring methods [1] from DREAM5..". This is obviously not true. Moreover, one network they did not test on was the community network of Marbach *et al.*, which was the best method and the point of the whole paper. The methods they did test on were ranked 2, 3, 4, 6, 8, 12, 14, 16 and 28 (out of 36 methods). The 10th "community" method of Feizi *et al.* is actually the result of applying the community approach to the ND output from all the methods, so it is not in and of itself a test of ND. Of the nine tested methods, arguably only a handful were "top" methods. I do think it is sensible to consider "top" to mean the best method in each category (although Correlation is so poor I would discard it altogether). That leaves four top methods.
So instead of creating the illusion that network deconvolution improves 9/10 top scoring methods, what Feizi *et al.* should have reported is that **3 out of 4 of the top methods that were tested were improved by network deconvolution**. That is the result of running network deconvolution with the default parameters. I was curious what happens when using the parameters that Feizi *et al.* applied to the protein interaction data (alpha=1, beta=0.99). Fortunately, because they have made the code usable, I was able to test this. The overall result, as well as the scores on the individual datasets, are shown below:

The Feizi *et al.* results on gene regulatory networks using parameters different from the default.

The results are very different. Despite the claims of Feizi *et al.* that network deconvolution is robust to choice of parameters, **now only 1 out of 4 of the top methods is improved by network deconvolution**. Strikingly, the top three methods tested have their quality degraded. In fact, the top method in two out of the three datasets tested is made worse by network deconvolution. **Network deconvolution is certainly not robust to parameter choice.** What was surprising to me was the *improved* performance of network deconvolution on the *S. cerevisiae* dataset, especially for the mutual information and correlation methods. In fact, the improvement of network deconvolution over these methods appears extraordinary. At this point I started to wonder what the improvements really mean, i.e. what is the "score" that is being measured. The y-axis, called the "score" by Feizi *et al.* and Marbach *et al.*, seemed to be changing drastically between runs. I wondered… what exactly is the score? What do the improvements mean? It turns out that "score" is defined as follows:

score = (1/2) (score_AUROC + score_AUPR), where score_AUROC = −(1/3) Σ_i log10(p_i^AUROC) and score_AUPR = −(1/3) Σ_i log10(p_i^AUPR), with the sums running over the three benchmark networks *i*.

This formula requires some untangling. First of all, AUROC is shorthand for area under the ROC (receiver operating characteristic) curve, and AUPR for area under the PR (precision-recall) curve. For context, the ROC is a standard concept in engineering and statistics. Precision and recall are used frequently, but the PR curve is used much less than the ROC. Both are measures for judging the quality of a binary classifier. In the DREAM5 setting, this means the following: there is a gold standard of "positives", namely a set of edges in a network that should be predicted by a method, and the remainder of the edges are considered "negatives", i.e. they should not be predicted. A method generates a list of edges, sorted (ranked) in some way. As one proceeds through the list, one can measure the fraction of positives and false positives predicted. The ROC and PR curves measure the performance. A ROC is simply a plot showing the true positive rate of a method as a function of the false positive rate. Suppose that there are *m* positives in the gold standard out of a total of *n* edges. If one examines the top *k* predictions of a method, then among them there will be *t* "true" positives as well as *k−t* "false" positives. This will result in a single point on the ROC, namely the point ((k−t)/(n−m), t/m). This can be confusing at first glance for a number of reasons. First, the points do not necessarily form a function, e.g. there can be points with the same *x*-coordinate. Second, as one varies *k* one obtains a set of points, not a curve. The ROC is a *curve*, and is obtained by taking the envelope of all of the points for k = 0, 1, …, n. The following intuition is helpful in understanding ROC curves:

- The *x*-coordinate in the ROC is the false positive rate. If one doesn't make any predictions of edges at all, then the false positive rate is 0 (in the notation above, *k*=0, *t*=0). On the other hand, if all edges are considered to be "true", then the false positive rate is 1 and the corresponding point on the ROC is (1,1), which corresponds to *k*=*n*, *t*=*m*.
- If a method has no predictive power, i.e. the ranking of the edges tells you nothing about which edges really are true, then the ROC is the line *y*=*x*. This is because lack of predictive power means that truncating the list at any *k* results in the same proportion of true positives above and below the *k*th edge. A simple calculation shows that this corresponds to the point (k/n, k/n) on the ROC curve.
- ROC curves can be summarized by a single number that has meaning: the area under the ROC (AUROC). The observation above means that a method without any predictive power will have an AUROC of 1/2. Similarly, a "perfect" method, where the true edges are all ranked at the top, will have an AUROC of 1. AUROC is widely used to summarize the content of a ROC curve because it has an intuitive meaning: the AUROC is the probability that if a positive and a negative edge are each picked at random from the list of edges, the positive will rank higher than the negative.
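The bookkeeping above is easy to make concrete. Below is a short illustrative sketch (my own, not the DREAM5 evaluation code) that builds the ROC points ((k−t)/(n−m), t/m) from a ranked edge list and checks the two extreme cases:

```python
import numpy as np

def roc_points(ranked_labels, n, m):
    """ROC points ((k - t)/(n - m), t/m) for k = 0..n, where ranked_labels
    is a 0/1 array over the ranked edge list (1 = edge in the gold standard)."""
    t = np.concatenate([[0], np.cumsum(ranked_labels)])  # true positives at each cutoff k
    k = np.arange(n + 1)
    return (k - t) / (n - m), t / m   # (false positive rate, true positive rate)

def auroc(fpr, tpr):
    """Area under the ROC via the trapezoid rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

rng = np.random.default_rng(0)
n, m = 1000, 100
labels = np.zeros(n, dtype=int)
labels[:m] = 1

# A "perfect" method ranks all true edges first: AUROC = 1.
print(round(auroc(*roc_points(labels, n, m)), 6))   # 1.0

# A method with no predictive power ranks edges randomly: AUROC near 1/2.
shuffled = rng.permutation(labels)
print(round(auroc(*roc_points(shuffled, n, m)), 2))
```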

An alternative to the ROC is the precision-recall curve. Precision, in the notation above, is the value t/k, i.e. the number of true positives divided by the number of true positives plus false positives. Recall is the same as sensitivity, or true positive rate: it is t/m. In other words, the PR curve contains the points (t/m, t/k), as recall is usually plotted on the *x*-axis. The area under the precision-recall curve (AUPR) has an intuitive meaning just like the AUROC. It is the average precision across all recall values, or alternatively, the probability that if a "positive" edge is selected from the ranked list of the method, then an edge above it on the list will be "positive". Neither precision-recall curves nor AUPR are widely used. There is one problem with AUPR: its value depends on the number of positive examples in the dataset. For this reason, it doesn't make sense to average AUPR across datasets (while it does make sense for AUROC). For all of these reasons, I'm slightly uncomfortable with AUPR, but that is not the main issue in the DREAM5 analysis. I have included an example of ROC and PR curves below. I generated them for the method "GENIE3" tested by Feizi *et al.* This was the method with the best overall score. The figure below is for the *S. cerevisiae* dataset:

The ROC and PR curves before (top) and after (bottom) applying network deconvolution to the GENIE3 network.
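(An aside, before turning to the plots: the claim above that AUPR depends on the number of positives is easy to check numerically. In the illustrative sketch below, a method with no predictive power gets an AUPR of roughly m/n, so the same "random" performance looks ten times better when there are ten times as many positives, whereas AUROC stays near 1/2 in both cases.)

```python
import numpy as np

def pr_curve(ranked_labels, m):
    """Recall t/m and precision t/k at each cutoff k = 1..n."""
    t = np.cumsum(ranked_labels)
    k = np.arange(1, len(ranked_labels) + 1)
    return t / m, t / k

def area(x, y):
    """Area under a curve via the trapezoid rule."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))

rng = np.random.default_rng(1)
n = 1000
aupr = {}
for m in (50, 500):                      # few positives vs. many positives
    labels = np.zeros(n, dtype=int)
    labels[:m] = 1
    labels = rng.permutation(labels)     # random ranking: no predictive power
    recall, precision = pr_curve(labels, m)
    aupr[m] = area(recall, precision)
    print(m, round(aupr[m], 2))          # roughly m/n in each case
```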

The red curve in the ROC plots is what one would see for a method without any predictive power (point #2 above). In this case, what the plot shows is that GENIE3 is effectively ranking the edges of the network randomly. The PR curve shows that at all recall levels there is very little precision. **The difference between GENIE3 before and after network deconvolution is so small that it is indistinguishable in the plots.** I had to create separate plots before and after network deconvolution because the curves literally overlapped and were not visible together. **The conclusion from plots such as these should not be that there is statistical significance (in the difference between methods with/without network deconvolution, or in comparison to random), but rather that there is a negligible effect.** There is a final ingredient that is needed to constitute "score". Instead of just averaging AUROC and AUPR, both are first converted into *p*-values that measure the statistical significance of the method being different from random. The way this was done was to create random networks from the edges of the 35 methods, and then to measure their quality (by AUROC or AUPR) to obtain a distribution. The *p*-value for a given method was then taken to be the area under the probability density function to the right of the method's value. The graph below shows the pdf for AUROC from the *S. cerevisiae* DREAM5 data that was used by Feizi *et al.* to generate the scores:

Distribution of AUROC for random methods generated from the *S. cerevisiae *submissions in Marbach *et al.*

In other words, almost all random methods had an AUROC of around 0.51, so **any slight deviation from that was magnified in the computation of the p-value, and then by taking the (negative) logarithm of that number a very high "score" was produced**. The scores were then taken to be the average of the AUROC and AUPR scores. I can understand why Feizi *et al.* might be curious whether the difference between a method's performance (before and after network deconvolution) is significantly different from random, but to replace magnitude of effect with statistical significance in this setting, with such small effect sizes, is to completely mask the fact that the methods are hardly distinguishable from random in the first place. To make concrete the implication of reporting statistical significance instead of effect size, I examined the "significant" improvement of network deconvolution on the *S. cerevisiae* and other datasets when run with the protein parameters rather than the defaults (second figure above). Below I show the AUROC and AUPR plots for the datasets.

The Feizi *et al.* results before and after network deconvolution using alpha=1, beta=0.99 (shown with AUROC).

The Feizi *et al.* results before and after network deconvolution using alpha=1, beta=0.99 (shown with AUPR).
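The magnification at work in this scoring scheme can be illustrated with a toy calculation (the width of the null distribution below is invented purely for illustration; the real null came from the random networks described above):

```python
from math import log10
from statistics import NormalDist

# A tightly concentrated null: random methods get an AUROC of about 0.51.
# The sigma here is a made-up illustration, not the DREAM5 null width.
null = NormalDist(mu=0.51, sigma=0.002)

def score(auroc_value):
    """-log10 of the right-tail p-value, in the style of the DREAM5 'score'."""
    return -log10(1.0 - null.cdf(auroc_value))

for a in (0.512, 0.515, 0.520):
    print(a, round(score(a), 1))
# AUROC differences of a few thousandths translate into "scores"
# that differ by orders of magnitude.
```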

My conclusion was that **the use of "score" was basically a red herring**. **What looked like major differences between methods turn out to be tiny effects in the experimental datasets, and even the *in silico* variations are greatly diminished**. Differences in AUROC of one part in 1,000 hardly seem a reasonable basis for concluding that network deconvolution works. Biologically, both results say the same thing: the methods cannot reliably predict edges in the network. With usable network deconvolution code at hand, I was curious about one final question. The main result of the DREAM5 paper

- D. Marbach *et al.*, Wisdom of Crowds for Robust Gene Network Inference, *Nature Methods* **9** (2012), 796–804.

was that the community method was best. So I wondered whether network deconvolution would improve it. In particular, the community result shown in Feizi *et al.* was not a test of network deconvolution; it was simply a construction of the community from the 9 methods tested (two communities were constructed, one before and one after network deconvolution). To perform the test, I went to examine the DREAM5 data, available as supplementary material with the paper. I was extremely impressed with its reproducibility. The participant submissions are all available, together with scripts that can be used to quickly obtain the results of the paper. However, the data is not very usable. For example, what is provided is the top 100,000 edges that each method produced; if one wants the full prediction of a method, it is not available. The implication of this in the context of network deconvolution is that it is not possible to test network deconvolution on the DREAM5 data without thresholding. Furthermore, in order to evaluate the edges, absolute values were applied to all the edge weights. Again, this makes the data much less useful for further experiments one may wish to conduct. In other words, DREAM5 is reproducible but not very usable. But since Feizi *et al.* suggest that network deconvolution can literally be run on anything with "indirect effect", I decided to give it a spin. I did have to threshold the input (although fortunately, Feizi *et al.* have assured us that this is a fine way to run network deconvolution), so the experiment is entirely reasonable in terms of their paper. The figure is below (produced with the default network deconvolution parameters), but before looking at it, please accept my apology for making it. I really think it's the most incoherent, illogical, meaningless and misleading figure I've ever made. But it does abide by the spirit of network deconvolution:

The DREAM5 results before and after network deconvolution.

Alas, **network deconvolution decreases the quality of the best method, namely the community method**.

**The wise crowds have been dumbed down.** In fact, 19/36 methods become worse, 4 stay the same, and only 13 improve. Moreover, network deconvolution decreases the quality of the top method in each dataset. The only methods with consistent improvements when network deconvolution is applied are the mutual information and correlation methods, poor performers overall, that Feizi *et al.* ended up focusing on. I will acknowledge that one complaint (of the many possible) about my plot is that the overall results are dominated by the *in silico* dataset. True, and I've tried to emphasize that by setting the y-axis to be the same in each dataset (unlike Feizi *et al.*). But I think it's clear that no matter how the datasets are combined into an overall score, the result is that network deconvolution is **not** consistently improving methods. All of the analyses I've done were made possible thanks to the improved usability of network deconvolution. It is unfortunate that the result of the analyses is that network deconvolution should not be used. Still, I think this example makes a good case for the fact that

**reproducibility is essential, but usability is more important.**

This is the second in a series of three posts (part 1, part 3) on two back-to-back papers published in *Nature Biotechnology* in August 2013:

- Baruch Barzel & Albert-László Barabási, Network link prediction by global silencing of indirect correlations, *Nature Biotechnology* **31**(8), 2013, p 720–725. doi:10.1038/nbt.2601
- Soheil Feizi, Daniel Marbach, Muriel Médard & Manolis Kellis, Network deconvolution as a general method to distinguish direct dependencies in networks, *Nature Biotechnology* **31**(8), 2013, p 726–733. doi:10.1038/nbt.2635

The Barzel & Barabási paper, which we discussed in yesterday's blog post, is embarrassing for its math, shoddy "validations" and lack of results. However, of the two papers above it is arguably the "better" one. That is because the paper by Soheil Feizi, Daniel Marbach, Muriel Médard & Manolis Kellis, in addition to having similar types of problems to the Barzel & Barabási paper, is also **dishonest** and **fraudulent**. For reasons that I will explain in the third and final post in this series, we (Nicolas Bray and I) started looking at the Feizi *et al.* paper shortly after it was published early in August 2013. This is the story:

Feizi *et al.*'s paper describes a method called *network deconvolution* that in their own words provides "…a systematic method for inferring the direct dependencies in a network, corresponding to true interactions, and removing the effects of transitive relationships that result from indirect effects." They claim that the method is a "foundational graph theoretic tool" and that it "is widely applicable for computing dependencies in network science across diverse disciplines." This highbrow language makes network deconvolution sound very impressive, but what *exactly* is the method?

Feizi *et al.* would like you to believe that the method is what is described in their Figure 1, part of which is shown below:

This is a model for networks represented as matrices. In this model an unknown matrix *G_dir*, with eigenvalues between -1 and 1, is to be inferred from an observed matrix *G_obs* that is related to *G_dir* via

G_obs = G_dir + G_dir^2 + G_dir^3 + ⋯ = G_dir (I − G_dir)^(−1). (1)

The matrix *G_dir* represents "direct effects" and the sum of its higher powers "indirect effects". It is probably worth noting at this point that there is no particular reason to believe that effects will behave in this manner in any particular system of interest, but we return to this later. The eigenvalue assumption on *G_dir* is required for the specification of the model, because without it the infinite sum generating *G_obs* does not converge. An elementary manipulation of (1) shows that the matrix *G_dir* can be recovered from *G_obs* by the formula

G_dir = G_obs (I + G_obs)^(−1). (2)

Unfortunately, many matrices cannot be decomposed as an infinite sum of powers of some matrix as in equation (1), so **equation (2) cannot be applied directly to arbitrary data matrices.** The only mention of this issue in the main text of the paper is the statement that "This [eigenvalue] assumption can be achieved for any matrix by scaling the observed input network by a function of the magnitude of its eigenvalues." This is true but incoherent. There are an infinite number of scalings that will satisfy the assumption, and while **the authors claim in their FAQ** that "the effect of linear scaling on the input matrix is that … it does not have an effect" this **is obviously false** (also, if it has no effect, why do they do it?). For example, as the scaling factor γ goes to zero, the output of equation (2) applied to γ*G_obs* becomes proportional to *G_obs* itself, i.e. nothing is deconvolved at all.
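A quick numerical check (taking the model exactly as written in equations (1) and (2)) confirms both points: (2) inverts (1) perfectly when the eigenvalue assumption holds, but once the observed matrix is rescaled by a small factor, the "deconvolved" output is essentially just the observed matrix again:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
G_dir = rng.normal(size=(n, n))
G_dir *= 0.5 / np.abs(np.linalg.eigvals(G_dir)).max()   # eigenvalues inside (-1, 1)

I = np.eye(n)
G_obs = G_dir @ np.linalg.inv(I - G_dir)        # (1): G_dir + G_dir^2 + ...
recovered = G_obs @ np.linalg.inv(I + G_obs)    # (2)
print(np.allclose(recovered, G_dir))            # True: (2) inverts (1) exactly

# Now scale the observed matrix by a small factor before applying (2):
gamma = 1e-6
nd = (gamma * G_obs) @ np.linalg.inv(I + gamma * G_obs)
print(np.allclose(nd / gamma, G_obs, atol=1e-4))  # True: output is ~ G_obs itself
```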

**What the authors have actually done with their seemingly innocuous aside is fundamentally change their model**: now instead of (1), *G_obs* is given by

G_obs = (1/γ) (G_dir + G_dir^2 + G_dir^3 + ⋯). (3)

The problem with this model is that given *G_obs* there are an infinite number of solutions for γ and *G_dir*. Feizi *et al.*'s approach to dealing with this is to introduce a scaling parameter that must be chosen *a priori*. **They do not even mention the existence of this parameter anywhere in the main text. Instead**, **they choose to make a false statement in the caption of Figure 1** when they write "When these assumptions hold, network deconvolution removes all indirect flow effects and infers all direct interactions and weights exactly." Even when *G_obs* satisfies the eigenvalue constraint, once it is scaled before applying equation (2) the matrix *G_dir* has probability zero of being recovered.

In the video below, recorded at the Banff workshop on Statistical Data Integration Challenges in Computational Biology: Regulatory Networks and Personalized Medicine (August 11–16, 2013), you can see an explanation of network deconvolution by the corresponding author Kellis himself. The first part of the clip is worth watching just to see Kellis describe inverting a matrix as a challenge and then explain the wrong way to do it. But the main point is at the end (best viewed full screen with HD):

Manolis Kellis received his B.S., M.S. and Ph.D. degrees in computer science and electrical engineering from MIT, so it is hard to believe that he really thinks that solving (2), which requires nothing more than a matrix inversion, must be accomplished via eigendecomposition. In fact, inverting a *2000 × 2000* matrix in R is 50% slower using that approach. What is going on is that Kellis is hiding the fact that computation of the eigenvalues is used in Feizi *et al.* in order to set the scaling parameter, i.e. that he is actually solving (3) and not (2). Indeed, there is no mention of scaling in the video except for the mysteriously appearing footnote in the lower left-hand corner of the slide starting at 0:36 that is flashed for 2 seconds. **Did you have time to think through all the implications of the footnote in 2 seconds, or were you fooled?**
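For what it's worth, the comparison is easy to run yourself (a sketch, with absolute timings being machine dependent; for a symmetric matrix the two routes give the same answer):

```python
import time
import numpy as np

rng = np.random.default_rng(3)
n = 2000
G = rng.normal(size=(n, n))
G = (G + G.T) / 2
G /= 3 * np.sqrt(n)        # keep eigenvalues safely inside (-1, 1)
I = np.eye(n)

t0 = time.perf_counter()
direct = G @ np.linalg.inv(I + G)     # equation (2): one matrix inversion
t1 = time.perf_counter()

w, V = np.linalg.eigh(G)              # the eigendecomposition route
via_eig = (V * (w / (1 + w))) @ V.T   # G (I + G)^(-1) = V diag(w/(1+w)) V^T
t2 = time.perf_counter()

print(np.allclose(direct, via_eig))   # True: same answer either way
print(f"inversion: {t1 - t0:.2f}s, eigendecomposition: {t2 - t1:.2f}s")
```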

While it does not appear in the main text, the key scaling parameter in network deconvolution is acknowledged in the supplementary material of the paper (published with the main paper online July 14, 2013). In Supplementary Figure 4, Feizi *et al.* show the performance of network deconvolution with different values of the scaling parameter (β) on the three different datasets examined in the paper:

In the words of the authors, the point of this figure is that "…choosing β close to one (i.e., considering higher order indirect interactions) leads to the **best** performance in all considered network deconvolution applications." However, while the supplement revealed the existence of β, it did not disclose the values used for the results in the paper.

We inquired with the authors and were surprised to discover that while β = 0.99 was used for the protein networks and β = 0.95 for the co-authorship network, β = 0.5 was used for the DREAM5 regulatory networks, violating their own advice. What rationale could there be for such a different choice, especially one very far from the apparently optimal choice "near 1"? We asked the authors, who initially replied that the parameter setting didn't make a difference. We then asked why the parameter would be set differently if its value didn't make a difference; we never got an answer to this question.

**Although it may be hard to believe, this is where the story gets even murkier.**

Perhaps as a result of our queries, the scaling parameters were publicly disclosed by Feizi *et al.* in a correction to the original supplement posted on the Nature Biotechnology website on August 26, 2013. The correction states "Clarification has been made to Supplementary Notes 1.1, 1.3, 1.6 and Supplementary Figure 4 about the practical implementation of network deconvolution and parameter selection for application to the examples used in the paper." But **Supplementary Figure 4 was not clarified, it was changed, a change not even acknowledged by the authors, let alone explained.** Below is Figure S4 from the updated supplement for comparison with the one from the original supplement shown above.

The revised figure is qualitatively and quantitatively different from its predecessor. **A key point of the paper was changed: the scaling parameter is no longer seen to be optimal near 1, but rather dataset dependent** (this is reflected in a corresponding change in the text: in reference to choosing β close to 1, "best performance" was changed to "high performance"). Specifically, the regulatory network dataset now has an optimal value of β close to 0.5, perhaps providing an answer to our question above about why β = 0.5 was used for that dataset alone. However, that explanation directly implies that the authors knew the original figure was incorrect. We are left with two questions:

1. If the authors believed their original figure and their original statement that β close to 1 "leads to the best performance in all considered network deconvolution applications", then why did they use β = 0.5 for one of the applications?

2. When and how was the July 16th Figure S4 made?

**Manolis Kellis declined to answer question 1 and by not acknowledging the figure change in the correction did not permit the readers of Nature Biotechnology to ask question 2. **

Clearly we have fallen a long way from the clean and canonical-seeming Figure 1. We wish we could say we have reached the bottom, but there is still a ways to go. **It turns out that “network deconvolution” is actually neither of the two models (2) and (3) above, a fact revealed only upon reading the code distributed by the authors. **Here is what is in the code:

- affinely map the entries of the matrix to lie between 0 and 1,
- set the diagonal of the matrix to 0,
- threshold the matrix, keeping only the largest fraction α of entries,
- symmetrize the matrix,
- scale the matrix so that *G_dir* inferred in the next step will have maximum eigenvalue β,
- apply formula (2),
- affinely map between 0 and 1 again.
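Our best reconstruction of these steps in code is below. To be clear, this is a sketch of our reading of the procedure, not the authors' released implementation; in particular, the scaling in step 5 is one plausible interpretation of "maximum eigenvalue β".

```python
import numpy as np

def network_deconvolution_sketch(G, alpha=1.0, beta=0.9):
    """Steps 1-7 as we understand them (a reconstruction, not the released code).
    alpha: fraction of entries kept in step 3; beta: target maximum eigenvalue."""
    # 1. affinely map the entries to [0, 1]
    G = (G - G.min()) / (G.max() - G.min())
    # 2. set the diagonal to 0
    np.fill_diagonal(G, 0.0)
    # 3. threshold, keeping only the largest fraction alpha of entries
    if alpha < 1.0:
        G = np.where(G >= np.quantile(G, 1.0 - alpha), G, 0.0)
    # 4. symmetrize
    G = (G + G.T) / 2.0
    # 5. scale so the matrix inferred in step 6 has maximum eigenvalue beta
    lam = np.linalg.eigvalsh(G)
    lam_p, lam_n = max(lam.max(), 0.0), abs(min(lam.min(), 0.0))
    G = G / (max(lam_p * (1 - beta), lam_n * (1 + beta)) / beta)
    # 6. apply formula (2)
    G = G @ np.linalg.inv(np.eye(len(G)) + G)
    # 7. affinely map to [0, 1] again
    return (G - G.min()) / (G.max() - G.min())

out = network_deconvolution_sketch(
    np.random.default_rng(5).normal(size=(40, 40)), alpha=0.5, beta=0.9)
print(out.shape, out.min() >= 0.0, out.max() <= 1.0)
```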

The parameter β is the scaling parameter we've already discussed, but **there is a second parameter**, α. When the Feizi *et al.* paper was first published, the parameter α appeared in the code with default value 1, and was literally not mentioned in the paper. When asked, the authors revealed that it also takes on different values in the different experiments in the paper. In particular, the DREAM5 regulatory networks experiments used α = 0.1. The only guidance for how to set α, now provided with the new version of the supplement, is "In practice, the threshold chosen for visualization and subsequent analysis will depend on the specific application."

It is hard to say what steps 1–7 actually do. While it is impossible to give a simple analytic formula for the procedure as a whole, using the Sherman-Morrison formula we found that when applied to a correlation matrix *C*, steps 1, 2, 4, 6 and 7 produce a matrix whose *ij*th entry is (up to an affine mapping) the entry *P_ij* of the inverse correlation matrix plus two additional terms involving *m*, the minimum entry of *C*. Omitting step 1 results in exactly *P_ij*, the *ij*th entry of the inverse correlation matrix, so the effect of the mapping in this case is the addition of the final two terms, whose possible meaning escapes us.

**We are left to wonder why the authors chose to add this step, which lacks any apparent theoretical justification**. This brings us to question the meaning of the contribution of author Muriel Médard, a professor of information theory at MIT. According to the contributions section she is the one who "contributed to correctness proof and robustness analysis". Presumably "correctness" meant describing the two steps needed to show that the solution to (1) is (2). But what would actually be relevant to the paper is a theorem about steps 1–7. Unfortunately such a theorem is hard to imagine.

When Feizi *et al.* was published in August 2013, they set up a companion website at http://compbio.mit.edu/nd/ where the code and data was made available (the data is no longer available at the site). On December 4, 2013, the authors updated the website. While there used to be one version of the program, there are now two different versions: one specifically for use on regulatory networks, where the default values for the parameters are α = 0.1 and β = 0.5, and the other with defaults α = 1 and β = 0.9. **"Network deconvolution" is supposedly a universal method applicable to any kind of network. Will the authors continue to distribute different domain-specific versions of the program, each with their own "default" parameters somehow divined by themselves?**

To summarize, “network deconvolution”, as best as we can currently tell, consists of steps 1–7 above with the parameters:

- DREAM5 challenge: α = 0.1 and β = 0.5.
- Protein network: α = 1 and β = 0.99.
- Co-authorship: α = 0.95 and β = 0.95.

The description of the method in the main text of the paper is different from what is in the online methods, which in turn is different from what is in the supplement, which in turn is different from what is actually in each of the (two) released programs, namely the ad-hoc heuristic steps we have described above.

Having said all of that, one might be curious whether the "method" actually works in practice. Sometimes heuristics work well for reasons that are not well understood (although they are usually not published in journals such as Nature Biotechnology). Unfortunately, we have to disclose that we don't know how the method performs on the datasets in the paper. We tried really hard to replicate the results of the paper, running the software on the provided input matrices and comparing the output to the distributed output matrices (available on the NBT website). Despite our best attempts, and occasionally getting very close results, **we have been unable to replicate the results of the paper**. This bothers us to such an extent that **I, Lior Pachter, am hereby offering $100** to anyone (including Feizi *et al.*) who can provide me with the code and input to run network deconvolution and produce exactly the figures in the paper (including *both* versions of Figure S4) and the output matrices in the supplementary data.

Because we couldn’t replicate the results of the paper, we decided to examine how FK performs in one case for which Feizi *et al.* originally claimed (supplement version July 14, 2013) that their method is optimal. They stated: “if the observed network is a covariance matrix of jointly Gaussian variables, ND infers direct interactions using global partial correlations”. This claim was deleted without mention in the updated supplement; however, even if network deconvolution is not optimal for inferring a network from a correlation matrix (presumably what they meant in the original statement), the claimed universality suggested in the paper implies that it should be very good on *any* input.
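For reference, the “global partial correlations” invoked in the deleted claim come directly from the inverse of the covariance matrix: writing Ω = Σ⁻¹, the partial correlation between variables i and j is −Ω_ij/√(Ω_ii Ω_jj), and in a Gaussian graphical model a zero partial correlation means no direct edge. A minimal sketch (the function name is ours):

```python
import numpy as np

def partial_correlations(cov):
    """Partial correlation matrix from a covariance matrix.

    In a Gaussian graphical model, partial_correlations(cov)[i, j] == 0
    exactly when variables i and j are conditionally independent given
    all the others, i.e. when there is no direct edge between them.
    """
    prec = np.linalg.inv(cov)        # precision matrix Omega = Sigma^{-1}
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)    # -Omega_ij / sqrt(Omega_ii * Omega_jj)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Example: a 3-variable Gaussian Markov chain X1 -> X2 -> X3 with
# corr(X1,X2) = corr(X2,X3) = 0.5, so corr(X1,X3) = 0.25 but the
# partial correlation of X1 and X3 given X2 is exactly zero.
cov = np.array([[1.0, 0.5, 0.25],
                [0.5, 1.0, 0.5],
                [0.25, 0.5, 1.0]])
pcor = partial_correlations(cov)
```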

The figure above shows a comparison of the Feizi *et al.* method (FK), with the various beta parameters used in their paper, to regularized partial correlation, the “geometric root” (equation (2)) and the naïve method of simply selecting the top entries of the correlation matrix. To test the method we performed 40 simulations, each sampling 500 observations from a random Gaussian graphical model with 1000 variables and an edge density of 5%, chosen to ensure the graph was connected yet sparse. Performance was assessed by comparing the ranking of the edges to the true edges present in the model and computing the area under the corresponding ROC curve. As a benchmark we applied partial correlation structural inference based on a James-Stein shrinkage estimation of the covariance matrix (a standard approach to gene regulatory network inference, described in Schäfer, J. & Strimmer, K., A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics, *Statistical Applications in Genetics and Molecular Biology* **4** (2005)).
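A stripped-down version of this simulation can be sketched as follows. The dimensions, sparsity, ridge constant and seed here are our own illustrative choices (smaller than the 1000-variable, 500-sample runs above), and a plain ridge-regularized partial-correlation baseline stands in for the James-Stein shrinkage estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, density = 50, 200, 0.05   # variables, samples, edge density

# Build a sparse, positive-definite precision matrix: its nonzero
# off-diagonal entries define the true edges of the graphical model.
prec = np.zeros((p, p))
mask = np.triu(rng.random((p, p)) < density, 1)
prec[mask] = rng.uniform(-1.0, 1.0, mask.sum())
prec = prec + prec.T
np.fill_diagonal(prec, np.abs(prec).sum(axis=1) + 1.0)  # diagonal dominance => PD

cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)   # n observations

def auc(scores, truth):
    """Area under the ROC curve via the Mann-Whitney rank-sum formula."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = truth.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

iu = np.triu_indices(p, 1)
truth = prec[iu] != 0

# Naive baseline: rank candidate edges by |sample correlation|.
corr = np.corrcoef(X.T)
# Partial-correlation baseline with crude ridge regularization.
sample_prec = np.linalg.inv(np.cov(X.T) + 0.1 * np.eye(p))
d = np.sqrt(np.diag(sample_prec))
pcor = -sample_prec / np.outer(d, d)

print("AUC, correlation ranking:         %.3f" % auc(np.abs(corr[iu]), truth))
print("AUC, partial-correlation ranking: %.3f" % auc(np.abs(pcor[iu]), truth))
```

Averaging the AUC over repeated draws of the model, as in the 40-simulation experiment above, gives the kind of benchmark numbers shown in the figure.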

That FK does better than the most naïve method and yet worse than a method actually designed for this task is perhaps to be expected for a heuristic method that is based on a metaphor (one that dates back to Sewall Wright’s work on path coefficients in the 1920s, although Wright used (1) in the restricted setting of directed acyclic graphs where it makes sense). It is possible that in some contexts FK will make things somewhat better, but what such contexts might be is unclear. The only guidance that Feizi *et al.* provide on the assumptions needed to run their method is the statement in the supplement that **the model assumes that networks are “linear time-invariant flow-preserving operators”**. While it is true that individual parts of that phrase mean something in certain contexts, the complete phrase is word salad. We wonder: is co-authorship flow preserving?

**To conclude: Feizi et al. have written a paper that appears to be about inference of edges in networks based on a theoretically justifiable model but**

- **the method used to obtain the results in the paper is completely different than the idealized version sold in the main text of the paper**, and
- **the method actually used has parameters that need to be set, yet no approach to setting them is provided.** Even worse,
- **the authors appear to have deliberately tried to hide the existence of the parameters.** It looks like
- **the reason for covering up the existence of parameters is that the parameters were tuned to obtain the results.** Moreover,
- **the results are not reproducible. The provided data and software is not enough to replicate even a single figure in the paper.** This is disturbing because
- **the performance of the method on the simplest of all examples, a correlation matrix arising from a Gaussian graphical model, is poor.**

**In academia the word fraudulent is usually reserved for outright forgery. However given what appears to be deliberate hiding, twisting and torturing of the facts by Feizi et al., we think that fraud (“deception deliberately practiced in order to secure unfair or unlawful gain”) is a reasonable characterization. If the paper had been presented in a straightforward manner, would it have been accepted by Nature Biotechnology?**

*Post scriptum*. After their paper was published, Feizi *et al.* issued some press releases. They explain how original and amazing their work is. Kellis compliments himself by explaining that “Introducing such a foundational operation on networks seems surprising in this day and age” and Médard adds that “the tool can be applied to networks of arbitrary dimension.” They describe the method itself as a way to have “…expressed the matrix representing all element correlations as a function of its principal components, and corresponding weights for each component” even though principal component analysis has *nothing* to do with anything in their paper. As with Barzel-Barabási, this seems worthy of nomination for the Pressies.

*Post post scriptum*: In September 2013, we submitted a short commentary to Nature Biotechnology with many of the points made above. While it was rejected four months later, we really *really* wish we could share the reviews here. Unfortunately they were sent to us with a confidentiality restriction, though having said that, any of the reviewers are hereby invited to guest post on network science on this blog. We were disappointed that Nature Biotechnology would not publish the commentary, but not surprised. Frankly, it would have been too much to expect them and their reviewers to unravel the layers of deception in Feizi *et al.* in the initial review process; it took us an extraordinary amount of time and effort. Having published the paper, and without a clear path to unpublication (retraction is usually reserved for faking of experimental data), there were presumably not many options. The question is whether anything will change. Feizi *et al.* apparently have more Nature Biotechnology papers on the way, as evidenced by an extraordinary declaration on Soheil Feizi’s publication page (this is a pdf of the version when this blog was posted; Feizi changed his site an hour later) where the publication venue of their next paper appears to have been preordained:

S. Feizi, G. Quon , M. Mendoza, M. Medard, M. Kellis

*Comparative Analysis of Modular Integrative Regulatory Networks across Fly, Worm and Human*

`in preparation for Nature Biotechnology.`

The term “in preparation” is common on academic websites, especially on sites of graduate students or postdocs who are eager to advertise forthcoming work, but “in preparation **for**” really requires some hubris. We have to wonder, why do Feizi *et al. *assume that peer review is just a formality in the journal Nature Biotechnology?

[Addendum 2/23: The authors posted a rebuttal to this blog post, which we replied to here.]