Steven Miller is a math professor at Williams College who specializes in number theory and theoretical probability. A few days ago he published a “declaration” in which he performs an “analysis” of phone bank data of registered Republicans in Pennsylvania. The data was provided to him by Matt Braynard, who led Trump’s data team during the 2016 campaign. Miller frames his “analysis” as an attempt to “estimate the number of fraudulent ballots in Pennsylvania”, and his analysis of the data leads him to conclude that

“almost surely…the number of ballots requested by someone other than the registered Republican is between 37,001 and 58,914, and almost surely the number of ballots requested by registered Republicans and returned but not counted is in the range from 38,910 to 56,483.”

A review of Miller’s “analysis” leads me to conclude that his estimates are fundamentally flawed and that the data as presented provide no evidence of voter fraud.

This conclusion is easy to arrive at. The declaration claims (without a reference) that there were 165,412 mail-in ballots requested by registered Republicans in PA that “had not arrived to be counted” as of November 16th, 2020. The data Miller analyzed was based on an attempt to call some of these registered Republicans by phone to assess what happened to their ballots. The number of phone calls made, according to the declaration, is 23,184 = 17,000 + 3,500 + 2,684. The number 17,000 consists of phone calls that provided no information, either because an answering machine picked up instead of a person or because the person who picked up summarily hung up. 3,500 numbers were characterized as “bad numbers / language barrier”, and 2,684 individuals answered the phone. Curiously, Miller writes that “Almost 20,000 people were called”, when in fact 23,184 > 20,000.

In any case, clearly many of the phone numbers dialed were simply wrong numbers, as evidenced by the number of “bad” calls: 3,500. It’s easy to imagine how this can happen: confusion because some individuals share a name, phone numbers have changed, people have moved, the phone bank makes an error when dialing, etc. Let b be the fraction of phone numbers out of the 23,184 that were “bad”, i.e. incorrect. We can estimate b by noting that we have some information about it: we know that the 3,500 “bad numbers” were bad (by definition). Additionally, it is reported in the declaration that 556 people literally said that they did not request a ballot, and there is no reason not to take them at their word. We don’t know what fraction of the 17,000 individuals who were called but did not pick up or hung up were wrong numbers, but if that fraction is also b, then the total number of bad numbers must equal the number of bad numbers among the 17,000 plus those we know for sure were bad, i.e.

23,184 \cdot b = 17,000 \cdot b + 556 + 3,500.

Solving for b we find that b \approx \frac{2}{3}. I’m surprised the number is so low. One would expect that individuals who requested ballots but then didn’t send them in would be enriched for people who have recently moved or are in the process of moving, or who have other issues making it difficult or impossible to reach them.

The derived fraction of bad numbers translates to about 1,700 bad numbers among the 2,684 people who were reached. This easily explains not only the 556 individuals who said they did not request a ballot, but also the 463 individuals who said that they mailed back their ballots. In the case of the latter there is no irregularity; the number of bad calls suggests that all those individuals were reached in error and their ballots were legitimately counted, so they weren’t part of the 165,412. It also explains the 544 individuals who said they voted in person.
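For readers who want to check the arithmetic, here is a minimal sketch of the calculation in Python. The numbers are those reported in the declaration; the variable names, and the assumption that the bad-number rate among the 17,000 uninformative calls equals the overall rate b, are mine.

```python
# A sketch of the back-of-the-envelope calculation above.
total_calls = 17_000 + 3_500 + 2_684  # = 23,184 calls made
no_info     = 17_000                  # answering machine, or picked up and hung up
bad_listed  = 3_500                   # "bad numbers / language barrier"
reached     = 2_684                   # answered and gave information
no_request  = 556                     # reached, but said they never requested a ballot

# Assumption: the fraction of bad numbers among the 17,000 uninformative
# calls equals the overall fraction b, so
#   total_calls * b = no_info * b + no_request + bad_listed
b = (no_request + bad_listed) / (total_calls - no_info)
print(f"estimated fraction of bad numbers: b = {b:.3f}")              # ~0.656, about 2/3

print(f"implied bad numbers among those reached: {b * reached:.0f}")  # roughly 1,700-1,800

# Responses the declaration treats as anomalies: did not request a ballot,
# mailed the ballot back, or voted in person
print(f"responses to be explained: {no_request + 463 + 544}")         # 1,563
```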

That’s it. The data don’t point to any fraud or irregularity, just a poorly designed poll with poor response rates and lots of erroneous information due to bad phone numbers. There is nothing to explain. Miller, on the other hand, has some things to explain.

First, I note that his declaration begins with a signed page asserting various facts about Steven Miller and the analysis he performed. Notably absent from that page, or anywhere else in the document, is a disclosure of the funding source for the work or of conflicts of interest. On his work webpage, Miller specifically states that one should always acknowledge funding support.

Second, if Miller really wanted to understand the reason why some ballots were requested for mail-in, but had not yet arrived to be counted, he would also obtain data from Democrats. That would provide a control on various aspects of the analysis, and help to establish whether irregularities, if they were to be detected, were of a partisan nature. Why did Miller not include an analysis of such data?

Third, one might wonder why Steven Miller chose to publish this “declaration”. Surely a professor who has taught probability and statistics for 15 years (as Miller claims he has) must understand that his own “analysis” is fundamentally flawed, right? Then again, I’ve previously found that excellent pure mathematicians are prone to falling into a data analysis trap, i.e. a situation where their lack of experience analyzing real-world datasets leads them to believe a naïve analysis that is deeply flawed. To better understand whether this might be the case with Miller, I examined his publication record, which he has shared publicly via Google Scholar, to see whether he has worked with data. The first thing I noticed was that he has published more than 700 articles (!) and has an h-index of 47 for a total of 8,634 citations… an incredible record for any professor, and especially for a mathematician. A Google search for his name prominently displays this impressive citation count.

As it turns out, his impressive publication record is a mirage. When I took a closer look, I found that many of the papers he lists on his Google Scholar page are not his, but rather articles published by other authors with the name S Miller. “His” most cited article was published in 1955, a year well before he was born. Miller’s own most cited paper is a short unpublished tutorial on least squares (I was curious and reviewed it as well, only to find some inaccuracies, but hey, I don’t work for this guy).

I will note that in creating his Google Scholar page, Miller did not just enter his name and email address (required). He went to the effort of customizing the page, including the addition of keywords and a link to his homepage, and in doing so followed his own general advice to curate one’s CV (strangely, he also dispenses advice on job interviews, including about shaving; I guess only women interview for jobs?). But I digress: the question is, why does his Google Scholar page display massively inflated publication statistics based on papers that are not his? I’ve seen this before, and in one case where I had hard evidence that it was done deliberately to mislead, I reported it as fraud. Regardless of Miller’s motivations, by looking at his actual publications I confirmed what I suspected, namely that he has hardly any experience analyzing real-world data. I’m willing to chalk up his embarrassing “declaration” to statistics illiteracy and naïveté.

In summary, Steven Miller’s declaration provides no evidence whatsoever of voter fraud in Pennsylvania.

Lior Pachter
Division of Biology and Biological Engineering &
Department of Computing and Mathematical Sciences
California Institute of Technology

Abstract

A recently published pilot study on the efficacy of 25-hydroxyvitamin D3 (calcifediol) in reducing ICU admission of hospitalized COVID-19 patients concluded that the treatment “seems able to reduce the severity of disease, but larger trials with groups properly matched will be required to show a definitive answer”. In a follow-up paper, Jungreis and Kellis re-examine this so-called “Córdoba study” and argue that the authors of the study have undersold their results. Based on a reanalysis of the data in a manner they describe as “rigorous” and using “well established statistical techniques”, they urge the medical community to “consider testing the vitamin D levels of all hospitalized COVID-19 patients, and taking remedial action for those who are deficient.” Their recommendation is based on two claims: in an examination of unevenness in the distribution of one of the comorbidities between cases and controls, they conclude that there is “no evidence of incorrect randomization”, and they present a “mathematical theorem” to make the case that the effect size in the Córdoba study is significant to the extent that “they can be confident that if assignment to the treatment group had no effect, we would not have observed these results simply due to chance.”

Unfortunately, the “mathematical analysis” of Jungreis and Kellis is deeply flawed, and their “theorem” is vacuous. Their analysis cannot be used to conclude that the Córdoba study shows that calcifediol significantly reduces ICU admission of hospitalized COVID-19 patients. Moreover, the Córdoba study is fundamentally flawed, and therefore there is nothing to learn from it.

The Córdoba study

The Córdoba study, described by the authors as a pilot, was ostensibly a randomized controlled trial, designed to determine the efficacy of 25-hydroxyvitamin D3 in reducing ICU admission of hospitalized COVID-19 patients. The study consisted of 76 patients hospitalized for COVID-19 symptoms, with 50 of the patients treated with calcifediol, and 26 not receiving treatment. Patients were administered “standard care”, which according to the authors consisted of “a combination of hydroxychloroquine, azithromycin, and for patients with pneumonia and NEWS score ≥ 5, a broad spectrum antibiotic”. Crucially, admission to the ICU was determined by a “Selection Committee” consisting of intensivists, pulmonologists, internists, and members of an ethics committee. The Selection Committee based ICU admission decisions on the evaluation of several criteria, including presence of comorbidities, and the level of dependence of patients according to their needs and clinical criteria.

The result of the Córdoba trial was that only 1/50 of the treated patients was admitted to the ICU, whereas 13/26 of the untreated patients were admitted (p-value = 7.7 \times 10^{-7} by Fisher’s exact test). This is a minuscule p-value but it is meaningless. Since there is no record of the Selection Committee deliberations, it is impossible to know whether the ICU admission of the 13 untreated patients was due to their previous high blood pressure comorbidity. Perhaps the 11 treated patients with the comorbidity were not admitted to the ICU because they were older, and the Selection Committee considered their previous higher blood pressure to be more “normal” (14/50 treatment patients were over the age of 60, versus only 5/26 of the untreated patients).
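The reported p-value is easy to check. Below is a minimal sketch using scipy’s implementation of Fisher’s exact test on the 2×2 table implied by the study (1/50 treated vs. 13/26 untreated patients admitted to the ICU); the table and the code are my own construction, and the printed values should reproduce the figure quoted above up to rounding.

```python
from scipy.stats import fisher_exact

# Rows: treated, untreated; columns: admitted to ICU, not admitted
table = [[1, 49],
         [13, 13]]

_, p_two_sided = fisher_exact(table, alternative="two-sided")
_, p_one_sided = fisher_exact(table, alternative="less")  # treated have fewer ICU admissions

print(f"two-sided p = {p_two_sided:.1e}")  # ~7.7e-07, the value reported in [1]
print(f"one-sided p = {p_one_sided:.1e}")  # coincides with the two-sided value here
```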

Figure 1: Table 2 from [1] showing the comorbidities of patients. It is reproduced by virtue of [1] being published open access under the CC-BY license.

The fact that admission to the ICU could be decided in part based on the presence of co-morbidities, and that there was a significant imbalance in one of the comorbidities, immediately renders the study results meaningless. There are several other problems with it that potentially confound the results: the study did not examine the Vitamin D levels of the treated patients, nor was the untreated group administered a placebo. Most importantly, the study numbers were tiny, with only 76 patients examined. Small studies are notoriously problematic, and are known to produce large effect sizes [9]. Furthermore, sloppiness in the study does not lead to confidence in the results. The authors state that the “rigorous protocol” for determining patient admission to the ICU is available as Supplementary Material, but there is no Supplementary Material distributed with the paper. There is also an embarrassing typo: Fisher’s exact test is referred to twice as “Fischer’s test”. To err once in describing this classical statistical test may be regarded as misfortune; to do it twice looks like carelessness.

A pointless statistics exercise

The Córdoba study has not received much attention, which is not surprising considering that by the authors’ own admission it was a pilot that at best only motivates a properly matched and powered randomized controlled trial. Indeed, the authors mention that such a trial (the COVIDIOL trial), with data being collected from 15 hospitals in Spain, is underway. Nevertheless, Jungreis and Kellis [3], apparently mesmerized by the 7.7 \times 10^{-7} p-value for ICU admission upon treatment, felt the need to “rescue” the study with what amounts to faux statistical gravitas. They argue for immediate consideration of testing Vitamin D levels of hospitalized patients, so that “deficient” patients can be administered some form of Vitamin D “to the extent it can be done safely”. Their message has been noticed; only a few days after [3] appeared, the authors’ tweet promoting it had been retweeted more than 50 times [8].

Jungreis and Kellis claim that the p-value for the effect of calcifediol on patients is so significant that, in and of itself, it merits belief that administration of calcifediol does, in fact, prevent admission of patients to ICUs. To make their case, Jungreis and Kellis begin by acknowledging that imbalance between the treated and untreated groups in the previous high blood pressure comorbidity may be a problem, but claim that there is “no evidence of incorrect randomization.” Their argument is as follows: they note that while the p-value for the imbalance in the previous high blood pressure comorbidity is 0.0023, it should be adjusted for the fact that there are 15 distinct comorbidities, and that just by chance, when computing so many p-values, one might be small. First, an examination of Table 2 in [1] (Figure 1) shows that there were only 14 comorbidities assessed, as none of the patients had previous chronic kidney disease. Thus, the number 15 is incorrect. Second, Jungreis and Kellis argue that a Bonferroni correction should be applied, and that this correction should be based on 30 tests (= 15 × 2). The reason for the factor of 2 is that they claim that when testing for imbalance, one should test for imbalance in both directions. By applying the Bonferroni correction to the p-values, they derive a “corrected” p-value of 0.069 for previous high blood pressure being imbalanced between the groups. They are wrong on several counts in deriving this number. To illustrate the problems, we work through the calculation step by step:

The question we want to answer is as follows: given that there are multiple comorbidities, is there a significant imbalance in at least one comorbidity? There are several ways to test for this, with the simplest being Šidák’s correction [10], given by

q \quad = \quad 1-(1-m)^n,

where m is the minimum p-value among the comorbidities, and n is the number of tests. Plugging in m = 0.0023 (the smallest p-value in Table 2 of [1]) and n = 14 (the number of comorbidities) one gets 0.032 (note that the Bonferroni correction used by Jungreis and Kellis is the Taylor approximation to the Šidák correction when m is small). The Šidák correction is based on an assumption that the tests are independent. However, that is certainly not the case in the Córdoba study. For example, having at least one prognostic factor is one of the comorbidities tabulated. In other words, the p-value obtained is conservative.

The calculation above uses n = 14, but Jungreis and Kellis reason that the number of tests is 30 = 15 × 2, to take into account an imbalance in either the treated or untreated direction. Here they are assuming two things: that two-sided tests for each comorbidity will produce double the p-value of a one-sided test, and that two-sided tests are the “correct” tests to perform. They are wrong on both counts. First, the two-sided Fisher exact test does not, in general, produce a p-value that is double that of the one-sided test. The study result is a good example: 1/50 treated patients admitted to the ICU vs. 13/26 untreated patients produces a p-value of 7.7 \times 10^{-7} for both the one-sided and two-sided tests. Jungreis and Kellis do not seem to know this can happen, nor understand why; they go to great lengths to explain the importance of conducting a one-sided test for the study result. Second, there is a strong case to be made that a one-sided test is the correct test to perform for the comorbidities. The concern is not whether there was an imbalance of any sort, but whether the imbalance would skew results by virtue of the study including too many untreated individuals with comorbidities. In any case, if one were to give Jungreis and Kellis the benefit of the doubt and perform a two-sided test, the corrected p-value for the previous high blood pressure comorbidity is 0.06 and not 0.069.
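To make the corrections concrete, here is a minimal sketch of the arithmetic behind the numbers above (m = 0.0023, the 14 comorbidities actually assessed, and the 30 tests assumed by Jungreis and Kellis):

```python
m = 0.0023  # smallest comorbidity p-value in Table 2 of [1] (previous high blood pressure)

sidak_14      = 1 - (1 - m) ** 14  # Šidák correction over the 14 comorbidities assessed
bonferroni_14 = 14 * m             # Bonferroni with n = 14, the Taylor approximation of the above
bonferroni_30 = 30 * m             # Bonferroni with n = 30, as used by Jungreis and Kellis

print(f"Šidák, n=14:      {sidak_14:.3f}")       # ~0.032
print(f"Bonferroni, n=14: {bonferroni_14:.3f}")  # ~0.032
print(f"Bonferroni, n=30: {bonferroni_30:.3f}")  # 0.069
```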

The most serious mistake that Jungreis and Kellis make, however, is in claiming that one can accept the null hypothesis of a hypothesis test when the p-value is greater than 0.05. The p-value they obtain is 0.069, which, even if it is taken at face value, is not grounds for claiming, as Jungreis and Kellis do, that “this is not significant evidence that the assignment was not random” and reason to conclude that there is “no evidence of incorrect randomization”. That is not how p-values work. A p-value less than 0.05 allows one to reject the null hypothesis (assuming 0.05 is the threshold chosen), but a p-value above the chosen threshold is not grounds for accepting the null. Moreover, the corrected p-value is 0.032, which is certainly grounds for rejecting the null hypothesis that the randomization was random.

Correction of the incorrect Jungreis and Kellis statistics may be a productive exercise in introductory undergraduate statistics for some, but it is pointless insofar as assessing the Córdoba study is concerned. While the extreme imbalance in the previous high blood pressure comorbidity is problematic because patients with the comorbidity may be more likely to get sick and require ICU admission, the study was so flawed that the exact p-value for the imbalance is a moot point. Given that the presence of comorbidities, not just their effect on patients, was a factor in determining which patients were admitted to the ICU, the extreme imbalance in the previous high blood pressure comorbidity renders the result of the study meaningless ex facie.

A definition is not a theorem is not proof of efficacy

In an effort to fend off criticism that the comorbidities of patients were improperly balanced in the study, Jungreis and Kellis go further and present a “theorem” they claim shows that there was a minuscule chance that an uneven distribution of comorbidities could render the study results not significant. The “theorem” is stated twice in their paper, and I’ve copied both theorem statements verbatim from their paper:

Theorem 1 In a randomized study, let p be the p-value of the study results, and let q be the probability that the randomization assigns patients to the control group in such a way that the values of P_{prognostic}(Patient) are sufficiently unevenly distributed between the treatment and control groups that the result of the study would no longer be statistically significant at the 95% level after controlling for the prognostic risk factors. Then q < \frac{p}{0.05}.

According to Jungreis and Kellis, P_{prognostic}(Patient) is the following: “There can be any number of prognostic risk factors, but if we knew what all of them were, and their effect sizes, and the interactions among them, we could combine their effects into a single number for each patient, which is the probability, based on all known and yet-to-be discovered risk factors at the time of hospital admission, that the patient will require ICU care if not given the calcifediol treatment. Call this (unknown) probability P_{prognostic}(Patient).”

The theorem is restated in the Methods section of the Jungreis and Kellis paper as follows:

Theorem 2 In a randomized controlled study, let p be the p-value of the study outcome, and let q be the probability that the randomization distributes all prognostic risk factors combined sufficiently unevenly between the treatment and control groups that when controlling for these prognostic risk factors the outcome would no longer be statistically significant at the 95% level. Then q < \frac{p}{0.05}.

While it is difficult to decipher the language the “theorem” is written in, let alone its meaning (note that Theorem 1 and Theorem 2 are supposedly the same theorem), I was able to glean something about its content from reading the “proof”. The mathematical content of whatever the theorem is supposed to mean is the definition of conditional probability, namely that if A and B are events with P(B) > 0, then

P(A|B) \quad := \quad \frac{P(A \cap B)}{P(B)}.

To be fair to Jungreis and Kellis, the “theorem” includes the observation that

P(A \cap B) \leq P(A) \quad \Rightarrow \quad P(A|B) \leq \frac{P(A)}{P(B)}.

This is not, by any stretch of the imagination, a “theorem”; it is literally the definition of conditional probability followed by an elementary inequality. The most generous interpretation of what Jungreis and Kellis were trying to do with this “theorem” is that they were showing that the p-value for the study is so small that it remains small even after being multiplied by 20. There are less generous interpretations.
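For concreteness, taking the reported study p-value at face value, the bound they are after amounts to nothing more than

q \quad < \quad \frac{p}{0.05} \quad = \quad 20\,p \quad = \quad 20 \times 7.7 \times 10^{-7} \quad \approx \quad 1.5 \times 10^{-5}.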

Does Vitamin D intake reduce ICU admission?

There has been a lot of interest in Vitamin D and its effects on human health over the past decade [2], and much speculation about its relevance for COVID-19 susceptibility and disease severity. One interesting result on disease susceptibility was published recently: in a study of 489 patients, it was found that the relative risk of testing positive for COVID-19 was 1.77 times greater for patients with likely deficient vitamin D status compared with patients with likely sufficient vitamin D status [7]. However, definitive results on Vitamin D and its relationship to COVID-19 will have to await larger trials. One such trial, a large randomized clinical trial with 2,700 individuals sponsored by Brigham and Women’s Hospital, is currently underway [4]. While this study might shed some light on Vitamin D and COVID-19, it is prudent to keep in mind that the outcome is not certain. Vitamin D levels are confounded with many socioeconomic factors, making the identification of causal links difficult. In the meantime, it has been suggested that it makes sense for individuals to maintain reference nutrient intakes of Vitamin D [6]. Such a public health recommendation is not controversial.

As for Vitamin D administration to hospitalized COVID-19 patients reducing ICU admission, the best one can say about the Córdoba study is that nothing can be learned from it. Unfortunately, the poor study design, small sample size, availability of only summary statistics for the comorbidities, and imbalanced comorbidities among treated and untreated patients render the data useless. While it may be true that calcifediol administration to hospital patients reduces subsequent ICU admission, it may also not be true. Thus, the follow-up by Jungreis and Kellis is pointless at best. At worst, it is irresponsible propaganda, advocating for potentially dangerous treatment on the basis of shoddy arguments masked as “rigorous and well established statistical techniques”. It is surprising to see Jungreis and Kellis argue that it may be unethical to conduct a placebo randomized controlled trial, which is one of the most powerful tools in the development of safe and effective medical treatments. They write “the ethics of giving a placebo rather than treatment to a vitamin D deficient patient with this potentially fatal disease would need to be evaluated.” The evidence for such a policy is currently non-existent. On the other hand, there are plenty of known risks associated with excess Vitamin D [5].

References

  1. Marta Entrenas Castillo, Luis Manuel Entrenas Costa, José Manuel Vaquero Barrios, Juan Francisco Alcalá Díaz, José López Miranda, Roger Bouillon, and José Manuel Quesada Gomez. Effect of calcifediol treatment and best available therapy versus best available therapy on intensive care unit admission and mortality among patients hospitalized for COVID-19: A pilot randomized clinical study. The Journal of steroid biochemistry and molecular biology, 203:105751, 2020.
  2. Michael F Holick. Vitamin D deficiency. New England Journal of Medicine, 357(3):266–281, 2007.
  3. Irwin Jungreis and Manolis Kellis. Mathematical analysis of Córdoba calcifediol trial suggests strong role for Vitamin D in reducing ICU admissions of hospitalized COVID-19 patients. medRxiv, 2020.
  4. JoAnn E Manson. https://clinicaltrials.gov/ct2/show/nct04536298.
  5. Ewa Marcinowska-Suchowierska, Małgorzata Kupisz-Urbańska, Jacek Łukaszkiewicz, Paweł Płudowski, and Glenville Jones. Vitamin D toxicity–a clinical perspective. Frontiers in endocrinology, 9:550, 2018.
  6. Adrian R Martineau and Nita G Forouhi. Vitamin D for COVID-19: a case to answer? The Lancet Diabetes & Endocrinology, 8(9):735–736, 2020.
  7. David O Meltzer, Thomas J Best, Hui Zhang, Tamara Vokes, Vineet Arora, and Julian Solway. Association of vitamin D status and other clinical characteristics with COVID-19 test results. JAMA network open, 3(9):e2019722–e2019722, 2020.
  8. Vivien Shotwell. https://tweetstamp.org/1327281999137091586.
  9. Robert Slavin and Dewi Smith. The relationship between sample sizes and effect sizes in systematic reviews in education. Educational evaluation and policy analysis, 31(4):500–506, 2009.
  10. Lynn Yi, Harold Pimentel, Nicolas L Bray, and Lior Pachter. Gene-level differential analysis at transcript-level resolution. Genome biology, 19(1):53, 2018.

“It is not easy when people start listening to all the nonsense you talk. Suddenly, there are many more opportunities and enticements than one can ever manage.”

– Michael Levitt, Nobel Prize in Chemistry, 2013

In 1990 Glendon MacGregor, a restaurant waiter in Pretoria, South Africa, set up an elaborate hoax in which he posed as the crown prince of Liechtenstein to organize for himself a state visit to his own country. Amazingly, the ruse lasted for two weeks, and during that time MacGregor was wined and dined by numerous South African dignitaries. He had a blast in his home town, living it up in a posh hotel, and enjoying a trip to see the Blue Bulls in Loftus Versfeld stadium. The story is the subject of the 1993 Afrikaans film “Die Prins van Pretoria” (The Prince of Pretoria). Now, another Pretorian is at it, except this time not for two weeks but for several months. And, unlike MacGregor’s hoax, this one does not just embarrass a government and leave it with a handful of hotel and restaurant bills. This hoax risks lives.

Michael Levitt, a Stanford University professor of structural biology and winner of the Nobel Prize in Chemistry in 2013, wants you to believe the COVID-19 pandemic is over in the US. He claimed it ended on August 22nd, with a total of 170,000 deaths (there are now over 200,000, with hundreds of deaths per day). He claims those 170,000 deaths weren’t even COVID-19 deaths, and since the virus is not very dangerous, he suggests you infect yourself. How? He proposes you set sail on a COVID-19 cruise.

[Image: Royal Caribbean’s Wonder of the Seas, the world’s largest cruise ship, under construction (Business Insider)]

Levitt’s lunacy began with an attempt to save the world from epidemiologists. Levitt presumably figured this would not be a difficult undertaking, because, he has noted, epidemiologists see their job “not as getting things correct”. I guess he figured that he could do better than that. On February 25th of this year, at a time when there had already been 2,663 deaths due to the SARS-CoV-2 virus in China but before the World Health Organization had declared the COVID-19 outbreak a pandemic, he delivered what sounded like good news. He predicted that the virus had almost run its course, and that the final death toll in China would be 3,250. This turned out to be a somewhat optimistic prediction. As of the writing of this post (September 21, 2020), there have been 4,634 reported COVID-19 deaths in China, and there is reason to believe that the actual number of deaths has been far higher (see, e.g. He et al., 2020, Tsang et al., 2020, Wadham and Jacobs, 2020).

Instead of publishing his methods or waiting to evaluate the veracity of his claims, Levitt signed up for multiple media interviews. Emboldened by “interest in his work” (who doesn’t want to interview a Nobel laureate?), he started making more predictions of the form “COVID-19 is not a threat and the pandemic is over”. On March 20th he said that he “will be surprised if the number of deaths in Israel surpasses 10”. Unfortunately, there have been 1,256 COVID-19 deaths in Israel so far, with a massive increase in cases over the past few weeks and no end to the pandemic in sight. On March 28th, when Switzerland had 197 deaths, he predicted the pandemic there was almost over and would end with 250 deaths. Switzerland has now seen 1,762 deaths, and a recent dramatic increase in cases has overwhelmed hospitals in some regions, leading to new lockdown measures. Levitt’s predictions have come fast and loose. On June 28th he predicted deaths in Brazil would plateau at 98,000. There have been over 137,000 deaths in Brazil, with hundreds of people dying every day now. In Italy he predicted on March 28th that the pandemic was past its midpoint and deaths would end at 17,000 – 20,000. There have now been 35,707 deaths in Italy. The way he described the situation in the country at the time, when crematoria were overwhelmed, was “normal”.

I became aware of Levitt’s predictions via an email list of the Fellows of the International Society of Computational Biology on March 14th. I’ve been a Fellow for 3 years, and during this time I’ve received hardly any mail, except during Fellow nomination season. It was therefore somewhat of a surprise to start receiving emails from Michael Levitt regarding COVID-19, but it was a time when scientists were scrambling to figure out how they could help with the pandemic and I was excited at the prospect of all of us learning from each other and possibly helping out. Levitt began by sending around a PDF via a Dropbox link and asked for feedback. I wrote back right away suggesting he distribute the code used to make the figures, make clear the exact versions of data he was scraping to get the results (with dates and copies so the work could be replicated), suggested he add references and noted there were several typos (e.g. the formula D_n = C_nP_0 + C_{n-1}P_1 + C_{n-2}P_2 + \ldots + C_{n-29}P_2 clearly had wrong indices). I asked that he post it on the bioRxiv so it could receive community feedback, and suggested he fill in some details so I and others could better evaluate the methods (e.g. I pointed out that I thought the use of a Gaussian for P_n was problematic).

The initial correspondence rapidly turned into a flurry of email on the ISCB Fellows list. Levitt was full of advice. He suggested everyone wear a mask, and I and others pushed back noting, as Dr. Anthony Fauci did at the time, that there was a severe shortage of masks and they should go to doctors first. Several exchanges centered on who to blame for the pandemic (one Fellow suggested immigrants in Italy). Among all of this, there was one constant: Levitt’s COVID-19 advice and predictions kept on coming, without reflection or response to the well-meaning critiques. After Levitt said he’d be surprised if there were more than 10 deaths in Israel, and after he refused to send code reproducing his analyses or post a preprint, I urged my fellow Fellows in ISCB to release a statement distancing our organization from his opinions, and emphasizing the need for rigorous, reproducible work. I was admonished by two colleagues and told, in so many words, to shut up.

Meanwhile, Levitt did not shut up. In March, after talking to Israeli newspapers about how he would be surprised if there were more than 10 deaths, he spoke directly to Israeli Prime Minister Benjamin Netanyahu to deliver his message that Israel was overreacting to the virus (he tried to speak to US president Donald Trump as well). Israel is now in a very dangerous situation with COVID-19 out of control. It has the highest number of cases per capita in the world. Did Levitt play a role in this by helping to convince Netanyahu to ease restrictions in the country in May? We may never know. There were likely many factors contributing to Israel’s current tragedy, but Levitt, by virtue of speaking directly to Netanyahu, should be scrutinized for his actions. What we do know is that at the time, he was making predictions about the nature and expected course of the virus with unpublished methods (i.e. not even preprinted), poorly documented data, and without any possibility for anyone to reproduce any of his work. His disgraceful scholarship has not improved in the subsequent months. He did, eventually, post a preprint, but the data tab states “all data to be made available” and there is the following paragraph relating to availability of code:

We would like to make the computer codes we use available to all but these are currently written in a variety of languages that few would want to use. While Dr. Scaiewicz uses clean self-documenting Jupyter Python notebook code, Dr. Levitt still develops in a FORTRAN dialect call Mortran (Mortran 1975) that he has used since 1980. The Mortran preprocessor produces Fortran that is then converted to C-code using f2c. This code is at least a hundred-fold faster than Python code. His other favorite language is more modern, but involves the use of the now deprecated language Perl and Unix shell scripts.

Nevertheless, the methods proposed here are simple; they are easily and quickly implemented by a skilled programmer. Should there be interest, we would be happy to help others develop the code and test them against ours. We also realize that there is ample room for code optimization. Some of the things that we have considered are pre-calculating sums of terms to convert computation of the correlation coefficient from a sum over N terms to the difference of two sums. Another way to speed the code would be to use hierarchical step sizes in a binary search to find the value of lnN that gives the best straight line.

Our study involving as it did a small group working in different time zones and under extreme time pressure revealed that scientific computation nowadays faces a Babel of computer languages. In some ways this is good as we generally re-coded things rather than struggle with the favorite language of others. Still, we worry about the future of science when so many different tools are used. In this work we used Python for data wrangling and some plotting, Perl and Unix shell tools for data manipulation, Mortran (effectively C++) for the main calculations, xmgrace and gnuplot for other plotting, Excel (and Openoffice) for playing with data. And this diversity is for a group of three!

tl;dr, there is no code. I’ve asked Michael Levitt repeatedly for the code to reproduce the figures in his paper and have not received it. I can’t reproduce his plots.

Levitt now lies when confronted about his misguided and wrong prediction about COVID-19 in Israel. He claims it is a “red herring”, and that he was talking about “excess deaths”. I guess he figures he can hide behind Hebrew. There is a recording where anyone can hear him being asked directly whether he is saying he will be surprised if there are more than 10 COVID-19 deaths in Israel, and his answer is very clear: “I will be very surprised”. It is profoundly demoralizing to discover that a person you respected is a liar, a demagogue or worse. Sadly, this has happened to me before.

Levitt continues to put people’s lives at risk by spewing lethal nonsense. He is suggesting that we should let COVID-19 spread in the population so it will mutate to be less harmful. This is nonsense. He is promoting anti-vax conspiracy theories that are nonsense. He is promoting nonsense conspiracy theories about scientists. And yet, he continues to have a prominent voice. It’s not hard to see why. The article, similar to all the others where he is interviewed, begins with “Nobel Prize winner…”

In the Talmud, in Mishnah Sanhedrin 4:9, it is written “Whoever destroys a soul, it is considered as if he destroyed an entire world”. I thought of this when listening to an interview with Michael Levitt that took place in May, where he said:

I am a real baby-boomer, I was born in 1947, and I think we’ve really screwed up. We cause pollution, we allowed the world’s population to increase three-fold, we’ve caused the problems of global warming, we’ve left your generation with a real mess in order to save a really small number of very old people. If I was a young person now, I would say, “now you guys are gonna pay for this.” 

Despite much ado about the #metoo movement in recent years, the crisis of sexual harassment in academia persists without an end in sight. The academic sexual misconduct database now lists 1,051 cases, each of them a tragedy of trauma, unspeakable violations of victims, and dreams destroyed. I’ve written previously about two cases listed in the database (Yuval Peres and Terry Speed). Now, I feel compelled to write about yet another sexual harassment case.

Adrian Dumitrescu is a professor in the Department of Mathematical Sciences at the University of Wisconsin, Milwaukee. I have known of his work for many years, as we have a shared interest in extremal combinatorics, having both worked on the Erdős-Szekeres “Happy Ending Problem”. Last week a Facebook post was brought to my attention, in which a graduate student describes a horrible case of sexual harassment by Prof. Dumitrescu that occurred during a conference in Boston in 2016.

This student filed a Title IX complaint with the University of Wisconsin, and I have a copy of the report. The Office of Equity and Diversity (EDS) that investigated the case found that “Based on the totality of the circumstances, the information obtained pursuant to this investigation, and for all the reasons set forth above, EDS concludes that there is sufficient evidence to support a finding, by preponderance of the evidence, of sexual harassment against the Respondent [Prof. Dumitrescu].” Furthermore, the report states that “based on the seriousness of the Respondent’s conduct, EDS believes that disciplinary action is warranted in this matter, and recommends that the Provost refer this case for imposition of discipline”. As I write this post, Prof. Dumitrescu is still listed as a professor at the University of Wisconsin, Milwaukee.

Notably, after being sexually harassed by the Respondent, and before filing a report with Title IX, the student consulted her Ph.D. advisor. The report describes his response as follows: “[he] told her that the Respondent had a ‘high reputation’ in the field and it was better to ‘avoid trouble’ and not to report her concerns.” And yet she had the courage to report the case, despite the attempt to silence her, and despite having been threatened by the Respondent, as he coerced her to sleep with him, that if she did not acquiesce to his demands he would not conduct research with her and he might prevent senior scholars at her university from working with her.

The report details how the sexual harassment impacted the complainant’s research progress and mental well-being. Yet again, a talented young scientist finds herself with debilitating trauma, a career in jeopardy, and powerless in the face of an establishment that excuses harassers.

The details of this case are of course different than every other sexual harassment case. Each is tragic in its own way. And yet elements of what happened here are to be found in all sexual harassment cases. Power imbalance. Coercion. Threats. Silencing of the victim. Inaction. Banal injustice. This will be case number 1,052 in the academic sexual misconduct database.

We must do better.

Rapid testing has been a powerful tool to control COVID-19 outbreaks around the world (see Iceland, Germany, …). While many countries support testing through government sponsored healthcare infrastructure, in the United States COVID-19 testing has largely been organized and provided by for-profit businesses. While financial incentives coupled with social commitment have motivated many scientists and engineers at companies and universities to work hard around the clock to facilitate testing, there are also many individuals who have succumbed to greed. Opportunism has bubbled to the surface and scams, swindles, rackets, misdirection and fraud abound. This is happening at a time when workplaces are in desperate need of testing, and demands for testing are likely to increase as schools, colleges and universities start opening up in the next month. Here are some examples of what is going on:

  • First and foremost there is your basic fraud. In July, a company called “Fillakit”, which had been awarded a $10.5 million federal contract to make COVID-19 test kits, was shipping unusable, contaminated soda bottles. This “business”, started by some law and real estate guy called Paul Wexler, who has been repeatedly accused of fraud, went under two months after it launched amidst a slew of investigations and complaints from former workers. Oh, BTW, Michigan ordered 322,000 Fillakit tubes, which went straight to the trash (as a result they could not do a week’s worth of tests).
  • Not all fraud is large scale. Some former VP at the now defunct “Cure Cannabis Solutions” ordered 100 COVID-19 test kits that do who-knows-what at a price of 50c a kit. The Feds seized them. These kits, which were not FDA approved, were sourced from “Anhui DeepBlue Medical” in Hefei, China.
  • To be fair, the Cannabis guy was small fry. In Laredo, Texas, some guy called Robert Castañeda received assistance from a congressman to purchase $500,000 of kits from the same place! Anhui DeepBlue Medical sent Castañeda 20,000 kits ($25 a test). Apparently the tests had 20% accuracy. To his credit, the Cannabis guy paid 1/50th the price for this junk.
  • Let’s not forget who is really raking in the bucks though. Quest Diagnostics and LabCorp are the primary testing outfits in the US right now; each is processing around 150,000 tests a day. These are for-profit companies and indeed they are making a profit. The economics is simple: insurance companies reimburse LabCorp and Quest Diagnostics for the tests. The rates are basically determined by the amount that Medicare will pay, i.e. the government price point. Initially, the reimbursement was set at $51, and well… at that price LabCorp and Quest Diagnostics just weren’t that interested. I mean, people have to put food on the table, right? (Adam Schechter, CEO of LabCorp, makes $4.9 million a year; Steve Rusckowski, CEO of Quest Diagnostics, makes $9.9 million a year). So the Medicare reimbursement rate was raised to $100. The thing is, LabCorp and Quest Diagnostics get paid regardless of how long it takes to return test results. Some people are currently waiting 15 days to get results (narrator: such test results are useless).
  • Perhaps a silver lining lies in the stock price of these companies. The title of this post is “$ How to Profit From COVID-19 Testing $”. I guess being able to take a week or two to return a test result and still get paid $100 for something that cost $30 lifts the stock price… and you can profit!
  • Let’s not forget the tech bros! A bunch of dudes in Utah from companies like Nomi, Domo and Qualtrics signed a two-month contract with the state of Utah to provide 3,000 tests a day. One of the tech executives pushing the initiative, called TestUtah, was a 37-year-old founder (of Nomi Health) by the name of Mark Newman. He admitted that “none of us knew anything about lab testing at the start of the effort”. Didn’t stop Newman et al. from signing more than $50 million in agreements with several states to provide testing. tl;dr: the tests had a poor limit of detection, samples were mishandled, throughput was much lower than promised, etc. etc., and as a result they weren’t finding positive cases at rates similar to other testing facilities. The significance is summarized poignantly in a New Yorker piece about the debacle:

    “I might be sick, but I want to go see my grandma, who’s ninety-five. So I go to a TestUtah site, and I get tested. TestUtah tells me I’m negative. I go see grandma, and she gets sick from me because my result was wrong, because TestUtah ran an unvalidated test.”

    P.S. There has been a disturbing TestUtah hydroxychloroquine story going on behind the scenes. I include this fact because no post on fraud and COVID-19 would be complete without a mention of hydroxychloroquine.

  • Maybe though, the tech bros will save the day. The recently launched $5 million COVID-19 X-prize is supposed to incentivize the development of “Frequent. Fast. Cheap. Easy.” COVID-19 testing. The goal is nothing less than to “radically change the world.” I’m hopeful, but I just hope they don’t cancel it like they did the genome prize. After all, their goal of “500 tests per week with 12 hour turnaround from sample to result” is likely to be outpaced by innovation just like what happened with genome sequencing. So in terms of making money from COVID-19 testing don’t hold your breath with this prize.
  • As is evident from the examples above, one of the obstacles to quick riches with COVID-19 testing in the USA is the FDA. The thing about COVID-19 testing is that lying to the FDA on applications, or providing unauthorized tests, can lead to unpleasantries, e.g. jail. So some play it straight and narrow. Consider, for example, SeqOnce, which has developed the Azureseq SARS-CoV-2 RT-qPCR kit. These guys have an “EUA-FDA validated test”.
    This is exactly what you want! You can click on “Order Now” and pay $3,000 for a kit that can be used to test several hundred samples (great price!) and the site has all the necessary information: IFUs (these are “instructions for use” that come with FDA authorized tests), validation results etc. If you look carefully you’ll see that administration of the test requires FDA approval. The company is upfront about this. Except the test is not FDA authorized; this is easy to confirm by examining the FDA Coronavirus EUA site. One can infer from a press release that they have submitted an EUA (Emergency Use Authorization), but while they claim it has been validated, nowhere does it say it has been authorized. Clever, eh? Authorized, validated, authorized, validated, authorized… and here I was just about to spend $3,000 for a bunch of tests that cannot currently be legally administered. Whew! At least this is not fraud. Maybe it’s better called… I don’t know… a game? Other companies are playing similar games. Ginkgo Bioworks is advertising “testing at scale, supporting schools and businesses” with an “Easy to use FDA-authorized test”, but again this seems to be a product that has “launched”, not one that, you know, actually exists; I could find no Ginkgo Bioworks test that works at scale that is authorized on the FDA Coronavirus EUA website, and it turns out that what they mean by FDA authorized is an RT-PCR test that they have outsourced to others. Fingers crossed though; maybe the marketing helped CEO Jason Kelly raise the $70 million his company has received for the effort; I certainly hope it works (soon)!
  • By the way, I mentioned that the SeqOnce operation is a bunch of “guys”. I meant this literally; this is their “team”:
    [Screenshot of the SeqOnce “team” page]
    Just one sec… what is up with biotech startups and 100% male leadership teams? (See Epinomics, Insight Genetics, Ocean Genomics, Kailos Genetics, Circulogene, etc. etc.)… And what’s up with the black and white thing? Is that to try to hide that there are no people of color?
    I mention the 100% male team because if you look at all the people mentioned in this post, all of them are guys (except the person in the next sentence), and I didn’t plan that; it’s just how it worked out. Look, I’m not casting shade on the former CEO of Theranos. I’m just saying that there is a strong correlation here.

    Sorry, back to the regular programming…

  • Speaking of swindlers and scammers, this post would not be complete without a mention of the COVID-19 testing czar, Jared Kushner. His secret testing plan for the United States went “poof into thin air”! I felt that the 1 million contaminated and unusable Chinese test kits that he ordered for $52 million deserved the final mention in this post. Sadly, these failed kits appear to be the main thrust of the federal response to COVID-19 testing needs so far, and consistent with Trump’s recent call to “slow the testing down” (he wasn’t kidding). Let’s see what turns up today at the hearings of the U.S. House Select Subcommittee on Coronavirus, whose agenda is “The Urgent Need for a National Plan to Contain the Coronavirus”.

 

 

Today, June 10th 2020, black academic scientists are holding a strike in solidarity with Black Lives Matter protests. I strike with them and for them. This is why:

I began to understand the enormity of racism against blacks thirty-five years ago, when I was 12 years old. A single event, in which I witnessed a black man pleading for his life, opened my eyes. I don’t remember his face, but I do remember looking at his dilapidated brown pants and noticing his hands shaking around the outside of his pockets while he pleaded for mercy:

“Please baas, please baas, … ”

The year was 1985, and I was visiting my friend Tamir Orbach at his house in Pretoria (now Tshwane), South Africa, located on Muckleneuk hill. We were playing in the courtyard next to Tamir’s garage, which was adjacent to a retaining wall and a wide gate. Google Satellite now enables virtual visits to anywhere in the world, and it took me seconds to find the house. The courtyard and retaining wall look the same. The gate we were playing in front of has changed color from white to black:

[Satellite view of the house]

The house was located at the bottom of a short cul de sac on the slope of a hill. It’s difficult to see from the aerial photo, but in the street view, looking down, the steep driveway is visible. The driveway stones are the same as they were the last time I was at the house in the 1980s:

[Street view of the driveway]

We heard some commotion at the top of the driveway. I don’t remember what we were doing at that moment, but I do remember seeing a man sprinting down the hill towards us. I remember being afraid of him. I was afraid of black men. A police officer was chasing him, gun in hand, shouting at the top of his lungs. The man ran into the neighboring property, scaled a wall to leap onto a roof, only to realize he might be trapped. He jumped back onto the driveway, dodged the cop, and ran back up the hill. I remember thinking that I had never seen a man run so fast. The policeman, by now out of breath but still behind the man, chased close behind with his gun swinging around wildly.

There was a second police officer, who was now visible standing at the top of the driveway, feet apart, and pointing a gun down at the man. We were in the line of fire, albeit quite far away behind the gate. The sprint ended abruptly when the man realized he had, in fact, been trapped. Tamir and I had been standing, frozen in place, watching the events unfold in front of us. Meanwhile the screaming had drawn one of our parents out of the house, concerned about the commotion and asking us what was going on. We walked, together, up the driveway to the street.

The man was being arrested next to a yellow police pickup truck, a staple of South African police at the time and an emblem of police brutality. The police pickup trucks had what was essentially a small jail cell mounted on the flat bed, and they were literal pick up trucks; their purpose was to pick up blacks off the streets.


Dogs were barking loudly in the back of the pickup truck and the man was sobbing.

“Please baas, not the dogs. Not the dogs. Please baas. Please baas…”

The police were yelling at the man.

“Your passbook no good!! No pass!! Your passbook!! You’re going in with the dogs and coming with us!”

“Please… please… ” the man begged. I remember him crying. He was terrified of the dogs. They had started barking so loudly and aggressively that the vehicle was shaking. The man kept repeating “Please… not with the dogs… please… they will kill me. Please… help me. Please… the dogs will kill me.”

He was pleading for his life.


Law

The passbook the police were yelling about was a sort of domestic or internal passport all black people over the age of 16 were required to carry at all times in white areas. South Africa, in 1985, was a country that was racially divided. Some cities were for whites only. Some only for blacks. “Coloureds”, who were defined as individuals of mixed ancestry, were restricted to cities of their own. In his book “Born a Crime“, Trevor Noah describes how these anti-miscegenation laws resulted in it being impossible for him to legally live with his mother when he was a child. Note that Mississippi removed anti-miscegenation laws from its state constitution only in 1987 and Alabama in 2000.

The South African passbook requirement stemmed from a law passed in 1952, with origins dating back to British policies from the 18th century. The law had the following stipulation:

No black person could stay in a white urban area for more than 72 hours unless explicit permission was granted by an employer (required to be white).

The passbook contained behavioral evaluations from employers. Permission to enter an area could be revoked by any government employee for any reason.

All the live-in maids (as they were called) in Pretoria had passbooks permitting them to live (usually in an outhouse) on the property of their “employer”. I put “employer” in quotes because at best they would earn $250 a month (in today’s dollars, adjusted for inflation), would sleep in a small shack outside of a large home, and receive a small budget for food which would barely cover mielie pap. In many cases they lived in outhouses without running water, were abused, beaten and raped. Live-in maids spent months at a time apart from their children and families; they couldn’t leave their jobs for fear of being fired and/or losing their pass permission. Their families couldn’t visit them as they did not have permission, by pass laws, to enter the white areas in which the live-in maids worked.

Most males had passbooks allowing them only day trips into the city from the black townships in which they lived. Many lived in Mamelodi, a township 15 miles east of Tshwane, and would travel hours to and from work because they were not allowed on white public transport. I lived in Pretoria for 13 years and I never saw Mamelodi.

I may have heard about passbooks before the incident at Tamir’s house, but I didn’t know what they were or how they worked. Learning about pass laws was not part of our social studies or history curriculum. At my high school, Pretoria Boys High School, a Milner school which counts among its alumni individuals such as dilettante Elon Musk and murderer Oscar Pistorius, we learned about the history of South Africa’s white architects, people like Cecil Rhodes (may his name and his memory be erased). There was one black boy in the school when I was there (out of about 1,200 students). He was allowed to attend because he was the son of an ambassador, as if somehow that mitigated his blackness.

South Africa started abandoning its pass laws in 1986, just a few months after the incident I described above. Helen Suzman described it at the time as possibly one of the most eminent government reforms ever enacted. Still, although this was a small step towards dismantling apartheid, Nelson Mandela was still in jail, in Pollsmoor Prison at that time, and he remained imprisoned for 3 more years until he was released from captivity after 27 years in 1990.



Order

We did not stand by idly while the man was being arrested. We asked the police to let him go, or at least not to throw him in with the dogs, but the cops ignored us and dragged the man towards the back of the van. The phrase “kicking and screaming” is bandied about a lot; there is even a sports comedy with that title. That day I saw a man literally kicking and screaming for his life. The back doors of the van were opened and the dogs, tugging against their leashes, appeared to be ready to devour him whole. He was tossed inside like a piece of meat.

The ferocity of the police dogs I saw that day was not a coincidence or an accident; it was by design. South Africa, at one time, developed a breeding program at Roodeplaat Breeding Enterprises, led by German geneticist Peter Geertshen, to create a wolf-dog hybrid. Dogs were bred for their aggression and strength. The South African Boerboel is today one of the most powerful dog breeds in the world, and regularly kills in the United States, to which it is imported from South Africa.

[Image: a 22-month-old Boerboel]

After encounters with numerous Boerboels, Dobermans, Rottweilers and pit bulls as a child in South Africa, I am scared of dogs to this day. I know it’s not rational, and some of my best friends and family have dogs that I adore and love, but the fear lingers. Sometimes I come across a K-9 unit and the terror surfaces. Police dogs are potent weapons here, today, just as they were in South Africa in the 1980s. There is a long history of this here. Dogs were used to terrorize blacks in the Civil Rights era, and the recent invocation of “vicious dogs” by the president of the United States conjures up centuries of racial terror:

I learned at age 12 that LAW & ORDER isn’t all it’s hyped up to be.


Academia

I immigrated to America in August 1988, and imagined that here I would find a land free of the suffocating racism of South Africa. In my South African high school racism was open, accepted and embraced. Nigg*r balls were sold in the campus cafeteria (black licorice balls), and students would tell idiotic “jokes” in which dead blacks were frequently the punchline. Some of the teachers were radically racist. My German teacher, Frau Webber, once told me and Tamir that she would swallow her pride and agree to teach us despite the fact that we were Jews. But much more pernicious was the systemic, underlying racism. When I grew up, the idea that someday I would go to university and study alongside a black person just seemed preposterous. My friends and I would talk about girls. The idea that any of us would ever date, let alone marry, an African girl was just completely and totally out of the realm of possibility. While my school, teachers and friends were what one would consider “liberal” in South Africa, e.g. many supported the ANC, their support of blacks was largely restricted to the right to vote.

Sadly, America was not the utopia I imagined. In 1989, a year after I immigrated here, Yusef Hawkins was murdered in a hate crime by white youths who thought he was dating a white woman. That was also the year of the “Central Park Five”, in which Trump played a central, disgraceful and racist role. I finished high school in Palo Alto, across a highway from East Palo Alto, and the difference between the cities seemed almost as stark as between the white and black neighborhoods in South Africa. I learned later that this was the result of redlining. My classmates and teachers in Palo Alto were obsessed, in 1989, with the injustices in South Africa, but never once discussed East Palo Alto with me or with each other. I was practicing for the SAT exams at the time and remember thinking Palo Alto : East Palo Alto = Pretoria : Mamelodi.

Three years after that, when I was an undergraduate student at Caltech in Pasadena, the Rodney King beating happened. I saw a black man severely beaten on television in what looked like a clip borrowed from South Africa. My classmates at the time thought it would be exciting to drive to South Central Los Angeles to see the “rioters” up close. They had never visited those areas before, nor did they return afterwards. I was reminded at the time of the poverty tourism my friends in South Africa would partake in: a tour to Soweto accompanied by guides with guns to see for oneself how blacks lived. Then right back home for a braai (BBQ). My classmates came back from their Rodney King tour excitedly telling stories of violence and dystopia. Then they partied into the night.

I thought about my only classmate, one out of 200, who was actually from South Los Angeles, and about the dissonance between his life and my classmates’ partying.

Now I am a professor, and I am frequently present in discussions on issues such as undergraduate and graduate admissions, and hiring. Faculty talk a lot, sometimes seemingly endlessly, about diversity, representation, gender balance, and so forth and so on. But I’ve been in academia for 20+ years and it was only three years ago, after moving to Caltech, that I attended a faculty meeting with a black person for the first time. Sometimes I look around during faculty meetings and wonder whether I am in America or South Africa. How can I tell?


Racism

Today is an opportunity for academics to reflect on the murder of George Floyd, and to ask difficult questions of themselves. It’s not for me to say what all the questions are or ought to be. I will say this: at a time when everything is unprecedented (Trump’s tweets, the climate, the stock market, the pandemic, etc. etc.) the murder of George Floyd was completely precedented. His words. The mode of murder. The aftermath. It has happened many times before, including recently. And so it is in academia. The fundamental racism, the idea that black students, staff, and faculty, are not truly as capable as whites, it’s simply a day-to-day reality in academia, despite all the talk and rhetoric to the contrary. Did any academics, upon hearing of the murder of George Floyd, worry immediately that it was one of their colleagues, George Floyd, Ph.D., working at the University of Minnesota who was killed?

I will take the time today to read. I will pick up Long Walk to Freedom, and I will also read #BlackintheIvory. I may read some Alan Paton. I will pause to think about how my university can work to improve the recruitment, mentoring, and experience of black students, staff and faculty. Just some ideas.

All these years since leaving South Africa I’ve had a recurring dream. I fly around Pretoria. The sun has just set and the Union Buildings are lit up, glowing a beautiful orange in the distance. The city is empty. My friends are not there. The man I saw pleading for his life in 1985 is gone. I wonder what the police did to him when he arrived at the police station. I wonder whether he died there, like many blacks at the time did. I fly nervously, trying to remember whether I have my passbook on me. I remember I’m classified white and I don’t need a passbook. I hear dogs barking and wonder where they are, because the city is empty. I wonder what it will feel like when they eat me, and then I remember I’m white and I’m not their target. I hope that I don’t encounter them anyway, and I realize what a privilege it is to be able to fly where they can’t reach me. Then I notice that I’m slowly falling, and barely clearing the slopes of Muckleneuk hill. I realize I will land and am happy about that. I slowly halt my run as my feet gently touch the ground.


The widespread establishment of statistics departments in the United States during the mid-20th century can be traced to a presentation by Harold Hotelling at the Berkeley Symposium on Mathematical Statistics and Probability in 1945. The symposium, organized by Berkeley statistician Jerzy Neyman, was the first of six such symposia that took place every five years and became the most influential meetings in statistics of their time. Hotelling’s lecture on “The place of statistics in the university” inspired the creation of several statistics departments, and at UC Berkeley, Neyman’s establishment of the statistics department in the 1950s was a landmark moment for statistics in the 20th century.

Neyman was hired in the mathematics department at UC Berkeley by a visionary chair, Griffith Evans, who transformed the UC Berkeley math department into a world-class institution after his hiring in 1934. Evans’ vision for the Berkeley math department included statistics, and Erich Lehmann’s history of the UC Berkeley statistics department details how Evans’ commitment to diverse areas in the department led him to hire Neyman without even meeting him. However, Evans’ progressive vision for mathematics was not shared by all of his colleagues, and the conservative, parochial attitudes of the math department contributed to Neyman’s breakaway and eventual founding of the statistics department. This dynamic was later repeated at universities across the United States, resulting in a large gulf between mathematicians and statisticians (ironically, history may be repeating itself, with some now suggesting that the emergence of “data science” is a result of conservatism among statisticians leading them to cling to theory rather than to care about data).

The divide between mathematics and statistics is unfortunate for a number of reasons, one of them being that statistical literacy is important even for the purest of the pure mathematicians. A recent debate on the appropriateness of diversity statements for job applicants in mathematics highlights the need: analysis of data, specifically data on who is in the math community and what their opinions on the issue are, turns out to be central to understanding the matter at hand. Case in point is a recent preprint by two mathematicians:

Joshua Paik and Igor Rivin, Data Analysis of the Responses to Professor Abigail Thompson’s Statement on Mandatory Diversity Statements, arXiv, 2020.

This preprint attempts to identify the defining attributes of mathematicians who signed recent letters related to diversity statement requirements in mathematics job searches. I was recently asked to provide feedback on the manuscript, ergo this blog post.

Reproducibility

In order to assess the results of any preprint or paper, it is essential, as a first step, to be able to reproduce the analysis and results. In the case of a preprint such as this one, this means having access to the code and data used to produce the figures and to perform the calculations. I applaud the authors for being fully transparent and making available all of their code and data in a Github repository in a form that made it easy to reproduce all of their results; indeed I was able to do so without any problems. 👏

The dataset

The preprint analyzes data on signatories of three letters submitted in response to an opinion piece on diversity statement requirements for job applicants published by Abigail Thompson, chair of the mathematics department at UC Davis. Thompson’s letter compared diversity statement requirements of job applicants to loyalty oaths required during McCarthyism. The response letters range from strong affirmation of Thompson’s opinions to strong refutation of them. Signatories of “Letter A”, titled “The math community values a commitment to diversity”, “strongly disagreed with the sentiments and arguments of Dr. Thompson’s editorial” and are critical of the AMS for publishing her editorial. Signatories of “Letter B”, titled “Letter to the editor”, worry about “direct attempt[s] to destroy Thompson’s career and attempt[s] to intimidate the AMS”. Signatories of “Letter C”, titled “Letter to the Notices of the AMS”, write that they “applaud Abigail Thompson for her courageous leadership [in publishing her editorial]” and “agree wholeheartedly with her sentiments.”

The dataset analyzed by Paik and Rivin combines information scraped from Google Scholar and MathSciNet with data associated to the signatories that was collated by Chad Topaz. The dataset is available in .csv format here.

The Paik and Rivin result

The main result of Paik and Rivin is summarized in the first paragraph of their Conclusion and Discussion section:

“We see the following patterns amongst the “established” mathematicians who signed the three letters: the citations numbers distribution of the signers of Letter A is similar to that of a mid-level mathematics department (such as, say, Temple University), the citations metrics of Letter B are closer to that of a top 20 department such as Rutgers University, while the citations metrics of the signers of Letter C are another tier higher, and are more akin to the distribution of metrics for a truly top department.”

A figure from their preprint, summarizing the data supposedly supporting their result, is reproduced below (with the dotted blue line shifted slightly to the right after the bug fix):

[Figure: citation metrics of the letter signatories, reproduced from the Paik-Rivin preprint]

Paik and Rivin go a step further, using citation counts and h-indices as proxies for “merit in the judgement of the community.” That is to say, Paik and Rivin claim that mathematicians who signed letter A, i.e. those who strongly disagreed with Thompson’s equivalence between diversity statements and McCarthy’s loyalty oaths, have less “merit in the judgement of the community” than mathematicians who signed letter C, i.e. those who agreed wholeheartedly with her sentiments.

The differential is indeed very large. Paik and Rivin find that the mean number of citations for signers of Letter A is 2397.75, the mean number of citations for signers of Letter B is 4434.89, and the mean number of citations for signers of Letter C is 6226.816. To control for an association between seniority and number of citations, the computed averages are based only on citation counts of full professors. [Note: a bug in the Paik-Rivin code results in an error in their reporting for the mean for group B. They report 4136.432 whereas the number is actually 4434.89.]
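
The group means are easy to recompute from the .csv linked above; a minimal sketch of the calculation (not the authors' code, and with file and column names that are placeholders of mine, since the actual schema may differ) is:

import pandas as pd

# Mean citation count per letter, restricted to full professors. The file name and the
# column names ("letter", "rank", "gscholar_citations") are placeholders.
df = pd.read_csv("signatories.csv")
full_profs = df[df["rank"] == "Full Professor"]
print(full_profs.groupby("letter")["gscholar_citations"].mean())
# This is the computation in which the reported mean for Letter B (4136.432)
# differs from the post-bug-fix value (4434.89).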

This data seems to support Paik and Rivin’s thesis that mathematicians who support the use of diversity statements in hiring and who strongly disagree with Thompson’s analogy of such statements to McCarthy’s loyalty oaths, are second-rate mathematicians, whereas those who agree wholeheartedly with Thompson are on par with professors at “truly top departments”.

But do the data really support this conclusion?

A fool’s errand

Before delving into the details of the data Paik and Rivin analyzed, it is worthwhile to pause and consider the validity of using citations counts and h-indices as proxies for “merit in the judgement of the community”. The authors themselves note that “citations and h-indices do not impose a total order on the quality of a mathematician” and emphasize that “it is quite obvious that, unlike in competitive swimming, imposing such an order is a fool’s errand.” Yet they proceed to discount their own advice, and wholeheartedly embark on the fool’s errand they warn against. 🤔

I examined the mathematicians in their dataset and first, as a sanity check, confirmed that I am one of them (I signed one of the letters). I then looked at the associated citation counts and noticed that out of 1435 mathematicians who signed the letters, I had the second highest number of citations according to Google Scholar (67,694), second only to Terence Tao (71,530). We are in the 99.9th percentile. 👏 Moreover, I have 27 times more citations than Igor Rivin. According to Paik and Rivin this implies that I have 27 times more merit in the judgement of our peers. I should say at least 27 times, because one might imagine that the judgement of the community is non-linear in the number of citations. Even if one discounts such quantitative comparisons (Paik and Rivin do note that Stephen Smale has fewer citations than Terence Tao, and that it would be difficult on that basis alone to conclude that Tao is the better mathematician), the preprint makes use of citation counts to assess “merit in the judgement of the community”, and thus according to Paik and Rivin my opinions have substantial merit. In fact, according to them, my opinion on diversity statements must be an extremely meritorious one. I imagine they would posit that my opinion on the debate that is raging in the math community regarding diversity statement requirements from job applicants is the correct, and definitive one. Now I can already foresee protestations that, for example, my article on “Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation” which has 9,438 citations is not math per se, and that it shouldn’t count. I’ll note that my biology colleagues, after reading the supplement, think it’s a math paper, but in any case, if we are going to head down that route shouldn’t Paik and Rivin read the paper to be sure? And shouldn’t they read every paper of mine, and every paper of every signatory to determine it is valid for their analysis? And shouldn’t they adjust the citation counts of every signatory? Well they didn’t do any of that, and besides, they included me in their analysis so… I proceed…

The citation numbers above are based on Google Scholar citations. Paik and Rivin also analyze MathSciNet citations and state that they prefer them because “only published mathematics are in MathSciNet, and is hence a higher quality data source when comparing mathematicians.” I checked the relationship between Scholar and MathSciNet citations and found that, not surprisingly, they have a correlation of 0.92:

[Figure: Google Scholar vs. MathSciNet citation counts for the signatories]

I’d say they are therefore interchangeable in terms of the authors’ use of them as a proxy for “merit”.
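
The check itself is a one-liner (same placeholder column names as above):

import pandas as pd

# Pearson correlation between Google Scholar and MathSciNet citation counts;
# pandas drops pairs with missing values automatically.
df = pd.read_csv("signatories.csv")
print(df["gscholar_citations"].corr(df["mathscinet_citations"]))  # approximately 0.92 here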

But citations are not proxies for merit. The entire premise of the preprint is ridiculous. Furthermore, even if it was true that citations were a meaningful attribute of the signatories to analyze, there are many other serious problems with the preprint.

The elephant not in the room

Paik and Rivin begin their preprint with a cursory examination of the data and immediately identify a potential problem… missing data. How much data is missing? 64.11% of individuals do not have associated Google Scholar citation data, and 78.82% don’t have MathSciNet citation data. Paik and Rivin brush this issue aside remarking that “while this is not optimal, a quick sample size calculation shows that one needs 303 samples or 21% of the data to produce statistics at a 95% confidence level and a 5% confidence interval.” They are apparently unaware of the need for uniform population sampling, and don’t appear to even think for a second of the possible ascertainment biases in their data. I thought for a second.
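
As an aside, the 303 and 21% figures are consistent with the standard finite-population sample size formula, assuming (my reconstruction, since the preprint does not show the calculation) a 95% confidence level (z = 1.96), a ±5% margin of error, maximal variance p = 0.5, and a population of N = 1435 signatories:

n_0 = \frac{z^2 p(1-p)}{e^2} = \frac{1.96^2 \cdot 0.5 \cdot 0.5}{0.05^2} \approx 384.2, \quad n = \frac{n_0}{1 + \frac{n_0 - 1}{N}} = \frac{384.2}{1 + 383.2/1435} \approx 303 \approx 0.21 \cdot 1435.

That formula presupposes a simple random sample from the population of signatories; it says nothing about data that are missing not-at-random, which is exactly the ascertainment problem described below.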

For instance, I wondered whether there might be a discrepancy between the number of citations of women with Google Scholar pages vs. women without such pages. This is because I’ve noticed anecdotally that several senior women mathematicians I know don’t have Google Scholar pages, and since senior scientists presumably have more citations this could create a problematic ascertainment bias. I checked and there is, as expected, some correlation between age post-Ph.D. and citation count (cor = 0.36):

[Figure: citation count vs. years post-Ph.D. for the signatories]

To test whether there is an association between the presence of a Google Scholar page and citation number I examined the average number of MathSciNet citations of women with and without Google Scholar pages. Indeed, the average number of MathSciNet citations of women differs considerably depending on whether or not they have a Google Scholar page (898 vs. 621). For men the difference is much smaller (1816 vs. 1801). By the way, the difference in citation number between men and women is itself large, and can be explained by a number of differences, starting with the fact that the women represented in the database have much lower age post-Ph.D. than the men (17.6 vs. 26.3 years), and therefore fewer citations (see the correlation between age and citations above).
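
In code, the ascertainment bias check amounts to a couple of groupbys (placeholder column names again):

import pandas as pd

# Mean MathSciNet citations split by gender and by presence of a Google Scholar page,
# along with the age/citation correlation and the age gap mentioned above.
df = pd.read_csv("signatories.csv")
print(df["years_post_phd"].corr(df["gscholar_citations"]))
print(df.groupby(["gender", "has_gscholar_page"])["mathscinet_citations"].mean())
print(df.groupby("gender")["years_post_phd"].mean())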

The analysis above suggests that perhaps one should use MathSciNet citation counts instead of Google Scholar. However the extent of missing data for that attribute is highly problematic (78.82% missing values). For one thing, my own MathSciNet citation counts are missing, so there were probably bugs in the scraping. The numbers are also tiny. There are only 46 women with MathSciNet data among all three letter signatories out of 452 women signatories. I believe the data is unreliable. In fact, even my ascertainment bias analysis above is problematic due to the small number of individuals involved. It would be completely appropriate at this point to accept that the data is not of sufficient quality for even rudimentary analysis. Yet the authors continued.

A big word 

Confounder is a big word for a variable that influences both the dependent and independent variable in an analysis, thus causing a spurious association. The word does not appear in Paik and Rivin’s manuscript, which is unfortunate because it is in fact a confounder that explains their main “result”.  This confounder is age. I’ve already shown the strong relationship between age post-Ph.D. and citation count in a figure above. Paik and Rivin examine the age distribution of the different letter signatories and find distinct differences. The figure below is reproduced from their preprint:

[Figure: years since Ph.D. for signers of each letter, reproduced from the Paik-Rivin preprint]

The differences are stark: the mean time since PhD completion of signers of Letter A is 14.64 years, the mean time since PhD completion of signers of Letter B is 27.76 years, and the mean time since PhD completion of signers of Letter C is 35.48 years. Presumably to control for this association, Paik and Rivin restricted the citation count computations to full professors. As it turns out, this restriction alone does not control for age.

The figure below shows the number of citations of letter C signatories who are full professors as a function of their age:

[Figure: citation counts of Letter C full professors as a function of years post-Ph.D.]

The red line at 36 years post-Ph.D. divides two distinct regimes. The large jump at that time (corresponding to chronological age ~60) is not surprising: senior professors in mathematics are more famous and have more influence than their junior peers, and their work has had more time to be understood and appreciated. In mathematics, results can take many years before they are understood and integrated into mainstream mathematics. These are just hypotheses, but the exact reason for this effect is not important for the Paik-Rivin analysis. What matters is that there are almost no full professors among Letter A signers who are more than 36 years post-Ph.D. In fact, the number of such individuals (filtered for those who have published at least one article) is 2. Two individuals. That’s it.

Restricting the analysis to full professors less than 36 years post-Ph.D. tells a completely different story to the one Paik and Rivin peddle. The average number of citations of full professors who signed letter A (2922.72) is higher than the average number of citations of full professors who signed letter C (2348.85). Signers of letter B have 3148.83 citations on average. The figure for this analysis is shown below:

[Figure: mean citation counts by letter, restricted to full professors fewer than 36 years post-Ph.D.]

The main conclusion of Paik and Rivin, that signers of letter A have less merit than signers of letter B, who in turn have less merit than signers of letter C, can be seen to be complete rubbish. What the data reveal is simply that the signers of letter A are younger than the signers of the other two letters.
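
The age-controlled comparison above amounts to one more filter before the groupby; a sketch with the same placeholder column names as before:

import pandas as pd

# Restrict to full professors fewer than 36 years post-Ph.D. with at least one publication,
# then recompute the per-letter mean citation counts.
df = pd.read_csv("signatories.csv")
controlled = df[(df["rank"] == "Full Professor")
                & (df["years_post_phd"] < 36)
                & (df["n_publications"] >= 1)]
print(controlled.groupby("letter")["gscholar_citations"].mean())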

Note: I have performed my analysis in a Google Colab notebook accessible via the link. It allows for full reproducibility of the figures and numbers in this post, and facilitates easy exploration of the data. Of course there’s nothing to explore. Use of citations as a proxy for merit is a fool’s errand.

Miscellanea

There are numerous other technical problems with the preprint. The authors claim to have performed “a control” (they didn’t). Several p-values are computed and reported without any multiple testing correction. Parametric approximations for the citation data are examined, but then ignored. Moreover, appropriate zero-inflated count distributions for such data are never considered (see e.g. Yong-Gil et al. 2007). The results presented are all univariate (e.g. histograms of one data type); there is not a single scatterplot in the preprint! This suggests that the authors are unaware of the field of multivariate statistics. Considering all of this, I encourage the authors to enroll in an introductory statistics class.
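
To pick just one item from that list, a Benjamini-Hochberg correction of a family of p-values is a one-liner (the values below are placeholders, not the preprint's):

from statsmodels.stats.multitest import multipletests

# Benjamini-Hochberg (FDR) adjustment of a set of p-values.
pvals = [0.001, 0.02, 0.04, 0.30]  # placeholder values for illustration only
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(pvals_adj, reject)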

The Russians

In a strange final paragraph of the Conclusion and Discussion section of their preprint, Paik and Rivin speculate on why mathematicians from communist countries are not represented among the signers of letter A. They present hypotheses without any data to back up their claims.

The insistence that some mathematicians, e.g. Mikhail Gromov, who signed letters B and C and is a full member at IHES and a professor at NYU, are not part of the “power elite” of mathematics is just ridiculous. Furthermore, characterizing someone like Gromov, who arrived in the US from Russia to an arranged job at SUNY Stony Brook (thanks to Tony Phillips), as being a member of a group who “arrived at the US with nothing but the shirts on their backs” is bizarre.

Diversity matters

I find the current debate in the mathematics community surrounding Prof. Thompson’s letter very frustrating. The comparison of diversity statements to McCarthy’s loyalty oaths is ridiculous. Instead of debating such nonsense, mathematicians need to think long and hard about how to change the culture in their departments, a culture that has led to appallingly few under-represented minorities and women in the field. Under-represented minorities and women routinely face discrimination and worse. This is completely unacceptable.

The preprint by Paik and Rivin is a cynical attempt to use the Thompson kerfuffle to advertise the tired trope of the second-rate mathematician being the one to advocate for greater diversity in mathematics. It’s a sad refrain that has a long history in mathematics. But perhaps it’s not surprising. The colleagues of Jerzy Neyman in his mathematics department could not even stomach a statistician, let alone a woman, let alone a person from an under-represented minority group. However I’m optimistic reading the list of signatories of letter A. Many of my mathematical heroes are among them. The future is theirs, and they are right.

Algorithmic bias is a term used to describe situations where an algorithm systematically produces outcomes that are less favorable to individuals within a particular group, despite there being no relevant properties of individuals in that group that should lead to distinct outcomes from other groups. As “big data” approaches become increasingly popular for optimizing complex tasks, numerous examples of algorithmic bias have been identified, and sometimes the implications can be serious. As a result, algorithmic bias has become a matter of concern, and there are ongoing efforts to develop methods for detecting it and mitigating its effects. However, despite increasing recognition of the problems due to algorithmic bias, sometimes bias is embraced by the individuals it benefits. For example, in her book Weapons of Math Destruction, Cathy O’Neil discusses the gaming of algorithmic ranking of universities via exploitation of algorithmic bias in ranking algorithms. While there is almost universal agreement that algorithmic rankings of universities are problematic, many faculty at universities that do achieve a top ranking choose to ignore the problems so that they can boast of their achievement.

Of the algorithms that are embraced in academia, Google Scholar is certainly among the most popular. It’s used several times a day by every researcher I know to find articles via keyword searches, and Google Scholar pages have made it straightforward for researchers to create easily updatable publication lists. These now serve as proxies for formal CV publication lists, with the added benefit (?) that citation metrics such as the h-index are displayed as well (Jacsó, 2012). Provided as an option along with publication lists, the Google Scholar coauthor list of a user can be displayed on the page. Google offers users who have created a Google Scholar page the ability to view suggested coauthors, and authors can then select to add or delete those suggestions. Authors can also add as coauthors individuals not suggested by Google. The Google Scholar co-author rankings and the suggestion lists are generated automatically by an algorithm that has not, to my knowledge, been disclosed.

Google Scholar coauthor lists are useful. I occasionally click on the coauthor lists to find related work, or to explore the collaboration network of a field that may be tangentially related to mine but that I’m not very familiar with. At some point I started noticing that the lists were almost entirely male. Frequently, they were entirely male. I decided to perform a simple exercise to understand the severity of what appeared to me to be a glaring problem:

Let the Google Scholar coauthor graph be a directed graph GS = (V,E) whose vertices correspond to authors in Google Scholar, and with an edge (v_1,v_2) \in E  from v_1 \in V to v_2 \in V if author v_2 is listed as a coauthor on the main page of author v_1. We define an author to be manlocked (terminology thanks to Páll Melsted) if its out-degree is at least 1, and if every vertex that it is adjacent to (i.e., for which (v,w) is an edge) and that is ranked among the top twenty coauthors by Google Scholar (i.e., w appears on the front page of v), is a male.
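
To make the definition concrete, here is a minimal sketch in Python with toy data structures of my own (Google Scholar offers no API for this, so in practice the front-page coauthor lists and genders would have to be collected by hand):

from typing import Dict, List

# An author is manlocked if at least one coauthor is displayed on their front page
# and every displayed coauthor (the top twenty) is a man.
def is_manlocked(author: str,
                 front_page: Dict[str, List[str]],  # author -> ranked coauthors shown on the front page
                 is_male: Dict[str, bool]) -> bool:
    coauthors = front_page.get(author, [])[:20]
    return len(coauthors) >= 1 and all(is_male[c] for c in coauthors)

# Toy example (names and genders are invented):
front_page = {"alice": ["bob", "carl"], "bob": ["carl"], "carl": []}
is_male = {"alice": False, "bob": True, "carl": True}
print(is_manlocked("alice", front_page, is_male))  # True: every displayed coauthor is a man
print(is_manlocked("carl", front_page, is_male))   # False: no coauthors displayed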

For example, the Google Scholar page of Steven Salzberg is not manlocked: of the 20 coauthors listed on the Scholar page, only 18 are men. However, several of the vertices it is adjacent to, for example the one corresponding to the Google Scholar page of Ben Langmead, are manlocked. There are so many manlocked vertices that it is not difficult, starting at a manlocked vertex, to embark on a long manlocked walk in the GS graph, hopping from one manlocked vertex to another. For example, starting with the manlocked Dean of the College of Computer, Mathematical and Natural Sciences at the University of Maryland, we find a manlocked walk of length 14 (I leave it as an exercise for the reader to find the longest walk that this walk is contained in):

Amitabh Varshney → Jihad El Sana → Peter Lindstrom → Mark Duchaineau → Alexander Hartmaier → Anxin Ma → Roger Reed → David Dye → Peter D Lee → Oluwadamilola O. Taiwo → Paul Shearing → Donal P. Finegan → Thomas J. Mason → Tobias Neville

A country is doubly landlocked when it is surrounded only by landlocked countries. There are only two such countries in the world: Uzbekistan and Liechtenstein. Motivated by this observation, we define a vertex in the Google Scholar coauthor graph to be doubly manlocked if it is adjacent only to manlocked vertices.

Open problem: determine the number of doubly manlocked individuals in the Google Scholar coauthor graph.
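
Continuing the toy sketch above (reusing is_manlocked, front_page and is_male), counting doubly manlocked vertices is straightforward once the graph is in hand; assembling the actual graph from Google Scholar is the hard part:

# An author is doubly manlocked if they display at least one coauthor and every
# displayed coauthor is itself manlocked (mirroring doubly landlocked countries).
def is_doubly_manlocked(author, front_page, is_male):
    coauthors = front_page.get(author, [])[:20]
    return len(coauthors) >= 1 and all(is_manlocked(c, front_page, is_male) for c in coauthors)

print(sum(is_doubly_manlocked(a, front_page, is_male) for a in front_page))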

[Image: map of the world's landlocked countries]

Why are there so many manlocked vertices in the Google Scholar coauthorship graph? Some hypotheses:

  1. Publications by women are cited less than those of men (Aksnes et al. 2011).
  2. Men tend to publish more with other men and there are many more men publishing than women (see, e.g. Salerno et al. 2019, Wang et al. 2019).
  3. Men who are “equally contributing” co-first authors are more “equal” than their women co-first authors (Broderick and Casadevall 2019). Google Scholar’s coauthor recommendations may give preference to first co-first authors.
  4. I am not privy to Google’s algorithms, but Google Scholar’s coauthor recommendations may also be biased towards coauthors on highly cited papers. Such papers tend to be older papers. While the gender ratio today is heavily skewed towards men, it was even more so in the past. For example, Steven Salzberg, the senior scientist mentioned above who lists 18 men among the twenty coauthors on his Google Scholar page, has graduated 12 successful Ph.D. students, 11 of whom are men. In other words, the extent of manlocked vertices may be the result of algorithmic bias that is inadvertently highlighting the gender homogeneity of the past.
  5. Many successful and prolific women may not be using Google Scholar (I can think of many in my own field, but was not able to find a study confirming this empirical observation). If this is true, the absence of women on Google Scholar would directly inflate the number of manlocked vertices. Moreover, in surveying many Google Scholar pages, I found that women with Google Scholar pages tend to have more women as coauthors than the men do.
  6. Even though Google Scholar allows for manually adding coauthors, it seems most users are blindly accepting the recommendations without thinking carefully about what coauthorship representation best reflects their actual professional relationships and impactful work. Thus, individuals may be supporting the algorithmic bias of Google Scholar by depending on its automation. Google may be observing that users tend to click on coauthors that are men at a high rate (since those are the ones being displayed) thus reinforcing for itself with data the choices of the coauthorship algorithm.

The last point above (#6) raises an interesting general issue with Google Scholar. While Google Scholar appears to be fully automated, and indeed, in addition to suggesting coauthors automatically, the service will also automatically add publications, the Google Scholar page of an individual is completely customizable. In addition to the coauthors being customizable, the papers that appear on a page can be manually added or deleted, and in fact even the authors or titles of individual papers can be changed. In other words, Google Scholar can be easily manipulated with authors using “algorithmic bias” as a cover (“oops, I’m so sorry, the site just added my paper accidentally”). Are scientists actually doing this? You bet they are (I leave it as an exercise for the reader to find examples).

Yesterday I found out via a comment on this blog that Yuval Peres, a person who has been accused by numerous students, trainees, and colleagues of sexual harassment, will be delivering a lecture today in the UC Davis Mathematical Physics and Probability Seminar.

The facts

I am aware of at least 11 allegations by women of sexual harassment by Yuval Peres (trigger warning: descriptions of sexual harassment and sexual assault):

  1. Allegation of sexual harassment of a Ph.D. student in 2007. Source: description of the harassment by the victim.
  2. Allegation of sexual harassment by a colleague that happened when she was younger. Source: description of the harassment by the victim.
  3. Allegation of sexual harassment of a woman prior to 2007. Source: report on sexual harassment allegations against Yuval Peres by the University of Washington (received via a Freedom of Information Act Request).
  4. Allegation of sexual harassment by one of Yuval Peres’ Ph.D. students several years ago. Source: report on sexual harassment allegations against Yuval Peres by the University of Washington (received via a Freedom of Information Act Request).
  5. Allegation of sexual harassment of a colleague. Source: personal communication to me by the victim (who wishes to remain anonymous) via email after I wrote a post about Yuval Peres.
  6. Allegation of sexual harassment of a graduate student. Source: personal communication to me by the victim (the former graduate student who wishes to remain anonymous) via email after I wrote a post about Yuval Peres.
  7. Recent allegations of sexual harassment by 5 junior female scientists who reported unwanted advances by Yuval Peres to persons that leading figures in the CS community describe as “people we trust without a shred of doubt”. Source: a letter circulated by Irit Dinur, Ehud Friedgut and Oded Goldreich.

The details offered by these women of the sexual harassment they experienced are horrific and corroborate each other. His former Ph.D. student (#4 above) describes, in a harrowing letter included in the University of Washington Freedom of Information Act (FOIA) disclosed report, sexual harassment she experienced over the course of two years, and many of the details are similar to what is described by another victim here. The letter describes sexual harassment that had its origins when the student was an undergraduate (adding insult to injury, the University of Washington did not redact her name in the FOIA disclosed report). I had extreme difficulty reading some of the descriptions, and believe the identity of the victim should be kept private despite the University of Washington FOIA report, but am including one excerpt here so that it’s clear what exactly these allegations entail (the letter is 4.5 pages long):

Trigger warning: description of sexual harassment and sexual assault

“While walking down a street he took my hand, I took it away with pressure but he grabbed it by force. I was pretty afraid of getting in a fight with my PhD advisor. He stroked my hand with his fingers. I said stop, but he ignored it. I started talking about math intending to make the situation less intimate. But he used me being distracted and put his arms around my waist touching my bud. I was in shock. We came by a bench. He asked me to sit down. I removed his hands and sat down far from him. He came closer and told me that I had a body like a barbie doll. I changed topic again to math, but he took my hand and kissed the back of my hand. I freed my hand with a sudden move, and saw him leaning towards me touching my hair and trying to kiss me. I felt danger and wanted to go home. Yuval was again holding my hand, but this time there was no resistance from me. I thought if I let him hold my hand it is less likely that he harms me. Arriving at my home he tried to give me a kiss. I was relieved when he drove away.”

The victim sent this letter to the chairs of the mathematics and computer science departments at the University of Washington and made a request:

“I am not the only female who was sexually harassed by Yuval Peres and I am convinced that I was not the last one. Therefore, I hope with this report that you take actions to prevent incidents like this from happening again.”

Instead of passing on the complaint to Title IX, and contrary to claims by some of Yuval Peres’ colleagues that appear in the University of Washington FOIA disclosure report that the case was investigated, the chairs of the University of Washington math and computer science departments (in a jointly signed letter) offered Yuval Peres a path to avoiding investigation:

“As you know from our e-mail to you [last week], your resignation as well as an agreement not to seek or accept another position at the University will eliminate the need for the University to investigate the allegations against you.”

Indeed, Yuval Peres resigned within two months of the complaint with no investigation ever taking place. This is the email the victim received afterwards from the chair of the mathematics department, in response to her request that “I hope with this report that you take actions to prevent incidents like this from happening again”:

“I believe this resolution [Yuval Peres’ resignation] has promptly and effectively addressed your concerns.”

At least 8 women have since claimed that they were sexually harassed.

Seminar and a dinner

As is customary with invited speakers, the organizer of the seminar today wrote to colleagues and students in the math department at UC Davis on Monday letting people know that “there will be a dinner afterward, so please let me know if you are interested in attending.”

Here is a description of a dinner Yuval Peres took his Ph.D. student to, and a summary of the events that led to him and his Ph.D. student walking down the street when he forcibly grabbed her hand:

Trigger warning: description of sexual harassment and sexual assault

“I tried to keep the dinner short, but suddenly he seemed to have a lot of time. He paid in cash in contrast to dinners with other students, and offered to take me home. In his car half way to my place he said he would only take me home if I show him my room (I was living in a shared apartment with other people). I thought it was a joke and said no. He laughed and grabbed my hand. Arriving at home I said goodbye. But when I got out of the car he said that I promised to show him my room. I said that I did not. However, he followed me to the backdoor of the house. Fortunately some of my roommates were at home. It bothered Yuval that we were not alone at my home, so he said we should take a walk outside. I felt uncomfortable but I still needed to talk about my PhD thesis work. While walking down the street he took my hand, I took it away with pressure but he grabbed it with force…”

I wonder how many graduate students at UC Davis will feel comfortable signing up for dinner with Yuval Peres tonight, or even be able to handle attending his seminar after reading of all the sexual harassment allegations against him?

The challenge is particularly acute for women. I know this from comments in the reports of sexual harassment that I’ve read, from the University of Washington FOIA disclosed report, and from personal communication with multiple women who have worked with him or had to deal with him. Isn’t holding a seminar (an educational program) that women are afraid to attend, and from which they are therefore de facto excluded and denied benefit, in a department that depends heavily on federal funding, a Title IX violation? Title IX federal law states that

“No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance.”

An opinion

It’s outrageous that UC Davis’ math department is hosting Yuval Peres for a seminar and dinner today.

[Update November 10th, 2019: after reading this post a former Ph.D. student at UC Berkeley wrote that “Another PhD student in Berkeley probability and I both experienced this as well. About time this is called out so no more new students are harassed.“]

The arXiv preprint server has its roots in an “e-print” email list curated by astrophysicist Joanne Cohn, who in 1989 had the idea of organizing the sharing of preprints among her physics colleagues. In her recollections of the dawn of the arXiv,  she mentions that “at one point two people put out papers on the list on the same topic within a few days of each other” and that her “impression was that because of the worldwide reach of [her] distribution list, people realized it could be a way to establish precedence for research.” In biology, where many areas are crowded and competitive, the ability to time stamp research before a possibly lengthy journal review and publication process is almost certainly one of the driving forces behind the rapid growth of the bioRxiv (ASAPbio 2016 and Vale & Hyman, 2016).

However, the ability to establish priority with preprints is not, in my opinion, what makes them important for science. Rather, the value of preprints is in their ability to accelerate research via the rapid dissemination of methods and discoveries. This point was eloquently made by Stephen Quake, co-president of the Chan Zuckerberg Biohub, at a Caltech Kavli Nanoscience Institute Distinguished Seminar Series talk earlier this year. He demonstrated the impact of preprints and of sharing data prior to journal publication by way of example, noting that posting of the CZ Biohub “Tabula Muris” preprint along with the data directly accelerated two different unrelated projects: Cusanovich et al. 2018 and La Manno et al. 2018. In fact, in the case of La Manno et al. 2018, Quake revealed that one of the corresponding authors of the paper, Sten Linnarsson, had told him that “[he] couldn’t get the paper past the referees without using all of the [Tabula Muris] data”:

Moreover, Quake made clear that the open science principles practiced with the Tabula Muris preprint were not just a one-off experiment, but fundamental Chan Zuckerberg Initiative (CZI) values that are required for all CZI internal research and for publications arising from work the CZI supports: “[the CZI has] taken a pretty aggressive policy about publication… people have to agree to use biorXiv or a preprint server to share results… and the hope is that this is going to accelerate science because you’ll learn about things sooner and be able to work on them”:

Indeed, on its website the CZI lists four values that guide its mission and one of them is “Open Science”:

Open Science
The velocity of science and pace of discovery increase as scientists build on each others’ discoveries. Sharing results, open-source software, experimental methods, and biological resources as early as possible will accelerate progress in every area.

This is a strong and direct rebuttal to Dan Longo and Jeffrey Drazen’s “research parasite” fear mongering in The New England Journal of Medicine.


I was therefore disappointed with the CZI after failing, for the past two months, to obtain the code and data for the preprint “A molecular cell atlas of the human lung from single cell RNA sequencing” by Travaglini, Nabhan et al. (the preprint was posted on the bioRxiv on August 27th 2019). The interesting preprint describes an atlas of 58 cell populations in the human lung, which include 41 of 45 previously characterized cell types or subtypes along with 14 newly discovered ones. Of particular interest to me, in light of some ongoing projects in my lab, is a comparative analysis examining cell type concordance between human and mouse. Travaglini, Nabhan et al. note that 17 molecular types have been gained or lost since the divergence of human and mouse. The results are based on large-scale single-cell RNA-seq (using two technologies) of ~70,000 human lung and peripheral blood cells.

The comparative analysis is detailed in Extended Data Figure S5 (reproduced below), which shows scatter plots of (log) gene counts for homologous human and mouse cell types. For each pair of cell types, a sub-figure also shows the correlation between gene expression in the two species, and divergent genes are highlighted:

[Figure: Extended Data Figure S5 of Travaglini, Nabhan et al., comparing homologous human and mouse cell types]

I wanted to understand the details behind this figure: how exactly were cell types defined and homologous cell types identified? What was the precise thresholding for “divergent” genes? How were the ln(CPM+1) expression units computed? Some aspects of these questions have answers in the Methods section of the preprint, but I wanted to know exactly; I needed to see the code. For example, the manuscript describes the cluster selection procedure as follows: “Clusters of similar cells were detected using the Louvain method for community detection including only biologically meaningful principle [sic] components (see below)” and looking “below” for the definition of “biologically meaningful” I only found a descriptive explanation illustrated with an example, but with no precise specification provided. I also wanted to explore the data. We have been examining some approaches for cross-species single-cell analysis and this preprint describes an exceptionally useful dataset for this purpose. Thus, access to the software and data used for the preprint would accelerate the research in my lab.
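
To illustrate the kind of ambiguity I mean: even the ln(CPM+1) units admit choices (which counts, what filtering, what pseudocount). The following is a generic sketch of such a transformation on a toy count matrix; it is emphatically not the Travaglini, Nabhan et al. pipeline, which is exactly what I have been unable to obtain:

import numpy as np

# Generic ln(CPM+1) transform of a cells-by-genes count matrix (toy data).
counts = np.random.poisson(1.0, size=(100, 2000))        # 100 cells x 2000 genes
cpm = counts / counts.sum(axis=1, keepdims=True) * 1e6   # counts per million, per cell
log_cpm = np.log1p(cpm)                                  # natural log with a pseudocount of 1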

But while the preprint has a sentence with a link to the software (“Code for demultiplexing counts/UMI tables, clustering, annotation, and other downstream analyses are available on GitHub (https://github.com/krasnowlab/HLCA)”) clicking on the link merely sends one to the Github Octocat.

[Screenshot: the GitHub 404 (Octocat) page returned by the linked HLCA repository]

The Travaglini, Nabhan et al. Github repository that is supposed to contain the analysis code is nowhere to be found. The data is also not available in any form. The preprint states that “Raw sequencing data, alignments, counts/UMI tables, and cellular metadata are available on GEO (accession GEOXX).” The only data a search for GEOXX turns up is a list of prices on a shoe website.

I wrote to the authors of Travaglini, Nabhan et al. right after their preprint appeared noting the absence of code and data and asking for both. I was told by one of the first co-authors that they were in the midst of uploading the materials, but that the decision of whether to share them would have to be made by the corresponding authors. Almost two months later, after repeated requests, I have yet to receive anything. My initial excitement for the Travaglini, Nabhan et al. single-cell RNA-seq has turned into disappointment at their zero-data RNA-seq.

🦗 🦗 🦗 🦗 🦗 

This state of affairs, namely the posting of bioRxiv preprints without data or code, is far too commonplace. I was first struck with the extent of the problem last year when the Gupta, Collier et al. 2018 preprint was posted without a Methods section (let alone with data or code). Also problematic was that the preprint was posted just three months before publication while the journal submission was under review. I say problematic because not sharing code, not sharing software, not sharing methods, and not posting the preprint at the time of submission to a journal does not accelerate progress in science (see the CZI Open Science values statement above).

The Gupta, Collier et al. preprint was not a CZI related preprint but the Travaglini, Nabhan et al. preprint is. Specifically, Travaglini, Nabhan et al. 2019 is a collaboration between CZ Biohub and Stanford University researchers, and the preprint appears on the Chan Zuckerberg Biohub bioRxiv channel:

[Screenshot: the preprint listed on the Chan Zuckerberg Biohub bioRxiv channel]

The Travaglini, Nabhan et al. 2019 preprint is also not an isolated example; another recent CZ Biohub preprint from the same lab, Horns et al. 2019,  states explicitly that “Sequence data, preprocessed data, and code will be made freely available [only] at the time of [journal] publication.” These are cases where instead of putting its money where its mouth is, the mouth took the money, ate it, and spat out a 404 error.


To be fair, sharing data, software and methods is difficult. Human data must sometimes be protected due to confidentiality constraints, thus requiring controlled access through repositories such as dbGaP, which can be difficult to set up. Even with unrestricted data, sharing can be cumbersome. For example, the SRA upload process is notoriously difficult to manage, and the lack of metadata standards can make organizing experimental data, especially sequencing data, complicated and time consuming. The sharing of experimental protocols can be challenging when they are in flux and still being optimized while work is being finalized. And when it comes to software, ensuring reproducibility and usability can take months of work in the form of wrangling Snakemake and other workflows, not to mention the writing of documentation. Practicing Open Science, I mean really doing it, is difficult work. There is a lot more to it than just dumping an advertisement on the bioRxiv to collect a timestamp. By not sharing their data or software, preprints such as Travaglini, Nabhan et al. 2019 and Horns et al. 2019 appear to be little more than a cynical attempt to claim priority.

It would be great if the CZI, an initiative backed by billions of dollars with hundreds of employees, would truly champion Open Science. The Tabula Muris preprint is a great example of how preprints that are released with data and software can accelerate progress in science. But Tabula Muris seems to be an exception for CZ Biohub researchers rather than the rule, and actions speak louder than a website with a statement about Open Science values.
