http://plantbiologyreview.blogspot.com.es/2015/07/fund-my-research-and-win-world-cup.html

]]>What are the unnoticed harms of the NHST paradigm for 'omics analyses? I use NHST almost every day, so I am really interested in the potential pitfalls of this paradigm.

]]>kj,

I am not sure I got your point.

In the fitting process, one assumes the model is correct (or at least good enough) for the data, and simply tries to infer the parameters given that model.

However, a necessary step afterwards is to check how likely it is that your best-fit parameters could have generated the data. This is called goodness of fit.
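As a minimal sketch of what that check looks like (synthetic data and an assumed linear model, not from any of the papers discussed), one can fit the parameters and then compare the chi-square of the residuals at the best fit against its expected distribution:

```python
# Sketch: fit a model, then run a goodness-of-fit check on the best fit.
# The data, noise level, and linear model here are all assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(0)

def model(x, a, b):
    # Assumed linear form; the best fit alone says nothing about
    # whether this form is adequate for the data.
    return a * x + b

x = np.linspace(0, 10, 50)
sigma = 0.5
y = 2.0 * x + 1.0 + rng.normal(0, sigma, x.size)  # synthetic data

popt, _ = curve_fit(model, x, y, sigma=np.full(x.size, sigma))

# Goodness of fit: chi-square of the residuals at the best-fit
# parameters, compared to a chi-square distribution with the
# appropriate degrees of freedom.
chisq = np.sum(((y - model(x, *popt)) / sigma) ** 2)
dof = x.size - len(popt)
p_gof = chi2.sf(chisq, dof)  # a tiny p here means the best fit is
                             # unlikely to have generated the data
print(chisq, dof, p_gof)
```

If `p_gof` is tiny, publishing the best-fit parameters anyway is exactly the mistake described above: the fitted model itself should be rejected.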

I have seen cases where people did not bother with this step and published best-fit results even though the best fit was unlikely to explain the data.

In effect, they used a null model that should have been rejected.

[I am not going to post examples here, as I am in touch with the authors and do not want to expose them.]

]]>Well, that's kind of the point, isn't it? All you can say is that the data are consistent with your particular null model or they aren't.

The most you can infer from NHST is that “my null model was not reasonable”. That's what a “positive” result means.

]]>See here: http://www.nature.com/ncomms/2015/150609/ncomms8412/full/ncomms8412.html?WT.ec_id=NCOMMS-20150610

Many of their p-values are ~ 1e-100.

]]>I think it is reasonable to question the red-circled p-value in the first paper, since the box plot clearly shows overlap between the two groups. The p-value can still be close to zero if the sample sizes of the two groups are very large; the authors probably rounded it to 0.0. It would be better to report it as something like <10^{-5}.
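The point that heavy overlap is compatible with a near-zero p-value is easy to demonstrate on synthetic data (the group means and sample size below are assumptions, chosen only for illustration):

```python
# Sketch: two heavily overlapping groups can still yield an
# essentially-zero p-value when the samples are very large.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n = 1_000_000
a = rng.normal(0.00, 1.0, n)  # group means differ by only 0.01 sd,
b = rng.normal(0.01, 1.0, n)  # so box plots would overlap almost completely

t, p = ttest_ind(a, b)
print(p)  # tiny despite the overlap; better reported as a bound than as 0.0
```

With a million samples per group, even a 0.01-standard-deviation shift is detected at high significance, which is why overlap in a box plot alone does not make a tiny p-value suspicious.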

However, I'm trying to understand the red-circled p-value in the second paper. Do you mean we should never see a p-value of 1.0? Looking at other entries in the table, some are very tiny (less than 10^{-16}), so it is reasonable to think 1.0 is probably a rounded number. Very few people would write a p-value as something like 1 - 1.8×10^{-14} (or an impossible 0.9999999999999…).

I don't understand your points on the red-circled p-values. Yes, it's impossible to have a p-value of exactly 0.0 or 1.0. However, with rounding, a p-value of 1.0 is not a problem, whereas a p-value of 0.0 would be better reported as something like <10^{-5}.
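The asymmetry between the two roundings can be shown directly (the specific values below are illustrative, not taken from either paper):

```python
# Sketch: rounding a p-value just below 1 to 1.0 loses almost nothing,
# but rounding a tiny p-value to 0.0 destroys the information; a bound
# such as "<10^-5" preserves it.
p_high = 1 - 1.8e-14   # just below 1
p_low = 3e-101         # tiny but strictly positive

print(round(p_high, 3))  # 1.0 -- harmless rounding
print(round(p_low, 3))   # 0.0 -- better written as "<10^-5" or "~1e-100"
```

This is why a reported 1.0 is defensible while a reported 0.0 is not: the former differs from the truth by a negligible amount, the latter by infinitely many orders of magnitude on a log scale.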

]]>What about p-value’s little brother – Goodness of fit?

]]>People infer parameters by fitting a (mathematical) model to some real data, but do not bother to analyze how reasonable it is to describe the data with their model.

Unfortunately, I have seen some ridiculous cases, even in the most prestigious journals, where one quick look at the figures/data shows that the model is inadequate for the data.
