comment by joelthelion

Note that as bad as this is, it only affects some statistical methods used to analyze fMRI scans, not fMRI as a whole. fMRI is an amazing tool.

I think once again we are seeing the failure of the "publish or perish" model of academic research.




jadedog  ·  881 days ago

Could you explain this a little more? How does the inaccuracy of some statistical methods used to analyze fMRI scans not affect fMRI as a whole?

Without knowing a lot about it, it would seem like if something affected a subset of something, then the whole thing is affected.

Devac  ·  881 days ago

    Without knowing a lot about it, it would seem like if something affected a subset of something, then the whole thing is affected.

That does not have to be true. The article talks about type I errors (also called alpha errors), which is just a fancy name for a false-positive result.

Now, when you are testing multiple hypotheses, you run multiple statistical tests on the same gathered data set. Each test carries its own chance of a false positive, so those chances accumulate across the family of tests, and you have to account for that. You do it by choosing a control level (how strict you want to be) for the error across all the tests together, not just each one individually. If you don't account for it, you have committed what is called a family-wise error.
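A quick numeric sketch of how the error accumulates, using made-up numbers (20 tests at the usual 0.05 level; the Bonferroni correction shown is one common fix, not necessarily what the article's authors used):

```python
# Illustration: family-wise error rate under multiple testing.
# All numbers are illustrative, not from the article.

alpha = 0.05   # per-test false-positive (type I) probability
m = 20         # number of hypotheses tested on the same data

# Chance of at least one false positive across all m tests,
# assuming the tests are independent:
fwer_uncorrected = 1 - (1 - alpha) ** m
print(f"Uncorrected family-wise error rate: {fwer_uncorrected:.2f}")  # ~0.64

# Bonferroni correction: test each hypothesis at alpha/m instead.
fwer_bonferroni = 1 - (1 - alpha / m) ** m
print(f"Bonferroni-corrected rate: {fwer_bonferroni:.3f}")  # ~0.049
```

So with 20 uncorrected tests you have roughly a two-in-three chance of at least one false positive, even though each individual test looks fine at the 5% level.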

jadedog  ·  881 days ago

OK, I think I might be getting it. You can tell me if this sounds right.

This is the example I was thinking of (from the wiki you linked on alpha errors).

    Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in world. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition. The lowest rate in the world is in the Netherlands, 1%. The lowest rates are generally in Northern Europe where mammography films are read twice and a high threshold for additional testing is set (the high threshold decreases the power of the test)

The false positives are higher in the US than in Northern Europe, not because of the equipment or procedure used, but because of the stringency of the test.

Because of that, false positives might be high for a certain set of data where fMRI technology is used but not for all data where fMRI technology is used.
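The quoted mammography numbers can be checked with Bayes' theorem. Only the 15% false-positive rate comes from the quote; the prevalence and sensitivity below are my own illustrative assumptions:

```python
# Rough Bayes check of the mammography numbers in the quote.
# prevalence and sensitivity are assumed for illustration;
# only fp_rate (15%) comes from the quoted passage.

prevalence = 0.008    # assumed fraction of screened women with cancer
sensitivity = 0.90    # assumed P(positive | cancer)
fp_rate = 0.15        # P(positive | no cancer), from the quote

p_positive = sensitivity * prevalence + fp_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(cancer | positive) = {ppv:.3f}")  # ~0.046
```

With those assumptions, roughly 95% of positive results are false positives, which lines up with the "90–95%" figure in the quote.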

Devac  ·  881 days ago

It could be that, but there are also procedural factors, like in the example you cited.

For example, to my knowledge (at least in Poland), when you are tested for HIV or other infections that can linger undetected, they take two samples. That way you need a follow-up only if the two samples disagree, since that is how you usually catch a false positive. There is still a fraction of people for whom both samples come back as false positives, but it is order(s) of magnitude smaller than with a single sample. This is actually a great example of a Bayesian process, one of the more important concepts in probability and the applied sciences.
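The two-sample idea can be sketched as Bayesian updating: the posterior after the first positive result becomes the prior for the second, independent sample. The prevalence and test characteristics below are made-up illustrative numbers, not real HIV-test figures:

```python
# Sketch of the two-sample idea as Bayesian updating.
# All numbers are illustrative assumptions.

prior = 0.001        # assumed prevalence in the tested population
sensitivity = 0.99   # assumed P(positive | infected)
fp_rate = 0.01       # assumed P(positive | not infected)

def posterior(prior, sensitivity, fp_rate):
    """P(infected | positive) by Bayes' theorem."""
    p_pos = sensitivity * prior + fp_rate * (1 - prior)
    return sensitivity * prior / p_pos

one = posterior(prior, sensitivity, fp_rate)
# First posterior becomes the prior for the second, independent sample:
two = posterior(one, sensitivity, fp_rate)
print(f"After one positive sample:  {one:.2f}")   # roughly 0.09
print(f"After two positive samples: {two:.2f}")   # roughly 0.91
```

With one positive sample, the chance of actually being infected is still under 10% because the condition is rare; a second independent positive pushes it above 90%, which is why the two-sample procedure cuts false positives by an order of magnitude.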

I can try and write an example, but I think it will save both of us some time if I just link this:

and here is some extra footage that is also relevant:

I can explain it further if you have follow-up questions after the video, so don't take this as me brushing you off ;).

Devac  ·  883 days ago

Perhaps you are right, but to my knowledge sloppy stats must be shown, demystified, and corrected, then kept as a sort of warning for future reference, so to speak. In the end, the reason behind a publication (the publish-or-perish pressure you mentioned) is less important than the results it brings to the field.