I'm lucky that this has been stressed to me repeatedly during my studies (Bayesian statistics FTW).

We had a tea-time discussion about this recently, too! Let me share the cool website he showed us: you can play around with it and see how the p-value threshold affects your Type I and Type II error rates. It's really enlightening to see how the different factors influence each other.

This opposition to the p-value is nothing new, either. One of the first papers I read was ["The earth is round (p<.05)"](http://ist-socrates.berkeley.edu/~maccoun/PP279_Cohen1.pdf) by Jacob Cohen. I heartily recommend reading it if you ever get anywhere near a t-test. It should be required reading.

- “The p-value was never intended to be a substitute for scientific reasoning.”

This is a far better example than I could come up with:

- One of the most important messages is that the p-value cannot tell you if your hypothesis is correct. Instead, it’s the probability of your data given your hypothesis. That sounds tantalizingly similar to “the probability of your hypothesis given your data,” but they’re not the same thing, said Stephen Senn, a biostatistician at the Luxembourg Institute of Health. To understand why, consider this example. “Is the pope Catholic? The answer is yes,” said Senn. “Is a Catholic the pope? The answer is probably not. If you change the order, the statement doesn’t survive.”
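Senn's point can be made concrete with a quick simulation. The numbers below are purely hypothetical assumptions for illustration: suppose only 10% of tested hypotheses describe a real effect, and tests against real effects have 80% power. Even with a strict p < .05 cutoff, the share of "significant" results where the null is actually true comes out far larger than 5% — which is exactly why P(data | hypothesis) is not P(hypothesis | data):

```python
import random

random.seed(0)

# Hypothetical setup: 10% of tested hypotheses are real effects,
# tests have 80% power, and we call p < 0.05 "significant".
# Under a true null, p-values are uniform, so P(p < .05 | null) = .05.
n_studies = 100_000
base_rate_real = 0.10   # assumed prior: P(effect is real)
alpha = 0.05            # significance threshold
power = 0.80            # assumed P(p < .05 | real effect)

significant_and_null = 0
significant_total = 0
for _ in range(n_studies):
    effect_is_real = random.random() < base_rate_real
    if effect_is_real:
        significant = random.random() < power
    else:
        significant = random.random() < alpha
    if significant:
        significant_total += 1
        if not effect_is_real:
            significant_and_null += 1

# P(null | significant): the false-positive share among significant results
fdr = significant_and_null / significant_total
print(f"Share of significant results that are false positives: {fdr:.2f}")
```

Analytically this is (0.9 × 0.05) / (0.9 × 0.05 + 0.1 × 0.8) ≈ 0.36: roughly a third of the "discoveries" are noise, even though every single one cleared p < .05. Change the order of the conditioning and the statement doesn't survive.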

And I too have fallen prey to the idea that the p-value is the probability that the results are due to chance. It's a simplification that just *feels* true.