- Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”.
The best I can say is that there are a lot of perverse incentives in the business of science. I think 'untrue' is a bit harsh, but I also think that we're often put in a position where we have to speculate too much on what results 'mean'. I would say that when I review papers, I generally recommend that at least two thirds be rejected out of hand because the results look sloppy, unfinished, and so on. Sloppy results abound, sadly.
This problem has a lot of sides, most of which it's hard to see how to address as a concerned layman. The one thing we non-scientists definitely can affect is funding, both by lobbying our political representatives to support funding for reproducibility and, for those of us with the financial means, by directly funding it ourselves. The Center for Open Science is a 501(c)(3) nonprofit organization that, among other things, funds the Reproducibility Initiative. The goal of the initiative is pretty simple: to independently validate significant research results. They're focused primarily on biology at the moment, and there is already work underway to replicate results in cancer cell biology. I'm a donor. This doesn't fix all the systemic problems with science that contribute to the high error rate, but it's at least a concrete step we can take to address one of the problems.
The problem may be a lot more complex than it looks. @tigrennaten@ has posted a link on a very similar topic here.
I've recently started to do research work at my medical school. It's mostly done by statistical analysis of patient charts, trying to find a trend between poorly controlled diabetes and the likelihood of having a certain type of heart attack. The literature on this issue heavily contradicts itself, and most of it is riddled with problems like poor sample sizes and errors in methodology. By the end of the summer I doubt I'll have accomplished anything more than contributing to this noise. I'm interested in a career in academia, and someday I hope to make real contributions to the scientific community. But right now I'm at the bottom of the totem pole and, unless I try to apply for some Ivy League residency, the quantity of my publications is far more important than their quality. Since we have a system that encourages this type of thinking, you end up with a large body of low-quality science published in a large number of low-quality journals.
Speaking in a scientific sense, I would say that is an underestimation. EDIT: Sorry, I was interrupted mid-comment. IMO somewhere around 50% of submitted papers contain sloppy science, if not some degree of deception; that is, the authors are selective about what data they report. For example, if you run an experiment four times, and three of those times it works in a similar way, how do you treat that fourth data set? Can you write it off to some sort of experimental mistake or oversight? Did you mix up the concentrations, or were the cells somehow different to begin with? When getting funded or getting fired hinges on the result, you drop that data set and hope the other three are meaningful. There is no time or reward for figuring it out; actually, you are punished for doing so. If you include that data set, idiot reviewers will say that your data is not 'significant' and won't let you publish your results. In addition to issues like these, studies are often underpowered, poorly analyzed, and poorly designed to begin with. Often they aren't even asking the right question. I said to my boss the other day: if you knew a store was getting hit by shoplifters a lot, what would you say is the mechanism? We ask stupid questions like this in science all the time. Someone cuts off the hands of everyone entering the store and sees that there is far less shoplifting. Aha! It was their hands; those are shoplifting appendages! That said, in time, progress is made and some sort of scientific truth prevails. The system is not optimized for its elucidation, however.
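To make the "drop the fourth data set" point concrete, here is a rough simulation of my own (not from the comment above): four replicates of an experiment in which the treatment genuinely does nothing, analyzed once with all the data and once after quietly dropping whichever replicate hurts the result most. The sample sizes, the number of simulations, and the 5% threshold are all just assumptions for illustration.

```python
# Sketch: selective reporting under a true null effect.
# All numbers below are illustrative assumptions, not values from the thread.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N_SIMS, N_REPLICATES, N_PER_ARM = 5000, 4, 10

honest_hits = selective_hits = 0
for _ in range(N_SIMS):
    # Four independent replicates; treatment and control are drawn from the
    # SAME distribution, so any "effect" found is a false positive.
    treat = rng.normal(0, 1, (N_REPLICATES, N_PER_ARM))
    ctrl = rng.normal(0, 1, (N_REPLICATES, N_PER_ARM))

    # Honest analysis: pool all four replicates.
    p_all = ttest_ind(treat.ravel(), ctrl.ravel()).pvalue
    honest_hits += p_all < 0.05

    # Selective analysis: try dropping each replicate in turn and keep the
    # drop that gives the best p-value -- i.e. discard the one that "didn't work".
    p_dropped = min(
        ttest_ind(np.delete(treat, i, axis=0).ravel(),
                  np.delete(ctrl, i, axis=0).ravel()).pvalue
        for i in range(N_REPLICATES)
    )
    selective_hits += p_dropped < 0.05

print(f"False-positive rate, all data kept:     {honest_hits / N_SIMS:.3f}")   # close to 0.05
print(f"False-positive rate, worst one dropped: {selective_hits / N_SIMS:.3f}")  # noticeably higher
```

Even with no real effect anywhere, the "drop the one that didn't work" analysis crosses p < 0.05 well above the nominal 5%, which is exactly why a reader can't tell an honest three-out-of-four from a cherry-picked one.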
Yup. Null results are almost never publishable, which is ridiculous. Every lab I've been in has dozens of old experiments that were never published because they "didn't work", which makes you wonder about any paper you read. If there are plenty of unpublished versions of one study with a null result, then a published study of the same experiment with a positive result probably found a difference by chance. But we have no way of knowing whether an identical experiment has been done before, by how many people, or how many times, unless it's published. But you can't publish null results, and the cycle continues.
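A quick back-of-the-envelope number for the file-drawer effect described above (my own illustration; the 20-labs figure is purely an assumption):

```python
# If many labs independently run the same experiment on a true null effect and
# only p < 0.05 results are publishable, a "positive" paper shows up far more
# often than 5% of the time. The lab count is an assumed, illustrative number.
n_labs, alpha = 20, 0.05
p_at_least_one_false_positive = 1 - (1 - alpha) ** n_labs
print(f"P(at least one lab publishes a chance 'effect'): {p_at_least_one_false_positive:.2f}")
# ~0.64 -- while the null results that would put it in context stay in the drawer.
```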
I wanted to go into psych research, but stuff like this really turned me off of it, and I decided to go into clinical psych instead. "Publish or Die" is terrible for science.
100 years ago, "race science" was accepted as fact by a lot of educated people and many scientists, despite being complete bullshit, because it aligned with people's prejudices. We should not fool ourselves into thinking we are above such prejudices today. Research data must be interpreted to make sense, and how we interpret that data can be biased by many things. Or we can unconsciously set up experiments that "prove" what we already believe.