I wish the general trend of all research were something other than "everything is fucking bad for you, it's all your fucking fault, you are going to fucking die." Their net function in our society is to prophylactically prime assholes to preserve their precious empathy reserves for cats. A study with an n of 6 about how babies who were fed peanut butter in the womb supposedly don't develop allergies meant that for the next six years, every other asshole I met told me the reason my kid carries an epi-pen is that she didn't get enough peanuts as a baby. When I tell them that she almost went to the emergency room when our babysitter gave her peanuts at nine months, they point out that obviously my wife didn't eat enough peanuts. When I tell them that my wife lived off of almonds and that my daughter is even more allergic to almonds than to peanuts, they say well, obviously the study only proved its point with peanuts. Then when I tell them that my daughter is the youngest patient in the AR101 protocol, the trial that made that very peanut-allergy treatment available under an FDA protocol, they talk about how shamefully expensive medicine is. The problem isn't the studies. The problem is that the only way to communicate science is by weaponizing it.
The problem is the studies, and the only way to communicate science is to weaponize it. So many of these studies are poorly thought out, with no real hypothesis testing: a bunch of shit gets measured, something turns up, and that something gets reported. Any rat study with an n of 6 should elicit skepticism. Any human study with an n of 6 should be thrown in the garbage. But that's not what happens. Some journal will publish it, and if the conclusion is provocative enough, some good journal will publish it. Even if the authors are humble enough to say, "This is a small n, but it justifies spending some real money to figure out if this is real," that's not how it will be positioned in the press. And then every dickhead with a Science Daily subscription is all of a sudden an expert on immunology. Same goes for the big correlational studies, except those carry gravitas because they have lots of people, so they must be true. But they suffer from the same basic problems: no controls, no hypotheses, no randomization, etc., etc. I think we've learned a lot about science literacy in these past 11 months (as if we needed any more data on that topic). And I can't say we've learned anything good.
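To put a number on the "measure a bunch of shit and report whatever pops out" problem, here's a rough simulation sketch (plain numpy/scipy; the parameters are made up purely for illustration: 20 outcomes, two groups of 6, no real effect anywhere). It just counts how often a null study still coughs up at least one p < 0.05 worth reporting.

    # Rough sketch: how often does a study with no real effect "find something"
    # when it measures a pile of outcomes with n = 6 per group?
    # All parameters are made up for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies = 10_000    # simulated null studies
    n_outcomes = 20       # "a bunch of shit" measured per study
    n_per_group = 6       # the infamous n of 6

    found_something = 0
    for _ in range(n_studies):
        # Both groups come from the same distribution: there is nothing to find.
        treated = rng.normal(size=(n_outcomes, n_per_group))
        control = rng.normal(size=(n_outcomes, n_per_group))
        _, p = stats.ttest_ind(treated, control, axis=1)
        if (p < 0.05).any():    # at least one "significant" outcome to report
            found_something += 1

    print(f"null studies with something to report: {found_something / n_studies:.0%}")
    # With 20 independent looks you expect roughly 1 - 0.95**20, i.e. about 64%,
    # of no-effect studies to still hand you a publishable-looking p < 0.05.

Prespecify one primary outcome and that rate falls back to the nominal 5%; measure twenty and pick the winner and it doesn't.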
It's a good cartoon, but I actually think it overcomplicates the issue. It hits the nail on the head in a single panel: having to prove that you prespecified your analysis before you took any measurements. That's the only way to ensure that your a priori odds match your statistical analysis. I also think that publishing per se is less of an issue than grant funding. If every scientist weren't on the hook to pay his/her own salary via federally funded grants, that immense pressure wouldn't exist. There's no university in America that gives a fuck what your publication rate is if you're bringing in a million per year.