- None of that stopped Makary and Daniel from taking this one study of fewer than 1,000 hospital admissions and extrapolating it to 400,000 preventable deaths in hospitals per year. That is the peril of extrapolating from such small numbers.
- I also note that, like Classen et al, Landrigan et al made no effort to extrapolate their findings to the whole of the United States. That was not the purpose of their study; rather, it was to ask whether rates of adverse events declined in North Carolina hospitals from 2002 to 2007. None of that stopped Makary and Daniel from extrapolating from Landrigan's data to nearly 135,000 preventable deaths.
- So just from the fact that this is a study of Medicare recipients, who are much older than non-Medicare recipients, you know that this study is going to skew towards sicker patients and a higher rate of adverse events, even if the care they received was completely free from error. Still, that didn't stop Makary and Daniel from including this study and estimating 251,000 potentially preventable hospital deaths per year.
- How many deaths in the US are due to medical errors? The answer is: I don't know! And neither do Makary and Daniel, nor anyone else, for that matter. I do know that there might be a couple of hundred thousand possibly preventable deaths in hospitals per year, but that number might be much lower or much higher depending on how you define "preventable."
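The fragility of these national extrapolations is easy to see with a toy calculation. This is a minimal sketch with entirely hypothetical numbers (4 preventable deaths in a 1,000-admission review, 35 million annual US admissions; none of these figures come from the studies above), just to show how wide the uncertainty becomes once a small sample is scaled up:

```python
import math

# Hypothetical sample: 4 deaths judged preventable in 1,000 reviewed admissions.
deaths, sample = 4, 1_000
admissions_us = 35_000_000  # assumed national admission count, for illustration only

p = deaths / sample
# 95% normal-approximation confidence interval on the sample rate
se = math.sqrt(p * (1 - p) / sample)
lo, hi = max(0.0, p - 1.96 * se), p + 1.96 * se

# The same interval, scaled to the national level: the point estimate looks
# precise, but the interval spans roughly two orders of magnitude.
print(f"point estimate: {p * admissions_us:,.0f} deaths/year")
print(f"95% CI:         {lo * admissions_us:,.0f} to {hi * admissions_us:,.0f}")
```

With these made-up inputs, the point estimate is 140,000 deaths per year, but the interval runs from roughly 3,000 to roughly 277,000, which is the sense in which a headline number built from a sub-1,000-patient sample is mostly noise.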
The real problem, from both "sides" of this debate, is that nobody looks at the numbers; they look at interpretations of the numbers to score their points. "Medical errors are the third most common cause of death in the United States" is one way to report these numbers. Another is "imperfect treatment more likely to kill you than COPD." Still a third is "as patients age, opportunity to fuck up their care increases." The last two aren't likely to make headlines. It's like the Oregon birth study: are you going to go with "planned home births drop c-section rate from 25% to 5%?" Maybe "planned home births increase risk of fetal death from one in a thousand to two in a thousand?" Or shall we go with the tried and true "home births twice as deadly as hospital births?" It's funny to me that whatever your persuasion, you demand perfection from your opponent but argue semantics when the data makes you look bad. Compare and contrast: the Dutch decided to integrate both sides, and the home birth rate dropped by a third while complications dropped by half. But for some reason, in this country, we just can't go there.
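The Oregon framings above all come from the same two numbers; only the arithmetic lens changes. A minimal sketch, using just the one-in-a-thousand and two-in-a-thousand rates quoted in the text:

```python
# The two fetal death rates quoted above for the Oregon study.
hospital_risk = 1 / 1000  # planned hospital birth
home_risk     = 2 / 1000  # planned home birth

# Framing 1: relative risk -> "twice as deadly as hospital births"
relative_risk = home_risk / hospital_risk

# Framing 2: absolute difference -> "one extra death per 1,000 births"
absolute_diff = home_risk - hospital_risk

print(f"relative risk:     {relative_risk:.1f}x")
print(f"absolute increase: {absolute_diff * 1000:.0f} per 1,000 births")
```

A 2.0x relative risk and a 0.1 percentage point absolute increase are the same fact; which one a headline leads with is an editorial choice, not a statistical one.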
My normal response to headlines like the one this source is referring to is trusty PhD Comics cynicism: http://www.phdcomics.com/comics/archive/phd051809s.gif But this case seems above and beyond the usual hastily reported trial by: (1) not actually collecting any new data, which wouldn't be so bad if not for (2) re-using data and metrics derived from a handful of small and poorly powered surveys, and (3) misrepresenting the purpose and conclusions of those trials as being anywhere in the same ballpark as their own. I've got no stake in the game of home births (past the occasional story from my mother, who works in labor and delivery, and potential future ones from a close relative going into OB/GYN), and I'd half-assumed the authors had some element of truth to their study when I first saw the headlines. But then I look and see that the largest source of data was collected in 2002, almost a decade before JHU's infamous checklist study; that the top comment on HN's story is an anecdote about a medical procedure gone very bad; and that the author's point is completely ancillary to any of the online discussion. FWIW, the study's first author is a proponent of transparency in medicine, which is next to impossible for me not to get behind. But it's hard not to see this as just another example of why medical/scientific communication with the media/public sucks.

Yet, in every article I've seen about it, it's described as a study. In reality, it's more an op-ed calling for better reporting of deaths from medical errors, with extrapolations based on studies with small numbers. That's not to denigrate the article just for that; such analyses are often useful. Rather, it's to point out how poorly this article has been reported and how few seemed to notice that it adds exactly nothing new to the primary scientific literature.
I see shit like this all the time because the data on home birth is truly scarce. MANA has data, but they're so sick and tired of Amy Tuteur and her ilk cherry-picking data for flagrantly wrong conclusions that the data aren't public. So what you get are literature studies and nothing but literature studies.