My normal response to headlines like the one this source is referring to is trusty PhD comics cynicism:
But this case seems to go above and beyond the usual level of hastily controlled trials by: (1) not actually collecting any new data, which wouldn't be so bad if not for (2) re-using data and metrics derived from a handful of small and poorly powered surveys, and (3) misrepresenting the purpose and conclusions of those trials as being anywhere in the same ballpark as their own.
I've got no dog in the fight over home births (past the occasional story from my mother, who works in labor and delivery, and potential future ones from a close relative going into OB/GYN), and I'd half-assumed the authors' study had some element of truth to it when I first saw the headlines. But then I looked and saw that the largest source of data was collected in 2002, almost a decade before JHU's infamous checklist study; that the top comment on HN's story was an anecdote about a medical procedure gone very badly; and that the author's point was completely ancillary to any of the online discussion:
Yet, in every article I’ve seen about it, it’s described as a study. In reality, it’s more an op-ed calling for better reporting of deaths from medical errors, with extrapolations based on studies with small numbers. That’s not to denigrate the article just for that. Such analyses are often useful; rather it’s to point out how poorly this article has been reported and how few seemed to notice that this article adds exactly nothing new to the primary scientific literature.
FWIW, the study's first author is a proponent of transparency in medicine, which is next to impossible for me not to get behind. But it's hard not to see this as just another example of why medical and scientific communication with the media and the public sucks.