There’s a major problem in health journalism: It’s wildly unreliable. As David H. Freedman points out in an excellent critique in the January/February issue of Columbia Journalism Review, the rate of “overall wrongness” in top medical journals is as much as _two thirds_–something even the most seasoned science reporters don’t point out. The resulting information conveyed to lay readers is, at best, confusing and, at worst, dead wrong.
In all areas of personal health, we see prominent media reports that directly oppose well-established knowledge in the field, or that make it sound as if scientifically unresolved questions have been resolved. The media, for instance, have variously supported and shot down the notion that vitamin D supplements can protect against cancer, and that taking daily low doses of aspirin extends life by protecting against heart attacks. Some reports have argued that frequent consumption of even modest amounts of alcohol leads to serious health risks, while others have reported that daily moderate alcohol consumption can be a healthy substitute for exercise. Articles sang the praises of new drugs like Avastin and Avandia before other articles deemed them dangerous, ineffective, or both.
But one of those articles has to be the “right” article, doesn’t it? One of them has to have the best information. Well, sure, but good luck trying to suss out which one it is in your newspaper or blog of choice when it’s not even remotely clear in the medical journals.
And that’s only part of the problem, Freedman argues. Science journalists can report flawlessly on a study, painting an accurate picture with multiple, credentialed sources, and still end up transmitting to readers an incomplete message–maybe even a flat-out wrong message–by not letting them in on a fundamental fact: that there is no one tidy answer.
With so much wrong information out there, scientists and journalists pick whichever wrong study helps them the most, Freedman says. Scientists want their studies to be published and picked up by the media, while journalists want a story that’s clear and digestible. But science is messy, and a clear, digestible finding often (to borrow from another kind of reporting that Freedman mentions) says about as much as a politician at a press conference.
Given that published medical findings are, by the field’s own reckoning, more often wrong than right, a serious problem with health journalism is immediately apparent: A reporter who accurately reports findings is probably transmitting wrong findings. And because the media tend to pick the most exciting findings from journals to pass on to the public, they are in essence picking the worst of the worst. Health journalism, then, is largely based on a principle of survival of the wrongest. …
So here’s the upshot: pull a personal health study out of a hat, and it’s more likely than not to have major problems. But journalists aren’t picking random studies–they’re picking the clearest, most engaging, and thus the worst, studies. The system is broken, and here is Freedman’s explanation of how to fix it:
What is a science journalist’s responsibility to openly question findings from highly credentialed scientists and trusted journals? There can only be one answer: The responsibility is large, and it clearly has been neglected. It’s not nearly enough to include in news reports the few mild qualifications attached to any study (“the study wasn’t large,” “the effect was modest,” “some subjects withdrew from the study partway through it”). Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.
A tall order, maybe, but good advice for readers: stay skeptical, and reconsider any life-changing decisions you’re making based on studies, whether you read them in a respected journal or a newspaper or, sure, here at Popular Science.
But one last note on all of this. Freedman readily cites where he got his information on the wrongness of studies. He got it from studies. Even if health journalists and readers do become more skeptical, there’s an old piece of wisdom you can check against the published science: change is hard.