
For every complex scientific question, there’s an answer that’s clear, simple, and wrong. I’m bastardizing H.L. Mencken here, but the point stands: for every clicky headline offering you simple truths about your health, there’s a peer-reviewed paper that most people (including, in many cases, the folks who crafted that headline) haven’t read.

There’s a whole intricate media machine of supply and demand that drives these misleading headlines, but suffice it to say that most outlets need clicks to keep the lights on, university press offices often write flashy news releases to boost their institution’s public recognition, and journalists, like all humans, can be unscrupulous or gullible (or perhaps simply overworked).

But when the media mill pumps out a story that says, for example, that you’re growing horns because you look down at your phone too much, it’s useful to be able to fact-check what you’re reading. And the best way to do that is to take a peek at the scientific paper itself. Most reputable outlets should link to the paper in their article, though sometimes the study itself will be behind a paywall (if you’re still really motivated to read it, you can email the corresponding author, who is often happy to provide interested parties with a PDF). If you can read it, the good news is that you don’t have to be an expert to glean some key pieces of information, even if a lot of the technical jargon goes over your head.

Did the study test what it’s claiming to conclude?

Let’s say that, according to headlines, a paper suggests increased phone use is causing eye damage because of all the blue light exposure. Let’s also say that the researchers carried out their study by looking at some type of eye scan that can detect changes to the physical structures in human eyes. They found that the young people they looked at had a specific change, we’ll call it macular softening (a condition that does not really exist, just in case you’re a hypochondriac). Young people look at screens a lot, they figure, and this macular softening is a new phenomenon—so they suggest that it’s probably the blue light from these screens that’s doing the damage. Unsurprisingly, media outlets tout this as the conclusion of the study.

Here’s the problem: they didn’t actually measure blue light exposure, much less look at exactly where that blue light might be coming from. So how can they conclude that blue light is causing macular softening?

They can’t.

There are a multitude of reasons why a media story based on this study might lead with the headline “Your phone is causing your eyes to go soft.” One is that the journalist misread or simply didn’t understand the study, and wrote a story without really checking whether their understanding was correct. The paper itself may not even have drawn that conclusion. It’s possible that the university’s press office wrote a misleading release that connected dots not actually supported by the study, but it’s also possible that the scientist is misrepresenting what they found. Maybe the blue-light-from-your-phone idea was just speculation the scientist thought was worth mentioning to the press—it’s totally fair for scientists to muse on what unexamined details they might include in a future study—and they didn’t really mean for it to come across as a conclusion. Or maybe the scientist got carried away and really does think it’s the blue light, even though their study hasn’t proven it. They might have mentioned this theory in the closing section of their paper (either couched in lots of caveats or, if they’re publishing in a really sloppy journal, presented as if the connection is so obvious it might as well be supported by data).

Regardless of the reason, the first thing you should do when you crack that study open is a word search. Do the words “blue light” or “screen” actually appear in the paper? If so, do they appear in the main body of the study, where the scientists explain what they actually did and what data they collected? Or do those keywords only appear in the abstract or conclusion, where the scientists are proposing some possible mechanism for their findings that they didn’t actually test? Often you can literally just search the web page or PDF to find out for yourself, and it’s one of the most basic checks you can do.

Is the study able to look at causation?

You’ve probably heard the ol’ “correlation doesn’t imply causation” bit more times than you’d like. But it’s a common adage because so many stories you hear about in the news fall prey to it. Almost every nutrition study you’ve ever heard about looked at correlations, while most of the media coverage you read or watched discussed the findings as if scientists had proven causation. That’s not to say all those studies are wrong, but they’re limited in what conclusions they can draw.

Let’s return to our macular softening example. It may be that young people are exposed to more blue light than their elders, and it may be that young people have macular softening. Perhaps the study in question demonstrated this by asking participants how much time they spent on their phones, showing that young people had much higher screen exposure along with higher rates of macular softening. But that doesn’t necessarily mean that blue light from phones causes macular softening. It might—we just can’t know that yet. Testing for causation requires a different study design, one where you change a variable (the amount of blue light received) and see how the outcome (macular softening) changes. In this case, to really make a strong conclusion about blue light from phones causing macular softening, you’d have to take people in multiple age groups who lived many different sorts of lifestyles, raise and lower their screen time, and then show they had correspondingly higher or lower rates of macular softening. You’d also have to design controls to show that without the blue light, staring at a phone wouldn’t increase rates of macular softening, and you’d want to study how increased exposure to other sources of blue light compared. Sound complicated? It would be. Such a study would take tons of time and money, which is why it takes years of careful study design and data collection to really show that one thing causes another.
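To make that distinction concrete, here’s a minimal sketch in Python, using entirely invented numbers and our fictional macular softening measure, of how a hidden shared factor can produce a strong correlation even when the thing being blamed has no causal effect at all.

```python
import random

random.seed(0)

# Toy model: "hours spent indoors" is a hidden confounder. It drives BOTH
# screen time and (fictional) macular softening; screen time itself has
# zero causal effect in this simulation. All numbers are made up.
people = []
for _ in range(10_000):
    indoors = random.uniform(0, 10)                  # hours per day indoors
    screen = 0.8 * indoors + random.gauss(0, 1)      # more indoor time -> more screen time
    softening = 0.5 * indoors + random.gauss(0, 1)   # more indoor time -> more "softening"
    people.append((screen, softening))

# A purely observational analysis just correlates the two measurements.
def correlation(pairs):
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in pairs) / n
    var_y = sum((y - mean_y) ** 2 for _, y in pairs) / n
    return cov / (var_x * var_y) ** 0.5

print(f"screen time vs. softening correlation: {correlation(people):.2f}")
# Prints a strong positive correlation (around 0.75) even though, by
# construction, screen time never caused any softening. Only an experiment
# that intervenes on screen time could reveal that.
```

The specific numbers are fabricated; the point is that an observational correlation, however strong, can’t tell you which arrow points where.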

Correlation is tricky because lots of things can correlate with one another. Yes, young people tend to look at phones more. But there are lots of other traits they share. People who stare at their phones more may also be more likely to spend more time inside. They might tend to eat more processed food. Maybe they travel more. Maybe they exercise less. Any of these factors could be the real culprit behind their macular softening.

That doesn’t mean a single study can’t make a good case for a causative effect. Though it might be filled with jargon, take a look at the results or methodology section. Does the procedure call for simply collecting data about people’s health and habits, like how many servings of nuts they eat and how many of them got heart disease? If so, they’re looking at correlations. Or does the protocol involve changing a variable and measuring how an outcome responds? If so, they’re testing for causation.

Another surprisingly helpful tip: just look at the discussion section. Most reputable journals ask scientists to include a paragraph or two on how their study is limited. That means most correlation-only papers have to note in that section that they cannot say anything about causation. That might sound like a weakness, but it’s actually a really good sign when scientists openly discuss the shortcomings of their research. It means they’re probably being realistic (and honest) about their findings.

How broadly can these findings be applied?

Even if the study’s methods are spot-on and there’s nothing in there to make you suspicious, asking who the data came from is important as you evaluate its conclusions. Let’s say the macular softening researchers recruited their participants on a college campus. They got 1,000 subjects to study, which isn’t too shabby. But because it was a college campus, their participants skewed young (students) and middle-aged or older (professors in their fifties and sixties). In this case, we’ll imagine the breakdown is something like 900 college students and 100 adults, almost all of whom are over the age of 40. A thousand subjects sounds like a nice, reasonably high number, but it turns out only 100 people represent the demographic that’s reportedly less likely to have phone-related macular softening. This makes it difficult to know whether the researchers have actually found a difference between younger and older people with regard to macular softening, or whether they’ve merely shown it’s a common phenomenon among college students—their sample of older subjects is too small to be sure it’s not randomly skewed by a few unusual participants. The scientists also admit that, due to the college’s demographics, their participants are 90% American and 90% Caucasian (and 100% enrolled in an expensive Ivy League school). This makes it impossible to draw conclusions about young people as a whole, because the average young person is not well represented in the data.
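If you want to see why that 100-person subgroup is shaky ground, here’s a small Python sketch (again with made-up numbers) showing how much an observed rate can wobble purely by chance when the sample is small.

```python
import random

random.seed(1)

# Suppose 30% of ALL older adults really do have the fictional macular
# softening. Draw repeated samples of different sizes from that same
# population and watch how much the measured rate varies.
TRUE_RATE = 0.30

def observed_rate(sample_size):
    return sum(random.random() < TRUE_RATE for _ in range(sample_size)) / sample_size

for n in (900, 100):
    rates = [observed_rate(n) for _ in range(1_000)]
    print(f"n={n:3d}: observed rates ranged from {min(rates):.0%} to {max(rates):.0%} "
          f"(true rate: {TRUE_RATE:.0%})")
# With 900 subjects, the estimate stays close to 30%. With only 100, it can
# easily land well above or below, just from the luck of the draw.
```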

Look in the methods section of the paper to see how many participants were studied and what demographics they represented. There’s no magic number that means you can definitely take the results as gospel, but more is generally better.

How significant (or relevant) are the results?

Okay, let’s just say our blue light study really did suggest that blue light causes macular softening. More specifically, we’ll say that every extra 100 lumineers’ worth of blue light (a totally made-up unit!) you’re exposed to over a year causes your macula to soften by 20 percent more. That sounds huge, right? Here’s where you might have to venture outside the paper itself to get some perspective.

How much is 20 percent softening, for instance? If the average person only experiences one percent softening in a year, then those extra lumineers would still only mean a total of 1.2 percent softening, which isn’t very much. Or, maybe the average person’s macula softens a lot over their life, but it doesn’t really have an impact on their vision—an extra 20 percent might not actually matter that much.
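The arithmetic is worth spelling out, because relative and absolute changes are easy to mix up. A quick back-of-the-envelope calculation in Python, using the made-up numbers from the example, looks like this:

```python
# All values are invented for illustration.
baseline_softening = 0.01    # assume the average person's macula softens 1% per year
relative_increase = 0.20     # the study's claimed "20 percent more" softening

total = baseline_softening * (1 + relative_increase)
print(f"total softening: {total:.2%} per year")                       # 1.20%
print(f"extra softening: {total - baseline_softening:.2%} per year")  # 0.20%
```

A scary-sounding relative increase turns into a small absolute change when the baseline is small.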

Perhaps more importantly, how much blue light are you getting exposed to? Maybe sunlight gives you 1,000 lumineers of blue light in a year—it now seems like screens are a pretty small influence compared to the normal daylight you experience. Or, it might be just the opposite. Maybe looking at a screen for 10 hours a day every day for a year would still only expose you to 1 lumineer of blue light, so the connection displayed in the study is highly unlikely to affect most people.

This is the sort of context that a good news article will contain, but if you haven’t found one about the study in question, it can be helpful to Google some other quick stats. Ask yourself two basic questions:

  • What’s the baseline exposure to this supposedly dangerous thing?
  • What’s the baseline risk?

Even a 50 percent increased risk of a rare event will still be a small total risk.
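The same arithmetic applies here; a quick sketch with an invented baseline risk shows why:

```python
# Invented numbers: suppose 1 in 1,000 people would normally develop the condition.
baseline_risk = 0.001
relative_increase = 0.50     # a "50 percent increased risk" headline

new_risk = baseline_risk * (1 + relative_increase)
print(f"risk rises from {baseline_risk:.2%} to {new_risk:.2%}")  # 0.10% -> 0.15%
```

The headline’s “50 percent” is real, but the absolute change works out to half a person per thousand.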

Do the authors have a conflict of interest?

In every single reputable journal, authors of a scientific study are required to disclose any competing interests they might have. If the folks who examined whether blue light damages your eyes also own a blue-light-blocking glasses company, you can safely assume their judgment is at least a little clouded. That doesn’t mean their results are necessarily invalid or faked, but you should be extra wary of how they translated that data into conclusions and how they presented those conclusions to the media.

Most scientists don’t own their own companies, but it’s far more common for them to receive funding from an entity with a vested interest in a particular outcome. Again, this doesn’t always mean something nefarious is going on; research has to get funded somehow, and perfectly respectable scientists can conduct tests that may or may not help a company sell a product. But unfortunately, it’s hard to know whether the study’s methods, results, or presentation to the media were influenced by those biased parties. Lots of the research that found electrolytes were incredibly important for hydration in athletes was brought to you in part by companies like Gatorade, just as much of the research showing sugar wasn’t a culprit behind increased obesity rates was funded by food manufacturers that use a lot of sugar in their products.

You’d be surprised how often the conflicts of interest section gets overlooked, so take a peek for yourself.