We Know Nothing About The Correlation Between Videogames And Violence


A report published by the American Psychological Association tries to reconcile the scientific research on violent videogames with the ongoing public debate. (The report is one of the best explainers on videogame violence we’ve ever read, by the way. Do check it out here.) But if the research is still being debated, with nothing definitive found, then why are so many scientists and pundits on both sides acting as if we already have a verdict?

When the Supreme Court struck down a California law regulating the sale of “violent” videogames to minors, the justices called the scientific evidence damning such games, at best, “unpersuasive.” And they’re probably right: if a link between violent games and aggression does exist, how strongly the two correlate is still up for debate. Even talking to scientists who have studied the question doesn’t clear up what’s actually going on, and that’s largely due to external pressures. The media is ready to latch on to any study that points to a conclusion, and media attention can often turn into grant money for research. If there’s anything we don’t want mixing, it’s science with media and money.

Christopher J. Ferguson of Texas A&M International University lays it out, starting with that now-infamous Supreme Court ruling. The fault doesn’t lie with any one group, necessarily, but with the scientific status quo. Research is not the objective process we’d all like to hope it is; results can be interpreted to make the case for particular findings, or blown out of proportion. In fact, there’s a confirmed phenomenon that may or may not be affecting research on violent games: publication bias. If certain results are more likely to be published (in this case, positive correlations between games and aggression), scientists may give them more weight than they deserve.

So a certain study finds a correlation, or says it does, and it gets published. The media picks it up, explaining the (science-approved!) dangers of these games, and the debate grows even more polarized. But one study finding a correlation isn’t the last word (or vice versa). Both sides of the debate can go to battle armed with studies, even if the evidence on either side is weak at best. And even with the shield of peer review, scientists aren’t immune to falling into an advocacy camp. The media picks up those individual studies without looking at the big picture, more scientists retreat to their corners, and the debate continues without any real clarity.

Ferguson uses “moral panic theory” to explain what’s happening here:

According to moral panic theory, society begins to essentially select research that fits with the pre-existing beliefs. Science is made to act as a rationale for translating moral repugnance to moral regulation (Critcher, 2009). Essentially we might think of the opinions of scientists on an issue such as video game violence as occupying a kind of bell curve. Of course we might understand that scientists who have preexisting concerns about an issue such as video game violence may already self-select into the field (Grisso & Steinberg, 2005), creating an unintended bias within the scientific community where the scientists in a field don’t necessarily represent a plurality of opinions (Redding, 2001). Yet society itself may amplify this process through media outlets choosing to publicize only research that promotes the panic (Thompson, 2008) and government and advocacy granting agencies choosing to select which research to fund. I submit that it is much more difficult to secure grant funding by arguing that something isn’t a pressing social concern.

The theory also explains why the debate gets even more heated after a particularly publicized event, like the Sandy Hook tragedy. That heat shows up in the lab as often as it does on cable news.

The fact is, data isn’t as cut and dried as we’d like it to be. There are “sides” in this debate, even among scientists, however honestly they came to them. Both camps can stare at the same sheet of numbers, recoil, and say with complete certainty they see a correlation. Or not. Or that more research needs to be done.

Ferguson points out that there’s plenty scientists wholeheartedly agree on: behavior is a messy subject, the roots of violence are complicated, and multiple factors need to be weighed in any conclusion. But there’s also a lot they don’t agree on: Do we see a link between violent games and “aggression”? Is “aggression” a fair metric? How much evidence will it take to say there’s a definitive link?

But the biggest question is: where, exactly, are we disagreeing? Says Ferguson:

At present we thus have two groups of scholars, approximately equal in number, who disagree vehemently about the data on video game violence effects. Differences between the two camps are complex but focus on both practical matters such as how to interpret small effect sizes, the validity of aggression measures, and the proper ways of controlling for third variables in analyses as well as theoretical issues involving the use of terms such as aggression and violence and whether social cognitive theories are adequate at explaining aggression.

We haven’t even agreed on what we’re talking about. We’re still a long way from drawing any conclusions.