Maybe you shouldn’t put too much stock in what four out of five dentists say? Scientists, even experts in the same field, don’t agree on which research studies are the most important, a new study (of course) found. On one hand, this sounds obvious—experts don’t agree. On the other hand, it suggests even scientists can’t pinpoint what great science is. The new study also found scientists are unduly influenced by the prestige of the journal in which a paper is published.

“It’s very difficult to assess merit. We’re all sort of stumbling around in a fog,” Adam Eyre-Walker, a biologist at the University of Sussex in the U.K., tells Popular Science.

Eyre-Walker and a colleague, Nina Stoletzki, examined more than 6,000 published papers that experts reviewed after publication. The papers came from two databases. One belongs to the Wellcome Trust, a research funding organization that asks experts to review the papers it has funded after they appear. The other is the Faculty of 1000, a website where biologists post papers they like, score them, and chat about them. In both databases, Eyre-Walker and Stoletzki found that scientists assessing the same paper agreed on its score only slightly more often than they would by chance.
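What does agreeing “more often than they would by chance” actually involve? Here is a minimal sketch, in Python, of the kind of comparison at stake. It is not the authors’ actual analysis; the score scale, score frequencies, and the 10 percent copy rate are all made up for illustration:

```python
import random

# Illustrative sketch (not the authors' analysis): compare how often two
# raters give a paper the same score against what chance alone would produce.
# Scores and probabilities below are hypothetical, loosely modeled on a
# coarse three-point rating scale.

random.seed(0)
SCORES = [1, 2, 3]               # e.g., "good", "very good", "exceptional"
MARGINAL = [0.5, 0.35, 0.15]     # assumed overall frequency of each score

def chance_agreement(marginal):
    """Probability two independent raters match if each simply draws
    from the overall score distribution."""
    return sum(p * p for p in marginal)

def observed_agreement(pairs):
    """Fraction of papers on which the two raters gave the same score."""
    return sum(a == b for a, b in pairs) / len(pairs)

# Simulate 6,000 papers rated by two raters who agree only weakly:
# the second rater matches the first 10% of the time, otherwise
# scores independently at random.
pairs = []
for _ in range(6000):
    a = random.choices(SCORES, weights=MARGINAL)[0]
    b = a if random.random() < 0.10 else random.choices(SCORES, weights=MARGINAL)[0]
    pairs.append((a, b))

obs = observed_agreement(pairs)
exp = chance_agreement(MARGINAL)
kappa = (obs - exp) / (1 - exp)  # Cohen's kappa: 0 = chance, 1 = perfect
print(f"observed={obs:.3f}  chance={exp:.3f}  kappa={kappa:.3f}")
```

In this toy setup, kappa lands near the 10 percent copy rate, far from the 1.0 that perfect agreement would give; a finding like Eyre-Walker and Stoletzki’s corresponds to agreement statistics stuck close to zero.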

The pair also found that reviewers’ scores depended too heavily on the prestige of the journal in which a paper was published. Scientists rated papers in more prestigious journals more highly even when two papers were equal on other measures of merit, such as how often they were cited by other scientists.

The U.K. government regularly asks outside experts to review papers produced by government-funded research, in a bid to check whether its money is going to the right places. Eyre-Walker and Stoletzki wanted to see whether this was a worthwhile exercise. The U.S. doesn’t have an equivalent program, but the biologists’ findings have implications beyond the U.K.: They suggest the expert review that papers undergo before publication could be just as messy. “What we would infer is that in terms of assessing the merits or the importance of a paper before publication, that is also going to be subject to a huge amount of error,” Eyre-Walker says.

Pre-publication peer review is a core part of how science is done around the world. Many scientists have argued the process could use some improvement, so it’s interesting to see some supporting math.

One caveat: Although he and Stoletzki did not assess this, Eyre-Walker thinks experts are good at determining whether a paper is likely to be true or accurate. Getting research published in a journal is often about more than accuracy, however. Expert reviewers have to think the paper is important or interesting, especially at a prestigious journal such as Nature or Science. Yet Eyre-Walker’s work suggests that, just like your friends told you when you didn’t get into that Ivy League school, getting into the top journals is a bit of a crapshoot.

“There’s a huge stochastic factor in where a paper gets published,” he says. “There may be two papers of equal merit and one of them is lucky and gets the reviewers that all think it’s fantastic” while another is not so fortunate.

Eyre-Walker has an idea for a solution. “You should fund a diversity and publish a diversity of science, whether or not you think it’s interesting,” he says. Meanwhile, in an opinion piece published alongside Eyre-Walker’s work, a team of scientists and publishers argues that science needs better ways of measuring what’s great.

At least one major journal follows Eyre-Walker’s suggestion. PLOS ONE, the flagship journal of the open-access publisher Public Library of Science, pledges to publish any paper that is “technically sound,” without assessing whether it is interesting or important. Eyre-Walker and Stoletzki published their work in PLOS Biology, a journal from the same publisher that does make judgments about papers’ importance.