In a series of studies designed to assess two anti-tissue-rejection drugs, former University of Alabama at Birmingham surgeons Judith Thomas and Juan Contreras carefully detailed experiments in which they replaced one kidney in rhesus monkeys with a foreign one and, a month later, removed the remaining native kidney. The new organs took, they reported. The drugs worked.
But according to a July report from the federal Office of Research Integrity, that second kidney was never removed from at least 32 of the 70 animals. The scientists denied intentional wrongdoing, but the promising results have been deemed bogus. The experiments also cost taxpayers: The National Institutes of Health poured $23 million into the work over eight years.
A recent report suggests that scientific misconduct like this is not uncommon. Daniele Fanelli, an evolutionary biologist at the University of Edinburgh in Scotland, analyzed 18 independent surveys of 11,351 scientists worldwide and found that 2 percent of the scientists admitted to falsifying, modifying, or fabricating data at least once. Up to one third confessed to slightly less egregious cheats, such as failing to publish data that contradicted their previous conclusions. The scientific community’s reaction to Fanelli’s paper was equally unsettling: a collective shrug. “Nobody challenged the findings,” he says.
Such misconduct, whether it involves overlooking inconvenient data or outright lying, distorts the scientific record, props up weak theories and ineffective drugs, and wastes funding. Unfortunately, no patterns emerged from Fanelli’s work that would indicate the exact cause. Many scientists suspect that the publish-or-perish mentality of academia and the scientific establishment’s tendency to reward productivity over quality are partly to blame. Universities and institutions, says Brian Martinson, a misconduct expert at HealthPartners Research Foundation in Minnesota, are increasingly evaluating scientists for promotion or tenure by asking, “How many grant dollars has this person brought in?”
Since the report, the scientific community has begun to consider solutions. Nicholas Steneck, director of the Research Ethics and Integrity Program at the University of Michigan, points out that researchers could better police their own fields. That could be as simple as studying suspect images. In 2005, Korean stem-cell researcher Hwang Woo Suk was caught fabricating data in part because of a single photo of a cell line that was reproduced in two separate journals and labeled as two distinct cell lines. The Office of Research Integrity offers software that helps scientists analyze images in their contemporaries’ papers for Photoshop-style doctoring. Turning scientists into fraud detectives isn’t such an odd idea, Steneck says, because research is supposed to be self-regulating. “In a fair number of the more serious cases,” he notes, “if the colleagues working with those researchers had simply paid more attention, they would have been caught earlier.”
Contreras and Thomas resigned from their posts, but less nefarious infractions often go unpunished. Journals sometimes uncover misconduct through the peer-review process, and although the editors reject the article, they do not report the potential fraud, most likely for fear of being sued. “That’s a violation of self-regulation,” Steneck says. “We need to take action if we see something wrong.”
Better training in ethics and proper research conduct could certainly help. “I honestly believe that a lot of people cross this boundary without really knowing it,” says University of Chicago computational biologist Andrey Rzhetsky. This notion is supported by Martinson’s 2007 survey of 3,000 researchers, which found that young scientists learn from their mentors how to cut corners to advance professionally.
The next step may be clarifying the nature of the problem and determining whether it’s even worse than Fanelli suspects. Science holds a deservedly favored position in society, but to retain that standing, its practitioners need to weed out bad behavior. If misconduct is as rampant as these studies imply, scientists must start reporting it regularly, not merely when asked to fill out anonymous questionnaires.