Most Retracted Scientific Papers Are Pulled Due To Fraud
A recent study finds that only 21 percent of all retracted papers were due to legitimate error rather than scientific misconduct.
It feels like not a week goes by without a scientific paper getting retracted. The article authors issue apologetic statements about “mistaken” data or having “submitted the wrong photo” or whatever, and everyone shuffles around feeling embarrassed on behalf of science for a while, and that seems to be that. Turns out, most of the time those “mistakes” are intentional.
A recent study of retracted papers by Arturo Casadevall, Ferric Fang and R. Grant Steen has uncovered that 67 percent of article retractions — papers that the journal or researchers, or both, disavow — are due to scientific misconduct. Fang et al. looked through the PubMed database at 2,047 article retractions going back to 1977, when a paper published in 1973 got the axe, then cross-referenced those retractions with investigations done by independent bodies such as the Office of Research Integrity. They found that only about one-fifth of retractions were due to mistakes rather than chicanery. Further, they found that authors of the retracted papers were not always forthright about the reason for the retraction: at least 158 articles whose retraction notices claimed unintentional errors or the like were actually cases of scientific fraud.
What I found most interesting is the time it takes to retract a fraudulent paper versus one containing an honest error. Legitimate errors took, on average, 26 months to retract. Fraudulent papers, on the other hand, took almost 47 months. The authors attribute this to the fact that investigations into fraud take time, and a single suspected case will often prompt a journal to scrutinize a scientist’s entire output, meaning that sometimes years-old papers end up being retracted in addition to newer scholarship.
Not much research has been done on scientific fraud, and despite some recent high-profile retractions (see the latest kerfuffle regarding XMRV), nobody really knows whether fraud is becoming more prevalent or whether journals and colleagues are just getting better at catching it. Hopefully we’ll see more scholarship in this area. At the end of an earlier paper on the subject, Casadevall and Fang muse on the reasons why fraud happens: “It is not difficult to surmise the underlying causes of research misconduct. Misconduct represents the dark side of the hypercompetitive environment of contemporary science with its emphasis on funding, numbers of publications and impact factor. With such potent incentives for cheating, it is not surprising that some scientists succumb to temptation.” With any luck, findings like these will spur journals, universities and grant agencies to review their internal processes to prevent misconduct or stop a fraudulent study from ever seeing the light of day.