When you hear about well-regarded scientists making up data in their studies, it’s easy to wonder: What were they thinking?
A New York Times Magazine piece has one answer. The magazine profiled Diederik Stapel, a psychologist, former dean of the School of Social and Behavioral Sciences at Tilburg University in the Netherlands, and author of at least 55 papers with totally made-up data. He even made up data for the graduate students he supervised. He would tell them he was doing their experiments for them, an unusual move, as many professors prefer to leave that tedium for their underlings.
The profile described a couple of instances in which he fabricated data, going into detail about what he did. The first time followed a predictable story line: he tested a hypothesis, he didn’t find the answer he wanted, and he didn’t want to redo the experiment or face the fact that he’d “wasted” all that time. “I said—you know what, I am going to create the data set,” he told the New York Times Magazine.
Later, he kept making up data to support hypotheses that were interesting, yet believable. The magazine described him as researching old studies thoroughly before making anything up. It seems he wasn’t avoiding hard work. He was avoiding the occasional (or frequent) failure that comes with honestly done science.
His frequent, high-profile studies brought him a great career, one that his wife, Marcelle, suggested he may have been trying to share when he made up data for his students, too.
He has since been the subject of media scrutiny in the Netherlands and an unflattering university report about his personality. In the New York Times Magazine reporting, he was open about his fraud and culpability.
Meanwhile, his case has cast an uncomfortable light on the field of psychology. Each of Stapel’s fraudulent papers was peer-reviewed. Other psychologists had analyzed them and judged them worthy of publication. If they missed nearly 10 years of fraud from Stapel—and it was a couple of graduate students who ultimately blew the whistle, not a peer review panel—what else did they miss? Many researchers may not be as bold as Stapel, but they may cherry-pick the data they want, or analyze it in a less-than-ideal way for their own ends. The cumulative effect of such practices on what’s considered known and true in psychology could be grave.