Despite its lofty ideals, science isn't always impartial and unbiased. Scientists on occasion have fabricated data, or at least tweaked it to suit their needs. They've got a career to make, after all, and boring findings don't bring fame and fortune.
So how often are scientists prone to exaggerating or cherry-picking their results? When it comes to U.S.-based soft sciences, kind of a lot. According to a study published in the Proceedings of the National Academy of Sciences this week, behavioral science researchers from the U.S. in particular are more likely than researchers from other countries to overestimate their findings. And the effect was confined to behavioral research: the same "U.S. effect" did not show up in nonbehavioral studies.
Behavioral science encompasses anything that studies the way people (or animals) act and interact in the world, usually through observation. So that covers fields like psychology, sociology and anthropology.
"Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings," the researchers write.
And why do they think U.S. scientists are so prone to exaggeration?
The study's author, Daniele Fanelli, has some more to say about the phenomenon over at Retraction Watch.
Maybe it's because those who practice "soft" science are not trained to be "hard" scientists. Where I went to school, sociology was the major they sent all the jocks to, because they (like everyone else) were guaranteed passing grades.
Behavioral scientists don't accurately define the terms they use, and the result is vague, inaccurate correspondences with test results that depend heavily on a question-and-answer format whose trustworthiness is itself questionable.
An example is psychiatry, where disorganized schizophrenia, paranoid schizophrenia, residual schizophrenia, and undifferentiated schizophrenia are terms in use that have no definition according to the Mayo Clinic or the psychotherapy manual of terminology. Loose associations of symptoms, themselves not accurately defined, are made, and the results are severely biased, subjective, personal diagnoses.
Going by such vague descriptions and associations, there is a probability of about 90% that most people have a psychopathic personality.
The accuracy of a symptom, its frequency, its duration, and the trustworthiness of the patient are never considered in a diagnosis, resulting in much malpractice and overprescription of drugs as treatment for something that needs radical behavior modification and no drugs. Only the most severe forms of "mental illness" need medication, such as catatonic schizophrenia, whose very severe physical symptoms make a patient's life totally dysfunctional without drug treatment.
Basically, behavioral scientists use vague words with no clear definitions or standards and try to make one-to-one correspondences with clinical test results, which are statistical norms with a hundred and one variables affecting the outcome of the experiment. The experimental results are usually bullshit: unprovable, statistically unreliable conclusions with no rigorous cause-and-effect linkage.
Bottom line: a science should have clear one-to-one causal correspondences between its variables, and vague words with inaccurate definitions can't be used reliably. Behavioral studies shouldn't be called a science at all, but merely social normative mythology. You can't experiment with behaviors statistically and draw ironclad conclusions about results you only hope to prove but can't realistically prove.
The fundamental problem lies with American scientific journals, which are biased toward so-called "positive" studies that demonstrate a link between two things. This causes huge problems for American science, because the one study linking a certain chemical with increased cancer rates will be published, while 30 studies finding no link will not. This essentially creates a lie, wastes researchers' time and energy, and, in the case of drugs and medicine, can cost lives.
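To make that concrete, here is a minimal simulation of the file-drawer effect (a sketch in Python, assuming numpy and scipy are available; the effect size, sample sizes, and study count are invented for illustration). Even when the true effect is zero, publishing only the "significant" studies leaves a literature full of nonzero effects:

```python
# Hypothetical simulation of publication bias (the "file-drawer" effect).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.0   # the chemical actually does nothing
N_STUDIES = 1000    # independent studies of the same question
N_PER_GROUP = 30    # subjects per arm in each study

published = []
for _ in range(N_STUDIES):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05:  # journals accept only "positive" results
        published.append(treated.mean() - control.mean())

print(f"studies run:       {N_STUDIES}")
print(f"studies published: {len(published)}")  # ~5% false positives
print(f"mean |effect| in the literature: {np.mean(np.abs(published)):.2f}")
# The true effect is zero, yet every published effect is nonzero,
# because the ~95% of null results sit unseen in the file drawer.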
The results of behavioral science experiments are just easier to fudge than others; that's all there is to it. The bottom line is that for a study to be published, it's almost a requirement that it prove some kind of link rather than disprove one.
Ben Goldacre gave a TED talk about this very problem in medical research. As he said, it's only the sensational, one-off coincidences that make headlines or get published. "We don't hear about all the times that somebody got stuff wrong."
@uldissprogis Disorganized Schizophrenia, Paranoid Schizophrenia, Residual Schizophrenia, and Undifferentiated Schizophrenia are all defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM) and nowhere else. Inaccurate diagnoses due to time, budget, or training limitations are sometimes a problem, but not always; some diagnoses are quite straightforward (like ophidiophobia). Behavioral studies have the same problems as studies in other fields; the field just has a somewhat higher noise ratio by necessity, but it is still a valid science. The problems can be alleviated by, for example, raising the quality of science reporting (how about waiting for the review studies or meta-analyses?).
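For what it's worth, here is a minimal sketch of why waiting for the meta-analysis helps: a fixed-effect meta-analysis with plain inverse-variance weighting (Python with numpy; the five study results below are invented purely for illustration):

```python
# Hypothetical fixed-effect meta-analysis via inverse-variance weighting,
# showing how pooling noisy studies shrinks the uncertainty.
import numpy as np

# (effect estimate, standard error) from five made-up studies
studies = [(0.42, 0.30), (0.10, 0.25), (-0.05, 0.40), (0.22, 0.20), (0.15, 0.35)]

effects = np.array([e for e, _ in studies])
ses = np.array([se for _, se in studies])

weights = 1.0 / ses**2  # more precise studies count for more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")
# The pooled standard error is smaller than any single study's,
# which is why the meta-analysis beats trusting one headline result.
```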
"....Behavioral science encompasses anything that studies the way people (or animals) act and interact in the world, usually through observation....."
Since it is based entirely on subjective data it cannot be considered a real science.
Is anyone else as amused as I am that this article is essentially a behavioral study of behavioral scientists?
Can't wait for the follow-up next week: "New study finds that behavioral studies of behavioral scientists show bias"
Nisi credideritis, non intelligetis. ("Unless you believe, you will not understand.")