Strong Publication Bias Found in Psychology Research

Rob Wipond

University of Salzburg researchers analyzed the results of 1,000 randomly selected psychology articles published in 2007 and found strong evidence of publication bias, according to a study in PLOS One.

Theoretically, the researchers wrote, the size of positive outcome effects and the number of people involved in a study should be independent of each other. However, their analysis “found a strong negative correlation… That is, studies using small samples report larger effects than studies using large samples.”

They also found that about three times as many studies just barely reached statistical significance for positive outcomes as just barely missed it. “This indicates that it is the significance of findings which mainly determines whether or not a study is published,” they wrote.

“This pattern of findings allows only one conclusion: there is strong publication bias in psychological research,” wrote the researchers. The researchers did not know the exact reasons for their findings, but they proposed various possible solutions and concluded, “Publication practice needs improvement. Otherwise misestimation of empirical effects will continue and will threaten the credibility of the entire field of psychology.”

Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size (Kühberger, Anton et al. PLOS One. September 05, 2014. DOI: 10.1371/journal.pone.0105825)

Rob Wipond is a Victoria, British Columbia-based freelance journalist who has been writing on mental health issues for fifteen years. His research has particularly focused on the interfaces between psychiatry, the justice system, and civil rights. His articles have been nominated for three Canadian National Magazine Awards, six Western Magazine Awards, and four Jack Webster Awards for journalism. He can be contacted through his website.

2 COMMENTS

  1. Well, it’s obvious why that is – try publishing negative results, especially in high-ranked venues (in terms of impact factors and other bs measures). And if you don’t publish, you’ll perish. It does not only happen in psychology. There are wrong incentives all over the place and scientific publications are full of junk.

    • “…scientific publications are full of junk.” As a person with no financial motives, only a desire to medically figure out how I was personally made sick with drugs, who has read thousands of medical journal articles and thousands of patient accounts, I must agree. (My particular area of study was iatrogenic bipolar, not psychology, however.) There is an enormous disconnect between what’s being published in medical journals and what actual patients say about medical advice, drug effects, and outcomes. Honestly, it’s night and day.

      I understand financially why the doctors are publishing their perspectives. The patients, however, are just lost souls searching to find truthful answers, they do not have financial motives. Who’s actually more credible – the doctors or the patients?