A new review of publicly available data shows that papers published in top psychology, economics, and general-interest journals based on non-replicable studies are cited more than papers whose results can be replicated. Further, awareness of a failure to replicate appears to have no impact on citation rates and is rarely acknowledged in citing publications. In other words, there appears to be no impetus within these fields to self-correct this trend.
The research was carried out by the economists Marta Serra-Garcia and Uri Gneezy from the University of California, San Diego, and published in the journal Science Advances. Serra-Garcia and Gneezy explain their findings:
“Why are papers that failed to replicate cited more? A possible answer is that the review team may face a trade-off. Although they expect some results to be less robust than others, as shown in the predictions of experts, they are willing to accept this lower expected reliability of the results in some cases. As a result, when the paper is more interesting, the review team may apply lower standards regarding its reproducibility.”
A hallmark of good scientific research is replicability – the ability of other researchers, mimicking the study’s experimental conditions, to produce the same results. While replicability is commonly required in hard sciences such as physics or chemistry, several critics have raised concerns about questionable practices within psychology concerning scientific standards of replicability. Serious doubts have been cast upon, for example, the replicability of research on depression.
As Pascal-Emmanuel Gobry wrote in 2016, widespread acquiescence to replicability failure suggests:
“There is very good reason to believe that much scientific research published today is false, there is no good way to sort the wheat from the chaff, and, most importantly, that the way the system is designed ensures that this will continue being the case.”
In their new analysis, Serra-Garcia and Gneezy also determined, based on an analysis of journals’ impact factors, that papers citing non-replicable publications had impacts similar to those citing replicable publications. In terms of impact, then, there is no straightforward way to distinguish replicable from non-replicable research. This, together with the fact that non-replicable studies are more likely to be cited, arguably constitutes a “replication crisis” in the social sciences.
Why are papers that fail to replicate cited more often? The authors hypothesize that reviewers may accept lower reliability when a paper is more “interesting.” This factor may be linked to the reviewers’ perception of a paper’s potential to create “hype” through exaggerated or inaccurate claims about its findings.
Such studies “are more likely to receive media coverage and become famous … this exposure may make the papers more likely to be cited. The effect of the hype lingers even after a study is discredited.”
Serra-Garcia, M., & Gneezy, U. (2021). “Nonreplicable publications are cited more than replicable ones.” Science Advances, 7. DOI: 10.1126/sciadv.abd1705