In a new article, researcher Gerald Haeffel at the University of Notre Dame critiques psychology’s failure to follow the scientific method. Instead of rigorously testing theories and learning from the failures—a hallmark of the scientific method—“research” in psychology and psychiatry is often set up to confirm every pet theory. Yet, somehow, he writes, despite the complex and inscrutable nature of the mind and brain, psychological scientists “always win.”
“Psychological science still does not embrace the scientific method of developing theories, conducting critical tests of those theories, detecting contradictory results, and revising (or disposing of) the theories accordingly.”
He writes that “nearly 100% of the published studies in psychology confirm the initial hypothesis.” This amazing winning streak, according to Haeffel, is not typical of the scientific method—it’s indicative of a scientific failure and the dishonest research practices pervasive in the psych field.
“The legitimacy of psychology’s winning streak has been called into question,” he writes. “Major replication projects show that only about half of psychological findings replicate. Further, there is evidence that psychology’s winning streak may be due to cheating. Similarly to how steroids fueled baseball’s home run chase in the United States (e.g., Bonds, McGwire, Sosa) and Lance Armstrong’s Tour de France streak was aided by doping, psychology’s winning streak may be the result of questionable research practices like p-hacking, HARKing, and piecemeal publication.”
Haeffel isn’t the first to make this criticism. In another recent article, researchers argued that psychology is “incompatible with hypothesis-driven theoretical science” and that researchers in the field typically explain away any result they find—even negative ones—as still being consistent with their theories. Thus, the field does not actually test these theories scientifically—even if it appears to—because no matter the results of their studies, researchers never reject or revise their theories.
According to Haeffel, the hallmark of good science is making a specific prediction that can be tested in order to find its failure points. This leads either to revisions (which improve the theory’s explanatory power) or to rejection of the theory (which allows competing theories to step into the spotlight and be tested in turn). This pattern in other fields has led to significant scientific discoveries (Einstein’s general relativity, for instance).
But psych researchers are not concerned with rigorously testing specific predictions. Instead, they have vague predictions, and any result they find—even a contradictory one—is used to promote their pet theories. This enables a culture of publication, grant money, academic hierarchy, and pharmaceutical industry dominance, but it does nothing to advance our understanding of the mind.
Haeffel asks, “If creating a theory is as simple as making a risky guess, then why are there so few good ones in psychology? One reason is that psychology has stopped using the scientific method.”
Other scientific fields approach research quite differently. For example, Haeffel writes that, in contrast to psychology, in other fields “journal articles regularly include sections in which the scientists specifically describe the conditions in which the theory would be invalid and then describe how the experiments were designed to rule out these alternative explanations. The idea is that theories develop greater levels of verisimilitude over time as alternative explanations (and even entire classes of theories) are systematically eliminated.”
In other fields, he adds, researchers are excited to discover the areas in which the theory fails to predict something accurately—because it could be the “crack” that opens up into a far more comprehensive understanding. But in psychology, researchers are only concerned with confirming their pet theories over and over again—and it’s all designed so that there’s no way they can be wrong.
But this is the exact opposite of science, according to Haeffel.
“What differentiates science from non-science,” he writes, “is falsifiability. If you can be wrong, then it is scientific; if you cannot be wrong, then it is not scientific […] In science, refutations are more useful than confirmations.”
He adds, “It is not always easy to accept data that disagrees with our beliefs, but scientific progress depends on it.”
Proponents of psych research may argue that the field is still in its infancy, so its failure to make accurate predictions or develop specific theories should be overlooked. But Haeffel contends that these failures lead to specific negative outcomes in a society—like ours—where psychiatry is afforded a pulpit to preach and considerable influence in policy-making.
For instance, he writes, “the recommendation for widespread mental health interventions […] could do more harm than good. Research indicates that intervening too early or with people not at risk of mental health problems can disrupt the normal recovery processes that create resilience. For example, there is some evidence that interventions such as grief counseling and critical incident stress debriefing can be iatrogenic. As noted by Bonanno, ‘many individuals exposed to violent or life-threatening events will show a genuine resilience that should not be interfered with or undermined by clinical intervention.’ Supporting this claim, at least one study reported that nearly 40% of the individuals receiving grief treatments got worse relative to no treatment.”
Haeffel suggests that the psych field is influencing policy and carrying out interventions based on untested assumptions. Even if done with the best intentions, these interventions can have iatrogenic effects—causing more harm than good—and at a high cost in terms of effort and taxpayer money. For instance, the D.A.R.E. program, carried out in schools across the United States to reduce drug use in youth, has been reassessed as a complete failure—a waste of billions of dollars.
So what is the solution? Haeffel argues that the first incremental step is fixing a culture that incentivizes only positive findings. He points to an alternative known as “registered reports,” in which journals accept or reject studies based on the question being tested and the study design, before the results are known.
While the current culture is focused on publishing splashy positive findings, no matter how poor the question and methods are, the registered report method incentivizes researchers to craft meaningful hypotheses and test them rigorously.
“We contend the best option for fixing psychology’s ‘winning’ problem is the Registered Report format in which articles are accepted or rejected prior to knowing the results of the study. Scheel et al. compared results reported in published Registered Reports with those of standard publications; they found 44% positive results in Registered Reports and 96% positive results in standard publications.”
When registered reports were used, publication bias—in which only positive studies end up in journals—dropped sharply, with positive results falling from 96% to 44%.
“It is possible that making Registered Reports the default publishing option will eventually move psychologists towards a more problem-focused (rather than method-oriented) mindset,” Haeffel writes.
He concludes, “Psychologists are willing to be wrong as long as they can still get a publication.”
Haeffel, G. J. (2022). Psychology needs to get tired of winning. Royal Society Open Science, 9, 220099. Published online June 22, 2022. https://doi.org/10.1098/rsos.220099