In physics, researchers use the scientific method to test specific predictions. If the evidence gathered in a well-designed study doesn’t fit the exact prediction of the theory, then researchers know the theory is at least partly incorrect and needs to be revised or discarded for a better one.
Psychology doesn’t work the same way, according to a recent paper in Perspectives on Psychological Science, authored by Stijn Debrouwere and Yves Rosseel at Ghent University in Belgium. Psychological theories are rarely rejected, no matter what results researchers find. Instead, researchers take pains to explain away inconvenient results and design their studies to be vague enough to “support” any theory.
Debrouwere and Rosseel write that psychology is “an experimental science in which interventions lack ecological validity, a science that theorizes freely but misses some of the basic facts, and a science that uses statistics to whitewash uncertainty.”
One of the sharpest critiques of psychological research is that striking, positive studies—which become accepted “common knowledge”—frequently fail to replicate when other researchers test the same effect again. For example, in one large-scale project in which researchers attempted to replicate published psychological science findings, only 36% of the replications confirmed the original result.
And according to the authors, the most well-known psychological experiments, such as Zimbardo’s Stanford prison experiment, run the gamut from unethically conducted to poorly documented to outright fraudulent. Little can be generalized from studies that are misleading at best and fraudulent at worst. And the problem is pervasive: in a survey of about 2,000 psychologists, more than half admitted to misleadingly presenting their results (for example, by switching outcome measures or “spinning” negative results to appear positive).
Some researchers have suggested that improving the quality of psychological research could solve this problem, through measures such as trial preregistration, open access to data, reporting effect sizes (rather than just a binary verdict of statistical significance), and alerting readers to bias from, for instance, financial conflicts of interest. These methods would likely help curb fraud, “spinning” negative conclusions into positive ones, and other forms of bias.
However, Debrouwere and Rosseel argue that these solutions don’t solve the underlying problem: the scientific method can’t truly be applied to psychological questions.
They write, “We believe that psychology is fundamentally incompatible with hypothesis-driven theoretical science.”
According to Debrouwere and Rosseel, predictions in physics are extremely specific and testable. For instance, Newtonian mechanics predicted that light would bend slightly as it passed a massive body like the Sun; Einstein’s general relativity also predicted bending, but by a different, specific amount. When astronomers measured starlight during a solar eclipse, they found that it bent by the precise amount Einstein specified, not Newton. This demonstrated that Newton’s theory was missing something and that Einstein’s theory of relativity made the correct prediction.
When it comes to psychological science, though, Debrouwere and Rosseel write that researchers make vague predictions and, even when those predictions are contradicted, explain away the failure and claim that the study was still a success.
They give this example: “A psychologist may posit that people with high social standing will tend to do what they can to maintain that standing (our theory) and that they might therefore be inclined to boycott social competitors or withhold access to important resources (our hypothesis). However, if it turns out that instead, those of high social standing tend to be magnanimous, this too is easily explained: Generosity is a great way to showcase one’s superior status. Our theory aligns equally well with both hypotheses, even though they are diametrically opposed to each other.”
In the example from physics, measuring the degree to which light bends around a large celestial body would inevitably prove that at least one of the two theories was wrong. The light would bend by the amount Newton predicted, by the amount Einstein predicted, or by neither. No matter the result, it would conclusively demonstrate that at least one of the two theories (and possibly both) was wrong.
But in the example from psychology, no matter what the researchers found, it proved nothing. Any result was vague enough—and the theory itself vague enough—to be explained away.
So Debrouwere and Rosseel ask, what is the point of tests like these?
“What if competing theories lead to similar causal models? What if we simply cannot predict the magnitude of an effect with even the most sophisticated of theories? What if it is unclear when a phenomenon will or will not show up?”
The authors do offer an alternative, though. Hypothesis-testing science, they write, is doomed to failure in psychology. But psychology can instead focus on the descriptive, taxonomic science that forms the basis of disciplines like zoology, botany, mycology, and even meteorology.
“In order to learn about the world,” they write, “we may explore and document the great variety of phenomena, to organize them, to see whether there are any obvious regularities.”
This type of science requires researchers to put aside their biases about generalizable laws of human behavior and, instead, with curiosity, observe the facets of the human experience in as many different settings as there are people:
“This leads to a very different kind of research wherein we do not prove or disprove theories, but rather try to find the conditions under which a particular phenomenon or mechanism will or will not show up, what strengthens and weakens it. Through slow, careful mapping of the territory, we will start to see whether a behavioral or cognitive phenomenon is widespread, robust or ephemeral, whether it strongly affects our actions or life outcomes, or whether it is only a curiosity with limited impact.”
Debrouwere, S., & Rosseel, Y. (2021). The conceptual, cunning, and conclusive experiment in psychology. Perspectives on Psychological Science. https://doi.org/10.1177/17456916211026947