In a new paper, Raymond Dolan, the second most influential neuroscientist in the world, reviews three decades of failure in his field.
“It remains difficult to refute a critique that psychiatry’s most fundamental characteristic is its ignorance, that it cannot successfully define the object of its attention, while its attempts to lay bare the etiology of its disorders have been a litany of failures,” he writes.
The paper, published in the journal Neuron, was coauthored by Matthew Nour and Yunzhe Liu; all three researchers are at University College London.
The authors sum it up this way:
“Despite three decades of intense neuroimaging research, we still lack a neurobiological account for any psychiatric condition. Likewise, functional neuroimaging plays no role in clinical decision making.”
They add that over the past 30 years, more than 16,000 neuroimaging articles have been published, meaning that billions of dollars and decades of researcher attention have been devoted to this line of inquiry, with nothing to show for it:
“Casting a cold eye on the psychiatric neuroimaging literature invites a conclusion that despite 30 years of intense research and considerable technological advances, this enterprise has not delivered a neurobiological account (i.e., a mechanistic explanation) for any psychiatric disorder, nor has it provided a credible imaging-based biomarker of clinical utility.”
So what are some of the obstacles impeding the success of neurobiological research?
One issue is the unreliability of MRI data. Researchers must make thousands of analytic choices when deciding how to run the statistics. None of these choices is technically “right” or “wrong,” but each one can mean the difference between finding a positive result and not finding one. Previous studies have found that up to 70% of the time, MRI analyses can create the illusion of brain activity even when there is none, as in the infamous “dead salmon” experiment that “found” brain activity in a dead fish. Worse, researchers often conduct multiple tests, raising the likelihood of chance results, and publish only the ones that come back positive. Researchers have described neurobiological research as hindered by “data pollution.” Ultimately, this type of study takes a massive, chaotic mess of data points and attempts to find a signal in that noise, even when no true signal exists: the technological version of pareidolia.
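A quick simulation, not drawn from the paper, illustrates the multiple-testing problem described above: run enough uncorrected statistical tests on pure noise and some will come back “significant” by chance alone. The number of tests, sample size, and threshold here are illustrative assumptions, not figures from any neuroimaging study.

```python
import random
import statistics

random.seed(0)

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / na + var_b / nb) ** 0.5)

n_tests = 1000     # e.g., 1,000 voxels or contrasts, all tested on pure noise
n_subjects = 20    # per group
false_positives = 0
for _ in range(n_tests):
    group_a = [random.gauss(0, 1) for _ in range(n_subjects)]
    group_b = [random.gauss(0, 1) for _ in range(n_subjects)]
    # |t| > 2.02 approximates p < 0.05 (two-tailed, ~38 degrees of freedom)
    if abs(two_sample_t(group_a, group_b)) > 2.02:
        false_positives += 1

print(f"'Significant' findings in pure noise: {false_positives}/{n_tests}")
```

Roughly 5% of the tests report a “significant” group difference despite there being no signal at all; if only those tests are published, the literature fills with noise.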
A second issue is that even when slight correlations are found, their explanatory power is minimal. For instance, a recent study found that a polygenic risk score explained less than 1% of the variance in whether a person would receive a diagnosis of schizophrenia. For comparison, about 17% of the variance was explained by socioeconomic, family-dynamic, and relational factors. This can be reported as a “statistically significant correlation between genetics and schizophrenia,” but it is clinically useless, providing no real information. (In large studies, researchers have also determined that this tiny correlation is likely due to chance.)
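To see concretely why 1% of variance explained is clinically useless, consider a toy liability-threshold simulation (my own sketch, not the study’s model): a score explaining 1% of liability variance barely shifts an individual’s diagnosis probability above the base rate. The 1% prevalence and threshold are illustrative assumptions.

```python
import random

random.seed(1)

R2 = 0.01          # variance in liability explained by the score (illustrative)
PREVALENCE = 0.01  # assumed population prevalence of diagnosis
THRESHOLD = 2.326  # ~99th percentile of a standard normal (matches 1% prevalence)

n = 200_000
r = R2 ** 0.5
people = []
for _ in range(n):
    score = random.gauss(0, 1)
    # Liability = small contribution from the score + everything else
    liability = r * score + (1 - R2) ** 0.5 * random.gauss(0, 1)
    people.append((score, liability > THRESHOLD))

# Diagnosis rate among the top 10% of risk scores
people.sort(key=lambda p: p[0], reverse=True)
top_decile = people[: n // 10]
rate = sum(case for _, case in top_decile) / len(top_decile)
print(f"Diagnosis rate in the top 10% of risk scores: {rate:.3%}")
```

Even among the highest-scoring decile, the diagnosis rate stays close to the 1% base rate; the score sorts people in a way that is statistically detectable but useless for predicting any individual’s outcome.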
Another issue cited by Dolan, Nour, and Liu: psychiatry’s diagnostic labels have terrible reliability and validity. As Kenneth Kendler wrote recently, it is “implausible” that psychiatric diagnoses are even “approximately true.” (Kendler, an expert on psychiatric genetics, was the second most influential researcher in psychiatry in the 1990s).
As then-NIMH head Thomas Insel wrote in a now-infamous blog post in 2013, psychiatric diagnoses are invalid in a way that would not be tolerated by any other subset of medicine:
“The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever. Indeed, symptom-based diagnosis, once common in other areas of medicine, has been largely replaced in the past half-century as we have understood that symptoms alone rarely indicate the best choice of treatment.”
In light of this criticism, some researchers have argued that psychological research is “incompatible with hypothesis-driven science.” Others have suggested that psychological researchers rarely even adhere to the scientific method, instead using the veneer of “scientific” language to hide unscientific practices.
In the end, Dolan, Nour, and Liu suggest that the solution is to simply double down on neurobiological research. The brain, they argue, is essentially just a computer whose programming has been disrupted. They do not mention any potential impact of social, cultural, or interpersonal factors, including trauma, on human emotion or behavior. Instead, they write that the best way to understand human distress is as a malfunctioning computer program:
“We contend that neuroimaging research in psychiatry, more than ever, needs to embrace theoretical frameworks derived from basic and computational neuroscience. This includes addressing how high-dimensional neural activity supports cognition, coupled with formulating testable predictions as to behavioral and symptomatic consequences of disruptions to these processes. Arguably, an urgent necessity is to view symptoms through the lens of computational models of cognition, bridging a gap between knowledge articulated at different levels of analysis (from neural to behavioral) and in different species.”
Nour, M. M., Liu, Y., & Dolan, R. J. (2022). Functional neuroimaging in psychiatry and the case for failing better. Neuron, 110, 2524-2544. https://doi.org/10.1016/j.neuron.2022.07.005 (Full text)