The Same Brain Scan Results Lead to Wildly Different Conclusions

When 70 different teams of researchers analyzed the same fMRI dataset, they found very different—even contradictory—results.

Peter Simons

In a new study published in Nature, 70 teams of neuroimaging researchers analyzed the same brain scan data, looking to verify the same nine hypotheses about the results. Every single team picked a different way to analyze the data, and their results varied wildly.

The neuroimaging data comprised results from functional magnetic resonance imaging, or fMRI brain scans, of 108 people performing a task. All but one of the 70 research teams had previously published research using fMRI data and could be considered experienced in analyzing neuroimaging results. Each of the nine hypotheses they were asked to test related to activation in a different brain region.

On average, about 1 in 5 teams came to a different conclusion from the rest about whether each of the nine hypotheses was supported. The researchers write:

“For every hypothesis, at least four different analysis pipelines could be found that were used in practice by research groups in the field and resulted in a significant outcome.”

That is, for every hypothesis, there were multiple scientifically accepted ways to arrive at a positive finding—even when there was likely no true effect.

NEURON NO. 3 is an etching and monoprint on paper by Laura Jacobson.

Even “teams with highly correlated underlying unthresholded statistical maps nonetheless reported different hypothesis outcomes” after finishing their analyses in different ways.

Perhaps even more concerning, researchers consistently expected to find significant results. This held both for the researchers involved in the study and for outside researchers who had not analyzed the data.

The researchers found “a general overestimation by researchers of the likelihood of significant results across hypotheses—even by those researchers who had analyzed the data themselves—reflecting a marked optimism bias by researchers in the field.”

Researchers expect to find an effect and may design their study—including their data-analysis choices—in ways that all but ensure they find one. This also feeds publication bias, in which negative results are dismissed as mistakes while positive results are published with little scrutiny.

fMRI data are extraordinarily complex and contain a large amount of what is considered “noise”—random fluctuations that must be removed for the data to make sense.

To do that, researchers build algorithms that estimate which data are extraneous and which are vitally important. It is a multi-stage process, and researchers must make choices throughout. There is no consensus on the “proper” way to analyze a neuroimaging dataset. A 2012 study found 6,912 different ways of analyzing a dataset, with five further ways of correcting the results—34,560 possible final outcomes, all of which are considered “correct.”
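How a handful of individually defensible choices multiplies into tens of thousands of pipelines can be sketched with a toy calculation. The stage names and option counts below are hypothetical stand-ins, not the actual stages catalogued in the 2012 study; only the totals (6,912 pipelines, and 34,560 results after five correction methods) come from the figures cited above.

```python
from math import prod

# Hypothetical preprocessing stages and how many accepted options each has.
# The names and counts are illustrative only; their product is chosen to
# match the totals cited above (6,912 pipelines x 5 corrections = 34,560).
stage_options = {
    "motion correction": 2,
    "slice-time correction": 3,
    "spatial smoothing kernel": 4,
    "temporal filtering": 6,
    "hemodynamic model": 8,
    "statistical threshold": 6,
}

n_pipelines = prod(stage_options.values())
n_corrections = 5  # five multiple-comparison correction methods
n_results = n_pipelines * n_corrections

print(n_pipelines)  # 6912
print(n_results)    # 34560
```

The point is not the particular stages but the arithmetic: because each choice multiplies the options at every other stage, even modest per-stage flexibility explodes combinatorially.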

One previous study found that almost every single study that used neuroimaging analyzed the data differently. That study also found that most publications did not even report the specific choices they made when analyzing the data. The researchers, in that case, concluded that because the rate of false-positive results is thought to increase with the flexibility of experimental designs, “the field of functional neuroimaging may be particularly vulnerable to false positives.”

“False positives” occur when the researchers find an effect or a difference, but it is due to random elements of the data (it is not a real effect or difference).
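A short simulation makes the mechanism concrete. Under the null hypothesis (no real effect), a valid statistical test yields a p-value that is uniformly distributed; if an analyst can try many pipelines and report the most favorable one, the chance of a “significant” result climbs quickly. The 20-pipeline figure below is an arbitrary illustration, not a number from any of the studies discussed, and treating pipelines as independent is a simplification—real pipelines run on the same data are correlated, which lowers (but does not eliminate) the inflation.

```python
import random

random.seed(0)

ALPHA = 0.05          # per-pipeline significance threshold
N_PIPELINES = 20      # hypothetical number of defensible analysis pipelines
N_SIMULATIONS = 10_000

# Under the null, a valid test's p-value is uniform on [0, 1]; each
# "pipeline" is modeled here as an independent draw of such a p-value.
def pipeline_pvalue() -> float:
    return random.random()

false_positives = 0
for _ in range(N_SIMULATIONS):
    pvals = [pipeline_pvalue() for _ in range(N_PIPELINES)]
    if min(pvals) < ALPHA:  # report only the "best" pipeline's result
        false_positives += 1

simulated = false_positives / N_SIMULATIONS
analytic = 1 - (1 - ALPHA) ** N_PIPELINES  # family-wise error rate

print(f"simulated false-positive rate: {simulated:.2f}")
print(f"analytic false-positive rate: {analytic:.2f}")  # about 0.64
```

With 20 tries at a 5% threshold, a researcher analyzing pure noise reports a “finding” roughly two times out of three.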

A previous study also found that when fMRI data were analyzed differently, the supposed “normal” brain development of children looked completely different. When the researchers in that study weighted parts of the sample differently (for instance, giving more weight to participants living in poverty), the developmental trajectories changed. This suggests that averaging very different data—as most neuroimaging studies do—can lead to incorrect conclusions about what is “normal.”

Unfortunately, despite the high degree of uncertainty, incorrect conclusions (false positives), and the lack of transparent methodological reporting, people are more inclined to trust studies that present neuroimages—even when those images are unrelated to the actual research. Another study demonstrated that students of psychology were especially susceptible to these misrepresentations.

Perhaps one of the most prominent examples of this problem in real life is a controversial study from 2017 that claimed to find brain differences between “normal” children and children with a diagnosis of “ADHD.” That study resulted in calls for a retraction, and Lancet Psychiatry devoted an entire issue to criticisms of the study. Very high-profile critics, like Allen Frances (chair of the DSM-IV task force) and Keith Conners (one of the first and most famous researchers on ADHD), re-analyzed the data and found no actual brain differences. Other researchers found differences—but they were explained by intelligence, not by the presence of the ADHD diagnosis.



Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., et al. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84–88.


  1. (This comment is not directed at Peter Simons. Thank you for your writing.)

    I don’t get it, maybe I should go back and read it again, or maybe the old “recovery” side effect is acting up again. It has to be my fault because 9/10 researchers can’t be wrong. Turns out 70/70 researchers really can be wrong.

    I’m sure this time also, no one will tell me what 70 teams of researchers cost. The closest I ever get to numbers is that it is a better deal than that other guy. Cheaper in comparison. Payment plans, sliding scale. You can buy the dumbed down model if you’re poor. JUST BUY SOMETHING! Like some late night TV program selling blenders.

    This is the research of medication sellers, but also of the sellers of psychotherapy and other “recovery” alternatives used to keep us buying? The sellers even showing up here at MIA sometimes? Where their ideal client hangs out? I’ve been taking a break from believing everybody and self blame to read the business models being marketed to the healers for hire instead.

    This research thing? Do we have to keep going now that it’s been researched enough to know it’s a scam?

    If this was industry or farming and you hired 70 teams to fence your property but sent them out to survey first and not one came back with an accurate read of the map you provided? hmmm what do you think would happen? Maybe instead of retracting papers we should be retracting pay checks?

    Fine, don’t fire researchers! Halve their pay and give a researcher’s salary to the person in the MRI machine (in this case). I’m sure they could tell you all about themselves. And a salary empowering them as the ultimate expert of the brain you are researching would go a long way towards really HELPING. Much farther than more of this dismembered SCIENCE! My brain doesn’t go anywhere without me. Would you (research team) please quit acting as if I’m not here!

    • Plus I’ve always wondered about control groups. Do they put up a “research study participants needed” poster? If you are NORMAL call this number? Brain scans for a fiver? How out of touch with reality do you have to be to believe you are THE measuring stick for Normal? Makes me a bit worried that my brain is being held up against the top predators in our species as if I am the one defective.

      • You made me think of “anxiety” and the research done on this, well to keep the idea alive that my “anxiety” is a bad thing. I also remember being offered some pills for my “hypervigilance”.
        My “hypervigilance” can cause me some “irrational” fears that I have not observed in my spouse. And there were the times while my spouse shoveled snow on the high boulevard, with the toddlers playing alongside the boulevard, and me staring out the window feeling very “nervous” at the possibility that “omg, they could slip and roll down in front of a car, AND Die”. Which of course, led me to get upset at my spouse for not watching and repeatedly yelling out the window for my spouse to “watch the kids”.
        I bet my brain was nicely lit up in the “anxiety center”. I bet that his were not.

        And so it went, steadily with my “anxiety” increasing, which of course is stressful.
        Then my spouse and my one-and-a-half-year-old were playing at the top landing and I watched as my spouse sat with his back against the wall and my son’s back towards the stairs.
        But my spouse always had it all under control. I looked up, I turned away angrily and behind me I hear my son falling down 17 steps. But as my chill spouse said “geez, thank god he’s fine”.

        Then there was the time my son came speeding over in his car a few years back because his dog got a fishhook in his mouth.
        My brain was on high alert, it did not feel great. But while everyone was looking at the fishhook and considering the problem, I ran downstairs, grabbed a dog muzzle I had (for emergencies lol) and put it on the dog and told my son to get to the vet.
        The vet asked who had the foresight to use the muzzle.

        But you see, to always feel as if something will go wrong if I don’t DO it, is what is considered my problem. I’m the problem, not the yahoos that are not in it WITH ME, to lessen the load.

        Ohh sure, there will be some therapists that try to “help” the “excess amount” of being vigilant, you know, to lessen that “cortisol” and that “overfiring” and it’s all very curious that they don’t want to “increase” the vigilance in those around me.

        I’ve come in handy. I serve to keep the false story going, I serve to prove to those dead things, that I’m “overfiring”. Someone has to be the absorber of chaos.

        So I’m sure they would love to see my brain in action. Cause of that stress you know.
        And we all know just how insane this path of scans is and how it is about keeping a segment of people in jobs, jobs that are more destructive than wars.

        I’m even bright enough to consider a possibility that my spouse and I were “balancing each other out”, which is mighty kind of me to consider.
        And this is the thing a shrink won’t consider, because he needs a customer and “research” needs to keep looking into the thing the shrink guided them to do.
        This elusive, fucked up lie.
        It matters to me not, because I’m still here, just as the shrink is. So much for Darwin.
        BUT, the shrink wants to be Darwinism. He wants to be the “natural law”. And that is so easy, for me to have the power to just plant seeds that these kinds are not the ideal, there are “parts wrong”. There is never a part wrong inside a shrink.
        Very curious.
        It makes me think of the marriage where one is the problem.

        Bullshit talk like suggesting that “once upon a time we needed that anxiety” “but it no longer serves us in this tech age” is really about keeping the hoax alive. EVEN if it has a grain of truth, it is obvious we are not keeping up with the insanity within each of us.

        So O.O, you got zero to worry about as far as being up against a “top predator”. We are clueless as to who that is, things never seem as they are. It seems to me that “nature” supplied you with exactly the thing to keep you safe from “predators”. The question is, is psychiatry safe from “natural laws” that really do exist without us knowing the future. They are not safe and it is obvious that they know. The constant reinventing and arguing and use of force to accomplish their “hard earned” degrees is the thing at play. The thing they really are mostly trying to do is prevent a majority from discovering the trail of harm they are guilty of.
        Stuff like making “anxious people” think they are too anxious and that “anxiety” is not residing in ONE person, to be treated. The cure as they well know, is NOT to be “treated” or “fake scanned”.

        So good luck to them, in surviving in the “natural world”, where balance is not as they hoaxed.

  2. Peter:
    Re: the random fluctuations. If the image is a recording at a flat field across the surface at a certain dimension and scale of relationship back to the recording platform and the “optics” of a computerized system, then is the noise occurring from a different scale of relationship, where the fluctuation or wave might be trying to “break through” into the field being observed? How much noise is being injected into the observation by the technology engineered and how curious are those who are posing the questions. The issue may not be a high degree of “uncertainty” but rather a different context that embraces the idea, “The End of Certainty”. Time to rethink the concept of “random”?

  3. These kinds of articles are so important. So many people who believe in psychiatry and the US mental health industry as a whole say that they believe the “science.” These kinds of arguments and proofs against the bogus science that has been used to justify the medical model of “mental illness” can hopefully convince some to reexamine their fixed ideas about science and psychiatry.

  4. It seems to me that the conclusions, as usual, were dramatically soft-pedaled compared to the actual conclusions one ought to draw from this. If the most experienced professionals in the field can’t draw a common conclusion from the same data, there are only two possible conclusions: SPECT scans or PET scans actually tell us nothing significant or meaningful about the brain or its functioning, or the people analyzing the data are either so incompetent or so utterly biased as to remove any possibility of gaining any reliable interpretation of ANY data from these scans, at least as far as “mental health” is concerned. Or both may be equally true.

    These people are taking in millions of dollars, including our tax dollars, doing worthless research and making claims that are not substantiated, otherwise known as LIES. This is a dire situation and calls for a complete reconsideration of the value of spending significant money on what is either a fraudulent field or one in such a stage of infancy that nothing of value can be expected for decades to come. Particularly given that such studies are granted such high value and yet are almost worthless, these studies are contributing confusion rather than knowledge, and should be discontinued or else relegated to a very cool back burner while somebody figures out if there is anything of scientific value that will ever come out of this kind of study.

    • I get what you’re saying, Steve, and I basically agree with you here – you KNOW that! But you are simply not seeing it from THEIR point of view. They have no interest in “saving money”, that’s ludicrous to them! They are only concerned with money, power, and control. And they will gladly spend (or waste, from our perspective) any amount of money needed to maintain their power and control. Doesn’t the old line about “if you can’t dazzle ’em with brilliance, then baffle ’em with bullshit” ring a bell? Well, they are just as happy to baffle with brilliance, as dazzle with bullshit. Read that carefully. And no, they do NOT care how much money they spend, or waste. Why should they? They are RICH, and getting RICHER! Your “poverty consciousness” does you wrong here, Steve.
      Only poor people care about “saving money”….

  5. Every bad thing that happened to me had the backing of research. History is much worse than my story. We sterilized the “feeble minded” in Alberta right up until 1976. The last residential school didn’t close until 1996. All had the backing of science. When research is proved wrong we get an “oops” if anything. No one goes to jail. Did the guy who won a Nobel prize for putting an ice pick into his patients brains through an eye socket get prison time?

    Are we defunding the wrong department?