Recently, a number of studies have purported to use machine learning to identify people experiencing psychosis based on scans of their brains. However, numerous issues have plagued these studies, including tiny samples, overfitting of models, studies using wildly different techniques, and the potential confounding factor of antipsychotic medication-induced brain changes.
An international group of researchers wanted to determine the usefulness of these machine learning techniques—how accurate they are. They also tried to avoid some of the methodological issues in other studies. They found that these approaches were no better than chance at identifying people experiencing psychosis. They described their findings in an article in Schizophrenia Bulletin:
“Contrary to expectation, the performances of all methodological approaches tested were poor to modest across all sites. […] Current evidence for the diagnostic value of ML and structural neuroimaging should be reconsidered toward a more cautious interpretation.”
The researchers tested two types of machine learning on three different types of brain scans. They did their analyses on five different datasets to avoid overfitting, which occurs when an algorithm is very good at detecting something in the exact sample it was trained on, but terrible at doing so in any other situation.
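The cross-dataset precaution described above can be illustrated with a minimal leave-one-site-out sketch. Everything here is hypothetical (toy data, a trivial majority-class "model") and is not the study's actual pipeline; it only shows the idea of training on some sites and testing on a held-out one.

```python
from collections import Counter

# Hypothetical samples: (site, label); label 1 = psychosis, 0 = control.
samples = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 1), ("B", 0),
           ("C", 1), ("C", 1), ("C", 0), ("C", 0)]

def majority_class(labels):
    """Toy 'model': always predict the most common training label."""
    return Counter(labels).most_common(1)[0][0]

def leave_one_site_out(samples):
    """Train on all sites but one, then test on the held-out site."""
    sites = sorted({site for site, _ in samples})
    accuracies = {}
    for held_out in sites:
        train = [lab for site, lab in samples if site != held_out]
        test = [lab for site, lab in samples if site == held_out]
        prediction = majority_class(train)          # fit on the other sites
        correct = sum(1 for lab in test if lab == prediction)
        accuracies[held_out] = correct / len(test)  # accuracy on the new site
    return accuracies

print(leave_one_site_out(samples))
```

A model that only memorizes quirks of its training sites will score well in-sample but fall toward chance on the held-out site, which is exactly the failure mode this design is meant to expose.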
They expected that they would find their methods to be 70-80% accurate at detecting who was experiencing psychosis.
Instead, they found that the approaches had a range of accuracies—but all of the ranges began at the low end: 50-51% accurate, which is the accuracy that would be expected by pure chance.
For instance, if 50 people in a sample of 100 had experienced psychosis and 50 had not, you could simply guess that every single person had experienced psychosis, and you would be 50% accurate. The researchers call this “poor” accuracy.
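The guess-everyone-positive baseline above can be checked in a few lines. The numbers are the article's illustrative ones, not the study's data:

```python
# 50 psychosis cases and 50 controls, as in the example above.
labels = [1] * 50 + [0] * 50
predictions = [1] * 100               # guess "psychosis" for everyone

# Correct on the 50 psychosis cases, wrong on the 50 controls.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.5
```

This is why 50–51% accuracy on a balanced sample carries no diagnostic information.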
The researchers found that one method, a “deep learning” technique performed on a specific scan of “surface-based regional volumes and cortical thickness,” had an accuracy reaching 70%. Unfortunately, when they tested this method on their other samples (to prevent overfitting), they found that it also became no better than chance.
The researchers refer to this by saying the technique “generalized poorly to other sites.”
According to the researchers, “When methodological precautions are adopted to avoid over-optimistic results, detection of individuals in the early stages of psychosis is more challenging than originally thought.”
There were 956 participants from five studies across four countries: China, Spain, the Netherlands, and the UK. Participants were either “healthy controls” or were identified as “experiencing their first psychotic episode.” The brain scans were magnetic resonance imaging (MRI) of three varieties: voxel-based gray matter volume (GMV), voxel-based cortical thickness (VBCT), and surface-based regional volumes and cortical thickness. They were analyzed with either “traditional” machine learning, or a type of machine learning called “deep learning” based on a “deep neural network.”
The researchers also discussed other reasons that previous studies may have appeared to find good accuracy. “Previous studies may have reported overoptimistic accuracies due to the use of inadequate sample size,” they write. Additionally, they performed an analysis that found statistically significant publication bias, meaning that positive results are more likely to be published, which skews the research literature.
They also write that even if these techniques were highly accurate, they would “be of limited clinical utility. This is because, from a clinical translation perspective, the real challenge is not to distinguish between patients and disease-free individuals, but to develop biological tests that could be used to choose between alternative diagnoses and optimize treatment.”
They conclude, “We encourage researchers to continue pursuing the integration of ML and neuroimaging while exercising caution to avoid inflated results and ultimately a distorted view of the potential of this approach in psychiatric neuroimaging.”
Vieira, S., Gong, Q., Pinaya, W. H. L., Scarpazza, C., Tognin, S., Crespo-Facorro, B., . . . & Mechelli, A. (2020). Using machine learning and structural neuroimaging to detect first episode psychosis: Reconsidering the evidence. Schizophrenia Bulletin, 46(1), 17–26.
Wow, they call it “poor results” and encourage continued investigation? A true scientific analysis would call these approaches utterly useless based on the data, and recommend giving up on these approaches entirely!
Maybe it means that the investigators can still find angels to finance them.
I suppose that is an “effect” of a sort, though of course it is of no use to the actual “client.” Maybe they should use future financial reimbursement as their “primary endpoint?” It would eliminate a lot of confusion!
Well, this is “science medicine”, so the findings only have to benefit clients who are equipment manufacturers and the experimenters themselves, those who will benefit financially and professionally. The “patients” are simply experimental animals, like mice, rats or monkeys. That they’re human is only important when publishing the results.
That’s too much of a common sense recommendation, Steve. No money in utilizing common sense.
Unless you could make common sense an illness and force drugs on the ‘sufferer’ and charge lots for the service you’re providing?
In a land where you can be arrested and fined for going out for a run (lawful) and stop to eat a kebab on a park bench (unlawful) it shouldn’t be too hard for the public to accept that common sense requires incarceration and forced drugging. If you go out for exercise, you better keep exercising while you consume your kebab.
Yes, lying in a public park, looking at the clouds, trying to mentally come to grips with the crimes previously committed against oneself by psychiatrists (after finding the medical proof that the antipsychotics do, indeed, create psychosis via anticholinergic toxidrome) is now also a “psychiatric illness” in the USA, boans.
And I’m quite certain the globalist banksters want to take away all our God given rights, via these insane, and scientifically invalid, “mental health professionals.”
So it is now time for the “mental health professionals,” who claim to be ignorant of the common adverse effects of their drugs, to decide whether they want to be on the side of the masses, or the globalist banksters’ and big Pharma, who miseducated them.
On Judgement Day they will stand alone, looking around for the people who said “we’re all in this together” and finding they have been left in the lurch, SomeoneElse 🙂
And there will be no twisting of words as to punishment being ‘treatment’, their time is up and no opportunity to correct their error, despite their pleading for more time to make amends.
Short term gain, long term cook out.
I guess it becomes a habit when you continue to deny the truth and profit from falsehood. (much like washing 5 times a day before prayers is now seen as desirable, and a habit worth having. Ban the Burkha, make face masks compulsory lol) They hurt God not one iota with their denial, only themselves.
Simon thanks for another gem.
The curious monkeys are still at it.
I like the way they word results, Poor to modest. I think they mean that the results sucked.
You do realize how comical it is to imagine these yahoos looking at blobs and colours. They will look back one day and realize that what they were looking for does not exist, because they made UP what they are looking at, to start with.
Perhaps if they are looking at blobs, start with tumors. Like the ones that took out two kids I know of, both 2 years old.
Perhaps the public would be happy about that research. An actual growth, visible, and yet they cannot do a damn thing.
The only reason people believe in psychiatry and its pretend method IS because there is nothing they are actually looking for.
How do you “identify” a chimera?
It usually has a label saying “Canon” or “Nikon” on it?