Fifty years ago, there was at least a pretense among academic psychiatrists of caring about science. They at least paid lip service to criticism of the lack of validity of mental illness constructs, and they made futile attempts to solve their diagnostic unreliability problem. They knew that without validity and reliability, correlations between mental illness constructs and anything are “garbage-in-garbage-out” findings.
Today, however, academic psychiatrists have dropped even the pretense of being scientific. Not only do they disregard issues of validity and reliability, they make a mockery of data collection—more later on this with a recent example of academic psychiatry research on the relationship between psychosis and mass shootings. So today, academic psychiatry’s unscientific “garbage-in-garbage-out” research has deteriorated to “nonsense-in-nonsense-out.”
While Mad in America readers are well aware of drug company corruption undermining the legitimacy of psychiatry’s research claims, an interview with a leading academic psychiatrist, from one of academic psychiatry’s most prestigious institutions, reveals that academic psychiatrists are clueless about the scientific method; and this ignorance allows them to convince themselves that they are conducting scientific research when they are obviously not.
This deterioration from bad science to no science has created a dilemma for critical thinkers. On the one hand, society today takes psychiatry’s claims more seriously than ever, and thus many critical thinkers may feel a social obligation to debunk its claims. However, given the obvious nonscientific nature of these claims, critical thinkers who become consumed by these claims and by the social obligation to debunk them can find themselves intellectually impoverished.
Before examining that revealing interview with a leading academic psychiatrist, first a discussion about how academic psychiatry’s earlier research failed to solve validity and reliability problems, making its claims garbage-in-garbage-out.
Earlier Garbage-In-Garbage-Out Research
In scientifically examining any mental illness construct, the first problem is one of validity, and a second problem is reliability. If a construct cannot be shown to be valid in a scientific sense, there is no scientific value in correlating it with other variables. And even if the construct is a valid one, if it cannot be reliably assessed, then again there is no scientific value in attempting to correlate it with other variables.
There are several aspects to scientific validity with respect to mental illness. One obvious aspect of validity is labeling something an illness that actually is an illness. In every era, we see human conditions that are disturbing, especially for societal authorities, which are invalidly labeled as mental illnesses. Earlier well-known examples of this include categorizing enslaved people who attempted to flee from slavery as suffering from the mental illness of drapetomania; and more recently, categorizing homosexual individuals as suffering from a mental illness because of their sexual orientation.
Unlike drapetomania, homosexuality, and many other psychiatric illness constructs, the construct of schizophrenia has enjoyed an unquestioned status as a medical diagnosis in psychiatry and much of mainstream society. However, scientifically, there are several levels of invalidity in the construct of schizophrenia, and its diagnostic reliability is poor.
The DSM criteria for schizophrenia consist of the presence of two or more of the following behaviors, with at least one of them being from the first three listed: (1) delusions; (2) hallucinations; (3) disorganized speech; (4) grossly disorganized behavior; and (5) negative symptoms (which include apathy, lack of emotion, poor social functioning, and difficulty following instructions).
However, by 1968, it was obvious to psychologist Donald Bannister that the DSM criteria make schizophrenia “a concept so diffuse as to be unusable in a scientific context.” Specifically, it is possible for one individual to be diagnosed with schizophrenia based on two symptoms that are completely different from the two symptoms of another individual similarly diagnosed. Bannister put it this way: “The two individuals are now firmly grouped in the same category—even though they do not specifically possess one common characteristic.”
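Bannister's objection can be made concrete with a toy formalization of the DSM rule just described. The symptom names follow the five listed criteria, but the two patients here are purely illustrative, not drawn from any study:

```python
# The five DSM symptom categories for schizophrenia; a diagnosis requires
# two or more, with at least one drawn from the first three ("core") items.
CORE = {"delusions", "hallucinations", "disorganized speech"}
ALL_SYMPTOMS = CORE | {"grossly disorganized behavior", "negative symptoms"}

def meets_dsm_criteria(symptoms):
    """Toy check of the 'two or more, at least one from the first three' rule."""
    s = set(symptoms) & ALL_SYMPTOMS
    return len(s) >= 2 and bool(s & CORE)

patient_a = {"delusions", "hallucinations"}
patient_b = {"disorganized speech", "negative symptoms"}

print(meets_dsm_criteria(patient_a))  # True
print(meets_dsm_criteria(patient_b))  # True
print(patient_a & patient_b)          # set() -- no shared symptom
```

Both hypothetical patients satisfy the rule, yet their symptom sets do not overlap at all, which is exactly Bannister's point about diffuseness.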
Two of the most significant symptoms that psychiatrists use to diagnose schizophrenia are the presence of hallucinations and delusions. Are these valid evidence of a disease?
There has been a great deal of research examining just how common—or normal—auditory hallucinations (hearing voices) are. A major review (“A Comprehensive Review of Auditory Verbal Hallucinations: Lifetime Prevalence, Correlates and Mechanisms in Healthy and Clinical Individuals”) was published in Frontiers in Human Neuroscience back in 2013. The review analyzed multiple studies investigating the existence of auditory verbal hallucinations (AVH) in the general population and concluded: “Epidemiological studies have estimated the prevalence of AVH to be between 5 and 28% in the general population.” Most importantly, this review also concluded that among those with such hallucinations, the difference between those who enter medical treatment and those who do not is the “way in which each respective group processes their experiences.” Specifically, the more that the hallucinations or voices are experienced negatively, which is more likely in the U.S. and other Western cultures, the more likely such individuals are to be diagnosed with a serious mental illness; in contrast, some non-Western cultures accept and even celebrate people who hallucinate rather than pathologize them.
The presence alone of “bizarre delusions” was considered the most heavily weighted symptom of schizophrenia in DSM-IV (1994), and so Schizophrenia Bulletin in 2010 published the article “What is Bizarre in Bizarre Delusions? A Critical Review,” summarizing various recent DSM definitions of a bizarre delusion, one of which included “the content [is] patently absurd and has no possible basis in fact.” However, culture and politics dictate a great deal of what psychiatry considers to be “patently absurd and has no possible basis in fact.” For example, 20% of Americans believe that the Bible is the literal word of God, which means they believe that Moses wrote about his death and burial after he was dead, that Joshua made the sun stand still, and that a virgin could give birth to a child, all of which for scientists are “patently absurd and have no possible basis in fact.” Politically astute psychiatry is well aware that if it classified these 20% of Americans as schizophrenic with “bizarre delusions,” then given that group’s political power, psychiatry would face a far larger assault than it faced from gay activists in the early 1970s for labeling their homosexuality as a mental illness.
Even disregarding these and other validity issues of the concept of schizophrenia as a mental illness, there are major reliability problems. In a 1995 American Journal of Psychiatry study “Interrater Reliability of Ratings of Delusions and Bizarre Delusions,” fifty senior psychiatrists were asked to distinguish between bizarre delusions versus non-bizarre delusions. A standard statistic used to assess reliability is called kappa. Kappa values between 0 and .2 mean no meaningful agreement; and a kappa of less than .59 is considered weak agreement. Among these fifty senior psychiatrists, the inter-rater reliability kappa of bizarre delusions was between .38 and .43, and the researchers concluded, “The reliability of ratings of bizarre delusions appears to be less than satisfactory for clinical practice, and the increased weight given to this symptom in modern diagnostic systems does not seem justifiable.”
To assess the reliability of the current DSM-5 (2013), its publisher, the American Psychiatric Association (APA) conducted field trials to assess the degree of agreement between clinicians diagnosing the same individuals. Even with special training that made agreement more likely, the kappa value for schizophrenia was only .46. (The agreement kappa value was even weaker for other so-called mental illnesses, for example: .28 for major depressive disorder; and .20 for generalized anxiety disorder.)
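For readers unfamiliar with the statistic, Cohen's kappa discounts the agreement two raters would reach by chance alone. A minimal sketch, using made-up ratings rather than data from either study:

```python
from collections import Counter

def cohens_kappa(labels_1, labels_2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(labels_1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_1, labels_2)) / n
    # Chance agreement: computed from each rater's marginal label frequencies.
    f1, f2 = Counter(labels_1), Counter(labels_2)
    p_e = sum(f1[c] * f2.get(c, 0) for c in f1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical: two clinicians rate 10 delusions "B" (bizarre) or "N" (non-bizarre).
rater_a = ["B", "B", "N", "B", "N", "N", "B", "N", "B", "N"]
rater_b = ["B", "N", "N", "B", "B", "N", "N", "N", "B", "B"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.2
```

Here the raters agree on 6 of 10 cases (60%), but because chance alone predicts 50% agreement, kappa is only .2—illustrating why raw percent agreement overstates diagnostic reliability.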
Just how clueless is academic psychiatry to the DSM-5 reliability disaster? Gary Greenberg, author of The Book of Woe: The DSM and the Unmaking of Psychiatry (2013), witnessed DSM-5 task force vice-chair, psychiatrist Darrel Regier, announcing the results of the DSM reliability field trials at the 2011 APA annual convention, and Greenberg reported the following: “Here he was, announcing a miserable failure, but if he grasped the extent of the debacle, nothing about his delivery showed it.”
Prior to its 1980 DSM-III, the APA had acknowledged that its previous DSM diagnostic manuals were scientifically unreliable, but then claimed that its DSM-III had solved this unreliability problem by removing the more subjective psychological notions (such as “neurosis”) and replacing them with behavioral checklists. However, in Making Us Crazy (1997), Herb Kutchins and Stuart Kirk documented that there had not been a single major study showing high reliability in any version of the DSM. Kutchins and Kirk reported on a major 1992 study conducted at six sites. Mental health professionals were given extensive training in how to make accurate DSM diagnoses and then assessed six hundred prospective patients. Kutchins and Kirk summarized the results: “Mental health clinicians independently interviewing the same person in the community are as likely to agree as disagree that the person has a mental disorder and are as likely to agree as disagree on which of the . . . DSM disorders is present.” This was true even though the standards for defining agreement were very generous.
Without a scientifically valid construct that can be reliably measured, there is no scientific value in examining associations with any variable. Thus, given schizophrenia’s lack of scientific validity and poor diagnostic reliability, efforts to correlate schizophrenia with anything are what scientists call “garbage-in-garbage-out” research.
Academic Psychiatry’s Deterioration to “Nonsense-In-Nonsense-Out” Research
Psychologist Roger McFillin, on his podcast earlier in 2025, interviewed Columbia University psychiatry professor Ragy Girgis about research authored by leading figures in psychiatry at Columbia University, including Girgis and two former APA presidents (Paul Appelbaum and Jeffrey Lieberman). Using the Columbia University Mass Murder Data Base, created in 2019-2020, Girgis and his co-researchers aimed to discover how mental illness, specifically psychosis, is related to mass shootings. Given that the Columbia University psychiatry department is considered extremely prestigious in academic psychiatry, what McFillin elicits from Girgis is especially illuminating. Not only does Girgis appear not to understand validity issues that McFillin attempts to explain to him, Girgis describes a method of data collection used in the Columbia University Mass Murder Data Base that makes a mockery of science.
Before examining how the data was collected for the Columbia University Mass Murder Data Base, first a look at: Girgis’s belief in DSM validity; his belief in the serotonin imbalance theory of depression; his belief in the effectiveness of SSRI antidepressants; his belief that antidepressants reduce suicide; his ignorance about psychiatric drug withdrawal; and his belief that psychiatry has reduced the burden of mental illness. Given these beliefs and claims, even more troubling is Girgis’s response to McFillin’s question as to whether he is living in a bubble: “I think the bubble in which I live and work encompasses about 98% to 99% of academia. . . of academic neuroscience, academic psychiatry, for sure.”
McFillin starts with the issue of mental illness validity, asking Girgis: “Is mental illness actually a valid scientific construct that one can measure?”
Girgis: “Yes, definitely.”
McFillin points out the huge increases in the number of mental illness diagnoses since the original DSM, and the changes in criteria for mental illness that make it easier to be classified as mentally ill, and he begins to delve into one of the longstanding construct validity problems of DSM mental illnesses, explaining to Girgis: “The fact that there are valid constructs has never been proven. In fact, there is diagnostic overlap in so many of the conditions.” Girgis ignores this point.
Later McFillin returns to this validity issue with an example, “How is Bipolar II a valid construct?”
Girgis: “I’m referring to the DSM criteria.”
McFillin: “Do you see how circular this gets? . . . You end up saying . . . a psychiatric condition is valid. Why? Because the DSM exists.”
Girgis: “But the DSM is based on something else. It’s based on a gold standard.”
McFillin: “You believe the way that the DSM is constructed is gold standard science? Be honest, there’s a lot of people listening.”
Girgis: “Definitely.”
McFillin explains to Girgis that even Allen Frances, DSM-IV task force director, has argued against the scientific validity of the DSM, and how the DSM was never meant to be a scientific instrument that could be used to identify discrete medical conditions, but was an attempt to try to cluster symptoms together for the purpose of research and communication, and that the labels and the criteria are arbitrary. (In a 2010 Wired interview, Frances criticized the concept of mental disorder employed in every DSM including his own DSM-IV, stating that “there is no definition of a mental disorder. It’s bullshit. I mean, you just can’t define it”).
Girgis: “I would disagree. I believe the DSM is valid.”
An obvious example of the arbitrary nature of the criteria for mental illness was the inclusion and then the elimination of the “bereavement exclusion” for depression. Specifically, in DSM-III, if one had depression symptoms following the loss of a loved one, this was considered a normal reaction, and one received a so-called “bereavement exclusion” and was not diagnosed with the mental illness of depression. In DSM-IV, a time limit was imposed, as the griever could have symptoms of depression for two months before being considered mentally ill, but if depression symptoms persisted longer, then it was a mental illness. In DSM-5, this bereavement exclusion was removed altogether, so that if one had the required depressive symptoms immediately after the loss of a significant other, one was seen to have the mental illness of depression.
McFillin’s interview continues with other nonsense claims by Girgis.
With respect to the serotonin imbalance theory of depression, McFillin explains to Girgis, “The idea that depression is related to low serotonin or deficiencies in serotonin was never reliably proven.”
Girgis: “It has.”
McFillin then brings up Joanna Moncrieff’s 2022 study, “The Serotonin Theory of Depression: A Systematic Umbrella Review of the Evidence,” in which she and her co-researchers concluded: “Our comprehensive review of the major strands of research on serotonin shows there is no convincing evidence that depression is associated with, or caused by, lower serotonin concentrations or activity.”
Girgis claims that Moncrieff did find a relationship between serotonin and depression, as he distorts what Moncrieff and her co-researchers actually reported, which is as follows:
“Most studies found no evidence of reduced serotonin activity in people with depression compared to people without, and methods to reduce serotonin availability using tryptophan depletion do not consistently lower mood in volunteers. High quality, well-powered genetic studies effectively exclude an association between genotypes related to the serotonin system and depression, including a proposed interaction with stress. Weak evidence from some studies of serotonin 5-HT1A receptors and levels of SERT [serotonin transporter levels measured by imaging or at post-mortem] points towards a possible association between increased serotonin activity and depression. However, these results are likely to be influenced by prior use of antidepressants and its effects on the serotonin system.”
Then Girgis, revealing how little he understands the scientific method, offers what he considers to be “evidence” for the serotonin imbalance theory of depression: “So if a selective serotonin reuptake inhibitor [SSRI] were to be effective for depression [which Girgis claims is true] that would be kind of consistent evidence of the serotonin imbalance theory.”
McFillin points out that even if SSRI antidepressants were effective, this is not scientific evidence of the serotonin imbalance theory of depression. Any genuine scientist would recognize that Girgis’s “evidence” is analogous to the laughable notion that if alcohol makes one less shy, then shyness is caused by an alcohol deficiency.
Girgis then makes this claim about SSRIs: “I think they’re effective. There’s no doubt about it. There’s no doubt. There’s no doubt about the fact that they’re effective.” Girgis claims that this is “not a belief . . . It’s based on the data.”
McFillin responds, “It is absolutely not,” and he attempts to explain the data to him.
Then the discussion shifts to antidepressants and suicide.
Girgis: “The data are clear. Antidepressants decrease suicide.”
McFillin, trying his best to restrain his irritation and anger: “You’re doing it again . . . you’re saying the data is clear. . . . You can’t come on a show like this and say the data is clear when your own field is going to dispute that.” McFillin then reminds him that regulatory agencies around the world, including the FDA, have required black box warnings about suicidality.
Girgis: “So the black box warning is only about suicidal ideation. . . . So there’s no doubt that antidepressants decrease suicide completions.”
First, Girgis is wrong that the black box warning is only about suicidal ideation, as the FDA black box warning on antidepressants specifically alerts to an “increased risk of suicidality (suicide thinking and behavior),” with suicidal behavior including suicide attempts. Second, Frontiers in Psychiatry reviewed the antidepressant data and concluded in 2020 that “more recent data suggests that increasing antidepressant prescriptions are related to more youth suicide attempts and more completed suicides among American children and adolescents. . . .The Black Box warning is firmly rooted in solid data whereas attempts to claim the warning has caused harm are based on quite weak evidence.” Third, Girgis cannot see how illogical it is for him to acknowledge that “there are lots of studies showing or suggesting that antidepressants increase suicidal ideation,” but at the same time claim with certainty, “There’s no doubt that antidepressants decrease suicide completions.”
McFillin then attempts to discern what Girgis knows about psychiatric drug withdrawal and safe tapering. Girgis acknowledges he has never heard of the Maudsley Deprescribing Guidelines and that he has never heard of hyperbolic tapering, but Girgis assures us that he knows how to safely taper patients. McFillin questions him several times about how quickly he would taper people, with McFillin offering Girgis specific examples of drug dosages and time on the drug, but Girgis gives vague answers or problematic ones (for example, his approach for people who want to taper off Effexor: “First, cross titrate them to something like sertraline [Zoloft] or fluoxetine [Prozac]”).
Finally, McFillin points out how establishment psychiatry’s paradigm “just hasn’t produced results. . . . In fact, if anything . . . it’s much, much worse. . . . It certainly hasn’t reduced the burden of even severe mental illness in this country.”
Girgis: “Sure, it has. . . .There’s no doubt about that.”
McFillin asks Girgis: “Doc, you think we’ve reduced the burden of mental illness in the United States?” A flabbergasted McFillin tells him: “You are the first person who actually stated that we’re actually doing a good job and we’re reducing the burden of mental illness in this country.”
Many Mad in America readers and others knowledgeable of the research will laugh at the ludicrousness of Girgis’s preceding claims. However, one need have only an elementary understanding of science to grasp how unscientific the method of data collection was for the Columbia University Mass Murder Data Base of mass murders worldwide from 1900 to 2019, which Girgis and his co-researchers used to discover how mental illness, specifically psychosis, is related to mass shootings.
Girgis states early in the interview: “We basically just looked online and found as many court records, police documents, and other reliable media sources from which we could obtain data on any type of mass murder.”
McFillin, somewhat in disbelief that this would be how data was collected, asks a clarifying question: “So, if you’re going to . . . label one of the groups as having mental illness, what criteria are they meeting and how are you gathering that information?”
Girgis repeats: “We gather from police records, court records, media reports, those sorts of things.”
McFillin: “Is that valid? Is that reliable?”
Girgis offers an explanation as to why such invalid and unreliable data can be used: “That is then the point of using the comparison group because that same bias like the same problems with validity, the same problems with reliability apply to the comparison group. So that’s why we use groups because then that bias cancels out.”
McFillin tries to explain to Girgis that the problem is that the data itself is invalid and unreliable, as McFillin states: “I would argue that’s not necessarily bias. There’s not enough evidence to support a designation. So if you don’t have clear discrete evidence, it’s very hard to provide that person with a diagnosis.”
Girgis appears not to understand McFillin’s point.
McFillin tries again: “Can you scientifically scrutinize it when you don’t have the objective data?”
Again, Girgis does not appear to understand, saying only: “This might be a semantic issue.”
McFillin tries again: “There’s real methodological concerns with being able to draw any conclusions because it’s difficult to ascertain whether that person legitimately even has those symptoms because you have no way of being able to reference that other than case reports from somebody else. . . . We’re talking about . . . overwhelmingly news media and reports from law enforcement. Correct?”
Girgis: “It’s exclusively media, law enforcement, and court records. Exclusively, of course.”
What were Girgis and his co-researchers’ findings using media, law enforcement, and court records?
Girgis: “We found that about 5% of mass shootings is related to mental illness and in particular psychosis. . . .The conclusion was that 5% of mass shootings is directly caused by psychosis. . . . So people with psychotic illnesses are over represented among mass shooters. There’s no doubt about that. There’s no doubt about that. . . . definitely people with major mental illnesses are at higher risk for violence. I mean, there’s no doubt about that.”
Girgis’s definitive claim that psychosis has any relation, even a small one, to mass shootings has significant public policy ramifications in terms of forced incarcerations. So at the end of the interview, McFillin, feeling some urgency, again attempts to get Girgis to understand how problematic his data collection is.
McFillin: “I actually think it’s dangerous to say that you can apply a blanket medical diagnosis without ever evaluating somebody, and then trying to infer about who a person is from media reporting or even law enforcement.”
Again, Girgis does not appear to understand why it is scientifically highly problematic to use media reporting, court records, and police records to determine a psychosis diagnosis.
High-Profile Example of Psychosis Claim by the Legal System and Media
While any psychosis diagnosis is fraught with the previously discussed invalidity and unreliability problems, there are many reasons why media reports, court records, and police records of psychosis labels are even more unscientific.
One of many examples of why the legal system and media reports of psychosis should not be trusted in any scientific sense is what occurs when the insanity defense is used to mitigate punishment. Here, the defense routinely pays “hired gun” mental health professionals to provide a psychosis diagnosis, which can in some instances meet the needs of both defense and prosecution, resulting in the legal system declaring someone as psychotic with little or no evidence; and the media then reports the psychotic designation as fact. This is exactly what happened with Ted Kaczynski, who came to be known as the Unabomber.
Between 1978 and 1995, Kaczynski’s bombs killed three people and injured 23 others. Kaczynski’s biographer Alston Chase (A Mind for Murder, 2004) reported that much of what the world heard about Kaczynski’s mental status was not true. Chase documents how Kaczynski was psychopathologized for two reasons: the concerns of his family, who wanted to spare him from the death penalty; and to meet the needs of societal authorities who wanted to dismiss his societal critiques.
Against Kaczynski’s wishes, his defense attorneys launched a “mental illness” defense for him. A defense expert psychologist concluded that Kaczynski exhibited a “predisposition to schizophrenia,” citing his anti-technology views as having cemented her conclusion.
Before his capture, Kaczynski’s manifesto, which was known as the Unabomber Manifesto, was published in newspapers in 1995. The manifesto begins this way: “The Industrial Revolution and its consequences have been a disaster for the human race.” He then discussed how the increasing growth and worship of technological and industrial systems have subverted individual freedom and destroyed our natural environment. Many of Kaczynski’s points had previously been made by respected technology critics discussing the tyranny of giant industrial-technological systems. For readers familiar with respected technology critics such as Lewis Mumford and Kirkpatrick Sale, Kaczynski’s work is unoriginal and unenjoyable to read—but in no way insane.
In addition to Kaczynski’s views on technology, other so-called “evidence” for his mental illness included his personal habits and unkempt appearance living alone in a cabin in Montana. But as his biographer Alston Chase—who like Kaczynski was a former Harvard student, former professor, and Montana resident—points out, “His cabin was no messier than the offices of many college professors. The Montana wilds are filled with escapists like Kaczynski (and me). Celibacy and misanthropy are not diseases.”
Kaczynski was indisputably violent. He had personal reasons—including the abuse and humiliation he suffered in a psychology experiment as an undergraduate at Harvard—for his rage and distrust of the elites who managed society. However, even by DSM standards, he does not qualify as psychotic. Yet if one takes seriously court records and the media, Ted Kaczynski was not simply an angry and violent man but a psychotic.
Does Taking Academic Psychiatry Seriously Impoverish Critical Thinkers?
Many Mad in America readers have experienced harm caused to them by psychiatric diagnoses and treatments; and some Mad in America authors have discussed the harm done by establishment psychiatry to society (for example, by pathologizing normal human reactions that are often the result of individuals feeling alienated from their surroundings, psychiatry enables a dehumanizing society that alienates us from our humanity and from one another).
Beyond the harm that establishment psychiatry causes its patients and society, how does taking it seriously impoverish critical thinkers?
Columbia University psychiatrist Ragy Girgis’s claims—which range from declarations unsupported by the data, to falsehoods, to nonsense-in-nonsense-out research findings—are equal parts laughable and enraging. It is especially enraging when he repeatedly tacks on to his nonsense claims: “The evidence is clear”; “The data is very clear”; “This isn’t controversial”; and “There is no doubt about that.” In all such cases, the opposite is true.
This presents a dilemma for critical thinkers. A genuine response to Girgis’s claims by many critical thinkers would be anger and mocking laughter, and one sees that psychologist and podcaster Roger McFillin has some of these genuine reactions. If I were interviewing Girgis, my genuine reaction would be that his belief that he is a scientist who is adhering to the scientific method is a “bizarre delusion,” which by definition of his profession is a highly weighted symptom of schizophrenia.
However, McFillin recognizes that in his podcast forum, mocking reactions would only gain Girgis sympathy from viewers, and so he recognizes that what is required is emotional restraint so that his audience can see as clearly as possible what Girgis is all about. It takes little in the way of critical thinking to win a debate with an academic psychiatrist such as Girgis, but it does take a great deal of emotional restraint, which is no easy feat. McFillin is successful in his emotional restraint for most of the over two-hour interview, but not for all of it.
Academic psychiatry as represented by Girgis and the Columbia University psychiatry department creates a larger dilemma for critical thinkers. At the same time that academic psychiatry has deteriorated from bad science to no science, society is taking the claims by academic psychiatrists more seriously than ever, and so it is understandable why critical thinkers may feel a social obligation to debunk these claims. However, critical thinkers who become consumed by these nonsense claims and by the social obligation to debunk them can find themselves intellectually impoverished.
By putting energy into debunking inane claims, and by putting energy into emotional restraint when debating laughable academic psychiatrists, critical thinkers cannot give their full energy to questions that they do actually find intellectually stimulating and worth thinking about. There are many such questions, but let me rattle off just a few of them:
Among scientists and philosophers, there is a saying (often misattributed to Albert Einstein): “Not everything that counts can be counted, and not everything that can be counted counts.” How does this apply to describing human differences and helping emotionally struggling people?
In contrast to psychiatry’s harmful objectifications of human differences, are there non-pathological ways of considering and communicating our different temperaments and personalities? Correlatively, if we accept that there are human differences, does this mean that different types of helpers are more helpful for different types of people?
It has been repeatedly documented (such as in Bruce Wampold’s The Great Psychotherapy Debate, 2015, second edition) that the significant way that humans help emotionally suffering humans has little to do with medical-model techniques, but instead has to do with the quality of the relationship alliance, including the helper’s perceived warmth, genuineness, empathy, and capacity to listen and communicate. Do these helper capacities require natural talent, or can they be learned?
What value can peer support provide that a professional relationship cannot, and what can a professional relationship offer that peer support cannot?
There is an activist quote (often misattributed to Mahatma Gandhi): “First they ignore you, then they laugh at you, then they fight you, then you win.” For individuals attempting to extricate from the harm done by establishment psychiatry, does it make sense to invert this saying to read as follows: “First you fight it, then you laugh at it, then you ignore it, and then you win”?
There are many other questions that are more stimulating, at least for me, than debunking psychiatry’s latest nonsense, but if you want to start laughing at academic psychiatry, you might want to watch Roger McFillin’s podcast interview of Columbia University psychiatrist Ragy Girgis.