Comments by Peter Simons

  • Hi Alanna, thanks for your questions. As I mention above in my response to Matt, I did indeed report on that secondary subgroup analysis, among others. As you note, that particular secondary analysis found that for adults, the difference between combined therapy and therapy alone was not statistically significant (although in terms of raw numbers, therapy alone was still better, and the odds ratio was actually LARGER in adults [1.95] than in youth [1.86]). Thus, even in the subgroup analysis, and even going by statistical significance alone, there is no reason to add antidepressants to therapy when therapy alone is just as good in terms of outcomes. Why add antidepressants, risking worse outcomes, when doing so clearly adds no benefit?

    From the study: “Subgroup analysis revealed that the average treatment effect of lower suicide attempts and other serious psychiatric adverse events in psychotherapy-only over combined treatment was statistically significant in youths (OR 1.86 [1.07–3.23]) but not in adults (OR 1.95 [0.52–7.35])”
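
    For readers less used to these numbers: an odds ratio is called “statistically significant” here when its 95% confidence interval excludes 1 (an odds ratio of 1 meaning no difference between groups). The youth interval [1.07–3.23] excludes 1; the adult interval [0.52–7.35] straddles it, even though the adult point estimate (1.95) is larger. The much wider adult interval likely reflects less precision (fewer adult events), not a smaller effect. A minimal Python sketch of that reading, using only the figures quoted above (the helper function is my own, purely for illustration):

        # An odds ratio is conventionally "statistically significant" when its
        # 95% confidence interval does not contain 1.0 (i.e., no difference).
        def ci_excludes_one(low, high):
            return not (low <= 1.0 <= high)

        print(ci_excludes_one(1.07, 3.23))  # youth OR 1.86  -> True  (significant)
        print(ci_excludes_one(0.52, 7.35))  # adult OR 1.95  -> False (not significant)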

  • Hi Matt, thanks for your thoughtful questions.

    I would encourage you to read our entire article. This is a quote from Zainal’s piece: “On average, psychotherapies, especially those that integrate cognitive-behavioral and related theories, appear to be the best first-line intervention option to mitigate the risk of suicide attempts and other serious psychiatric adverse events in depressed youth and adult populations.” Again, the researcher says: psychotherapies are the best first-line option in youth AND adult populations.

    In my article about Zainal’s piece, I do mention that a secondary subgroup analysis found that the harms of antidepressants were much more severe in youth. However, I believe that it is best when reporting on research to focus on the primary analysis first, then the sensitivity analyses, and finally the secondary outcomes from multiple subgroup analyses, which are the most prone to bias. I don’t disagree with Zainal’s conclusion (antidepressants shouldn’t be used in kids); I’m just explaining why I covered the study in this format.

  • “However, the NIMH’s tight focus on funding genetic research has also prevented the exploration of the known psychological causes of schizophrenia, such as the impact of trauma, isolation, and poverty. It has also prevented the proliferation of non-biological understandings of psychosis, such as the Hearing Voices movement, and non-medical treatments, like Open Dialogue and Soteria.”

  • This article uses “antidepressants” and “Prozac” interchangeably because Prozac is the only antidepressant that is FDA approved for treating depression in adolescents. That is, even proponents of antidepressants agree that Prozac is the only one that might have a positive effect in this population. However, as this study and others demonstrate, there is plenty of evidence that Prozac is just as ineffective and harmful as the other SSRIs for adolescents.

  • Hi, Blogs Editor here. I just want to clarify that we are not able to pay authors for blogs or personal stories at this time, although I would love to be able to do so.

    I also want to add that blogs and personal stories appear in the same section on our front page, run as our featured story the day they are published, and are featured and promoted on social media in the exact same way. Personal stories is its own section to ensure that we are able to feature very important lived experiences every week, and we have a dedicated editor for that purpose because we consider this one of the most fundamental elements of Mad in America.

    Finally, I want to note that people with lived experiences can and do write blogs for us, and I am always happy to see further submissions, which you can send to the [email protected] email address.

  • Health & PE has been a standard part of school curricula for a very long time. That’s what makes it a control group; it’s the group that received their normal class instead of the new intervention. I am also not sure it was a “gym class” per se with actual physical activity.

    From the article:

    “Participants in the control condition attended their usual Health and Physical Education classes (matched for length and frequency). Content covered in these sessions included material regarding a) body changes associated with puberty; b) nutrition and dimensions of health; c) cyber safety; d) drug education and learning to manage risks. Participants in the control condition did not have contact with the research team outside of data collection.”

  • Hi Sandy,

    Thank you for your feedback regarding my article. I want to clarify that I am presenting the data from the study as I read it; MIA is not suggesting anything, nor am I. I am presenting the facts: the study found that psychiatrists delivered the worst quality healthcare of any medical specialty in all domains, and the authors suggest that the solution would be ceasing to measure the quality of psychiatric care.

    You are correct that the authors argue that these measures are not relevant to psychiatry, and in fact that there is no way to measure whether psychiatrists are delivering quality care at all (which seems problematic to me personally, but I reported it as is).

    I notice that one of the measures psychiatry failed at is documenting patients’ medication, on which psychiatrists scored almost 10 points lower than other specialties. Surely you don’t mean to suggest that documenting patients’ medication is irrelevant to psychiatry as currently practiced?

    Best,
    Peter

  • Cabrogal, I actually have one more comment. I was thinking about this today, so I re-read the study yet again, and I realized that your point about the low dose of ayahuasca is not really a limitation after all.

    Yes, the researchers in this paper CALLED it a limitation. But while they described the dose as lower, there’s really nothing to compare it to; there is no established dose of ayahuasca for clinical work. All the researchers note is that their dose was somewhat lower than in two other studies.

    However, the researchers also say that the amount they used was actually the *usual dose* of ayahuasca, and that they were not responsible for preparing it; the people in charge of the ritual were. So, really, this study used the standard dose of the drug, and those two other studies gave *excessive* amounts of it, which is itself a way of biasing a study. So, no, this is not a low/placebo dose; it is the standard dose, and that actually makes the study stronger.

    From the study: “Study participants received 7 capsules with the option of taking 3 additional ones as a booster, after about 2 h of the first dose. A dose of 7 capsules was portrayed by the host organization as similar to as [sic] regular volume of ayahuasca brew.”

    Thanks again for your questions. It’s an interesting topic.

  • Dear Cabrogal,

    Thank you for your critique. I would like to clarify some of the issues you brought up.

    1. I wrote: “A study on the mental health effects of the psychedelic drug ayahuasca found that the drug was no better than a placebo.”

    You wrote, “Err, no it didn’t.”

    Yes, it did. The drug was no better than placebo on the outcomes of anxiety, depression, or stress. The researchers did test all sorts of outcomes (for instance, measures of “mindfulness”), padding their study so that something, anything at all, was likely to come out positive just by chance, and there was one outcome that showed a difference. That, as you correctly noticed, was “emotional empathy,” which I considered to be irrelevant, poorly operationalized, and likely a false positive anyway, so I did not think it even worth mentioning.

    To be clear: they found one irrelevant outcome to be slightly better for ayahuasca, while the many other outcomes, including the highly relevant ones, showed no difference between placebo and drug.

    On the RELEVANT outcomes, the researchers wrote: “Compared to baseline, symptoms reduced in both groups after the ceremony, INDEPENDENT OF TREATMENT” (emphasis mine).

    2. I wrote: “In fact, both groups experienced about the same level of psychedelic effects, too. The researchers write that ‘participants in both groups experienced altered states of consciousness during the ceremony.’”

    You wrote: “That’s not at all what it says either. ‘Contact highs’ are a thing, so you’d expect both groups to experience altered states. But unless the dose is quite small they would definitely not experience the same level of psychedelic effects. And sure enough, according to Fig 2 the ayahuasca group experienced significantly greater psychedelic effects than the placebo group in all categories except ‘Ego Dissolution Inventory’ and ‘reduction of vigilance’.”

    Nope. There were two measures of the psychedelic experience: the EDI (Ego Dissolution Inventory) and the 5-Dimensional Altered States of Consciousness Rating Scale (5D-ASC). You correctly identified that there was no difference between ayahuasca and placebo on the EDI. However, you wrote that there were significantly greater psychedelic effects on subscales of the 5D-ASC. I’m afraid that’s just not true. There are 16 different subscales of the 5D-ASC, all reported in the supplemental materials, and only one reached p<0.05 (the most lenient conventional threshold for statistical significance); that one was “audio-visual synesthesia.” The other 15 subscales did NOT demonstrate a statistically significant difference between ayahuasca and placebo (see the rough calculation at the end of this point for why one hit out of 16 tests is unremarkable).

    The researchers themselves admit this: “Mean ratings of EDI and total 5D-ACS (dimensions and subscales) did not significantly differ between conditions and did not significantly interact with ayahuasca use experience of the study participants.”

    So, yes, it is accurate for me to write that the two groups experienced the same level of psychedelic effect.
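
    To give a rough sense of scale for the multiple-comparisons point here (and in my response #1): if 16 subscales are each tested at the p<0.05 threshold, you would expect roughly 16 × 0.05 ≈ 0.8 false positives even if the drug did nothing at all, and the chance of at least one chance “hit” is over 50%. A back-of-the-envelope Python sketch, assuming for simplicity that the tests are independent (they are not perfectly so, but the point stands):

        # Back-of-the-envelope: false positives expected from running 16 tests
        # at alpha = 0.05, assuming independent tests and no true effect.
        alpha, n_tests = 0.05, 16
        expected_false_positives = alpha * n_tests        # 0.8
        prob_at_least_one = 1 - (1 - alpha) ** n_tests    # about 0.56
        print(expected_false_positives, round(prob_at_least_one, 2))

    So a single subscale crossing p<0.05 out of 16 is roughly what chance alone would produce.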

    3. You wrote: “That said, it’s important to remember that it’s not the drug that effects healing in psychedelic therapy. The drug (in sufficiently high doses) merely temporarily knocks down the ego so the sufferer can gain insights into her condition that were obscured by her own self-image and notions of how she relates to her suffering and the aspects of her self/experience/environment that give rise to it. It’s up to the sufferer herself to decide what to do with those insights.”

    Except that this study explicitly showed that the drug DID NOT AFFECT EGO DISSOLUTION any more than the placebo did (see my response #2, above).

    4. You wrote: “It’s also misleading to imply ‘ritual effect’=’placebo effect’. In fact the ritual has important socio-spiritual components that serve to reintegrate the sufferer with his community and environment, thereby addressing aspects of disorders typically neglected by Western medicine.”

    This is an interesting comment. In this case, I used the term placebo effect/response for two reasons: one, that is how the researchers used it in this paper; and two, the study was specifically comparing ayahuasca against a placebo drug.

    The term placebo effect is generally used to include a variety of things, including expectation effects, and usually it also helps control for things like regression to the mean, which, obviously, is not an “effect” of the placebo. In its strictest sense, there is no such thing as a placebo “effect,” because by definition placebos are substances without an effect. But that usage strikes me as pedantic. The term placebo effect, when used to encompass all of the aspects of the difference between a drug group and a control group, is helpful and, I think, operationalized well enough to be clear.

    In a comparison like this, the response of the group taking the placebo includes the expectation effect, and one could argue that for most Americans, the medical field carries more expectation of benefit than a mystic ritual would. So I’d suggest that all medical placebo effects are due at least in part, and in my opinion largely, to faith in medical science.

    Thus, the argument that the ritual shouldn’t be called a “placebo effect” is a question of semantics, but not a helpful distinction when the point of the study is comparing the drug + ritual versus the ritual alone.

    5. You wrote: “In other words they were using sub-therapeutic doses, so it would have been quite surprising to see a strong drug-mediated response, especially as the subjects weren’t even suffering from the disorders used as response measures.”

    This is true, and I think it is a legitimate limitation of the study. Good job noticing that one.
