STAR*D: Adding Fiction to Fiction

In my five-plus years investigating STAR*D, I have identified one scientific error after another. Each error I found reinforced my search for more while fueling my drive to publish these findings in a top-tier medical or psychiatric journal. These errors are of many types, some quite significant and others more minor. But all of these errors, without exception, had the effect of making the effectiveness of the antidepressant drugs look better than it actually was, and together these errors led to published reports that totally misled readers about the actual results.

As such, this is a story of scientific fraud, with this fraud funded by the National Institute of Mental Health at a cost of $35 million.

Here are some of the scientific errors that I have published so far in peer-reviewed journals:

• In the STAR*D Research Protocol, patients who started on Celexa in step-1 but dropped out without taking the Hamilton Rating Scale for Depression (HRSD) were supposed to be counted as treatment failures. But in their published results, the STAR*D authors excluded from analysis those Celexa patients who did not return for a follow-up visit, thereby inflating Celexa’s reported remission and response rates.

• In the STAR*D protocol, patients were supposed to have a baseline HRSD score of 14 or greater to warrant inclusion in data analysis. The STAR*D authors, however, changed the eligibility-for-analysis criteria in the published results for steps 2–4 and the summary articles without informing readers. These changes resulted in the inclusion of 607 patients who were initially excluded because their baseline HRSD score of less than 14 signified at most only mild depressive symptoms. Similarly, an additional 324 patients who were initially reported as excluded because they lacked a baseline HRSD were also subsequently included. Thus, 931 of STAR*D’s 4,041 patients (23%) did not meet STAR*D’s step-1 eligibility-for-analysis criteria but were included in the published results for steps 2–4 and the summary articles (this arithmetic is shown in the sketch after this list). Including these ineligible patients inflated the published results.

• In the STAR*D protocol, the HRSD was the pre-specified primary research outcome measure, and STAR*D’s authors themselves cite the HRSD as the gold-standard measure in depression research. The STAR*D authors, however, switched to the proprietary Quick Inventory of Depressive Symptomatology–Self-Report (QIDS-SR) as the sole measure to report outcomes in the summary article. The QIDS-SR is copyrighted by Dr. John Rush, STAR*D’s principal investigator (PI). This switch in outcome measures dramatically inflated STAR*D’s published remission and response rates. As I have previously documented, Celexa’s remission rate was inflated by 44.9% in the summary article, and in their disclosure statements, ten of STAR*D’s authors report receiving money from Celexa’s maker, Forest Pharmaceuticals (Pigott, 2011, p. 19).

• In the STAR*D protocol, the clinic version of the QIDS-SR was explicitly excluded from use as a research outcome measure. The STAR*D authors, however, used the PI’s proprietary QIDS-SR as the secondary measure to report remission rates and the sole measure to report response rates in the six published “steps 1–4” articles. This use of the QIDS-SR significantly inflated STAR*D’s published remission and response rates.

• Having changed from the gold-standard HRSD to the proprietary QIDS-SR, STAR*D’s authors made false statements in their published papers to justify this change. These factual misrepresentations included falsely stating that “the QIDS-SR was not used to make treatment decisions” (Rush et al., 2006, p. 1908). This assertion is directly contradicted by the authors themselves in the step-1 article: “To enhance the quality and consistency of care, physicians used the clinical decision support system that relied on the measurement of symptoms (QIDS-C and QIDS-SR), side effects (ratings of frequency, intensity, and burden), medication adherence (self-report), and clinical judgment based on patient progress” (Trivedi et al., 2006, p. 30). It is also contradicted by all of the primary source documents (e.g., see the STAR*D Documents section of this blog: the Research Protocol and Analytic Plan, pages 47–48; the STAR*D Clinical Procedures Manual, tables 2–4 on pages 119–121; and the Controlled Clinical Trials article by Rush et al., 2004).

• In the STAR*D protocol, 11 pre-specified research measures were collected to evaluate outcomes at entry into, and exit from, each treatment step, and every three months during the 12 months of follow-up care. However, despite having published well over 100 articles, the STAR*D authors have not reported the outcomes from any of these 11 research measures other than the HRSD remission rates in the initial steps 1–4 articles. Instead, STAR*D’s authors have repeatedly used the sham QIDS-SR to report outcomes without disclosing that it was explicitly excluded from such use.

• The STAR*D authors inflated the extent of improvement that patients obtained from achieving remission by making numerous false claims that STAR*D patients who scored as remitted achieved “complete relief from (their) depressive episode” and obtained “the complete absence of depressive symptoms” as well as having “become symptom-free” (Pigott, 2011, pp. 17–18, 20). The STAR*D authors made such false statements despite the fact that a patient could have an HRSD score of up to 7 and still be classified as having obtained remission. An HRSD score of 7 or less is not synonymous with being symptom-free. For example, on the HRSD suicide question, “feels like life is not worth living” is scored as only 1. Other significant depressive symptoms that are scored as only 1 include “feels he/she has let people down” and “feels incapable, listless, less efficient.” A patient scoring 1 on only these three HRSD questions would be counted as remitted with four points to spare, yet no professional would describe such a patient as symptom-free since each of these symptoms is used in diagnosing major depression.

• In the STAR*D summary article, there is a pattern of rounding-up errors, each one inflating the alleged remission and response rates for steps 1–3 (Pigott, 2011, pp. 16–17). Six times the STAR*D authors fail to correctly calculate and report simple percentages, and each of these rounding errors further inflated the reported outcomes. While admittedly small, the consistency of this pattern (6 out of 6) is telling about the lengths to which one or more of STAR*D’s authors went to inflate the reported remission and response rates.

• The STAR*D authors failed to disclose that all of STAR*D’s 4,041 patients were started on Celexa at their baseline visit and that, after up to four treatment trials, only 108 patients (2.7%) remitted and then neither relapsed nor dropped out during the 12 months of free follow-up care. This is the STAR*D authors’ most damaging error: their failure to disclose the minimal effectiveness of STAR*D’s try-try-try-and-try-again approach to antidepressant drug care. Instead, the STAR*D authors falsely calculated a theoretical 67% cumulative remission rate for these drugs (the sketch after this list contrasts the two figures). This false statement is profoundly misleading to depression sufferers, professionals, healthcare policy makers, and the general public, since the 67% figure has been repeatedly used to promote the widespread use of antidepressants.
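To make the arithmetic behind these bullets concrete, here is a minimal Python sketch. The patient counts come straight from the bullets above; the per-step remission rates used to reproduce the theoretical 67% figure are my rough approximations of the summary article’s QIDS-SR rates, included only to illustrate the compounding logic and labeled as assumptions in the code.

```python
# Patient counts stated in the bullets above.
total_patients = 4041
sustained_remissions = 108    # remitted, then neither relapsed nor dropped out
ineligible = 607 + 324        # baseline HRSD < 14, or no baseline HRSD at all

observed_rate = sustained_remissions / total_patients * 100   # ~2.7%
ineligible_share = ineligible / total_patients * 100          # ~23%

# The "theoretical" 67% figure compounds per-step remission rates, assuming every
# patient who exited the study would have remitted at the same rate as those who
# stayed. The rates below are rough approximations (assumptions), included only
# to show how the compounding works.
assumed_step_rates = [0.37, 0.31, 0.14, 0.13]
still_unremitted = 1.0
for rate in assumed_step_rates:
    still_unremitted *= 1.0 - rate
theoretical_rate = (1.0 - still_unremitted) * 100             # ~67%

print(f"Ineligible patients analyzed in steps 2-4: {ineligible} ({ineligible_share:.0f}%)")
print(f"Observed sustained remission: {sustained_remissions} ({observed_rate:.1f}%)")
print(f"Theoretical cumulative remission: {theoretical_rate:.0f}%")
```

The contrast is the point: the 67% figure is a best-case compounding exercise that assumes dropouts would have fared as well as completers, whereas the 2.7% figure counts the patients who actually remitted and stayed well through the 12 months of follow-up.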

 

Adding Fiction to Fiction

In my first blog, I referenced the STAR*D authors’ most recent article using the sham QIDS-SR as its sole outcome measure. The article, titled “Residual Symptoms in Depressed Outpatients Who Respond by 50% But Do Not Remit to Antidepressant Medication,” was published in this month’s issue of the Journal of Clinical Psychopharmacology.

As I contemplated my second blog, I decided to read the JCP article in full rather than relying only on its abstract to learn what new things the STAR*D authors could possibly say in this latest analysis using the sham QIDS-SR. So early on the morning of March 29th, I paid the $35 download fee and began to read it.

I gasped when I read the article’s Assessment Measures statement: “Within 72 hours of EACH CLINIC VISIT, a telephone-based interactive voice response system gathered the 16-item Quick Inventory of Depressive Symptomatology-Self-report (QIDS-SR)” (emphasis added; McClintock et al., 2011, p. 181). I stared in disbelief as I read and reread this sentence.

As I recovered, my first response was that I had messed up, somehow missing this vital piece of information.

I live in fear of being found wrong in my research efforts, which is why I obsess over multisource documentation. I then quickly scanned the tables listing, and clearly demarcating, STAR*D’s clinic-visit assessments and its research outcome measures as stated in STAR*D’s primary source documents. Most important, these tables state both how and when each measure was administered, and these statements are identical across STAR*D’s Research Protocol, Clinical Procedures Manual, and the 2004 Controlled Clinical Trials article.

I could breathe again. The McClintock et al. paper’s methodological description was pure fiction! It did not happen. I sat there immobilized by disbelief. How could this be? Rush, Trivedi, Wisniewski, Nierenberg, Stewart, Cook, and Warden were all among the coauthors, each of whom has been first author on one or more of the STAR*D Reports funded by NIMH.

Next, I remembered the Nierenberg et al. 2010 article titled “Residual Symptoms After Remission of Major Depressive Disorder with Citalopram and Risk of Relapse: A STAR*D Report,” published in Psychological Medicine. I had cited statistics from this article in my STAR*D bias paper, but I had gleaned them only from the article’s abstract, not wanting to spend the $45 necessary to download it.

I downloaded this article, and there it was: “The QIDS-SR was completed by participants at baseline and at every visit to assess depressive symptoms. The self-report FIBSER was completed by participants after every visit to assess side-effects. Both measures were gathered within 72 h of EACH VISIT using a telephone-based interactive voice response (IVR) system” (emphasis added; Nierenberg et al., 2010, p. 43).

Again stunned, I felt like I was in the twilight zone, and it was only 5:30 a.m.! The Nierenberg et al. article included Dr. Maurizio Fava as well as Drs. Rush, Trivedi, Warden, and Wisniewski in its illustrious list of authors. How could this be?

Next, I pulled up STAR*D’s steps 1–4 and summary articles, seven in total. I searched these articles’ PDF files using the terms telephone, interactive, voice, IVR, and so on, since each of these articles described STAR*D’s methodology in detail. As this search made crystal clear, the few times the QIDS’s IVR version is referenced, it is to specify how this version of the QIDS was administered at entry into and exit from each step, at six weeks during each step, and monthly during follow-up. There simply was no administration of the IVR version of the QIDS-SR “within 72 hours of each clinic visit.”

The only version of the QIDS-SR that was tied to each clinic visit was paper-and-pencil and non-blinded. The STAR*D authors’ false statements regarding the QIDS-SR’s administration appear designed to give it credibility as a research measure, since by definition an IVR version is blind to patients’ status and therefore should be an unbiased source of data. But that is not what happened.

 

Why is this important?

It is critical for journal readers to understand the procedures STAR*D used when administering the clinic version of the QIDS-SR, since these procedures invalidated its use as a research measure.

First, STAR*D patients completed a pencil-and-paper version of the QIDS-SR at the beginning of each clinic visit, a process overseen by non-blinded clinical research coordinators (CRCs). The Clinical Procedures Manual instructed the CRCs to review the QIDS-SR results to make certain that all items were completed and then to see the patient to administer “the appropriate Patient Education material” (Trivedi et al., 2002, p. 75).

Next, the CRC delivered a multistep educational program for patients and families based on the neurochemical-imbalance theory of depression. This program included “a glossy visual representation of the brain and neurotransmitters”; consistently emphasized that “depression is a disease, like diabetes or high blood pressure, and has not been caused by something the patient has or has not done. (Depression is an illness, not a personal weakness or character flaw.) The [CRC] educator should emphasize that depression can be treated as effectively as other illnesses”; and involved “explaining the basic principles of mechanism of action” of Celexa in treating the patient’s depression (O’Neal & Biggs, 2001, pp. 4–7).

Finally, the CRC administered the QIDS-C, the clinician-administered version of the QIDS, which has the identical 16 questions and response options as the QIDS-SR. The CRC was then instructed to discuss “any symptoms and side effects that the patient may be experiencing” and to record both the QIDS-SR and QIDS-C information on the clinical record form for the treating physician’s review before he or she saw the patient (Trivedi et al., 2002, p. 75). This was done “to provide consistent information to the clinicians who use this information in the protocol” (Rush et al., 2004, p. 128).

In light of the above, it is clear why, as pre-specified, the clinic version of the QIDS-SR was explicitly excluded from use as a research measure. First, the conditions under which it was administered created significant demand-bias effects, biasing any reporting of QIDS-SR outcomes in STAR*D’s open-label study. Second, it is simply inappropriate from a scientific perspective to use a non-blinded self-report measure such as the QIDS-SR both to guide care at every visit and to evaluate the outcomes of that care in an open-label trial (or any trial, for that matter). Furthermore, the fact that the QIDS was administered twice in every clinic visit makes its use as a research measure doubly absurd; it holds no scientific merit other than documenting the demand-bias effects that occur under such circumstances.

 

More Shock

After recovering from this series of shocks, I read both articles in full to see what they reported. More shock! Amid the usual gibberish, both articles used the sham QIDS-SR to radically minimize the extent of emergent suicidal ideation in STAR*D’s Celexa patients who were reported to have remitted and/or responded to this drug.

For example, McClintock et al.’s abstract states, “Suicidal ideation was the least common treatment-emergent symptom (0.7%),” and in the Discussion section they conclude, “Interestingly, suicidality very rarely emerged over the course of treatment and was a rarely endorsed persistent residual depressive symptom. Furthermore, suicidality was rarely endorsed even in the presence of other residual depressive symptoms. With conflicting research findings regarding the link between antidepressant usage and suicidality, this study provides new evidence to suggest little to no relation between use of a selective serotonin reuptake inhibitor and self-reported suicidal ideation” (p. 183). Nierenberg et al.’s article made similar false statements.

Being a near idiot savant regarding all things STAR*D (far more idiot, though, than savant), I quickly checked, and both of these recent articles ran directly counter to STAR*D’s two 2007 articles published in the American Journal of Psychiatry and the Archives of General Psychiatry, each of which focused on genetic predictors of increased suicidal ideation during treatment on Celexa. Both of these articles were considered quite significant when published. In fact, patents were filed on these genetic predictors, with Dr. Rush listed as one of those filing for said patents, so he also has a financial interest in their use (e.g., see Philip Dawdy’s Furious Seasons blog post disclosing these patent applications, and see also these disclosure statements). So clearly in some people’s minds, and I assume that this at least included Dr. Rush, the AJP and AGP studies’ findings were important.

The AJP article found that 120 out of 1,915 patients (6.3%) reported emergent suicidal ideation while taking Celexa, based on a repeated-measures analysis of the QIDS-SR data (Laje, Paddock, Manji, Rush, et al., 2007), while the AGP article found that 124 out of 1,447 patients (8.6%) reported emergent suicidal ideation while taking Celexa, based on a similar analysis of the QIDS-C (Perlis, Purcell, Fava, Fagerness, Rush, Trivedi, et al., 2007).
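As a quick check on the figures just cited, here is a minimal sketch using only the counts from the two 2007 papers and the 0.7% figure highlighted in the McClintock et al. abstract:

```python
# Treatment-emergent suicidal ideation rates reported in the two 2007 genetics
# papers, compared with the figure in the McClintock et al. (2011) abstract.
ajp_rate = 120 / 1915 * 100    # Laje et al. 2007, QIDS-SR data  -> ~6.3%
agp_rate = 124 / 1447 * 100    # Perlis et al. 2007, QIDS-C data -> ~8.6%
mcclintock_rate = 0.7          # as stated in the 2011 abstract

print(f"AJP 2007: {ajp_rate:.1f}%")
print(f"AGP 2007: {agp_rate:.1f}%")
print(f"McClintock et al. 2011: {mcclintock_rate:.1f}%")
print(f"Ratio vs. the 2011 figure: {ajp_rate / mcclintock_rate:.0f}x to {agp_rate / mcclintock_rate:.0f}x")
```

In other words, the earlier STAR*D reports put the rate of treatment-emergent suicidal ideation roughly nine to twelve times higher than the figure the 2011 paper highlights.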

As coauthors on these earlier articles, Rush, Trivedi, and Fava clearly knew the results of these two studies, both of which found significant rates of treatment-emergent suicidal ideation while taking Celexa, yet they chose not to disclose this countervailing evidence in either the McClintock et al. or the Nierenberg et al. paper. Instead, the McClintock et al. paper states that “Suicidal ideation was the least common treatment-emergent symptom (0.7%).”

The simple fact is that 0.7% versus 6.3% to 8.6% is a significant difference in the reported rate of treatment-emergent suicidal ideation while taking Celexa. Why was this contradictory information not disclosed in these two articles? Can the QIDS ‘data’ be so easily massaged as to say anything? Could it be because most of STAR*D’s core authors (e.g., Rush, Trivedi, Fava, and Nierenberg) list Forest Pharmaceuticals, the maker of Celexa, in their disclosure statements?

Even if there was no quid pro quo between Forest and one or more of the authors (and with 99.999% certainty there was not), the mere fact of so many pharmaceutical-industry ties amongst the key authors of an NIMH-funded study is going to make even the most rigorous of them unconsciously inclined to look for, and report, positive findings while minimizing negative ones, particularly when this line of research constitutes the bulk of their professional life’s work.

A quick PubMed search reveals that Drs. Rush, Trivedi, Fava, and Nierenberg have published 1,697 peer-reviewed journal articles between them, the vast majority focused on depression and its treatment with antidepressants. Perhaps this is why these luminaries could report that “this study provides new evidence to suggest little to no relation between use of a selective serotonin reuptake inhibitor and self-reported suicidal ideation.” Maybe they have actually come to believe this false claim, or maybe they have published so many articles that they have forgotten what they stated in prior papers, so that publishing contradictory articles on treatment-emergent suicidal ideation, without disclosing said contradictions, is not uncommon. Basically, this is the Steve Martin defense for never paying taxes and a host of other crimes, two simple words: Judge, “I forgot…” (see: http://snltranscripts.jt.org/77/77imono.phtml).

This latter hypothesis is supported by how these key STAR*D authors stated in the step-1 article that the QIDS-SR was part of their clinical decision support system and then later, when this fact became inconvenient, falsely claimed that the QIDS-SR “was not used to make treatment decisions.” Perhaps they just forgot? Further evidence supporting this hypothesis is how these key authors seem to keep forgetting so many specifics from STAR*D’s research protocol, particularly regarding their use of the PI’s proprietary suite of QIDS products: the QIDS-SR, QIDS-C, and QIDS-IVR. Given these researchers’ monthly volume of peer-reviewed articles for top-tier journals, it is easy to see how they could become so forgetful regarding this trinity of QIDS.

If you read my bio, you will see that I am a partner in a neurotherapy company, consult for for-profit companies, and am proud never to have held an academic position. My consulting has often involved overseeing studies and/or analyzing lines of research, including writing up summaries of them. I’m pretty good at this and have a particular knack for deconstructing research and identifying its flaws, while not necessarily applying the same level of scrutiny to the products and/or services of the companies that have hired me. I strive to do my work with integrity (and cannot remember ever having lied about or covered up findings, or being asked to do so), but like most people, I generally come to believe in the value of what I am researching, and so I have to guard against confirmation bias; that is, the tendency we all have to mentally highlight information that confirms our biases while discounting contrary evidence.

It also doesn’t take a rocket scientist, on my part or on the part of Rush et al., to realize that for-profit companies will not keep engaging and paying us if we uncover and publish information that damages the marketing of their products (in my bio you will see that several of the companies I’ve consulted for in the past have also hired Rush and/or other STAR*D authors). But STAR*D was supposed to be different. It was an NIMH-funded, once-in-a-generation study, yet its findings have been riddled with scientific errors across well over 100 articles. This substandard science not only fails to advance the field of depression and its treatment; it may also directly harm patient care, as evidenced by these two most recent articles.

 

A request to retract the articles

Prior to the publication of “STAR*D: A Tale and Trail of Bias,” I had contacted Drs. Jon Jureidini and Leemon McHenry to enlist their advice on how to seek the retraction of STAR*D’s summary article, which was published in AJP, along with several other articles. I had learned of their efforts to have the infamous Study 329 retracted from the Journal of the American Academy of Child and Adolescent Psychiatry after reading the editorial “The Rules of Retraction” published in the British Medical Journal.

Drs. Jureidini and McHenry were very helpful, providing me with both their correspondence with JAACAP’s editors and their extensive evidence of profound researcher bias in Study 329, as well as documentation of multiple violations of the journal’s written editorial policies.

While thus far Drs. Jureidini and McHenry’s efforts have not been successful, my examination of their approach was critical to informing my own. In this regard, achieving the retraction of STAR*D’s house of 100+ cards was a primary reason for my starting this blog.

Based on information that I’d obtained through the Freedom of Information Act, plus other unsavory things that I had stumbled upon, I knew there was much damaging information that I had yet to disclose and that, once disclosed, this new information would further undermine confidence in STAR*D’s scientific integrity.

Since I do not believe that STAR*D was a pre-planned conspiracy involving over 20 authors, my hope was, and is, to get one or more of these authors to reflect on what I’ve uncovered, some of which they may not have been fully aware of when they agreed to have their names included as authors, and, on reflection, to decide to have their names removed from one or more of STAR*D’s articles. This is how The Lancet’s autism/vaccine study unraveled, resulting in its retraction many years after publication. Honest researchers decided they no longer wanted their good names associated with what, in retrospect, they knew was biased research. Such an admission takes integrity and courage because it means going against one’s peers, but I hoped that one or more within STAR*D’s deep pool of authors would have the integrity and courage to take this step.

While this was a nice plan, the JCP and Psychological Medicine articles changed everything, since false information that may directly harm patient care was now being distributed through leading journals. So I wrote retraction letters to both journals that published these latest articles (see the JCP and Psychological Medicine Retraction Letters in the Retraction Documents section). While I am biased, I think these retraction letters are worth reading.

From the letter I sent to JCP: “Furthermore, while McClintock et al. report that suicidality ‘was a rarely endorsed persistent residual depressive symptom’ this may in fact be due simply to the demand bias effects of how the QIDS-SR was administered such that some suicidal patients ceased endorsing the suicidality domain because they no longer wanted to discuss this symptom with the CRC while they were more willing to acknowledge other less evocative symptoms. Simply put, substandard science as evidenced in the McClintock et al. paper, and in many of the STAR*D articles, does not only fail to advance the field of depression and its treatment, it may also in fact directly harm patient care.”

Because of this risk of patient harm, I’m posting these letters and all subsequent correspondence. JCP’s co-editors responded promptly and appropriately (see the JCP Editors Response in the Retraction Documents section). While I think the editors minimize both their investigatory responsibility and capability, they clearly acted with integrity and all due speed.

I’m taking the editors up on their offer to submit a letter to JCP that, if accepted, they will forward to McClintock et al. for the authors’ response. Hopefully, this letter will go out within a week, once I get clarification from JCP’s editors as to whether my letter is limited to “6 doubled-spaced pages including references” as stated on their website.

I’ve also asked the editors to state what criteria need to be met for JCP to retract the McClintock et al. article. I strongly disagree with the JCP editors when they state that the points made in my retraction letter merely “fall in the realm of commentary on scientific content” (see Pigott Response Letter to JCP). In my simplistic mind, the criterion for retraction is quite clear: would JCP’s peer reviewers and editors have published the McClintock et al. paper as written if the scientific errors documented in my retraction letter are proven true? If proven true, these errors are not mere commentary on scientific content but rather document the fabrication of scientific content that may be harmful to patient care.

I was less pleased with the response of Psychological Medicine’s co-editor Dr. Kendler, whose opening sentence states, “With my staff, we have reviewed the material you sent and do not find any justification to withdraw this article” (see Reply Letter from Psychological Medicine Co-Editor in the Retraction Documents section).

Similar to JCP, I will take Dr. Kendler up on his offer “to write a letter to the editor, summarizing your concerns, we would be willing to consider it for publication using normal procedures (e.g. rigorous peer review). Our standard limit is 500 words and 10 references. The letter needs to focus on the research issues involved and avoid ad hominem attacks.” First, though, I want clarification from Dr. Kendler: if what I have documented is proven true, what is the basis for his and his staff’s collective judgment that this information does not provide “any justification to withdraw this article”? I have asked Dr. Kendler: if this information does not provide grounds warranting retraction, what does?

As you will note, my reply letter to Dr. Kendler is a bit more critical than my response to JCP, particularly in challenging Psychological Medicine’s allegedly rigorous peer review as applied to the Nierenberg et al. paper (see Pigott Response Letter to Psychological Medicine). Hopefully, Dr. Kendler will take this not as an ad hominem attack but rather as a fact-based one documenting the breakdown of any semblance of competent peer and editorial review, let alone rigorous peer review, on Psychological Medicine’s part.

As noted in this letter, a contributing factor to said breakdown may have been Dr. Rush’s place on Psychological Medicine’s Editorial Board, together with the peer reviewers’ and editors’ knowledge that the Nierenberg et al. paper was one of the well over 100 STAR*D Reports, factors that undermined the journal’s normally rigorous peer-review process.

 

Postscript

My next post will document the numerous scientific errors of omission and commission that occurred in establishing the QIDS-SR’s psychometric reliability, errors that allowed the abstract of the Rush et al. 2006 Biological Psychiatry article to falsely conclude: “In nonpsychotic MDD outpatients without overt cognitive impairment, clinician assessment of depression severity using either the QIDS-C16 or HRSD17 may be successfully replaced by either the self-report or IVR version of the QIDS.”

This STAR*D Report is little more than NIMH-funded tripe, used by the STAR*D authors to justify dropping the gold-standard HRSD as the primary measure to report outcomes in this study and replacing it with the PI’s proprietary QIDS-SR, which all of the primary source documents explicitly excluded from any such use.

Switching from the gold standard to tripe: how did any of this ever happen in a $35 million taxpayer-funded study? Perhaps the STAR*D authors, the study’s NIMH overseers, and the peer reviewers and editors of its 100+ articles all just forgot. Steve Martin would be proud to offer them his simple and foolproof defense: “We forgot…”

References:

Laje, G., Paddock, S., Manji, H., Rush, A.J., et al. Genetic markers of suicidal ideation emerging during citalopram treatment of major depression. American Journal of Psychiatry 2007; 164: 1530–1538.

McClintock et al. Residual symptoms in depressed outpatients who respond by 50% but do not remit to antidepressant medication. Journal of Clinical Psychopharmacology 2011; 31: 180–186.

Nierenberg et al. Residual symptoms after remission of major depressive disorder with citalopram and risk of relapse: a STAR*D report. Psychological Medicine 2010; 40: 41–50.

Perlis, R.H., Purcell, S., Fava, M., Fagerness, J., Rush, A.J., Trivedi, M.H., et al. Association between treatment-emergent suicidal ideation with citalopram and polymorphisms near cyclic adenosine monophosphate response element binding protein in the STAR*D study. Archives of General Psychiatry 2007; 64(6): 689–697.

Pigott, H.E. STAR*D: A tale and trail of bias. Ethical Human Psychology and Psychiatry 2011; 13(1): 6–28. This article is available by emailing me at: [email protected].

Rush et al. Sequenced Treatment Alternatives to Relieve Depression (STAR*D): rationale and design. Controlled Clinical Trials 2004; 25: 119–142.

Rush, A.J., Bernstein, I.H., Trivedi, M.H., Carmody, T.J., Wisniewski, S., Mundt, J.C., Shores-Wilson, K., Biggs, M.M., Nierenberg, A.A., Fava, M. An evaluation of the Quick Inventory of Depressive Symptomatology and the Hamilton Rating Scale for Depression: a Sequenced Treatment Alternatives to Relieve Depression trial report. Biological Psychiatry 2006; 59: 493–501.

Trivedi et al. STAR*D clinical procedures manual. July 31, 2002.

