For those still interested in the recent antidepressant withdrawal debate, here is a new and important installment.
Before we get to the essential part, let us first recall that our systematic review in Addictive Behaviors (2018) showed, among other things, that around half of people who stop antidepressants experience withdrawal. This conclusion was critiqued in a blog by Joseph Hayes and Sameer Jauhar, to which we responded by pointing out the blog’s many serious errors and misrepresentations (see our response here).
Our response to that blog, however, did not deal with one of Hayes and Jauhar’s core criticisms: that our systematic review had failed to include five randomised controlled trials (RCTs). (These RCTs were: Baldwin 2004a1 & 2004b2; Lader 20043; Montgomery 20034 & 20045.) They alleged that these five trials, while primarily focusing on the effectiveness of antidepressants, also contained data on the ‘incidence’ of withdrawal — that is, on how common withdrawal actually is. Had we included these data in our review, Hayes and Jauhar contended, the number of people suffering antidepressant withdrawal would have been lower than we reported, perhaps by around 10% (we infer this figure from the tables they produced in their original blog critique). It was therefore either remiss or dishonest of us, they implied, not to include data from these studies.
Today, we would like to deal briefly with this particular blog criticism, not merely to show how groundless it is, but more importantly because, by doing so, we gain crucial insight into how shadowy and ethically suspect antidepressant withdrawal research can get when viewed up close.
The first thing to notice when looking at these five ‘studies’ is that the pharmaceutical company, Lundbeck, funded all of them. Additionally, all five studies were undertaken and written (either entirely or in part) by employees of Lundbeck, who reached the conclusion that their antidepressants were superior to competitor drugs.
The second thing to note about these studies is that three of them were not published as full studies at all. Rather, they were published as short ‘research supplements’ — each around 300 words long. For those unfamiliar with ‘research supplements’, they are essentially industry-funded study summaries that some journals will publish in return for an industry fee. Needless to say, the obvious conflicts of interest these supplements involve,6 as well as the serious challenges they pose to anyone wanting to assess their methods properly (supplements simply do not provide enough detail for that), are just two of the numerous ethical and scientific reasons why many credible journals, such as The Lancet, now refuse to publish them.7
The third and most disconcerting revelation about these five ‘studies’, and by extension the so-called evidence upon which Hayes and Jauhar base their critique, is that none of the five studies actually contains any data on the incidence of antidepressant withdrawal. To repeat: these five studies do not contain the very data that Hayes and Jauhar alleged we overlooked.
While this, of course, explains why we did not include these studies in our systematic review, it does not explain why Hayes and Jauhar claimed the data was there. We can only surmise that Hayes and Jauhar did not actually check these five studies. Rather, they simply quoted a Lundbeck-funded article published three years later (Baldwin et al. 20078), which somehow ‘cites’ data from the original five studies, even though that data never appeared in them.
Two implications arise from this:
Firstly, and most obviously, by basing their arguments on such dubious foundations, Hayes and Jauhar invalidate many of their core criticisms, such as their view that the overall incidence rate from the RCTs is closer to 40% than to our 50%, as well as their suggestion that we were not thorough (or, worse, were biased) in not including these five RCTs.
The second implication concerns why such research practices are permitted at all. How can a later article cite data from company-funded ‘studies’ that do not actually report that data (let alone report the methods by which that data was gathered)? And how can individuals, journals and professional communities permit or make use of these suspect practices while also receiving financial succour from the companies set to benefit from them?
Both implications can only add to the growing disquiet within the professional and service user communities about the impoverished state of psychiatry’s withdrawal research. Where such research exists, it is scattered and minimal (and, by design, minimises withdrawal effects). And where such research exerts influence, it appears to do so less on behalf of patients (whose withdrawal often lacks proper recognition and support) than on behalf of those who promote, defend or ever more widely prescribe this class of psychopharmaceutical.
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.