On the Mad in America podcast this week, we hear from Dr. Michael Hengartner. Michael is a Senior Researcher and Lecturer at the Zurich University of Applied Sciences in Switzerland. His areas of expertise include psychiatric epidemiology, public mental health, evidence-based medicine and conflicts of interest in psychological and biomedical research.
He was an expert evaluator for the European Research Council and the World Health Organization and currently is a member of the Swiss School of Public Health, the German Society for Social Psychiatry, and the European Public Health Association.
In this interview, we discuss Michael’s recently released book, “Evidence-biased Antidepressant Prescription: Overmedicalisation, Flawed Research, and Conflicts of Interest.” The book addresses the overprescribing of antidepressants and critically examines the current scientific evidence on the efficacy and safety of the drugs.
The transcript below has been edited for length and clarity. Listen to the audio of the interview here.
James Moore: Michael, welcome. Thank you so much for joining me today for the Mad In America podcast. We’re here to talk about your work and in particular, your new book entitled ‘Evidence-biased Antidepressant Prescription: Overmedicalisation, Flawed Research, and Conflicts of Interest’, which was published by Springer in 2021.
And first, I want to thank you for writing it, because this is such an important area of the mental health industry to write about. My first reflection upon reading the book was that it’s comprehensive; it collects together an awful lot that I think needed to be pulled together in one place. I can’t imagine it was an easy task to do all the research for it.
You start the book by writing about how you got here, which seems like a good place to start to me. So can you tell us a little bit about you and what led to your interest in research work?
Dr. Michael Hengartner: Thank you for having me and giving me a platform to talk about my book, I really appreciate it.
As I write in the book, I was working as a Research Associate at the Psychiatric University Hospital in Zurich. We were doing common epidemiological research and one of my main tasks was data analysis.
So, I got really interested in the scientific process, the ambiguity of data and also the sometimes arbitrary decisions you make when analyzing data and reporting statistical results. I always felt that this interest made me a better researcher. And then there was the replication crisis in psychology, where seminal studies were found not to replicate in independent evaluations and independent studies. That was really interesting.
So I started to look at depression and its treatment, and I was really interested in all these biases that were reported, like data dredging (also called P-hacking, because the P-value represents the statistical significance of results). That’s when I discovered this whole universe of research in one of the areas I was most interested in, the epidemiology of depression, which also includes treatment and outcomes. I discovered studies that clearly showed how selectively results from antidepressant trials were reported and how studies with negative results just remained in a file drawer. All those questionable or problematic research practices really got to me.
So that’s why I dug a bit deeper into this literature and discovered so many things that seemed awfully wrong to me. I started to do more research and I also started to write about it, and one of the main areas where this was actually documented and researched was the domain of antidepressants. So that’s how I became a little bit focused on antidepressants. Not because I had the intention, ‘Oh, I must show the world that the evidence base behind antidepressants is debatable’, but because this was one of the best-researched topics, and that’s where I slowly, step by step, got into this. And also, you can say, where I got stuck.
Moore: I’ve talked with other researchers and academics who said they had a fairly mainstream view of the mental health industry before they got involved in this type of research. They thought ‘we’ve got things sorted and we’ve got good evidence-based treatments and diagnoses’. So, were you a person that had that belief before you started writing the book? And did what you find shock you?
Hengartner: Yes, I also had this prevailing view, because that’s how we were taught and what we were taught. Especially at the clinic where I was working, this knowledge was never put into question, and I had to learn from my surroundings. So it was never even raised as an issue that perhaps the drugs don’t work as well as the literature might suggest.
So for me, it was quite a surprise to see that half of all trials failed to even find an effect. And then to discover that, if you look at the literature, you see almost exclusively positive trials. For me, that was also quite shocking, because this knowledge was largely ignored. In my view, most people were simply not aware of these issues.
Then during discussions with colleagues, I found it was also quite shocking for them because they were convinced that the drugs work miracles, or perhaps not miracles, but they are really essential. And then you discover these studies demonstrating that, at best, the effect is rather small and of questionable value.
Moore: In the introductory section of the book you share some personal experiences and you describe a time when you experienced what you call pervasive and profound sadness. So I wondered if you wouldn’t mind sharing a little bit about that, and what your thoughts were about it at the time?
Hengartner: That was a very difficult time because I had just finished college, so I was 18 or 19. In Switzerland, we have mandatory military service, so I had to go. I never wanted to go and play soldier, being yelled at from morning till evening and not sleeping. I am just absolutely not a military man. In the army, my mood got worse and worse and there was a lot of bullying, because I expressed clearly, also towards my officers, that I’m not a pro-military guy. So you can say they hated me.
It was also a difficult time because I had a breakup shortly before I went into the Army and that’s the time when you make the transition from adolescence to young adulthood, so it’s always very messy. You ask yourself, Who am I? Where am I? What is my future, and all that? Today you would say, I was getting depressed.
Moore: It sounds like you thought that was a situational thing, did you consider that you might be depressed or was that not something that occurred to you at the time?
Hengartner: To me, it was clear that I was not feeling and behaving like I usually do. I had never experienced such a lengthy time of unhappiness. It was quite clear that I was feeling depressed, but also quite clear that this was due to the situation I was in and that it was a consequence of being in a difficult place at a difficult time.
It was also what I experienced near the end of my military service. Knowing that it would be over in two or three weeks, I immediately felt a new, growing optimism and my mood improved quite rapidly. For me, it was clear that it was situational and due to the circumstances.
Moore: Thank you for sharing that, I’m glad that didn’t last too long, and I’m glad that you found a way out of that situation.
So if we move on to look at some of the things that interested me in reading the book. The first part of the book is about using antidepressants in clinical practice. There’s a very clear thread in the book that the evidence for using antidepressants in mild to moderate depression is really very poor. We’ve seen some recognition of that in the UK particularly because our evidence body, the National Institute for Health and Care Excellence (NICE) has stopped recommending antidepressants as a first-line treatment for mild to moderate depression.
We quite often hear – spoken quite loudly actually – that antidepressants work better and have more utility in what’s called ‘severe depressive episodes’. So I wondered if that was something that was supported by the evidence from trials or real-world use when you looked at it?
Hengartner: That’s one of the biggest unanswered questions because there is no unequivocal or conclusive evidence that they work better because the scientific literature is quite mixed. Most large-scale individual patient data analyses actually do not find that the treatment effect is larger in what we call severe depression than it is in mild to moderate depression.
A few analyses did; one very influential analysis by Fournier and colleagues, published in the Journal of the American Medical Association in 2010, found such an effect, but it was based on a very small sample of about 700 people. Much larger studies that used individual patient data from several thousand patients were not able to replicate the finding that efficacy increases massively in severe depression.
So I would say, based on this literature, there is scant, or at least very insufficient, evidence for the claim that they clearly work better in severe depression. The issue is more complicated, though, because, in the end, what is severe depression? The distinction between mild, moderate and severe depression is usually made simply based on rating scales, like the Hamilton Depression Rating Scale, which gives equal weight to all items. So if you have a score of, let’s say, 24, you are considered moderately to severely depressed; if it’s less than 16, it’s mild depression.
That is very problematic. Imagine that someone reports mostly sleep problems, appetite issues and problems concentrating, and has a score of 24. Another person has severe anhedonia, severe psychomotor retardation or suicidal ideation, but no sleep problems and no appetite changes, and has the same score of 24. People would say they have the same severity, which is quite absurd, because some symptoms are clearly more indicative of a severe episode, especially suicidal ideation and behavior, and also psychomotor retardation.
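The equal-weighting problem Michael describes can be sketched in a few lines of code. The item names and point values below are purely illustrative, not the actual Hamilton items or scoring rules:

```python
# Toy illustration of a sum-score rating scale that weights all items
# equally. Item names and point values are hypothetical, not the real
# Hamilton Depression Rating Scale items.

patient_a = {  # mostly unspecific symptoms
    "low_mood": 4, "anhedonia": 4, "sleep_problems": 4,
    "appetite_issues": 4, "concentration": 4, "fatigue": 4,
    "suicidal_ideation": 0, "psychomotor_retardation": 0,
}
patient_b = {  # clinically far more alarming presentation
    "low_mood": 4, "anhedonia": 4, "sleep_problems": 0,
    "appetite_issues": 0, "concentration": 4, "fatigue": 4,
    "suicidal_ideation": 4, "psychomotor_retardation": 4,
}

score_a = sum(patient_a.values())
score_b = sum(patient_b.values())
print(score_a, score_b)  # 24 24: identical "severity" on the sum score
```

On a pure sum score the two patients are indistinguishable, even though only the second reports suicidal ideation and psychomotor retardation.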
That’s the problem: if we only categorize into mild, moderate or severe based on these scores, we come up with findings that lack sufficient validity. Then another issue is all the people routinely excluded from efficacy trials. The drug trials usually exclude people who are acutely suicidal, people who have psychotic symptoms, and people who abuse substances. They also exclude people with comorbid mental or physical disorders, and usually these are the people with truly severe episodes.
So what we call severe depression in those trials is debatable. We don’t really know how the drugs work in these, let’s say, more genuinely severely depressed people. That’s why I say it remains to be answered whether the drugs really work better in severe depression; based on the evidence available, we can’t draw firm conclusions.
Moore: The subjective nature of rating scales is quite a big issue, isn’t it? You can see why academic psychiatry has spent such a long time looking for biomarkers or more tangible measures of what a disorder may or may not be, but they haven’t made that much progress, have they? The same rating tool, like the Hamilton scale, could be applied by three different psychiatrists and you could get a different diagnosis or outcome from each of those three people.
Hengartner: Right, and different scores. But you also need to be aware, I don’t know whether it is mentioned in the book, that they preferentially include people with high baseline scores in trials, because if people have low baseline scores it’s very hard to find a treatment effect; the score is already low, so it can’t get much lower. Because the aim was to have more positive trials and to include people with high baseline scores, the recruiting centers were sometimes under pressure to inflate scores. Let’s say the inclusion criterion was a score of at least 24, and the application of the Hamilton Rating Scale gave a score of 22; sometimes it was ‘okay, just add one or two points, then it’s 24 and we can include the patient’.
Then there’s the whole thing about regression to the mean: whatever you do, if you reassess these people after two or three weeks, you sometimes see a really remarkable decline in symptoms, which probably doesn’t even reflect their true improvement, because the scores were inflated at baseline. So you’ll see a decline that does not reflect the true improvement in the illness or disorder.
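The effect Michael describes can be demonstrated with a small simulation. All distributions and cutoffs here are made up for illustration; the point is only that selecting patients on a noisy baseline measurement produces an apparent improvement at retest even when no one truly changes:

```python
# Minimal simulation of regression to the mean under a baseline score
# cutoff. All numbers are illustrative, not real trial data.
import random

random.seed(0)
CUTOFF = 24  # hypothetical inclusion criterion: measured baseline >= 24

def measure(true_score):
    # An observed rating = stable true severity + day-to-day noise.
    return true_score + random.gauss(0, 4)

# A population whose true severity does NOT change between visits.
population = [random.gauss(20, 3) for _ in range(100_000)]

included = []
for true_score in population:
    baseline = measure(true_score)
    if baseline >= CUTOFF:
        included.append((true_score, baseline))

baseline_mean = sum(b for _, b in included) / len(included)
followup_mean = sum(measure(t) for t, _ in included) / len(included)

# The baseline mean sits well above the cutoff because only high-noise
# measurements pass the screen; the follow-up mean falls back toward the
# population mean, an apparent "improvement" with zero true change.
print(f"baseline {baseline_mean:.1f}, follow-up {followup_mean:.1f}")
```

The decline appears purely because the inclusion screen selects measurements that happened to be high on the day of screening.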
Moore: In the book, you talk about the transformation of the concept of depression between the 1970s and the 2000s. So in the 1970s, you write that depression was characterized as a ‘rare but severe disorder that almost always improved with little to no intervention’. Yet now, of course, we see depression and anxiety as highly prevalent, even called a global crisis. People are sometimes quite surprised when you say it might not be a chronic or ongoing condition. Many people improve without any really aggressive treatment or intervention. So, I wondered what you found when you were writing about how we conceive of depression now compared to the 60s and 70s.
Hengartner: I think it’s important to stress that this is not just my take or my reading of the literature. That’s why I carefully cite experts in psychopharmacology, those considered the most important or most eminent experts in this domain, who clearly state that, in most cases, it’s episodic: whatever you do, even without treatment, most people will improve. People might say, ‘Well, that Hengartner has a very odd reading of the literature’, but that was actually the common view until the early 70s. And then things started to change.
I think the most important driver of the new approach to depression was the need for a new, unified, symptom-based definition and diagnosis of depression. That’s also when organizations like the World Health Organization started to apply symptom questionnaires to larger populations.
So depression has a few clearly specific ‘core symptoms’, like really low mood or anhedonia, but also many other symptoms that are completely unspecific. Of course, people with depression often have those other symptoms, but most people who have those symptoms don’t have depression. Things like appetite change, sleep difficulties, problems concentrating, tiredness and so on. These are very common stress symptoms and can also be symptoms of another physical medical condition or side effects of a medical treatment.
So these are very unspecific and once they started to apply those symptom-based scales, of course, they came up with sometimes quite high symptom scores. But if you look at what symptoms are the most responsible for these high depression scores, you would see that these are sleep problems, appetite changes, problems concentrating, those unspecific symptoms.
And these are people who are likely more constantly in an environment or in a situation of high workload or job strain or constant relationship problems, marital issues. Or even if you have a newborn. I have three little kids and for six years I wasn’t really able to sleep. So for months and years, I had sleep problems. And of course, because I was always so tired, I had problems concentrating, and sometimes I also lacked appetite because if you’re so tired usually you’re not very hungry.
So if, during this period, when I was actually one of the happiest men in the world because I had these gorgeous little kids, you had applied a depression scale, the result would have been, ‘Oh, you have mild depression because you have sleep problems’. And so this approach, based fully on symptoms, massively increased the prevalence rate of depression diagnoses.
It is made worse by depression questionnaires, because at least with the diagnostic criteria, one or two of the core symptoms must be present. With the depression questionnaires, you can basically indicate that you have no low mood and no anhedonia, just sleep problems. The questionnaire completely ignores this; it just gives you a score, which indicates, ‘Oh, you have mild depression’. That was this big move towards a symptom-based approach.
It was also heavily supported by the pharmaceutical industry which also advertised to GPs that they must always be on the lookout for ‘masked depression’ because there are many patients who don’t clearly present with low mood or anhedonia but with appetite change or sleep problems. So that’s ‘masked depression’, that’s why we need to assess those symptoms and the message put simply was as soon as someone has increased scores on these depression scales, that’s probably depression, even if you don’t feel that the person is depressed or has a depressed mood.
So that’s a very brief summary, but those are some of the most important developments that changed the whole definition, and also the perception, of depression during this crucial time.
Moore: There is a theme in the book about the pharmaceutical manufacturers meddling in, or getting heavily involved in, the mental health business. That period from the 1970s into the 90s was characterized by active campaigning to redefine depression and anxiety and to treat them aggressively. The chemical imbalance theory arose, and depression started to be seen as a very disabling but very treatable chronic condition that people might have to take drugs for indefinitely. So the concept changed because it was pushed that way, to an extent, didn’t it?
Hengartner: Yes, it was pushed by the pharmaceutical industry, but there was also a genuine concern among psychiatrists and psychiatric associations that we were missing a terrible issue here: if we don’t look out for those symptoms, we miss so many depressed cases. During the 60s and 70s, prevalence rates were mostly so low that there probably were people with depression who were not correctly detected and diagnosed. But the situation we have now is completely different. Now overdiagnosis is one of the biggest issues, because as soon as you present to a GP with all kinds of non-specific symptoms, you get a depression diagnosis, sometimes prematurely, and sometimes it’s really a false-positive diagnosis.
So we had these awareness campaigns that followed in the late 80s, and they specifically addressed the public and the GPs to say, ‘hey, you’re missing so many depressed cases; you have to look out for the unspecific depression symptoms like distress, and we must treat them, otherwise they will develop chronic depression’. Although there is absolutely no evidence that if you treat people with mild depression they have a better outcome.
In fact, there are studies that clearly show that regardless of whether GPs detect depression or not, and whether they treat it or not, the outcome after one year is almost the same. So it did not even make a difference whether they detected those mild or sub-threshold cases. But the message was clear: you have to diagnose more, you have to treat more, you have to prescribe more drugs, and of course that was very welcome to the pharmaceutical industry. But it was not just the industry that pushed this new narrative; it was also a deep fear within psychiatry that they were underserving so many people.
Moore: Moving on to another theme in the book, which is flaws in antidepressant research. The tricks and game-playing that go on in the research are quite eye-opening, even in the way the drugs are licensed. You write about the way that drugs are licensed by regulators such as the Food and Drug Administration in the U.S. and the Medicines and Healthcare products Regulatory Agency in the UK.
I suppose I had this view that before a drug license is granted, it goes through many years of trials with hundreds of thousands of participants, and there are many positive trials that show a clear benefit. So it’s pretty surprising to find that actually only two positive trials are required to license a new drug, sometimes not even two, and that the most positive trials are selected while many others are not.
So I wondered if we could talk a little bit about your views and what your research tells you about the way that drugs are licensed and whether we should be concerned about it?
Hengartner: This is a very important topic, because early on, one of the most frequent responses I received when I submitted critical articles about the questionable evidence base supporting drug efficacy was that the whole discussion is unnecessary, because the drug regulators would not have approved the drugs if they were not clearly working and if the effects were not practically or clinically meaningful.
I heard this argument even from very well-known psychiatry professors, and that reveals that those people are apparently not really aware of how the drug agencies license drugs. That’s why I meticulously dissect and detail in the book how it happens. And as you said, the bar for drug approval is set very low. Put simply, if you can beat a placebo pill in one or usually two trials, you get your license, regardless of whether the majority of trials were actually negative. And if, in those selected trials, there was only a marginally small difference between the drug and the placebo, as long as it was statistically significant, that was enough for licensing.
I quote a lot from the US FDA, which is considered the most important drug regulatory agency. They made clear that they are just looking at whether there is statistical evidence for an effect, not whether this effect has any practical relevance. So if the effect is statistically significant, no matter how small, even just one or two points on the Hamilton scale, they consider it evidence that the drug has demonstrated efficacy, because it was statistically better than a placebo. If you actually look at the magnitude of this difference, you will discover that it is very small, but that was enough to license the drug.
Moore: We talked earlier about the Hamilton Depression Rating Scale, which I think is 30 something points in total, is that correct?
Hengartner: There are several versions: the Hamilton 17-item, 19-item and 21-item versions. But the most widely applied measure is the 17-item version, on which you can score from 0 to 52 points.
Moore: And yet the drug-placebo difference in selected positive trials is often something like two points.
Hengartner: Yes, or even less. In more recent analyses it’s more around 1.7 or 1.8 on a scale from 0 to 52.
Moore: As a healthcare consumer, when you read that a drug is effective, you imagine that the effect is large or highly significant. But then you dig into the details, such as those in your book, and find out that the drug-placebo differences are so tiny, and that’s even given that the cards have already been stacked in favor of the trial drug in so many other ways of reporting the data. That’s quite staggering, I think.
Hengartner: You always have to consider that this difference is likely to be inflated due to assumptions the statistical model makes, such as how it deals with missing data. There is actually quite clear evidence that approaches such as last observation carried forward, where, if a participant drops out, their last available rating is carried forward as their endpoint score, lead to an inflation of differences. The FDA has conducted its own analysis showing that this inflates the false-positive rate. Almost all drugs were approved based on this intention-to-treat analysis using the last-observation-carried-forward method. I don’t want to go into detail, because it is a little bit statistical, but it is explained in the book.
In essence, what people need to know is that we can’t even be sure that the true effect really is two points. Perhaps it’s even smaller than that, because of the biases we know that are there that tend to overestimate differences between drug and placebo.
There are also other factors, like using a per-protocol analysis instead of an intention-to-treat analysis, and sometimes certain centers were excluded because the data didn’t look good enough in those centers. So you restrict the study population to those where there seems to be a bigger effect, and you don’t report the results for all trial participants. There really are systematic biases that suggest this small difference is often an overestimate.
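The last-observation-carried-forward method is mechanically very simple. This sketch, with made-up scores, shows how a dropout's last rating gets frozen in as their endpoint:

```python
# Sketch of last observation carried forward (LOCF): when a participant
# drops out, their last available rating is imputed for every later
# visit and counted as their endpoint. Scores here are made up.

def locf(series):
    """Fill None (missed visits) with the last observed value."""
    filled, last = [], None
    for score in series:
        if score is not None:
            last = score
        filled.append(last)
    return filled

# A participant rated at weeks 0, 2, 4, 6 and 8 who drops out after
# week 4: the week-4 score of 21 becomes their trial endpoint.
scores = [28, 24, 21, None, None]
print(locf(scores))  # [28, 24, 21, 21, 21]
```

How much this biases the drug-placebo comparison, and in which direction, depends on who drops out and why, which is exactly why the FDA analysis Michael mentions matters.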
Moore: You explain really clearly in the book how these problems in research stack on top of each other and become additive and there’s a presumption to approve drugs by the regulator. So, you’ve talked about some of the issues; selective publishing, short term trials, changing statistical methods halfway through a trial, inadequate sampling, ghostwriting and so on.
Given all this, I wondered what you felt was perhaps the biggest driver of distortion in the evidence base supporting the use of antidepressants?
Hengartner: The biggest single factor in my view certainly is selective reporting, which also includes publication bias. So we know for sure that just about half of trials are positive but in the published literature this rate is close to 100%.
If you just look at the literature, you have the impression that efficacy was demonstrated in most trials, when in fact it’s just in about half of them, which is already quite worrisome, if the drug only works in every second trial.
But selective reporting also includes selectively only reporting the outcomes that were favorable. Even if the primary outcome is the Hamilton depression scale, you can use that scale in very different ways. You can dichotomize, you can make arbitrary categorizations between people who have improved or not improved, you can use different statistical modelling approaches to look at endpoint scores or changes from the baseline.
Then you can also use a combination of criteria: for instance, people whose depression score dropped below a threshold and who also improved on another assessment method were then considered responders. You can combine so many different methods that, with just one scale, there are plenty of different outcomes you can define.
That’s what happened in the infamous Study 329, the paroxetine trial. That was about reporting outcomes on scales that had not originally been declared as the primary outcomes. So there are many things that fall under the rubric of selective reporting. In the end, you might have two or three different depression scales, then some global scales of improvement, like the Clinical Global Impression, sometimes a scale for global functioning, and maybe a scale for quality of life. With so many different scales, you can define the outcome in many different ways. In the end, you might have 40 or 50 different ways you could define your outcome.
Even if the trial was negative on the pre-specified primary outcome, you can start digging into the data, transforming and changing everything. You will inevitably come up with some definition of the outcome for which you can demonstrate a statistically significant effect. But that’s just a post-hoc change, which mostly captures randomness, so you get false-positive findings. That happens a lot.
So besides selectively publishing the trials, those that are published are also selectively reported. Not all, but there’s a lot of selective reporting going on.
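The arithmetic behind this outcome-multiplication problem is easy to simulate. The numbers below are illustrative, and the simulated outcomes are treated as independent, which real rating scales are not, but the logic is the same:

```python
# Simulation: a drug with zero true effect, analysed against 40
# different post-hoc outcome definitions. Under the null hypothesis,
# each outcome's p-value is (roughly) uniform on [0, 1].
import random

random.seed(1)

n_outcomes = 40
p_values = [random.random() for _ in range(n_outcomes)]
spurious = [p for p in p_values if p < 0.05]

print(f"{len(spurious)} of {n_outcomes} outcomes 'significant' by chance")

# Analytically, the chance that at least one of 40 independent null
# outcomes reaches p < 0.05 is 1 - 0.95**40, roughly 87%.
print(f"P(at least one false positive) = {1 - 0.95**40:.2f}")
```

With that many ways to slice the data, a ‘statistically significant’ result is nearly guaranteed even for an ineffective drug, which is why pre-specifying the primary outcome matters so much.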
Moore: I think it all highlights how difficult it is for people to make an informed assessment about whether antidepressant treatment is right for them and whether the drug is efficacious enough to really make a difference. It seems to be trial and error for the person rather than us being able to rely on a consistent, reliable evidence base to support a decision about whether to take them or not.
Hengartner: True. And then there’s the other issue, which we touched on at the beginning: even if we could say we have a clear, robust effect, you still have no guarantee that you will really improve or benefit from this effect.
There are many user surveys where it is made clear that, for some, this effect was helpful in the short term but then with time it became a burden or turned into an adverse effect. And for some, right from the beginning, it was an unpleasant adverse effect. So even if we had clear and robust reliable evidence that this drug really makes a difference, on an individual user level, we can’t be sure that you will really benefit from this effect.
Moore: I wonder if we could turn to the latter sections of the book, which talk about solutions for reform and also capture some of your experience of having written critically about these issues.
You share how difficult it is to give messages that contradict this mainstream narrative and you write that ‘there were times I felt exhausted and crestfallen, demoralized by the insults on social media and irritating, ad hominem attacks by anonymous reviewers’.
For someone looking into this world who is not an academic, there seems to be a real pressure to protect the reputation of antidepressants as safe and effective drugs by key opinion leaders, the senior voices in psychiatry. I wondered if you’ve felt that too, and what it was like writing a book from a critical perspective like this, because I know it will be welcomed by many, but I know it will be challenging too.
Hengartner: Yes, that’s why I hesitated for a long time because as I described in the book at the beginning, I was very naïve and I thought, ‘okay, this is an interesting scientific analysis of the evidence base’. But I quite quickly realized that it’s more than just about the science behind the effectiveness.
There are a lot of interests here that sometimes provoked very angry responses. With practitioners, you always need to consider how these people were trained. They are simply not aware of those studies, the more than 1,000 studies that I reference in my book. When I talk at clinics or hospitals, the audience is mostly very receptive, and I have had great discussions after the presentations, where psychiatrists and other physicians came to me and said, ‘oh wow, that’s really news to me. I never knew about all those studies on selective publication’, and they were also quite shocked.
So I don’t think that the majority of practitioners just try to defend something that the pharmaceutical industry wants, they are really convinced that the drugs work. What makes things even more difficult is they observe improvement in their daily practice but because they prescribe drugs to most people that they see, they can’t really judge whether this is a drug effect or if it would have occurred even without the drug.
They have been trained to see improvement. They often go to continuing medical education programs that are sponsored or otherwise supported by the pharmaceutical industry, with a key opinion leader delivering marketing messages. And suddenly, along come some strange people who challenge this worldview, this belief system. That’s why I was called a flat-earth believer, and that was one of the nicer ad hominem attacks. For them it’s just completely absurd: it’s so clear, there’s so much evidence. ‘We were trained in medical school that this works, we have those presentations and those educational events, and we see it.’ Then suddenly someone comes along and says the treatment effect is actually quite uncertain, and they just can’t believe it. I think most people are simply not aware of these issues, and for them it’s just unimaginable that things could be quite different from what they were trained and taught and what they observed.
I think the main issue is that critical appraisal of scientific data is not a central part of medical school training. Most doctors are poorly trained in data analysis and statistics, so most don’t really understand it, and for them it’s ‘oh, it was a statistically significant effect, so end of discussion’. They don’t see that the methods are much more complicated than that.
I don’t want to denigrate the scientific knowledge of doctors, but most of their training is about practice, not about science. So they rely on what their supervisors tell them, on the people who present lectures or medical education events, and on visits from pharmaceutical sales representatives, who mostly deliver marketing messages.
Moore: When thinking about reform and the future, I wondered what you felt was perhaps the biggest change that we could make to this whole area so that we could try and ensure that more people are helped and fewer people are exposed to potential harm. What could we do differently?
Hengartner: I think first and foremost we need to change the way we define and diagnose depression. I suggest that we make the definition more conservative, so as to exclude more normal emotional reactions to stressful life events. We need a more stringent definition that sets the bar a little higher, because right now the diagnosis is so overinclusive. Just two weeks of feeling down, for whatever reason, qualifies as a major depressive episode. One of the most ridiculous things is that you can have mild major depression, which is a contradiction in terms, so we need a different definition.
Then I think one of the most important factors is that the whole drug licensing approach needs to be more stringent. Just beating a placebo pill in one or two trials is not enough. I think a new drug should clearly demonstrate that it is better than established, cheaper drugs that have been on the market for years. A new drug should demonstrate that it beats an established treatment, not a sugar pill; otherwise it has no added value when it comes with a similar adverse effect profile.
Also, you can conduct as many trials as you want; you only need to show that two were positive. It’s a very strange approach. In football, a penalty taker who scores only every eighth or tenth attempt is not a good penalty taker. If two out of ten shots are successful, if I hit just 20%, that’s a very poor rate. So I think the majority of trials should clearly need to be positive. Then, of course, we should improve trial pre-registration and require clear adherence to study protocols to minimize the effect of selective reporting.
What we clearly need is a rather strict separation between industry interests and medical practice: ending industry funding of continuing medical education and the financial support of department chairs and whole medical departments that draw most of their income from pharmaceutical money. We need a clearer separation, because if you know who pays for your job and who pays for your research, you are beholden to that person. You need to deliver, because the person who pays expects something in return, so scientists will, even if unconsciously, try to present results that satisfy their paymasters.
We actually need to end the massive entanglement between clinical practice and the financial interests of the pharmaceutical industry.
Moore: You recently shared on social media that in addition to being a researcher you are starting to train as a psychological therapist. This is such welcome news, Michael. Can you tell us a little about your decision to become a clinician as well as a researcher?
Hengartner: I had different reasons. For me, it’s about development and gaining a new perspective, about doing more than just research, which can sometimes be rewarding but most of the time is a frustrating and difficult process.
In doing all this research, I was always wondering whether there was another way I could do more. Also, I wanted to do something besides just sitting in front of a computer writing papers, analyzing data, and searching the literature, which is very interesting but, depending on the reactions your research provokes, can also be quite challenging.
So I came to the decision that I also want to go into clinical practice and try to help people directly, because science is indirect, and whether there really is a transfer from science into practice is a huge unknown. Whether my research really changes anything, who knows; if it does, perhaps it’s just a tiny little bit. So to do more, I also need to work in practice and directly try to help people with what I can offer. That would be psychotherapy, since I’m not a physician.
Moore: Michael, thank you. It’s been a pleasure to talk today. Your book is highly readable. It’s compelling, it’s comprehensive and it’s pretty clear from the 84 pages of references quite how much research effort went into putting together this picture of a pretty dismally broken system that an awful lot of people rely on to help them get out of some very difficult places.
I hope that your book does open that critical conversation up to many more people and allows us to interact with each other with a bit more civility about how we improve because improvement is desperately needed. I’m so grateful to you for joining me today and also for your efforts in writing the book.
Hengartner: Thank you very much, James. It was a great pleasure for me talking with you.