We all learned as freshmen that science is objective. But in fact, the evidence base for psychopharmacology and indeed probably most of modern medicine is manufactured and shaped and spun by experts whose interests may not be coextensive with yours or mine. A recent systematic review and meta-analysis on ADHD medications by Joseph Biederman and his colleagues at Harvard University and Massachusetts General Hospital gives us some insight into how this process works.
A favorite rhetorical tactic for proponents of ADHD drugs—whether they are writing in the scientific literature or the popular media—is to cite a long list of negative consequences purported to result from ADHD, begging the question of whether drugging children or adults for this condition reduces the likelihood of any of these bad outcomes. However, this paper by the Biederman group concluded that ADHD medications do indeed reduce frequency of a number of important harms.
Probably no man alive has done more to promote the diagnosis and drugging of children and adults for something called “ADHD” than Dr. Biederman, so this article is worth examining in some detail.
Let’s begin by taking a look at the studies reviewed by Dr. Biederman and his colleagues. Not one of them was a randomized controlled trial. All of them were observational studies relying on data extracted from population-wide registries or large health insurance claims databases, and they either compared individuals who were prescribed ADHD medication with those who were not, or else compared data from the same individual during periods when he was adherent to treatment with periods when he was not. This kind of study is prone to bias, for a couple of reasons.
Firstly, individuals who seek treatment for a given condition are more likely to engage in other health-promoting behaviors than those who do not. This is called the “healthy user effect.” For example, women who seek hormone replacement therapy (HRT) are more likely to exercise, eat a healthy diet, avoid alcohol, and maintain a healthy weight than those who do not.
Secondly, individuals who adhere to a given treatment also are more likely to engage in other health-promoting behaviors. This is known as the “healthy adherer effect.” One study found that patients who adhered to statin therapy had lower rates of burns, falls, fractures, motor vehicle accidents, open wounds, poisoning, and workplace accidents than those who did not. Another study, of HRT for women, found that even adherence to placebo was correlated with reduced risk of hip fracture, myocardial infarction, cancer deaths, and all-cause mortality.
Apparently, just being the sort of person who has access to health care, and who takes her medication regularly, can have salubrious consequences, independent of any actual pharmacological effects.
These sources of bias can completely overwhelm the actual drug effects demonstrated in randomized controlled trials. While observational studies found that women who received hormone replacement therapy had one-third the risk of coronary heart disease of those who did not, randomized controlled trials found that hormone replacement actually increases the risk by twenty-nine percent.
Bearing these caveats in mind, let’s take a look at the specific findings of the paper. The conclusion as stated in the abstract informs readers:
“The majority [of articles reviewed] suggest a robust protective effect of ADHD medication treatment on mood disorders, suicidality, criminality, substance use disorders, accidents and injuries, traumatic brain injuries, motor vehicle crashes, and educational outcomes.”
That seems like an odd way of spinning the results, given that the purpose of a meta-analysis is to determine what the overall pattern of data is telling you, rather than the sheer number of studies supporting this or that conclusion. In fact, the authors’ own meta-analysis found no significant effect of ADHD medications on suicidality, criminality, substance use disorders, traumatic brain injuries, and motor vehicle crashes. That leaves educational outcomes, mood disorders, and accidents and injuries. How convincing is the evidence that ADHD drugs reduce the likelihood of these bad outcomes?
The conclusion that ADHD medication improves educational outcomes was based on all of two studies. Can you do a “meta-analysis” on just two studies?
Moreover, the authors’ own literature review uncovered three more studies with available continuous data that showed no drug effect.
The conclusion that ADHD medication reduces the likelihood of mood disorders was likewise based on just two studies. One of these had major depression as its endpoint and the other had bipolar disorder. Can you do a “meta-analysis” on two studies with two different endpoints?
The only significant outcome based on more than two studies was accidents and injuries—and the Biederman group’s own meta-analysis showed significant heterogeneity in these studies, which suggests that they may not all have been measuring the same endpoint.
At this point the reader could be forgiven for concluding that all this constitutes a rather puny haul, coming as it does after decades of research, and given the biases inherent in these studies. Aren’t there any randomized controlled trials on the subject?
Indeed there are. By far the most important was the MTA Study, in which nearly six hundred children, all of whom had been diagnosed with ADHD (Combined Type), were randomly assigned to one of four treatment arms: 1) Medication, 2) Medication plus behavioral therapy, 3) Behavioral therapy alone, and 4) Usual community care. This study was far and away the largest and longest of its kind, carried out by eminently credentialed researchers, who gave the drugs every chance to work.
All of the children were started on Ritalin, and doses were carefully titrated to find the optimum for each child. Children who did not respond well to Ritalin were titrated on the following alternative medications, in this order: dextroamphetamine, pemoline, imipramine, and, if necessary, others approved by a cross-site panel. Children attended monthly half-hour medication maintenance visits (as opposed to the standard one or two brief visits per year) with a pharmacotherapist who provided support, encouragement, and practical advice, and who prescribed dose adjustments if necessary. The pharmacotherapist also kept in touch with the child’s teacher by means of monthly telephone conversations. Parents were supplied with readings from an approved list in order to educate them about the importance of drug treatment for ADHD. Medication compliance was “facilitated” by monthly pill counts, saliva measurements of methylphenidate levels, and encouraging families to make up missed visits. The randomization phase lasted fourteen months, at which point all the children were released to “usual community care.”
And how did all this work out for the kids? At the eight-year follow-up, there was no significant difference between any of the treatment groups for any of twenty-four outcome variables.
No effect on ADHD symptoms. No effect on oppositional behavior or antisocial behavior. No effect on anxiety or depression. No effect on reading skills, math skills, grade point averages, or grade retention. No effect on social functioning, psychiatric hospitalizations, traffic tickets, or auto accidents—the list goes on and on.
These results were published way back in 2009. Three years ago, the young adult follow-up outcomes of the MTA study were made public. There was still no significant difference between any of the treatment groups in terms of ADHD symptoms—but the “consistently medicated” subjects were nearly two inches shorter than those who had received “negligible” treatment.
In plain English, the short-term benefits, such as they are, of ADHD medications fade over time—but the growth suppression caused by these drugs is permanent.
Of course, ADHD isn’t just for kids anymore. Are there any randomized controlled trials on the effects of medication for “Adult ADHD”?
There are. A 2012 study, again far and away the largest and longest of its kind, looked at 410 adults diagnosed with ADHD who were randomized to either the ADHD drug Strattera or placebo. At the six-month mark, the difference between the treatment group and the placebo group on the primary outcome, the Endicott Work Productivity Scale, was a paltry six-tenths of a point—on a scale of zero to one hundred.
Subjects were also assessed via the Clinical Global Impression–Improvement (CGI-I) scale. A difference of one point on this scale is deemed the minimum necessary to be noticeable to a treating clinician. In this case, the difference was a mere one-tenth of a point—with the difference favoring placebo.
It seems unlikely that Joseph Biederman and his colleagues at Harvard and Mass General could have been unaware of this study—which was carried out by Joseph Biederman and his colleagues at Harvard and Mass General.
A basic principle of science—not to mention common sense—is that the burden of proof rests on anyone making a claim. These drugs have been on the market for decades. After all this time, can anyone point to any convincing evidence that they produce meaningful long-term benefits for children or adults? The harms, however, are indisputable. Why do doctors continue to prescribe these drugs?
The disclosure statement of the Biederman group’s 2020 paper gives us a hint. Dr. Biederman has accepted research funding from Shire (the maker of the ADHD drugs Vyvanse and Adderall XR) as well as Genentech, Lundbeck, Neurocentria, Pfizer, Roche, and Sunovion—although, the reader is assured, Biederman’s interests “are managed by Massachusetts General Hospital and Partners HealthCare in accordance with their conflict of interest policies.”
In the past, Dr. Biederman has also been the recipient of largesse from Eli Lilly, maker of Strattera and also of Zyprexa, and from Janssen, the maker of Risperdal. Zyprexa and Risperdal are neuroleptic drugs often given to children diagnosed with bipolar disorder—a condition that often emerges after children are drugged with stimulants prescribed for ADHD.
In this country, we now spend over twenty billion dollars a year on treatment for something called “ADHD.” For that amount of money, we could pay the mid-career salaries of an extra 365,000 teachers or 827,000 teachers’ aides. At what point do we ask whether we wish to continue pouring money down this particular black hole? If not now, when?
There’s been a lot of concern of late about something called “science denialism.” But the commentators wringing their hands over this sort of thing never seem to consider what happens when the appearance, materials, and methods of science are hijacked by those whose sole concern seems to be to sell us all as many drugs as possible, damn the cost to everyone else.
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.