Despite Claims, EPA Supplement Does Not Improve ADHD Symptoms in Youth

A new study reports that the supplement EPA improved ADHD symptoms, but a closer look calls these results into question.

A new study, published in Translational Psychiatry, claimed that the supplement EPA (eicosapentaenoic acid) "improves attention and vigilance in children and adolescents with attention deficit hyperactivity disorder (ADHD) and low endogenous EPA levels." However, an examination of the results shows that EPA did not improve ADHD symptoms, overall functioning, or overall cognitive performance in kids and teens.

The research was led by Jane Pei-Chen Chang and Kuan-Pin Su, both affiliated with King's College London and with China Medical University, Taiwan.

The researchers tested Taiwanese kids, aged 6-18. One group took a high-dose EPA supplement, and another group took placebo pills. The researchers measured a number of outcomes after 12 weeks. Overall, they found no positive results for any of these tests. In fact, on one test, they found that the placebo group did better.

They used the SNAP-IV (Swanson, Nolan, and Pelham IV) to measure ADHD symptoms, and had it rated by parents, teachers, and the children/adolescents themselves (self-report). The researchers found no difference between those taking EPA and those taking placebo in terms of teacher-reported and self-reported symptoms.

However, there was a difference in the parent-reported symptoms: the placebo group did better than the EPA group.

They used the SDQ (Strengths and Difficulties Questionnaire) to assess emotional and behavioral problems and social functioning. There was no difference between those taking EPA and those taking a placebo.

They used the WISC-IV (Wechsler Intelligence Scale for Children-Fourth Edition) to assess multiple dimensions of memory function and reported that there was no difference between those taking EPA and those taking a placebo.

They used the CPT (Continuous Performance Test) to measure many dimensions of cognitive functioning. On the overall test, there was no difference between those taking EPA and those taking a placebo.

At this point in the analysis, there were no differences between the placebo group's functioning and the EPA group's functioning, except one: the placebo looked better to parents. Parents rated kids taking placebo as having fewer ADHD problems than kids taking EPA.

The researchers then separated out all the subscales within each of these tests to see if there were any positive results in very particular subscales. This could be considered over-testing: creating more and more tests until you find some sort of positive result, even if that means the result is more likely to be due to chance. Of note, some of these tests were not included in their initial, pre-registered study design as reported on clinicaltrials.gov. The researchers appear to have measured 34 items, including both total scales and subscales, although some of these were poorly reported and are only found in supplementary tables.

The researchers found minimal effects on these tests. Additionally, the effects they did find weren't always in the direction the researchers hypothesized.

However, they identified two results on CPT subscales: on focused attention (the "variability" subscale), the EPA group appeared to do better than the placebo group. But on impulsivity (the "commission error" subscale), the EPA group did worse and the placebo group did better.

In sum, out of 34 subscale tests, there was one test in favor of EPA ("variability"), and there were two tests in favor of placebo (parent-reported ADHD symptoms and "commission errors"). The other 31 tests showed no difference between EPA and placebo.
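To get a sense of how easily chance alone produces a handful of "significant" results across 34 tests, here is a quick back-of-the-envelope calculation. It is a sketch that assumes, purely for illustration, independent tests at the conventional 0.05 threshold; the real subscales are correlated, but the order of magnitude is the point.

```python
# Chance of at least one false positive, and the expected number,
# when running many significance tests on data with no real effects.
# Illustration only: assumes independent tests at alpha = 0.05.
alpha = 0.05
n_tests = 34

p_any_false_positive = 1 - (1 - alpha) ** n_tests
expected_false_positives = n_tests * alpha

print(round(p_any_false_positive, 2))        # about 0.83
print(round(expected_false_positives, 1))    # 1.7
```

With roughly 1.7 false positives expected by chance alone, the three scattered subscale differences (one favoring EPA, two favoring placebo) are about what no-effect data would be expected to produce.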

In addition, the final scores for variability were very similar between the EPA group and the placebo group (see "VAR" for variability in the table). The EPA group's average was 133.81, while the placebo group's average was 132.91. These scores are not significantly different, but the placebo group started with a much lower score than the EPA group (117.36 compared to 139.21). So even though the two groups were doing almost exactly as well at the 12-week point (the placebo group's score was even slightly lower than the EPA group's), the amount the scores changed over time differed, and that change came out in favor of EPA.

Interestingly, a similar statistical artifact worked against the researchers for the other score, that of commission errors (COM in the table). At the week-12 point, the EPA group's average was 0.49, while the placebo group's average was 0.48. In this case, the placebo group started with a higher score (0.58 compared to 0.53), so its change over time looked better in comparison.
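Both artifacts can be reproduced directly from the group means reported in the paper. The sketch below uses only those means (the variance information needed for an actual significance test is not included), and assumes lower scores are better on both subscales:

```python
# Endpoint vs. change-score comparisons, using the reported group means.
# A change-score analysis can favor one group even when both groups
# end up in nearly the same place at week 12.

def change(baseline, week12):
    """Signed change over the trial (negative = score went down)."""
    return round(week12 - baseline, 2)

# Variability (VAR): nearly identical endpoints, very different changes,
# because the placebo group started much lower.
epa_var = change(baseline=139.21, week12=133.81)      # EPA fell by 5.40
placebo_var = change(baseline=117.36, week12=132.91)  # placebo rose by 15.55

# Commission errors (COM): the placebo group started higher,
# so its change over time looked better.
epa_com = change(baseline=0.53, week12=0.49)
placebo_com = change(baseline=0.58, week12=0.48)

print(epa_var, placebo_var)  # -5.4 15.55
print(epa_com, placebo_com)  # -0.04 -0.1
```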

It is possible that both of these findings (the one in favor of EPA and the one in favor of placebo) are examples of regression to the mean, a statistical effect in which a score that starts out unusually high (or low) tends to move toward the average over time, no matter what intervention is given.
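Regression to the mean is easy to demonstrate with a small simulation (a sketch with made-up numbers, not the study's data): when a group's baseline measurement happens to be extreme, its retest scores drift back toward the average even with no intervention at all.

```python
import random

random.seed(1)

# Each observed score = a stable underlying trait + measurement noise.
# On retest the noise re-draws, so people selected for an extreme
# first measurement move back toward their trait average.
def measure(trait):
    return trait + random.gauss(0, 10)

traits = [random.gauss(125, 5) for _ in range(10_000)]
first = [measure(t) for t in traits]
second = [measure(t) for t in traits]

# A "group" whose first measurement happened to be unusually low:
low = [i for i, s in enumerate(first) if s < 115]
mean_first = sum(first[i] for i in low) / len(low)
mean_second = sum(second[i] for i in low) / len(low)

# The retest mean is substantially higher, with no treatment given.
print(round(mean_first, 1), round(mean_second, 1))
```

The same logic applies symmetrically: a group that starts unusually high tends to drift down on retest, which is what a change-score comparison can mistake for a treatment effect.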

After this analysis, the researchers created subgroups based on whether the children and adolescents also had a diagnosis of ODD (oppositional defiant disorder); about half of them did. Again, this could be considered an example of continuing to create new tests that were not part of the original study design (as evidenced by their pre-registered methods on clinicaltrials.gov).

They found that on the VAR scale ("variability," the CPT's focused attention scale), those with ODD appeared to improve more than those without that diagnosis after being given the EPA intervention. The researchers do not provide a table for these numbers, but based on the confidence interval (CI), it is likely this is again a statistical artifact. A CI is a statistical way of expressing how much uncertainty surrounds an estimate; if it includes zero, the result is not statistically significant. This CI runs from 0.07 to 1.23, so its lower end comes very close to zero.
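As a sanity check, the reported interval can be turned back into a rough p-value. This is a sketch assuming a standard symmetric, normal-theory 95% CI; the paper does not report the exact test statistic.

```python
import math

# Reported 95% CI for the ODD-subgroup effect on the VAR subscale.
lo, hi = 0.07, 1.23

estimate = (lo + hi) / 2       # midpoint: the implied effect estimate
se = (hi - lo) / (2 * 1.96)    # implied standard error for a 95% CI
z = estimate / se

# Two-sided p-value under the normal approximation.
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(round(z, 2), round(p, 3))  # roughly z = 2.2, p = 0.028
```

A p-value only just under the 0.05 threshold is consistent with a chance finding, particularly in a subgroup analysis that was not pre-registered.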

Next, the researchers created more subgroups based on the participants' own pre-existing EPA levels, which were tested before the intervention was given. Again, this is another analysis that was not part of their original study design on clinicaltrials.gov.

The researchers split the participants in what appears to be an arbitrary manner in order to create three roughly equal groups. They "were further stratified in groups based on EPA levels (EPA ≤ 0.91%, n = 29; >0.91 to 1.08%, n = 30; and EPA > 1.08%, n = 27)." The researchers do not explain this stratification, nor do they cite any sources for why people with 1.07% EPA levels might differ from people with 1.09% EPA levels. In fact, in at least one study, EPA levels for adults were in the 1.31% to 2.37% range. It's unclear what the appropriate EPA levels for children would be, and they may vary by age. None of the articles cited by the researchers provide context for healthy children's EPA levels.

It would be a flaw in the analysis if the stratification system appeared to sort the children by age. If the younger children had lower EPA levels, while the teens had higher EPA levels, the results might not be due to EPA levels at all, but age and maturity.

The results showed almost no differences between groups. The researchers report three CPT subscales that showed differences.

The results of one subscale (COM) were that youth with higher EPA levels did better if they took the placebo. On two other subscales (HRT and HRTISIC), youth with lower EPA levels did better if they took EPA.

It is important to note that these tests may be poor measures of how kids actually behave. Slight differences in reaction time are responsible for the HRT and HRTISIC scores (the test is called "hit reaction time"). These are basically intelligence tests (the WISC-IV is explicitly an intelligence test), which have been shown to be unreliable for predicting other outcomes for children unless there are large differences in scores.

Interestingly, the researchers stopped reporting their subscale results at this point. It could be argued that the only tests that really matter are the parent, teacher, and self-reported ADHD measures, since that's the concern that would likely lead to children being assessed for this treatment in the first place.

Those results are there, though, buried deep in the supplemental materials (available for download from the publisher's site). These data include all the subscales of the SNAP-IV ADHD measures (four subscales each for the parent, teacher, and self-report versions) and the subscales of the SDQ (seven total), all stratified by low, intermediate, and high EPA levels.

These results show that for youth with low and intermediate EPA levels, there were no differences in any of the SNAP-IV or SDQ measures. That is, kids with low and intermediate EPA levels had the same ADHD scores and emotion/social functioning scores, whether they took the EPA supplement or the placebo.

For youth with higher EPA levels, four of these subscales showed that they did better on placebo than on EPA. The other 15 subscales showed no difference between those taking a placebo and those taking EPA.

So the total, when stratified by EPA levels, is five subscales showing that youth did better on placebo (if they had high EPA levels), two subscales finding that they did better on EPA (if they had low EPA levels), and 27 subscales reporting that youth did exactly the same whether they took EPA or a placebo (no matter what their EPA levels were).

 


Chang, J. P., Su, K., Mondelli, V., Satyanarayanan, S. K., Yang, H., Chiang, Y., . . . & Pariante, C. M. (2019). High-dose eicosapentaenoic acid (EPA) improves attention and vigilance in children and adolescents with attention deficit hyperactivity disorder (ADHD) and low endogenous EPA levels. Translational Psychiatry, 9(303). doi:10.1038/s41398-019-0633-0

8 COMMENTS

  1. Maybe it is time for psychiatry, neurology, to use the correct terminology. Absolutely no need for fancy words. Simply use the term “abnormal”, all the terms that are being used, mean exactly that. “we have a ‘spectrum’ of what normal and functioning means and you are either within that spectrum, or on the low end or outside of it”. It is still the greatest embarrassment in society for one side to decide who fits the criteria to belong, and who fits the criteria to be ousted, singled out, treated. Because that system has found the greater majority to be outside of the spectrum, and the ones not outside of it, never entered the doors of the ‘normalcy testers’


  2. These contorted efforts to find a positive score for an experimental group lead me to suspect that this was a manufacturer’s sponsored experiment that needed to find something positive about the experimental subjects’ treatment experiences.


    • Do you think that being frightened is an important element of ADHD?
      Equally, do you think the problem can be fully managed by meditation?
      The info I have heard is that there are some promising preliminary studies.
      My own experience with meditation in ADHD patients was variable.


      • You seem to be coming from the assumption that all people with the “ADHD” label have the same problem or need the same kind of help. “ADHD” is just a name for a certain set of behaviors that have been identified as problematic. There are all kinds of reasons why people act that way, and hence all kinds of different things that might help different people. It makes total sense that some “ADHD” labeled people would do better with meditation and some would not, because they’re all different. Acting in a certain way doesn’t make people actually similar – it’s just a surface manifestation. Unless you know why it’s happening, you can’t say they are similar at all.

        Besides which, some people who act in ways that are called “ADHD” don’t believe they have any problem, just because other people have a problem with their behavior. And I tend to agree with them.


  3. I’m not surprised at this result. It is based on the idea that there is something globally wrong with the brain in ADHD. There is not – there are a number of functional problems (essentially being caught in a mental loop) in ADHD, and they predictably result in the symptom pattern we see in ADHD. The basic problems are in vestibulo-cerebellar function, and they also result in a number of conditions that have erroneously been called co-morbidities, as though they are separate. They are not.
    These problems include developmental coordination disorder, dyspraxia, dysarthria, sensory processing disorders (not as severe as autism), eye movement issues with convergence and eye tracking, and dysregulation of the autonomic nervous system (which is biologically based and does not reflect a primary problem with a threatening environment/trauma).

    Furthermore, there are clear-cut neurological signs visible if we care to do a physical examination. Normally we do not, because the way that we think about the problem misdirects our attention.
    I’ve personally verified these signs in about 150 adults with ADHD between 2014 and 2015, but have had to retire so can’t follow this up in patients myself.

    The underlying neurological dysfunction is mostly remediable through specialist neurological rehabilitation (i.e., chiropractic functional neurology).

    I offer up this case history, which clearly shows an enormous improvement in only 8 weeks with a patient who was not fully compliant.
    The presentation clearly shows the signs of autonomic overactivity, autonomic asymmetry (right pupil dilated more than left), and difficulty with eye tracking, and documents the treatment along with the results. That is a very strong result with no medications.
    https://www.youtube.com/watch?v=J92vg3nrgSY&t=25s

    Report comment
