A new study, published in Translational Psychiatry, claimed that the supplement EPA (eicosapentaenoic acid) “improves attention and vigilance in children and adolescents with attention deficit hyperactivity disorder (ADHD) and low endogenous EPA levels.” However, an examination of the results shows that EPA did not improve ADHD symptoms, overall functioning, or overall cognitive performance in kids and teens.
The research was led by Jane Pei-Chen Chang and Kuan-Pin Su, affiliated both with King’s College London and with China Medical University, Taiwan.
The researchers tested Taiwanese kids, aged 6-18. One group took a high-dose EPA supplement, and another group took placebo pills. The researchers measured a number of outcomes after 12 weeks. Overall, they found no positive results for any of these tests. In fact, on one test, they found that the placebo group did better.
They used the SNAP-IV (Swanson, Nolan, and Pelham IV) to measure ADHD symptoms, and had it rated by parents, teachers, and the children/adolescents themselves (self-report). The researchers found no difference between those taking EPA and those taking placebo in terms of teacher-reported and self-reported symptoms.
However, there was a difference in the parent-reported symptoms: the placebo group did better than the EPA group.
They used the SDQ (Strengths and Difficulties Questionnaire) to assess emotional and behavioral problems and social functioning. There was no difference between those taking EPA and those taking a placebo.
They used the WISC-IV (Wechsler Intelligence Scale for Children-Fourth Edition) to assess multiple dimensions of memory function and reported that there was no difference between those taking EPA and those taking a placebo.
They used the CPT (Continuous Performance Test) to measure many dimensions of cognitive functioning. On the overall test, there was no difference between those taking EPA and those taking a placebo.
At this point in the analysis, there were no differences between the placebo group’s functioning and the EPA group’s functioning, except one: the placebo looked better to parents. Parents rated kids taking placebo as having fewer ADHD problems than kids taking EPA.
The researchers then separated out all the subscales within each of these tests to see if any particular subscale showed a positive result. This could be considered over-testing: running more and more tests until some positive result turns up, even though each additional test raises the odds of a chance finding. Of note, some of these tests were not included in their initial, pre-registered study design as reported on clinicaltrials.gov. The researchers appear to have measured 34 items, including both total scales and subscales, although some of these were poorly reported and are found only in supplementary tables.
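The risk created by running that many tests can be shown with simple arithmetic. As a sketch (the 34 tests come from the article; the standard 5% significance threshold and the independence of the tests are simplifying assumptions), the chance of at least one spurious "significant" result is:

```python
# Chance of at least one false positive across many tests,
# each run at a 5% significance level.
# Assumes the tests are independent (a simplification).
num_tests = 34
alpha = 0.05

p_at_least_one = 1 - (1 - alpha) ** num_tests
print(f"{p_at_least_one:.2f}")  # about 0.83
```

In other words, even if EPA did nothing at all, a battery of 34 uncorrected tests would be expected to produce at least one "positive" finding more than 80% of the time.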
The researchers found minimal effects on these tests. Additionally, the effects they did find weren’t always in the direction the researchers hypothesized.
However, they identified two results on CPT subscales: on focused attention (the “variability” subscale), the EPA group appeared to do better than the placebo group. But, on impulsivity (the “commission error” subscale), the EPA group did worse and the placebo group did better.
In sum, out of 34 subscale tests, there was one test in favor of EPA (“variability”), and there were two tests in favor of placebo (parent-reported ADHD symptoms and “commission errors”). The other 31 tests showed no difference between EPA and placebo.
In addition, the final scores for variability were very similar between the EPA group and the placebo group (see "VAR" for variability in the table): the EPA group averaged 133.81, the placebo group 132.91. These endpoint scores are not significantly different. The placebo group, however, started much lower than the EPA group (117.36 compared to 139.21). So even though the two groups were doing almost exactly as well at the 12-week point, with the placebo group's score actually slightly lower, the amount the scores changed over time differed, and that change came out in favor of EPA.
Interestingly, a similar statistical artifact worked against the researchers on the other score, commission errors (COM in the table). At week 12, the EPA group averaged 0.49 and the placebo group 0.48. Here the placebo group started with the higher score (0.58 compared to 0.53 for the EPA group), so its change over time looked better by comparison.
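The arithmetic behind both comparisons can be laid out directly using the baseline and week-12 group means quoted above. This sketch just computes the raw changes; which direction counts as "improvement" depends on the subscale:

```python
# Group means reported in the study: (baseline, week 12).
scores = {
    "VAR": {"EPA": (139.21, 133.81), "placebo": (117.36, 132.91)},
    "COM": {"EPA": (0.53, 0.49), "placebo": (0.58, 0.48)},
}

for subscale, groups in scores.items():
    for group, (baseline, week12) in groups.items():
        change = week12 - baseline
        print(f"{subscale} {group}: {baseline} -> {week12} (change {change:+.2f})")
```

The endpoint scores are nearly identical within each subscale (133.81 vs. 132.91; 0.49 vs. 0.48); the "effects" live almost entirely in the differing baselines.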
It is possible that both of these findings (the one in favor of EPA and the one in favor of placebo) are examples of regression to the mean: a statistical phenomenon in which an unusually high (or low) starting score tends to move toward the average on retesting, no matter what intervention is given.
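Regression to the mean is easy to demonstrate with a small simulation. This is a sketch with made-up numbers, not the study's data: each person has a stable "true" level, and every measurement adds random noise. If we select the people whose first measurement was unusually low, their second measurement drifts back toward the average even though nothing was done to them:

```python
import random

random.seed(0)

# Each person has a stable true level; each measurement adds noise.
true_levels = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_levels]
test2 = [t + random.gauss(0, 10) for t in true_levels]

# Select the people who scored lowest the first time...
low_scorers = [i for i, s in enumerate(test1) if s < 85]

mean1 = sum(test1[i] for i in low_scorers) / len(low_scorers)
mean2 = sum(test2[i] for i in low_scorers) / len(low_scorers)

# ...their retest average is higher with no intervention at all.
print(f"first test: {mean1:.1f}, retest: {mean2:.1f}")
```

The same logic applies in reverse to a group that starts unusually high: its scores tend to fall on retesting, which can make a do-nothing intervention look harmful, or a comparison group look like it "improved."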
After this analysis, the researchers created subgroups based on whether the children and adolescents also had a diagnosis of ODD (oppositional defiant disorder); about half of them did. Again, this could be considered an example of continuing to create new tests that were not part of the original study design, as evidenced by their pre-registered methods on clinicaltrials.gov.
They found that on the VAR scale ("variability," the CPT's focused-attention subscale), those with ODD appeared to improve more on EPA than those without that diagnosis. The researchers do not provide a table for these numbers, but based on the confidence interval (CI), it is likely this is again a statistical artifact. A CI gives the range of plausible values for an effect; if it includes zero, the result is not statistically significant. This CI runs from 0.07 to 1.23, and its lower end is very close to zero.
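How close that interval comes to zero can be quantified. As a rough sketch, assuming the reported interval is a symmetric 95% normal-approximation CI (the article gives only the endpoints), we can back out the implied point estimate, standard error, and p-value:

```python
import math

# The reported 95% CI for the subgroup effect (from the article).
lower, upper = 0.07, 1.23

# Assuming a symmetric normal-approximation interval:
estimate = (lower + upper) / 2          # implied point estimate
se = (upper - lower) / (2 * 1.96)       # implied standard error
z = estimate / se

# Two-sided p-value from the normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(f"estimate {estimate:.2f}, z {z:.2f}, p {p:.3f}")
```

Under these assumptions the implied p-value is roughly 0.03, only just under the conventional 0.05 threshold, which is a fragile basis for a headline finding, especially after so many uncorrected tests.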
Next, the researchers created more subgroups based on the participants’ own pre-existing EPA levels, which were tested before the intervention was given. Again, this is another analysis that was not part of their original study design on clinicaltrials.gov.
The researchers split the participants in what appears to be an arbitrary manner in order to create three roughly equal groups. They “were further stratified in groups based on EPA levels (EPA ≤ 0.91%, n = 29; >0.91 to 1.08%, n = 30; and EPA > 1.08%, n = 27).” The researchers do not explain this stratification, nor do they cite any sources for why people with 1.07% EPA levels might be different from people with 1.09% EPA levels. In fact, at least in one study, EPA levels for adults were in the 1.31% to 2.37% range. It’s unclear what the appropriate EPA levels for children would be, and it may vary by age. None of the articles cited by the researchers provide context for healthy children’s EPA levels.
It would be a flaw in the analysis if the stratification system appeared to sort the children by age. If the younger children had lower EPA levels, while the teens had higher EPA levels, the results might not be due to EPA levels at all, but age and maturity.
The results showed almost no differences between groups. The researchers report three CPT subscales that showed differences.
The results of one subscale (COM) were that youth with higher EPA levels did better if they took the placebo. On two other subscales (HRT and HRTISIC), youth with lower EPA levels did better if they took EPA.
It is important to note that these tests may be poor measures of how kids actually behave. The HRT and HRTISIC scores reflect slight differences in reaction time (HRT stands for "hit reaction time"). Such measures function much like intelligence tests (the WISC-IV is explicitly an intelligence test), which have been shown to be unreliable predictors of other outcomes for children unless the differences in scores are large.
Interestingly, the researchers stopped reporting their subscale results at this point. It could be argued that the only tests that really matter are the parent, teacher, and self-reported ADHD measures, since that’s the concern that would likely lead to children being assessed for this treatment in the first place.
Those results are there, though, buried deep in the supplemental materials (available for download from the publisher’s site). This data includes all the subscales of the SNAP-IV ADHD measures (there are four subscales each for the parent, teacher, and self-report measures) and the subscales of the SDQ (seven total), all stratified by low, intermediate, and high EPA levels.
These results show that for youth with low and intermediate EPA levels, there were no differences in any of the SNAP-IV or SDQ measures. That is, kids with low and intermediate EPA levels had the same ADHD scores and emotion/social functioning scores, whether they took the EPA supplement or the placebo.
For youth with higher EPA levels, four of these subscales showed that they did better on placebo than on EPA. The other 15 subscales showed no difference between those taking a placebo and those taking EPA.
So the total—when stratified by EPA levels—is five subscales showing that youth did better on placebo (if they had high EPA levels), two subscales finding that they did better on EPA (if they had low EPA levels), and 27 subscales reporting that youth did exactly the same whether they took EPA or a placebo (no matter what their EPA levels were).
Chang, J. P., Su, K., Mondelli, V., Satyanarayanan, S. K., Yang, H., Chiang, Y., . . . Pariante, C. M. (2019). High-dose eicosapentaenoic acid (EPA) improves attention and vigilance in children and adolescents with attention deficit hyperactivity disorder (ADHD) and low endogenous EPA levels. Translational Psychiatry, 9, 303. https://doi.org/10.1038/s41398-019-0633-0