The recent surge in publications backing electrical brain stimulation (EBS) treatment has prompted researchers to investigate the quality and reproducibility of the data. New findings, based on survey data and inspection of existing publications, show cause for concern as researchers report knowledge of others tampering with results, and 25% admit to doing so themselves.
“Scientists agree that we are facing a crisis of confidence. Research results are irreproducible, from dozens of psychology findings to hundreds and even thousands of genetic and fMRI discoveries. Some have even argued that the majority of the published literature must be false. Neuroscience, a field filled with statistically underpowered studies, unfortunately is at the forefront of this reproducibility crisis.”
In the past decade, EBS has rapidly gained popularity as a method purported to treat a range of disorders and symptoms by modulating brain activity. Deemed noninvasive and less expensive than alternatives such as transcranial magnetic stimulation (TMS), it has captured media attention, and EBS-related publications have doubled in less than three years.
Yet a number of “high-profile laboratories” have been unable to reproduce findings supporting EBS. This is especially concerning given claims that EBS can effectively treat depression, improve motor recovery after stroke, facilitate language acquisition, and curb food cravings.
Héroux and colleagues have further investigated the controversy around these findings. They conducted a closer inspection of recent publications and gathered survey data from researchers in related fields, aiming to find out whether researchers were able to reproduce published EBS effects, and whether scientists engaged in, but failed to report, questionable research practices.
A total of 154 researchers responded to the survey. Their responses, in conjunction with the 100 audited papers, revealed several findings that highlight the discrepancy between what is considered sound practice and what is being published.
Almost all researchers reported using anodal or cathodal transcranial direct current stimulation (tDCS); of these, 45-50% reported routinely reproducing published findings, though the effect size was smaller 26-27% of the time. The remaining respondents used transcranial alternating current stimulation, transcranial random noise stimulation, multi-channel tDCS, or pulsed tDCS.
When asked how they determine the sample size of their studies, 69% of respondents indicated using the sample size of published papers, 61% reported using previously published power calculations, and 32% reported basing their decisions on pilot data. However, the audit of papers produced different numbers: an estimated 25% used the sample size of published papers, 26% used power calculations, and only 8% used pilot data. In fact, only 6 of the 100 audited studies explicitly reported using power calculations, and only one reported using pilot data to determine its sample size. The rest failed to report how their sample size was chosen.
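To make the power-calculation point concrete, here is a minimal sketch, not taken from the paper, of how a sample size can be estimated before running a study. It uses Python's standard library and a textbook normal-approximation formula for a two-group comparison; the chosen effect size, alpha, and power are illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group for a two-sample
    comparison, via the normal-approximation formula
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# A "medium" effect (Cohen's d = 0.5) at alpha = 0.05 and 80% power:
print(n_per_group(0.5))  # → 63 per group
```

A calculation like this is what only 6 of the 100 audited papers reported; studies that enrol far fewer participants than the formula suggests are underpowered by construction.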
Regarding the use of questionable research practices, 43% of respondents reported knowledge of other researchers who had adjusted statistical analysis to optimize results, 41% were aware of those who had selectively reported outcomes, and 36% reported knowing other researchers who had selectively reported experimental conditions. Respondents were also aware of scientists conducting other “shady practices” (20%). Although fewer admitted to partaking in these practices themselves, 25% did admit to adjusting statistical analysis to optimize results.
The authors proceed to highlight the prevalence of low statistical power and publication bias within neuroscience research. Meta-analyses have found that EBS research is of low quality. Underpowered studies are likely to reflect false positive results, a problem heightened by the pressure to publish novel discoveries.
“The lack of transparency and scientific rigor we have uncovered likely reflects the pressure on researchers to publish significant results in high impact journals. This pressure drives a vicious cycle in which journals, institutions and funding agencies expect more, and, to survive and reach these expectations, scientists consciously or unconsciously adopt questionable or fraudulent research practices.”
In this study, 90% of audited papers reported positive primary findings, a pattern consistent with publication bias. Some studies interpreted p-values between 0.05 and 0.1 as statistically significant (30%). Some failed to plot individual subject data points to facilitate between- and within-subject observations (9%), most incorrectly used the standard error of the mean to plot data variability (68%), and some failed to define the type of variability measure used in plots (17%).
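The objection to plotting the standard error of the mean (SEM) is that it describes the precision of the mean, not the spread of individual subjects, and it shrinks with sample size. A minimal sketch with hypothetical scores illustrates the difference:

```python
import statistics as st

# Hypothetical per-subject scores, for illustration only
data = [4.1, 5.3, 3.8, 6.0, 5.1, 4.7, 5.5, 4.2]

sd = st.stdev(data)            # spread of individual subjects
sem = sd / len(data) ** 0.5    # precision of the group mean only

# SEM is sqrt(n) times smaller than SD, so SEM error bars make
# between-subject variability look deceptively small.
print(round(sd, 2), round(sem, 2))  # → 0.77 0.27
```

With eight subjects the SEM bar is already almost three times shorter than the SD bar, which is why plots showing only SEM, without saying so, can overstate how consistent an EBS effect is across individuals.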
Interestingly, the audit found that only two papers disclosed questionable practices, even though 92% of survey respondents reported their belief that such practices should be disclosed in papers.
The following statements from survey respondents, reported in the study, express these concerns:
- “This field is in urgent need of both guidelines for research and clinical use, and regulations by law.”
- “I think there is a huge publication bias in this field and, in my opinion, the positive results of tDCS are highly overestimated. It would have been nice to have some questions on that topic.”
- “There does seem to be a suspiciously large number of positive tDCS trials published, and in almost any discipline it has been used in.”
- “Although the consensus within publications is that electrical stimulation works well and is reliable, my experience of talking to other researchers at conferences and within my department suggests that there is a huge amount of unpublished, unsuccessful attempts at using the stimulation. Many of which have no clear methodological issues.”
- “It would not be fair to have publication mentioning that ‘tDCS researchers have mentioned that are aware of other researchers that may adjust the statistics to optimize their results’ or something like this. In a publish or perish academia, these practices like that are used by researchers of many fields, unfortunately. These are not specific problems for the tDCS community. I urge to be thoughtful when reporting this data.”
- “I feel that a small ‘special group’ that can publish all their research even though they have a small sample size, lack of fidelity with protocol previously registered, sub-group statistical analysis, etc. On the other hand, other researchers have many difficulties to publish their works even though they followed all the requirements needed to conduct a trustful research.”
The threat to rigorous and ethical scientific conduct within EBS research appears connected to broader practices left unchecked by proper guidelines or regulatory standards. The authors conclude their paper with these words:
“The clinical promise of EBS will remain illusory until the practice of neuroscience becomes more open and robust.”
Héroux, M. E., Loo, C. K., Taylor, J. L., & Gandevia, S. C. (2017). Questionable science and reproducibility in electrical brain stimulation research. PLoS ONE, 12(4), e0175635. (Full Text)