A new paper in Schizophrenia Bulletin presents evidence that publication bias and outcome reporting bias are common and concerning in psychiatry research. These biases inflate the apparent efficacy of psychiatric treatments for schizophrenia and bipolar disorder. The researchers specifically examined clinical trials funded by the Stanley Medical Research Institute, with which they are affiliated.
Their findings challenge the validity of the evidence base on which psychiatrists make clinical decisions:
“We conclude that publication bias and outcome reporting bias are common in papers reporting RCTs in schizophrenia and bipolar disorder. These data have major implications regarding the validity of the reports of clinical trials published in the literature.”
Publication bias occurs when the results of a study influence whether or not it is published. This undermines the scientific process by misrepresenting the body of study outcomes. For instance, it is important that disconfirming evidence, such as evidence that a treatment failed to produce improvement, is published and considered. Otherwise, publication bias gives rise to distorted claims of efficacy, as has been demonstrated in the research on treatment for depression (see MIA report).
Outcome reporting bias occurs when authors selectively report study outcomes. Both publication bias and outcome reporting bias undermine the scientific evidence base: they mislead readers and obscure the true effects of a treatment or intervention. Bowcut and colleagues explain:
“Publication bias and outcome reporting bias may lead to an overestimation of treatment effects, thus exposing patients to ineffective treatments in daily clinical practice. From a research point of view, it exposes volunteering patients to the unnecessary risks and inconveniences of research procedures, and hinders drug development, as scientists are unable to learn from one another’s experiences and effectively utilize funding and resources.”
In general medicine, high rates of publication bias and outcome reporting bias skewed toward “positive” study results have been documented: studies were more likely to be published if they provided evidence that a treatment was efficacious, and authors were more likely to selectively report favorable outcomes. Less was known about the prevalence of these biases in psychiatry research, so Bowcut and the team conducted a research audit.
The research team examined publication bias and outcome reporting bias of research on schizophrenia and bipolar disorder treatment. This research audit was undertaken specifically on clinical trials funded by a US-based, nonprofit research organization, the Stanley Medical Research Institute (SMRI). Lead researcher J. Bowcut disclosed a conflict of interest regarding her employment at SMRI and acknowledged that SMRI also supported this project.
The research team had access to original trial protocols and final scientific reports. Therefore, they were able to assess reporting biases and selective outcome reporting and investigate the differences between published and unpublished studies (publication bias).
A total of 238 completed randomized controlled trials (RCTs) funded by SMRI between 2000 and 2011 were examined in this research audit. RCTs are considered the gold standard for assessing the impact of a treatment.
The researchers categorized the RCTs as “positive” or “negative” based on whether the primary outcomes reported in the published manuscript demonstrated statistically significant improvement attributable to the treatment drugs.
The researchers found “concerning evidence of publication bias and outcome reporting bias among a large number of RCTs funded for schizophrenia and bipolar disorder” that favored positive findings.
RCTs with positive findings (i.e., findings that supported treatment efficacy) were more likely to be published. Whereas 86% of studies with positive findings were published, only 53% of studies with negative findings were published.
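To see how those publication rates translate into odds, here is a quick illustrative calculation (a sketch using the rounded percentages above, not the paper's exact study counts or statistical model):

```python
# Illustrative odds-ratio calculation from the rounded publication rates
# reported in the audit (86% of positive vs. 53% of negative RCTs published).
p_pos = 0.86  # proportion of positive-finding RCTs that were published
p_neg = 0.53  # proportion of negative-finding RCTs that were published

odds_pos = p_pos / (1 - p_pos)   # odds of publication given positive findings
odds_neg = p_neg / (1 - p_neg)   # odds of publication given negative findings
odds_ratio = odds_pos / odds_neg

print(f"odds of publication, positive findings: {odds_pos:.2f}")
print(f"odds of publication, negative findings: {odds_neg:.2f}")
print(f"odds ratio: {odds_ratio:.1f}")
```

Because these percentages are rounded, this back-of-the-envelope figure lands near, but not exactly on, the odds ratio of roughly 4.5 that the paper itself reports from the underlying counts.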
The researchers also examined the time between study completion and publication to discern whether delayed publication was another manifestation of publication bias. Although 75% of the RCTs published more than 10 years after study completion reported negative results, this difference was not statistically significant.
Outcome reporting bias
The researchers identified major discrepancies in 70% (102/144) of the published papers regarding (1) study design, (2) outcome measures, and (3) statistical analyses. These were identified by comparing the original protocols of the RCTs with their final, published versions.
- Study design: Just over 27% (40/144) of studies used a sample size 25% smaller than originally proposed. Over half (52.5%) of these studies with reduced sample sizes reported positive findings. Additionally, two RCTs were ultimately reported as uncontrolled case studies, a complete divergence from an RCT design. Finally, 21.5% of the original study protocols proposed biological markers and lab procedures that were not reported in the published manuscript. Most of these (61.3%) reported findings that supported treatment efficacy.
- Outcome measures: A different primary outcome measure was reported in 27% (39/144) of the published manuscripts. Changes included using a different scale to measure the impact of treatment, assessing treatment with a different behavioral marker, or swapping the primary and secondary outcomes listed in the original proposal. Of the published manuscripts that changed outcome measures, over half (53.8%) reported negative findings.
- Statistical analyses: A considerable portion (33%) of the published RCTs reported using different statistical analyses in the published paper than those proposed in the original protocol. Most of these (64.6%) reported positive findings. Bowcut and colleagues write:
“A third of the RCTs used a different statistical analysis to that envisaged in the protocol, two-thirds of which were reported as positive RCTs, one-third as negative, indicating that at least in some cases this seems to have been done to ‘achieve’ a positive result.”
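The mechanism the authors allude to, switching analyses or outcomes until one “achieves” significance, can be illustrated with a small simulation (a hypothetical sketch, not the paper's data or methods): with no true drug effect, testing one pre-specified outcome produces false positives at roughly the nominal 5% rate, while cherry-picking the best of several candidate outcomes inflates that rate substantially.

```python
import math
import random

random.seed(42)

def p_value(sample_a, sample_b):
    """Two-sided z-test p-value for a difference in means,
    assuming unit-variance populations for simplicity."""
    n = len(sample_a)
    mean_diff = sum(sample_a) / n - sum(sample_b) / n
    se = math.sqrt(2.0 / n)  # standard error of the mean difference
    z = abs(mean_diff / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def null_trial(n_outcomes, n_per_arm=50):
    """Simulate one null trial (drug identical to placebo) with several
    candidate outcome measures. Return whether the pre-specified first
    outcome is 'significant', and whether the best of all outcomes is."""
    p_values = []
    for _ in range(n_outcomes):
        drug = [random.gauss(0, 1) for _ in range(n_per_arm)]
        placebo = [random.gauss(0, 1) for _ in range(n_per_arm)]
        p_values.append(p_value(drug, placebo))
    return p_values[0] < 0.05, min(p_values) < 0.05

n_trials = 2000
prespecified = cherry_picked = 0
for _ in range(n_trials):
    first_sig, best_sig = null_trial(n_outcomes=5)
    prespecified += first_sig
    cherry_picked += best_sig

print(f"false-positive rate, pre-specified outcome: {prespecified / n_trials:.3f}")
print(f"false-positive rate, best of 5 outcomes:    {cherry_picked / n_trials:.3f}")
```

With five independent candidate outcomes, the expected cherry-picked false-positive rate is about 1 − 0.95⁵ ≈ 23%, more than four times the nominal 5% — which is why deviating from the pre-specified analysis is treated as a red flag.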
Standardized procedures, such as declaring and maintaining consistency with a primary study outcome and registering trials at ClinicalTrials.gov, are in place to prevent selective outcome reporting.
The authors found that most (56.3%) of the SMRI research protocols were not registered, and 23.9% were registered only after the studies were completed. Such post hoc alterations to designs, outcomes, and analyses are known to increase the chances of finding a positive result, which can distort the evidence in favor of the study intervention (e.g., antipsychotics).
Overall, the researchers found that among 280 RCTs about schizophrenia and bipolar disorder funded by SMRI:
- Approximately 1 in 7 were never performed
- More than one-third were never published
- Studies with positive findings were more likely to be published, with approximately 4.5 times greater odds than RCTs with negative findings.
- Among published papers, 70% had major discrepancies between the original study protocol and the final published manuscript.
According to the researchers’ estimates, this means that “$25 million was spent on unpublished RCTs, and $29 million was spent on RCTs with at least one major discrepancy between the original protocol and published manuscript.”
Bowcut and colleagues add:
“Publication bias and outcome reporting bias misleads pre-clinical and clinical research, leads to ineffective compounds being tested again, unnecessary exposure of patients to study procedures and waste of research funds.”
The researchers explore various reasons for the prevalence of publication bias, including authors’ reluctance to prepare manuscripts that are less likely to be accepted or disseminate findings that contradict their theories.
“Similarly,” Bowcut and colleagues write, “reviewers with a vested interest in positive results, on which grants and reputations depend, might recommend rejection of a negative findings study.”
Additional barriers, cited by authors who provided the researchers with feedback, included time constraints, statistical problems, recruitment problems, and non-significant findings, all of which impeded the pursuit of publication.
An important limitation of this study is that the RCTs investigated were “proof of concept,” exploratory studies rather than large confirmatory studies, which tend to be quality-controlled and funded by pharmaceutical companies and governments, Bowcut and team note. Nevertheless, the research team underscores the importance of accurate and transparent reporting of clinical trials in the literature:
“These data have major implications regarding the validity of the reports of clinical trials published in the literature, upon which psychiatrists make clinical decisions often involving off-label use of medication approved for non-psychiatric indications, a practice particularly common in psychiatry.”
Therefore, Bowcut and colleagues included numerous action items for funding agencies, journal editors, and readers of papers. Among other recommendations, they call for tracking studies and comparing them from proposal through completion, full disclosure of findings, and a more favorable view toward publishing negative findings.
“As Karl Popper, the philosopher of science, suggests, the use of negative findings helping the falsification of a hypothesis is important to distinguish science from non-science. Likewise, we agree with Iain Chalmers that the use of the terms ‘negative’ and ‘positive’ findings might be misleading because negative findings are just as important as positive studies.”
Bowcut, J., Levi, L., Livnah, O., Ross, J. S., Knable, M., Davidson, S., Davis, J. M., & Weiser, M. (2021). Misreporting of results of research in psychiatry. Schizophrenia Bulletin. https://doi.org/10.1093/schbul/sbab040