Questionable Research Practices Common in Randomized Controlled Trials

Study finds bias may be mitigated by female authorship, higher impact factors, registration of trials, and mention of reporting standards.

A new study set to appear in the Journal of Clinical Epidemiology finds a high median probability of bias (43% to 63%, depending on the risk-of-bias domain) in published randomized controlled trials.

A lower probability of bias and fewer questionable research practices (QRPs) were associated with several factors, including a higher proportion of female coauthors, a more recent publication year, reporting of trial registration numbers, a higher journal impact factor, publication by a large publisher, a last author based in Oceania, and mention of the Consolidated Standards of Reporting Trials (CONSORT). The authors write:

“We investigated the association between trial characteristics and QRPs and found associations with QRPs for many of the studied indicators (e.g., gender, publication year, h-index, mentioning of CONSORT). The most robust indicators that were consistently associated with a lower risk of several QRPs included: 1) a higher journal impact factor, 2) a journal from a large publisher (such as Elsevier or Springer), 3) having a trial registration, and 4) mentioning of the CONSORT reporting guideline. We could not identify any association between the percentage of positive or negative words in an abstract and the risk of QRP.”

To investigate QRPs in randomized controlled trials (RCTs), the current work used web scrapers to automatically download 163,129 PDFs of RCTs from publishers’ websites and convert them to text data (a simplified sketch of such a pipeline appears after the list below). The authors excluded articles published before 1996 (the year the CONSORT statement was published) and articles in languages other than English. They then investigated four specific QRPs:

  1. Risk of bias in sequence generation, allocation concealment, blinding of participants and researchers, and blinding of the outcome assessment.
  2. Modification of primary outcome measures.
  3. Achieved versus planned sample size.
  4. Statistical discrepancy.
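The paper does not reproduce its scraping code, but the pipeline it describes (download PDFs, extract their text, drop pre-1996 and non-English papers) can be sketched roughly as below. The libraries, URL handling, and language-detection heuristic here are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of a PDF-to-text screening pipeline (illustrative only; the
# study's actual scrapers and filters are not reproduced here).
import requests
from pdfminer.high_level import extract_text  # pdfminer.six
from langdetect import detect                 # simple language identification

CONSORT_YEAR = 1996  # articles published before the CONSORT statement are excluded


def fetch_pdf(url: str, path: str) -> str:
    """Download a single PDF (the URL list is supplied by the caller)."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    with open(path, "wb") as fh:
        fh.write(response.content)
    return path


def keep_article(text: str, publication_year: int) -> bool:
    """Apply the two reported exclusion rules: pre-1996 papers and non-English text."""
    if publication_year < CONSORT_YEAR:
        return False
    return detect(text[:5000]) == "en"  # sample the start of the document


# Example usage with a hypothetical record:
# path = fetch_pdf("https://example.org/trial.pdf", "trial.pdf")
# text = extract_text(path)
# include = keep_article(text, publication_year=2004)
```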

The present work identifies several possible risk factors for QRPs based on previously published evidence and expert input, including data on the makeup of the author team (number of authors, gender, affiliated countries, number and ranking of affiliated institutions, etc.), on the trial and publication (trial registration, publication year, financial support, etc.), and on the journal in which the research was published (impact factor, field, journal content, etc.).
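These indicators fall into three broad groups, which might be organized along the lines of the sketch below; the field names are assumptions for illustration, not the study’s actual variables.

```python
# Illustrative grouping of the indicator types described above; field names are
# assumptions, not the study's variable names.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AuthorTeam:
    n_authors: int
    percent_female: Optional[float]           # gender composition of the coauthors
    countries: list[str] = field(default_factory=list)
    n_institutions: int = 0


@dataclass
class TrialPublication:
    publication_year: int
    trial_registered: bool                    # e.g., a trial registration number is reported
    financial_support: Optional[str] = None
    mentions_consort: bool = False


@dataclass
class JournalInfo:
    impact_factor: Optional[float]
    publisher: str
    field_of_research: Optional[str] = None
```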

The current work finds that the median probability of bias across the four domains ranged between 43% and 63%. Specifically, 43% of articles showed bias in sequence generation, meaning participants were likely not assigned to groups by a properly random process; 59% were flagged for bias in allocation concealment; 63% showed bias in blinding of patients and research personnel; and 55% showed bias in blinding of the outcome assessment.

Of the 16,349 papers with a clear primary outcome measure, 22.1% (roughly 3,600 trials) modified that measure during the course of the research. In other words, the primary variable used to answer the research question was changed partway through the study.
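Outcome switching of this kind is typically detected by comparing the primary outcome recorded at trial registration with the one reported in the publication. The sketch below shows a crude version of such a comparison; the normalization rule and function names are assumptions for illustration, not the study’s actual matching procedure.

```python
# Crude illustration of an outcome-switching check: compare the primary outcome
# recorded at registration with the one reported in the publication.
import re


def normalize(outcome: str) -> str:
    """Lowercase and strip punctuation so trivial wording differences still match."""
    return re.sub(r"[^a-z0-9 ]", "", outcome.lower()).strip()


def primary_outcome_modified(registered: str, reported: str) -> bool:
    return normalize(registered) != normalize(reported)


# Example: a change from "HbA1c at 12 weeks" to "Fasting glucose at 12 weeks"
# would be flagged as a modified primary outcome.
print(primary_outcome_modified("HbA1c at 12 weeks", "Fasting glucose at 12 weeks"))  # True
```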

Achieved and planned sample sizes were consistent for most of the papers investigated, and only 1.7% of 21,230 articles showed a statistical discrepancy. The authors note that this rate is much lower than in some previous research, a difference that may be explained by the automated assessment performed in the current study versus the manual review conducted in past work.
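Automated checks for statistical discrepancies commonly recompute the p-value implied by a reported test statistic and its degrees of freedom, in the spirit of tools such as statcheck, and flag results where the reported and recomputed values disagree beyond rounding. The sketch below illustrates that idea for a t test; the tolerance is an assumption chosen for illustration, not the study’s criterion.

```python
# Sketch of an automated consistency check: recompute the p-value implied by a
# reported t statistic and its degrees of freedom, and flag the result if it
# disagrees with the reported p beyond a rounding tolerance.
from scipy import stats


def p_from_t(t_value: float, df: int) -> float:
    """Two-sided p-value for a t statistic."""
    return 2 * stats.t.sf(abs(t_value), df)


def discrepant(t_value: float, df: int, reported_p: float, tol: float = 0.005) -> bool:
    """Flag a result whose reported p differs from the recomputed p beyond rounding."""
    return abs(p_from_t(t_value, df) - reported_p) > tol


# Example: t(28) = 2.05 implies p ≈ 0.0499, so a reported p = .049 is
# consistent, while a reported p = .02 would be flagged.
print(discrepant(2.05, 28, 0.049))  # False
print(discrepant(2.05, 28, 0.02))   # True
```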

The authors recognize several limitations of the current study. For example, the automated nature of the data collection and analysis could have led to some publications being misclassified as RCTs, some true RCTs being excluded, and some papers being included more than once. According to the authors, poorly written articles were especially prone to misclassification.

Articles with no PDF version were excluded, as were studies not registered on ClinicalTrials.gov, possibly omitting some European-based research. The automated nature of the analysis could also have led to misclassified demographic information and QRPs. Finally, the large size of the dataset may have produced statistically significant associations that are not practically meaningful. The authors conclude:

“The median probability of bias assessed using RobotReviewer software ranged between 43% and 63% for the four risk of bias domains. A more recent publication year, trial registration, mentioning of CONSORT-checklist, and a higher journal impact factor were consistently associated with a lower risk of QRPs. This comprehensive analysis provides insight into indicators of QRPs. Researchers should be aware that certain characteristics of the author team and publication are associated with a higher risk of QRPs.”

Previous research has found significant QRPs in other domains, such as ecology and evolution. Researchers have argued that pharmaceutical companies and unethical researchers commonly cherry-pick data to inflate the efficacy of their products. Academic misconduct in this area is rarely punished, with institutions often choosing to defend the culprits rather than sanction them.

Misreporting results is common in psychiatric research, as is publication bias, which inflates effect sizes and apparent treatment efficacy. For example, both antidepressants and psychotherapy are likely not as effective in treating depression as the published data would indicate. Similarly, antipsychotic medication’s ability to treat psychosis may be overestimated.

 

****

Damen JA, Heus P, Lamberink HJ, Tijdink JK, Bouter L, Glasziou P, Moher D, Otte WM, Vinkers CH, Hooft L. Indicators of questionable research practices were identified in 163,129 randomized controlled trials. Journal of Clinical Epidemiology (2023). https://doi.org/10.1016/j.jclinepi.2022.11.020

Richard Sears
Richard Sears teaches psychology at West Georgia Technical College and is studying to receive a PhD in consciousness and society from the University of West Georgia. He has previously worked in crisis stabilization units as an intake assessor and crisis line operator. His current research interests include the delineation between institutions and the individuals that make them up, dehumanization and its relationship to exaltation, and natural substitutes for potentially harmful psychopharmacological interventions.
