Outcome Reporting Bias in Antipsychotic Medication Trials

A new study in Translational Psychiatry, an influential biological psychiatry journal published by Nature, challenges the state of the research on antipsychotic drugs.

Peter Simons

A new study suggests that the effectiveness of antipsychotic medications may be overstated due to biased publication methods. The researchers found that the vast majority of antipsychotic drug trials used biased reporting methods. These practices ensured that the published drug trials demonstrated positive outcomes by obscuring a number of negative findings.

“Sweep It Under The Carpet at CEPT University,” by Vaishal Dalal (Photo Credit: Wikimedia Commons)

Since 2006, when the World Health Organization mandated it, drug trials have been required to pre-register their intended outcome measures in the clinicaltrials.gov database. This requirement was designed to prevent selective reporting bias, a deceptive practice in which researchers collect many outcome measures and then report only the ones with positive results.

Researchers at Utrecht University conducted the current study. The corresponding author was Jurjen J. Luykx at the Department of Translational Neuroscience. Luykx and his colleagues found that 85% of the published antipsychotic drug trials did not report on the outcome measures they indicated when they pre-registered their study. In fact, according to the researchers:

“81% of [antipsychotic drug trials] had at least one secondary outcome non-reported, newly introduced, or changed to a primary outcome in the respective publication.”

When researchers conduct studies to determine whether medications are effective, it is standard practice to declare a primary outcome (such as scores on a standardized measure of psychosis). However, when researchers use multiple questionnaires assessing many kinds of outcomes, it increases the chances that any one outcome may be improved simply due to chance. That is, the more tests you run, the more likely it is that one of them will find a significant result.
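The inflation described above can be illustrated with a small simulation (a hypothetical sketch for illustration, not an analysis from the study itself): if a drug truly has no effect, each independent outcome test still has a 5% chance of coming out “significant,” so the chance that at least one of k tests does is 1 − 0.95^k.

```python
import random

random.seed(0)

ALPHA = 0.05  # conventional significance threshold

def chance_of_false_positive(n_outcomes, trials=100_000):
    """Simulate trials in which the drug has no real effect on any
    outcome, and count how often at least one outcome still appears
    'significant' purely by chance."""
    hits = 0
    for _ in range(trials):
        # each outcome behaves as an independent test with a 5% false-positive rate
        if any(random.random() < ALPHA for _ in range(n_outcomes)):
            hits += 1
    return hits / trials

for k in (1, 5, 10, 20):
    simulated = chance_of_false_positive(k)
    analytic = 1 - (1 - ALPHA) ** k
    print(f"{k:2d} outcomes: simulated {simulated:.2f}, analytic {analytic:.2f}")
```

With 20 outcome measures, the odds of at least one spurious “positive” result exceed 60% even when the drug does nothing, which is why a single pre-specified primary outcome matters.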

When researchers report on the test that came out positive, rather than the test they selected as their primary outcome at the beginning of the study, this is known as selective outcome reporting bias. This allows researchers to ensure that their results will be positive and publishable, but has the side effect of deceiving the public—and potentially other researchers—into thinking that the results were uniformly positive.

Luykx and his colleagues identified 48 published randomized controlled trials of antipsychotic medications for schizophrenia or schizophreniform disorder since 2006 and compared their reported outcomes with the outcome measures they had registered in the clinical trials database before beginning their studies. However, 17 of the 48 trials did not register in the database until after their research was complete, defeating the purpose of pre-registering outcomes.

Of the 48 trials, four did not even mention their pre-registered primary outcome in their publication. Another three demoted their pre-registered primary outcome to a secondary outcome when publishing (making the primary outcome seem less important). A further ten studies failed to appropriately pre-specify their primary outcome.

Secondary outcomes showed even more discrepancies. Eighteen of the trials failed to mention their pre-specified secondary outcomes in their publication; four of the studies converted a secondary outcome to a primary outcome for publication (making it seem as though it was the target outcome, when in fact it was a minor part of the study); and 37 of the 48 trials failed to pre-specify all of their secondary outcomes.

Regarding the safety and tolerability of antipsychotic drugs, there were 74 pre-specified outcomes—of which 53 were included in the relevant publications. However, the papers added 335 new outcome measures regarding safety and tolerability that were not pre-registered. This may indicate that researchers conducted a myriad of safety and tolerability tests to selectively publish the ones that were favorable.

These findings are consistent with previous research that found similar results in both the antipsychotic and the antidepressant drug trial literatures. Those studies document other forms of bias as well, such as the tendency to leave negative results unpublished. For instance, in antidepressant trials, when published and unpublished trials are combined, 49% of the trials found that the drugs were no more effective than placebo.

Luykx and his fellow researchers suggest that the research community needs to be more careful. Authors must explain discrepancies in their published articles, and journal editors must demand that researchers hold to their pre-specified outcome measures.

Only when these pervasive sources of bias are reduced will the public be able to trust the reliability of the studies testing the efficacy and safety of these drugs.



Lancee, M., Lemmens, C. M. C., Kahn, R. S., Vinkers, C. H., & Luykx, J. J. (2017). Outcome reporting bias in randomized-controlled trials investigating antipsychotic drugs. Translational Psychiatry, 7, e1232. doi:10.1038/tp.2017.203

Peter Simons
MIA-UMB News Team: Peter Simons comes from a background in the humanities where he studied English, philosophy, and art. Now working on his PhD in Counseling Psychology, his recent research has focused on conflicts of interest in the psychopharmaceutical research literature, the use of antipsychotic medications in the treatment of depression, and the general philosophical and sociopolitical implications of psychiatric taxonomy in diagnosis and treatment.


  1. Data mining, huh? How come this doesn’t surprise me. I would love to see the clinical data and paper for this new high blood pressure threshold, never mind the financial disclosures of the doctors who wrote the paper. Oh yea, that isn’t required. Did anyone fail to mention in the paper that the actuary tables have now lost 7 years of lifespan on all age groups? But I should realize I should just LISTEN TO MY DOCTOR ANYWAY because he’s a doctor. And they know EVERYTHING! How stupid of me!

  2. A lot of this bias must come from the construction of the outcome measures themselves. If the illness they’re attempting to quantify is inherently subjective, any quantitative measure based on questionnaires is also going to be subjective. Basing quantitative calculations on such shaky qualitative foundations is bound to give spurious results that can be easily cherry-picked.

  3. Turning a blind eye to side effects of neuroleptics is common practice, as is diagnosing someone as relapsing when they attempt to withdraw from these drugs and run into difficulty.

    When a person eventually negotiates recovery as a result of careful drug withdrawal – the Institution is then left with a “problem”.

    In this event I can verify, historically, research psychiatrists doctoring client records in support of their “product.” I can also verify, with evidence, present-day doctors doctoring client records in support of historically corrupt psychiatrists, rather than reporting them for malpractice.

    • If we recover drug free and they can’t readily suck us back in, they squirm out of it. They usually mutter, “Uh, I guess they weren’t really mentally ill to begin with.”

      I know millions of people diagnosed as severely mentally ill who never actually were. That’s really no dumber than: I know millions of people whose lives were saved by psychiatric “medicines!”