U.S. Behavioral Research Studies Skew Toward Positive Results

Researchers from Stanford University and the University of Edinburgh found, in a comparison of 1,174 primary outcomes from 82 meta-analyses of biological and behavioral research, that U.S.-based behavioral researchers were more likely to report individual results that overstated effect sizes relative to the overall summary effect, in the direction predicted by their hypotheses. “These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States,” the authors write. “Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.” The paper was published yesterday in Proceedings of the National Academy of Sciences of the United States of America.

Article →

From the abstract:

“Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.”

Fanelli, D., & Ioannidis, J. P. A.; US Studies May Overestimate Effect Sizes in Softer Research. Proceedings of the National Academy of Sciences. Online August 26, 2013. doi: 10.1073/pnas.1302997110

Of further interest:
US behavioural research studies skew positive (Nature)

Kermit Cole

4 COMMENTS

  1. I haven’t spent the $30.00 to access the article itself. I would be very interested to know what others with more information have to say about this article. I can imagine a lot of counterarguments here, e.g., a) Why should one trust something coming out of Brown University, when its recently-departed head of psychiatry was a driving force behind the notoriously deceptive Study 329? b) How can the hugely manipulated/biased/sometimes outright fraudulent psychiatric studies profitably be used as a comparison with any other discipline’s studies?

    But if this article really does present credible evidence of overstated conclusions drawn from psychosocial research, we absolutely should know about it and learn what needs to be done to create better studies. We can’t scream about psychiatry’s manipulations unless our own house is in order.

  2. An added thought: On second thought, I can imagine a lot of psychosocial research being overstated. The whole “evidence based practice” paradigm is barking up the wrong tree. Barry Duncan et al. have critiqued this paradigm as based on shaky research and as overlooking the “common factors” that cross all the different “evidence based models.” There really isn’t any significant difference between the models – the real differences are explained by common factors such as the connection between therapist and client, agreement on how they will work, and therapists really seeking and paying attention to client feedback about how the therapy is going.

    So there probably is a lot of puffery in the studies purporting to show each specific model is particularly effective. Is this the cause of what this article says it found?

  3. It concerns me that, like a Greek chorus, we all chime in, piling on whenever postings on this site reflect badly on biological psychiatry and PhARMA. But the reaction here to an article casting doubt on non-medical perspectives seems to draw no interest. That is the kind of uncritical approach that allows bio-psychiatry to roll along undisturbed by anyone on the inside.

    There is good reason to think the potential of psychosocial perspectives is immense. But that doesn’t mean there isn’t right now plenty of nonsense and wishful thinking in the psychosocial pot. It’s a bit over the top to say this, but it is also true: if we aren’t willing to think critically about ourselves, we are not much different from the lazy thinkers who let the senseless medical model dominate the mental health industry.

    • Well, in my case it’s just that I’m not very interested in current psychosocial approaches, or different types of psychotherapy, etc. I simply don’t want to spend time or energy exploring, using, or criticizing these practices because they are not interesting to me.
