The image is so familiar it is a stereotype: The physician’s desk, piled high with copies of medical journals, where she or he reads the latest research updates between patients. Medical science, it is said, progresses so quickly that if practitioners do not keep up, their knowledge base will be obsolete within five years.
But is the reading of journals useful? Can it inculcate misinformation as readily as it spreads progress? Is the knowledge gained worthwhile?
Much has been written about the flaws of individual studies. In this blog I recently focused on an example: The infamous Study 329 on the use of paroxetine for adolescent depression. Reading studies such as these, with their swapped outcomes, hidden side effects, and often shockingly biased interpretations of data, may produce only misplaced beliefs in readers, and may actually result in less competent practice. If the physician reads only the summary abstract – as many do, being strapped for time – there may be little relationship between the reported outcomes and what the data actually say.
Imagine that such problems did not exist, however. Imagine that we live in a never-never land of balanced and dispassionate reporting of study results, that published work is competently done, that all outcomes are clear. This is hard to do, because it is so far from current reality. Could there still be a problem?
Well, yes. Several. But let’s just focus on one: Publication bias.
Most drugs and medical procedures are evaluated in multiple trials. Indeed, a single trial is seldom, if ever, sufficient for a drug to be approved for use. Some trials are reported in the literature; others are not.
Imagine that you are a particularly diligent physician who reads all of the journals relevant to your field. Drug Z appears in four studies, and in all four it outperforms the inert placebo administered to the comparison group. Sounds good. Good enough, perhaps, to influence your practice. Here is a reliable treatment that generally seems to work very well.
The problem is that there are 10 trials of Drug Z, not four. You can’t read the remaining six, because they have never been published. This isn’t a problem if publishing is something of a random process. Put 10 trials in a bag, then pull out four of them and print them: you will probably have something vaguely resembling a representative sample.
It has been amply demonstrated that this is not how publishing works, however. Submit two studies: one showing that Drug Z works, and one showing no difference from placebo. The study finding a difference between groups is much more likely to see print. Journals like publishing promising findings, not failures.
This understates the problem, however, because the entities carrying out the research have a vested interest (a clear, documented, and obvious conflict of interest) in seeing supportive studies published and negative trials suppressed. So the more accurate picture is that we give our bag of 10 studies to a person (or, say, a pharmaceutical company) who wants Drug Z to look good, turn our backs while they read them all over and hide the ones they don’t like under a mat, then pick four from the rest to submit to journals.
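The arithmetic of this selection process can be made concrete. Here is a minimal sketch using made-up effect sizes (illustrative numbers only, not data from any real trial): 10 hypothetical trials of "Drug Z", of which only the four clearly positive ones reach print, exactly as in the bag-of-studies story above.

```python
# Hypothetical standardized effect sizes from 10 trials of "Drug Z".
# These are invented for illustration; positive means the drug beat placebo.
all_trials = [0.41, 0.35, 0.02, -0.10, 0.28, -0.22, 0.05, 0.33, -0.15, -0.05]

# Publication bias as a filter: only the clearly positive trials see print.
published = [e for e in all_trials if e >= 0.25]

mean_all = sum(all_trials) / len(all_trials)
mean_pub = sum(published) / len(published)

print(f"Trials conducted: {len(all_trials)}, mean effect: {mean_all:+.3f}")
print(f"Trials published: {len(published)}, mean effect: {mean_pub:+.3f}")
```

The full set of trials averages out to almost nothing, while the published subset, which is all the diligent reader ever sees, suggests a solidly effective drug. The filter, not the data, creates the impression of efficacy.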
Now switch roles and become a practicing physician trying to do your best for your patients. You read the published literature and attempt to get an impression of the overall trend in results for Drug Z. You never hear the results of the negative trials and don’t even know the studies exist. As far as you are concerned, four trials have been conducted on Drug Z and the results are unanimous. Needless to say, you start including it in your prescribing habits, happy that you have been keeping up with recent developments.
Psychology is, to a great extent, the science of the disconnect between external reality and internal representations of that reality. Even with a perfectly balanced presentation of data, biases and distortions are bound to develop in any human mind. The last three patients to whom you gave Vitamin C recovered from their colds immediately, and as a result you have developed a gut conviction of its efficacy.
Added to the problem of fallible human information processing, however, is a system of medical reporting that introduces extreme distortions in research before practitioners ever set eyes on it. In such a situation, erroneous convictions about treatment efficacy are inevitable. It is difficult to see how a system such as this could lead to reliable enhancements in practice – but easy to see how a reader could be misled by organizations that deliberately strive to slant the impression the reader gets.
So should physicians be reading journal accounts of pharmaceutical trials? It is difficult to see the worth in such an exercise – at least, as the journals operate currently. It is even possible that doing so will lead to decrements rather than improvements in clinical practice.
Is there a solution? Yes, an obvious one. Organizations carrying out clinical trials should have to register their study with journals prior to collecting the data, with a clear commitment to publish the results regardless of outcome. This wouldn’t solve the problems of distortion in the write-up or the downplaying of side effects, but it would help significantly with the publication bias problem.
If this is likely to help, why haven’t the journals already pledged to do this? Well, um, they did. Years ago. And in 2007 the practice was put into law in the United States. And then…very little changed. Journals have gone on publishing trials that were never registered, and null trials have continued to go missing.
A Hopeful Development
For many years this problem has been noted amongst scientists and practitioners in medicine and other health disciplines (clinical psychology is no less prone to this type of concern). More recently, it has gone beyond the health science nerd community and has begun to seep into the public consciousness. Medicine and other health disciplines have begun to feel the pinch of appropriate skepticism and disrepute.
As is often the case, shame motivates where ethics fail. With public exposure we may see improvements in the pipeline from laboratory to clinical practice.
One of the chief proponents of change has been Ben Goldacre, British physician, science writer (the excellent Bad Pharma, among others), and medical gadfly. He is a driving force behind alltrials.net, a website devoted to changing publication practice in the medical field. His TED talk on the subject offers a sample of his style.
And the word is gradually seeping out. This week there are articles in The Economist and elsewhere on the problem. As exposure continues, we can expect that the worm of shame may begin to do its work.
Is this important?
All of this returns us to two familiar questions:
Are the lives of people with health concerns important?
Does the field of healthcare have any pretensions to being based on evidence?
If the answer to these two questions is “No,” then poor research, poor reporting, and poor practice are no great problem – though it would probably be more honest if we stopped pretending that healthcare believed in science or in the improvement of human welfare and treated it instead as a simple revenue-generating business. That’s not why I got into it (in the allied and every bit as fallible field of psychology), but without a firm adherence to the principles of science it is difficult to argue to skeptics that it is anything else.
If the answer to either (or both) of these questions is “Yes,” however, then the current state of affairs is clearly unsatisfactory and needs considerable change – not just a pledge designed to calm the waters, but an actual commitment to responsible research and the balanced reporting of evidence.
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.