When Medical Muckraking Fails

Everyone knows how muckraking is supposed to work.  An investigative reporter uncovers hidden wrongdoing; the public is outraged; and the authorities move quickly on behalf of justice and righteousness.  There can be failure at any of these points, of course.  Sometimes there is no outrage.  The timing of the story may be poor, or the media outlet might be too small to get any real attention.  If the target of the investigation has a skilled public relations team, it may be able to spin the story in a way that minimizes the damage.  Often, the relevant authorities simply don’t take any action.  And once the initial shock of the story has settled, the public demand for justice vanishes, like a bullet that has missed the target.

In American medical research, the model for successful muckraking is AP reporter Jean Heller’s investigation of the Tuskegee syphilis study.  When Heller’s story was published on the front page of the New York Times on July 26, 1972, headlined “Syphilis Victims in U.S. Study Went Untreated for 40 Years,” the public reaction was immediate.  Sweeping reforms instituted in the aftermath of Tuskegee laid the foundation for oversight of medical research today.  Fifteen years later, a similar sequence of events followed the “unfortunate experiment” at National Women’s Hospital in Auckland, New Zealand: a muckraking investigation by Sandra Coney and Phillida Bunkle in Metro magazine, swift condemnation, a governmental inquiry, and legislative reform.

But there it has ended.  If reform is the measure of success, a good case can be made that no genuinely successful muckraking story about an American medical research scandal has been published since the 1970s.  Not that there has been any shortage of scandals.  Anyone with a passing familiarity with research ethics can recite a long, depressing list of widely reported episodes, from the deaths of Ellen Roche, Nicole Wan, Jesse Gelsinger and Tracy Johnson through Pfizer’s Nigerian Trovan trial, the Eli Lilly homeless alcoholic studies, Protocol 126 at the Fred Hutchinson Cancer Research Center, the SFBC International scandal in Miami, the GAO sting operation against Coast IRB, the suicide of Dan Markingson at the University of Minnesota, and the recent neurosurgical scandal at the University of California, Davis, to mention only a few.  Several years ago I wrote about a Minnesota psychiatrist who was responsible for the deaths or injuries of 46 patients under his care, including 17 subjects in research studies, and whose misconduct had been exposed a decade earlier by Robert Whitaker in the Boston Globe.  After a brief period of probation and a course in bioethics, the psychiatrist was back at work, once again conducting research studies and giving marketing talks for the drug industry.  In America, medical muckrakers may win journalism prizes, but their work has led to little meaningful reform.

Why not?  One reason may be the context. Both the Tuskegee syphilis study and the “unfortunate experiment” in Auckland emerged during times of larger social upheaval.  The Tuskegee syphilis study on poor African-American men was exposed during the American civil-rights movement, while the “unfortunate experiment” on women at risk of cervical cancer in New Zealand came to light during the women’s movement. Both studies became symbols of a much larger pattern of exploitation of relatively powerless groups by a white, male medical establishment. None of the more recent American scandals have resonated with larger social movements in quite the same way.

But a larger reason, paradoxically, may be the establishment of a research oversight bureaucracy in the aftermath of Tuskegee.  At a time when Congress was insisting that medical research be regulated, preferably by a centralized federal authority, medical leaders put forward a system of “expert review” by local Institutional Review Boards (IRBs) as a compromise solution. As Laura Stark writes in her recent book, Behind Closed Doors, IRBs were a way of managing a crisis that threatened the budgets and reputations of medical researchers.

By any reasonable estimate, that oversight system has been a failure.  It is a matter for debate whether IRBs were sufficient to protect human subjects in the 1970s, but they are certainly no match for the power and influence of today’s globalized, multi-billion-dollar research industry.  Yet so many people are professionally invested in the current oversight system that they cannot imagine replacing it, only tinkering with it.  When research scandals are uncovered, they are treated as local, individual failures, not as flaws in the oversight system as a whole.

Of the many flaws in the current oversight system, perhaps the most dangerous is its secrecy.  IRB meetings are closed to the public, and their proceedings are confidential.  Many IRBs are private, for-profit businesses, and thus not even subject to federal or state Freedom of Information Act requests.  Often it is difficult even to find out which IRB has approved a research study.  In fact, the very existence of some research studies is secret; research sponsors are not required to register Phase I clinical trials on Clinicaltrials.gov, the federal registry.  So if you are an investigative reporter and want to see a potentially troubling research protocol – what the consent form looks like, how much the subjects were paid, whether the investigators had any financial conflicts of interest, how risky the study is – chances are that you will be denied.

The problem with an oversight system that is both unreliable and secretive is that the public has no idea what is happening beneath the surface.  If there is no mechanism for making oversight failures public, and barriers are erected to prevent journalists from investigating, how can we judge whether the oversight system is working?  How certain are we that there are not many more hidden abuses, waiting to be uncovered?

Note: This was first posted on the Chronicle of Higher Education Brainstorm blog.

5 COMMENTS

    • Exactly. Like you, I trust no psychiatrist or any other medical doctor, simply because they are a doctor. They must first prove to me that they are worthy of my trust. Until then, nothing that they say or do means anything at all. Some of the psychiatrists posting here seem to have the feeling that we should welcome them and all of their ideas with open arms, simply because they are psychiatrists and they’re posting here on MIA. It doesn’t work that way, at least not for me.

  1. Re: Anonymous’s and Ted’s responses to this piece: They state that they don’t trust doctors (and clearly with good reason) but seem to have missed the point of Carl Elliott’s article: that there is a flawed, secretive, profit-driven system in place, required by the federal government, that is supposed to protect human “subjects” in research studies, and does not do so. This harms everyone, especially those in the general public who take drugs approved by the FDA under this flawed system and have no idea that the drugs may have been approved based on tainted studies. It’s all well and good for those of us who take the time to learn about these issues, but most people don’t and are unwittingly at risk.

  2. Interesting article.

    Looking at the 1998 Boston Globe series by Robert Whitaker gives some sense of the long struggle that led to this website. It is interesting that in that series, Dr. Torrey was Robert Whitaker’s chosen ally in denouncing the corruption of science by pharma.
