John P. Ioannidis is perhaps the world’s leading expert on conflicts of interest and research methods. Recently, he published an article that takes a critical look at the way professional societies, such as the American Heart Association or the American Psychiatric Association, write guidelines for treatment. The article was published in the American Heart Association journal Circulation: Cardiovascular Quality and Outcomes.
Clinical practice guidelines (CPGs) are written to ensure that professionals in a given field are aware of the evidence-based practices that can be delivered to their patients. CPGs (at their best) also promote informed consent by highlighting both the potential benefits and the potential risks of any medical or psychological intervention. Well-done CPGs also include the limitations of current research by ranking particular procedures as being “low evidence.” Ioannidis notes how influential CPGs can be:
“Changes in definition of illness can easily increase overnight by millions the number of people who deserve specialist care. This has been seen repeatedly in conditions as diverse as hypertension, diabetes mellitus, composite cardiovascular risk, depression, rheumatoid arthritis, or gastroesophageal reflux. Similarly, changes in prevention or treatment options may escalate overnight the required cost of care by billions of dollars.”
Ioannidis critiques the guidelines written by professional societies for a number of reasons. First, he notes the “clan-like group self-citation network” of guideline authors, which damages the integrity of independent research. Industry-funded authors become “opinion leaders” who are then tasked with writing the guidelines that promote the interventions of that industry.
Hundreds of co-authors make a name for themselves this way, and these guidelines bring a large readership to scientific journals—far more than are brought in by actual research studies. Thus, the guideline authors become “superstars” in the field, and are then able to write “expert reviews” which are cited further. However, very little in this process is evidence-based. Instead, it appears to be a method for creating an exclusive group of industry “insiders” who can influence the practice of medicine.
Ioannidis lists a number of “red flags” that permeate the CPGs written by professional societies.
“The list of red flags includes sponsoring by a professional society with substantial industry funding, conflicts of interest for chairs and panel members, stacking, insufficient methodologist involvement, inadequate external review, and noninclusion of nonphysicians, patients, and community members.”
Ioannidis writes in another accompanying letter that many guidelines purport to include evidence-based practices, while they might more accurately be said to reflect the opinions of professionals in the field.
“When there is no evidence, guidelines should just say that there is no evidence, period,” he writes. “Unfortunately, most current guideline recommendations pertain to questions and situations for which there is a dearth of evidence. As guidelines expand the spectrum of questions to cover, the proportion of questions with no or poor evidence increases. Thus, mere opinions masquerade as authoritative knowledge.”
Although many professional societies have taken steps to reduce financial conflicts of interest among their guideline authors, the other problems remain. Ioannidis suggests that a single patient or community member on a team of a hundred specialists may not be able to have their voice counted in the creation of these guidelines. Thus, he argues that professional societies could still produce guidelines, but the writing team should be composed entirely of methodologists, patients, community members, and even specialists in other fields. These authors could then call upon the specialist experts in the chosen area to provide evidence for their interventions.
If guidelines use this method, according to Ioannidis, the authors can better evaluate the risks and benefits of new interventions, and critically assess the evidence base for their effectiveness. After all, even without financial conflicts of interest, specialist experts in a professional society are inclined to overestimate the benefits of new technologies and interventions in their chosen field. Sometimes these new interventions are incredibly expensive and offer only marginally better outcomes with significant risk. Independent researchers—and community members who would be impacted by new policies—could provide a check against expensive and overestimated interventions.
“Professional societies should consider disentangling their specialists from guidelines and disease definitions and listen to what more impartial stakeholders think about their practices. Professional societies could still fund these efforts without their own experts authoring them.”
Ioannidis, J. P. A. (2018). Professional societies should abstain from authorship of guidelines and disease definition statements. Circulation: Cardiovascular Quality and Outcomes, 11(10). https://doi.org/10.1161/CIRCOUTCOMES.118.004889
Accompanying letter: https://www.ahajournals.org/doi/pdf/10.1161/CIRCOUTCOMES.118.005205