In his masterpiece on love and life, The Unbearable Lightness of Being, Milan Kundera confronts us with a dilemma about the important things in life: “einmal ist keinmal” – “once is never”.
But Kundera doesn’t mean what academics, regulators, and others mean when they dismiss adverse events as anecdotal. Academics need lovers to show them how wrong their interpretations of “once is never” are.
In the last millennium it seems we understood love and adverse events better. Up to the development of the SSRIs, the standard approach to deciding what might be happening when a drug produced a change was to note if the change emerged after taking the drug, reversed itself if the drug was stopped, and reappeared after re-exposure to the drug.
Women or men making love on an SSRI were instantly aware of delays in orgasm, often within an hour of the first dose. Things went back to normal on stopping, so people could be advised to take a weekend break from their drug. It didn’t take a controlled trial to work out what was going on. Nor did it take a controlled trial for companies to work out that they had a treatment for men here, only to decide that pre-Viagra cultural factors made marketing it a risky proposition. In fact, controlled trials didn’t show the problem because companies didn’t collect the data, in some cases deliberately.
But today, when something goes wrong on a drug, when someone becomes suicidal on an antidepressant for instance, even if the problem clears when the drug is stopped and reappears when the person restarts, we are told this is anecdotal, and nothing is done with the report.
FDA say the hundreds of thousands of reports they get annually are essentially anecdotes. If there are no details on how many people are getting the drug, and no details on how many are having the problem, we have no denominator and no incidence rate. In the case of suicidality and antidepressants, as Paul Leber of FDA put it, even 50,000 reports would not give FDA pause for concern – FDA expect depressed patients to be suicidal.
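The denominator problem can be made concrete with a small sketch. The numbers below are invented for illustration; the point is simply that the same count of reports implies very different incidence rates depending on how many people are exposed, which is exactly what report counts alone cannot tell us:

```python
# Hypothetical illustration of the denominator problem: the same number
# of adverse event reports means very different things depending on the
# (usually unknown) number of people exposed to the drug.

def incidence_rate(reports: int, people_exposed: int) -> float:
    """Reports per 1,000 people exposed."""
    return 1000 * reports / people_exposed

# 50,000 reports against 1 million users vs 50 million users
print(incidence_rate(50_000, 1_000_000))   # 50.0 per 1,000 exposed
print(incidence_rate(50_000, 50_000_000))  # 1.0 per 1,000 exposed
```

Without the denominator, a regulator cannot distinguish the first situation from the second, which is one reason raw report counts are so easily waved away.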
At best a series of reports might produce a “signal”. These signals might be useful for “hypothesis generation” but they are never definitive. Unless a company chooses to investigate further, nothing happens, and companies often choose not to investigate further.
Controlled trials, we are told, show that even those on placebo can have a surprising number of adverse events. So how can it be said that a drug has caused a problem unless the problem happens significantly more often on the active treatment than on placebo?
Some of the real difficulties with adverse event data are laid out in a particularly clear way by Science Based Pharmacy – see Goldmine or dumpster dive?, a closer look at adverse events. Science Based Pharmacy, however, adds one key point, which is that the quality of the data is more important than its quantity.
But if anecdotes and signals are just that, why should the quality of reporting of an anecdote make any difference? If garbage in/garbage out is the basic message, having good quality garbage is not much of a rallying call.
If everything is anecdotal, it would become impossible to work out whether we were in love, or whether drugs were interfering with our love lives, or any of the other important things in life. As this doesn’t seem to be an issue for most of us, the academics must have got something wrong. What might that be?
First, as outlined in Cri de Coeur and Once is Never, RCTs simply don’t cut it when it comes to many adverse events. Even the best RCTs, without a hint of fraud or manipulation, are quite likely to hide rather than reveal the problem. This is an intrinsic property of RCTs done in illnesses (the picture is different in healthy volunteer RCTs – see Mystery in Leeds).
This is why the greatest benefit we get from RCTs lies in testing claims that treatments work rather than in testing for adverse events. Even when testing claims of treatment benefit, RCTs may get it wrong, but in general, by putting the onus on manufacturers to justify their claims of benefit, RCTs can help keep us safe.
RCTs were added to the evaluation of drugs in 1962 for just this reason: if a treatment doesn’t work, it isn’t safe at any speed. Claims for efficacy were therefore to be tested using a blunt instrument designed to eliminate the possibility of chance findings, an instrument so blunt it risks missing real efficacy. Concealing a real adverse event was not part of the game plan.
Part of the reason lovers get it right and academics get it wrong is that the lover can tell the difference the drug makes, whereas clinical trial coding is likely to confound treatment-induced and illness-induced sexual dysfunctions. This is where Science Based Pharmacy’s reference to the quality of the data becomes so important. But data quality is only important if listening to individual cases makes sense.
In the case of suicidality, both clinicians and patients told FDA from early on that they had seen or experienced depression-related suicidality before, but that the treatment-induced problem was qualitatively different. This should have made a difference to the academics, or to anyone aware of how precious a human life is. Words come before numbers. Unless the words are right, the numbers never can be. “Numbers without words never to Heaven go”.
Even hairdressers do better with adverse events than FDA now do. It was women and their hairdressers who picked up the change in hair quality produced by oral contraceptives, not the regulators.
But far from attempting to enhance the observations of patients, doctors and others, fifty years of neglect of adverse events means that the reporting of events to FDA is now so appallingly bad that there can be legitimate concerns about how useful FDA’s adverse event database is. This points to a failure on FDA’s part, not a failure of adverse event reporting in principle.
There are compelling reasons to improve the quality of adverse event reporting. Adverse events are still the best way to discover new drugs, and it is perhaps no surprise that, as the quality of reporting has gone down, drug pipelines have dried up.
How do we improve the quality of reporting and increase the likelihood that good quality reports will lead to substantial findings and even new drug leads?
One way is to incorporate challenge-dechallenge-rechallenge and dose-response relationships into the original observations – in the case of an SSRI, the higher the dose the longer the delay to orgasm.
A second step is to follow people over time, which FDA does not do. Companies at present seem to be attempting to divert patient reporting to FDA, knowing that the lack of follow-up will make the reported event anecdotal. In contrast, if a patient contacts a company, the company is legally obliged to follow up the report and attempt to make a causal determination (see American Woman, American Woman 2).
A third is to look for biological plausibility. When the biological case is clear, FDA have been prepared not just to warn but to remove drugs like Raptiva from the market after as few as three adverse event reports.
A fourth and critical element is to foster teamwork. When doctors, patients, pharmacists, nurses and others can bring their combined skills to the description of an event and to assessing whether it is linked to the drug or not, the likelihood of getting things right is enhanced. Filing events away in FDA filing cabinets isolates doctors from patients and all of us from each other and is exactly the wrong way to go.
What teamwork does is to introduce a range of biases into the picture. People contest linkages – the course of true love never did run smooth. This is a good rather than a bad thing. The singularity of an event is not where bias arises in science. It is when judgments are singular that the risk of bias is greatest. We are on much sounder ground if a range of people with different biases come to the same conclusion. The effectiveness of any and all methods in science hinges on having input from a team for exactly this reason.
The final element is to get to grips with the propaganda that is inhibiting adverse event reporting.
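The bedside reasoning behind the first step above – challenge, dechallenge, rechallenge, plus dose-response – can be sketched as a crude decision rule. This is a hypothetical toy for illustration only, not a validated causality instrument (validated tools such as the Naranjo scale exist for that purpose):

```python
# Toy sketch of challenge-dechallenge-rechallenge reasoning.
# A hypothetical illustration only, not a validated causality tool.

def causality_call(appears_on_challenge: bool,
                   resolves_on_dechallenge: bool,
                   returns_on_rechallenge: bool,
                   dose_response: bool) -> str:
    """Crude strength-of-evidence label from the classic bedside observations."""
    score = sum([appears_on_challenge, resolves_on_dechallenge,
                 returns_on_rechallenge, dose_response])
    if score >= 3:
        return "probable drug effect"
    if score == 2:
        return "possible drug effect"
    return "insufficient evidence"

# e.g. SSRI-delayed orgasm: appears on the drug, clears on stopping,
# reappears on restarting, and worsens with higher doses
print(causality_call(True, True, True, True))  # probable drug effect
```

The point of the sketch is that each observation a patient, doctor, or pharmacist adds moves the judgment, which is exactly what filing isolated reports in a cabinet prevents.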
For those who think RCTs are critical in establishing whether an adverse event is happening, that every other piece of evidence is anecdotal, or at best a signal, there is a key question – how do we get from RCTs to real life?
If RCTs show SSRIs can cause suicidal acts or sexual dysfunction, what does this tell me about whether this SSRI caused this suicidal act or that sexual dysfunction? Unless a judgment of this type can be made, no doctor or patient can ever know whether they have reasonable grounds to stop a treatment on the basis that it is actually causing a problem. Lives depend on being able to make these judgment calls.
Curiously, no-one seems to have a problem making judgment calls like these in the absence of RCT data when it comes to the off-label beneficial effects of prescription drugs.
It is all too easy to outline the problems with current adverse event reporting. The challenge is to improve it. This is what prompted the development of Rxisk.org. Rxisk operates on the basis that no one knows a drug-related effect like the person taking the pill, or those living with them. But it also pushes the person to involve their pharmacist or doctor in assessing what happens on challenge, dechallenge and rechallenge, and when the dose is changed, and it seeks follow-up data.
As outlined in Pharmageddon, it is probably no accident that it is mothers, wives and daughters, who know how precious each life is and who have seen the changes in loved ones up close, who pose the greatest challenge to company propaganda – the repetition of “once is never”.
In FDA hands adverse event reporting has become so degraded that it differs from the real thing as much as love differs from a one-off event in a dark alley, when perhaps once approaches never. But while in science we look at the anatomy of an event in a way that love rarely does, in both love and science once is not ever never.