In a new opinion piece in The BMJ, Richard Smith argues that “the time may have come to stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported.”
Smith, who was the editor of The BMJ until 2004, has been a tireless fighter for ethics in scientific research. He co-founded the Committee on Publication Ethics (COPE), chaired the Cochrane Library Oversight Committee, and served on the board of the UK Research Integrity Office.
Smith writes that, according to recent estimates from researcher Ben Mol, about 20% of health research may be outright fraudulent.
Smith goes on to describe the case of Ian Roberts, who learned that a review he co-authored included data from studies that never actually took place.
“They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published. When Roberts contacted one of the journals the editor responded that ‘I wouldn’t trust the data.’ Why, Roberts wondered, did he publish the trial? None of the trials have been retracted.”
Smith also cites a study from last year by J. B. Carlisle, who examined studies published in the journal Anaesthesia. Among the studies that provided individual patient data, Carlisle determined that 44% contained false data. Carlisle wrote, “I think journals should assume that all submitted papers are potentially flawed and editors should review individual patient data before publishing randomised controlled trials.”
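Carlisle's actual statistical methods are considerably more sophisticated, but one simple screening idea that individual patient data makes possible is terminal-digit analysis: the final digits of genuine measurements tend to be roughly uniform, while fabricated numbers often show digit preference (e.g., everything ending in 0 or 5). Here is a minimal sketch of that idea; the function names and the use of a fixed chi-square critical value are illustrative assumptions, not Carlisle's published procedure:

```python
from collections import Counter

def terminal_digit_chi_square(values):
    """Chi-square statistic for uniformity of the final digits (0-9)
    of a list of integer measurements."""
    counts = Counter(v % 10 for v in values)
    expected = len(values) / 10  # uniform expectation per digit
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Critical value for chi-square with 9 degrees of freedom at alpha = 0.05
CHI2_9DF_05 = 16.92

def shows_digit_preference(values):
    """True if the final-digit distribution departs significantly
    from uniform -- a reason to look closer, not proof of fraud."""
    return terminal_digit_chi_square(values) > CHI2_9DF_05

# Evenly spread final digits: statistic is 0, not flagged
print(shows_digit_preference(list(range(100, 200))))   # False
# Everything ends in 0 or 5: strongly flagged
print(shows_digit_preference([120, 125, 130, 135] * 25))  # True
```

A flag here is only a prompt for scrutiny: legitimate data can show digit preference too (e.g., blood pressure readings rounded by clinicians), which is why such checks support, rather than replace, careful review.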
According to Smith, “Very few of these papers are retracted.” Thus, articles with falsified data make up a large part of the research literature, and it can be difficult for a reader to know whether to trust a study, even if it’s published in a respected journal.
Smith writes that peer review—the key quality-checking step in publishing a research article—doesn’t detect falsified data. Reviewers and editors begin with the assumption that the data is real and usually miss made-up data.
And there is little incentive for journals or academic institutions to detect fraud, retract fraudulent studies, or punish those responsible. If anything, the incentives run the other way: journals risk reputational damage if it emerges that they published fraudulent studies, and academic institutions face a similar risk while also wanting to protect their researchers, who bring in grant money (sometimes millions of dollars).
Even worse, it’s difficult to prove that a study uses fraudulent data. In many cases, other researchers do not have access to the individual data, and even when they do, sifting through it for suspicious-looking numbers takes a lot of work.
Smith writes, “Regulators often lack the legal standing and the resources to respond to what is clearly extensive fraud, recognising that proving a study to be fraudulent (as opposed to suspecting it of being fraudulent) is a skilled, complex, and time consuming process.”
One partial solution is making all data available to the public so that other researchers can check the work. Another partial solution is the REAPPRAISED checklist (described here), which asks questions like who funded the work, how clear the methodology is, and whether there is anything suspicious about where the study took place or how participants were recruited. For instance, if the recruitment dates don’t add up, or the recruitment period seems implausibly short or long, that can be an indicator that the data has been invented.
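To make the recruitment-dates check concrete, here is a minimal sketch that flags a trial whose reported recruitment rate looks implausibly high for its stated window. The function names and the threshold of 20 participants per week are hypothetical illustrations, not part of the REAPPRAISED checklist itself; a real reviewer would set the threshold based on the study's setting and condition:

```python
from datetime import date

def recruitment_rate_per_week(n_participants, start, end):
    """Average participants recruited per week over the stated window."""
    weeks = (end - start).days / 7
    if weeks <= 0:
        raise ValueError("recruitment end date must be after start date")
    return n_participants / weeks

def flag_implausible_recruitment(n_participants, start, end,
                                 max_plausible_per_week=20):
    """True if the reported rate exceeds a (hypothetical) plausibility
    threshold -- a prompt for scrutiny, not proof of fabrication."""
    rate = recruitment_rate_per_week(n_participants, start, end)
    return rate > max_plausible_per_week

# Example: 500 patients reportedly recruited in a 6-week window
# (about 83 per week) would be flagged under this threshold.
print(flag_implausible_recruitment(
    500, date(2020, 1, 6), date(2020, 2, 17)))  # True
```

As with any single heuristic, a flag only means the numbers deserve a closer look alongside the checklist's other questions.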
Smith writes that, in the past, “Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false.”