High Rates of Questionable Research Practices Found in Ecology and Evolution

A new study, published online on the Open Science Framework (OSF), suggests that questionable research practices (QRPs) are prevalent in the fields of ecology and evolution. In the survey, 64.1% of researchers in ecology and 63.7% of researchers in evolution reported having engaged in at least one QRP, and almost half (44.6%) expressed doubts about whether their own research was ethical.

“We found 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p hacking) and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large scale replication studies in psychology and other disciplines.”

  • Cherry picking involves reporting only positive results, especially when researchers conduct numerous tests but report only the one or two that support their hypothesis, leaving out the analyses in which the hypothesis was not supported by the data. This gives the impression that the evidence is much stronger than it actually is.
  • P-hacking involves modifying statistical analyses in various ways to ensure that results appear significant. For instance, after analyzing data and finding no effect, researchers might continue collecting data until they reach significance (a simple simulation of this appears after this list). Alternatively, researchers might keep or remove outliers depending on which choice pushes their results over the significance threshold, or round their numbers when doing so nudges a p-value past it.
  • HARKing (Hypothesizing After Results are Known) involves presenting unexpected findings as if they confirm prior expectations. Instead of treating unexpected results as new questions to explore, researchers present them as if they had been hypothesized from the start.
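
The “collect more data until it becomes significant” form of p-hacking can be made concrete with a small simulation. The sketch below is purely illustrative and is not from the study: the sample sizes, the single “peek,” and the 5% threshold are all assumptions. Both groups are drawn from the same distribution, so any “significant” difference is a false positive, yet peeking at the result and adding data pushes the false-positive rate above the nominal 5%.

```python
# Illustrative sketch only: optional stopping ("peeking") on pure noise.
# Both groups have the same true mean, so every "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_test(n_initial=20, n_extra=20, alpha=0.05):
    """Run a two-sample t-test; if it is not significant, collect more data and test again."""
    a = rng.normal(size=n_initial)
    b = rng.normal(size=n_initial)
    _, p = stats.ttest_ind(a, b)
    if p >= alpha:                       # "no effect yet" -- so collect more data
        a = np.concatenate([a, rng.normal(size=n_extra)])
        b = np.concatenate([b, rng.normal(size=n_extra)])
        _, p = stats.ttest_ind(a, b)
    return p < alpha

n_sims = 10_000
false_positive_rate = sum(peeking_test() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with one peek: {false_positive_rate:.3f}")
# A single fixed-sample test would stay near the nominal 0.05;
# even one peek pushes the rate noticeably higher.
```

Real p-hacking is usually less deliberate than this, but the statistical consequence is the same: the more flexibility researchers allow themselves after seeing the data, the more often noise gets reported as a finding.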

All of these QRPs increase the rate at which researchers report effects that are not actually there: false findings, known in research jargon as “false positives” or “Type I errors.” Then, when other researchers attempt similar studies, they are unable to replicate these findings.
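
A second hypothetical sketch, again not drawn from the study and using arbitrary numbers, shows why cherry-picked results tend not to replicate: twenty tests are run on pure noise, only the “significant” ones are kept, and then just those tests are rerun with fresh data.

```python
# Illustrative sketch only: cherry picking from many tests on pure noise,
# then attempting to replicate only the "findings". The true effect is zero throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_TESTS, N, ALPHA = 20, 30, 0.05        # arbitrary, illustrative values

def significant() -> bool:
    """One two-sample t-test on noise; True if it crosses the significance threshold."""
    a, b = rng.normal(size=N), rng.normal(size=N)
    return stats.ttest_ind(a, b).pvalue < ALPHA

# "Original study": run 20 tests, report only the ones that came out significant.
cherry_picked = [i for i in range(N_TESTS) if significant()]
print(f"Tests reported as findings: {len(cherry_picked)} of {N_TESTS}")

# "Replication": rerun only those cherry-picked tests with fresh data.
# Because the true effect is zero, each replication succeeds only about 5% of the time.
replicated = sum(significant() for _ in cherry_picked)
print(f"Findings that replicated: {replicated} of {len(cherry_picked)}")

# With 20 independent tests at alpha = 0.05, the chance of at least one
# spurious "finding" is 1 - 0.95**20, roughly 64%.
```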

Prior research has found high rates of these practices in psychology and medicine. A study of US psychologists found that more than 60% had engaged in at least one QRP, while another study of Italian psychologists found that over 50% had done so. Like psychology, the fields of ecology and evolution have experienced a “replication crisis” in which many findings could not be replicated in later studies.

In the current study, 807 researchers who had published in a variety of ecology and evolution journals were surveyed about the QRPs they had engaged in. They came from all career stages, from graduate students and postdoctoral researchers to early-career and senior researchers. Career stage was not associated with how often researchers engaged in QRPs.

The participants also had the opportunity to explain their answers, and sometimes gave excuses or explanations for their use of questionable research practices.

According to the researchers, “The most frequently offered justifications for engaging in QRPs were: publication bias; pressure to publish; and the desire to present a neat, coherent narrative.”

For instance, although some participants admitted that omitting non-significant results biases conclusions, others argued that “I think this is ok” if the results are “not integral to the story.” In terms of HARKing, some participants noted that “editors and reviewers often demand that exploratory results are framed as a priori hypotheses.” This suggests that researchers are often under pressure from editors and reviewers to engage in ethically questionable research practices.

On the other hand, the current study also suggests that researchers tend to use QRPs only rarely: even those who admitted to them reported doing so only “occasionally.” In some cases, researchers argued that changing their outcomes or statistical analyses after the data were collected was justified because they had discovered an error in their original methodology.

According to the current authors, “Researchers’ comments revealed the pressure they feel to present a short, cohesive story with statistically significant results that confirm a priori hypotheses, rather than a full (and likely messy) account of the research as it was conceptualised and conducted.”

The authors suggest that pre-registration (listing outcome measures and analyses in a public database before collecting data) could eliminate many of these concerns. They also suggest that the editorial and peer review process should be modified to reduce the pressure to engage in QRPs.

 


Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018, March 21). Questionable Research Practices in Ecology and Evolution. Retrieved from osf.io/ajyqg (Open Access)

Peter Simons
Peter Simons was an academic researcher in psychology. Now, as a science writer, he tries to provide the layperson with a view into the sometimes inscrutable world of psychiatric research. As an editor for blogs and personal stories at Mad in America, he prizes the accounts of those with lived experience of the psychiatric system and shares alternatives to the biomedical model.
