New research shows that well-connected, well-known researchers are more likely to have their papers published, even when those papers receive negative reviews.
The research was led by Giangiacomo Bravo at Linnaeus University, Sweden, with a team from Spain, Italy, and Germany. They used a Bayesian model to estimate the likelihood of publication in four computer science journals over eight years while controlling for a number of other factors, such as the grade reviewers gave a paper and how connected its author was to others in the field (a measure of reputation).
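The article does not spell out the authors' model, but the general idea of estimating publication probability from a review score and an author-connectedness measure can be illustrated with a toy Bayesian logistic regression on synthetic data. Everything below is a hypothetical sketch for illustration: the variable names, effect sizes, dataset, and the simple grid-approximation method are assumptions, not the study's actual analysis.

```python
import numpy as np

# Illustrative sketch only -- NOT the authors' model. Synthetic data:
# each paper has a standardized review score and a standardized
# author-connectedness measure; publication is a Bernoulli outcome.
rng = np.random.default_rng(0)
n = 200
score = rng.normal(size=n)
connect = rng.normal(size=n)
# Hypothetical "true" effects: reviews matter most, but connectedness helps too
logit = 1.2 * score + 0.6 * connect
published = rng.random(n) < 1 / (1 + np.exp(-logit))

# Grid approximation of the posterior over the two coefficients,
# with independent standard-normal priors
b1 = np.linspace(-3, 3, 61)   # coefficient on review score
b2 = np.linspace(-3, 3, 61)   # coefficient on connectedness
B1, B2 = np.meshgrid(b1, b2, indexing="ij")

log_post = -0.5 * (B1**2 + B2**2)          # log prior
for x1, x2, y in zip(score, connect, published):
    eta = B1 * x1 + B2 * x2
    log_post += eta * y - np.log1p(np.exp(eta))  # Bernoulli log-likelihood

post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mean effect of each predictor on the log-odds of publication
mean_b1 = (B1 * post).sum()
mean_b2 = (B2 * post).sum()
print(f"posterior mean effect of review score:  {mean_b1:.2f}")
print(f"posterior mean effect of connectedness: {mean_b2:.2f}")
```

In this toy setup, a positive posterior mean on the connectedness coefficient, after conditioning on the review score, is the kind of signal the study describes: author reputation predicting publication beyond what the reviews alone would justify.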
They write that some “editorial decisions were not due to the quality of submissions – resulting in more positive reviews – but to the reputation of authors.”
The blinded peer review process is intended to reduce publication bias. If reviewers do not know whose papers they are grading, then they cannot base their decisions on whether they have heard of the person or whether their own careers might be impacted by criticizing someone important in the field. However, it is the journal editors, not the reviewers, who have the final say in publication.
Bravo and the other researchers write that their findings suggest “that editors interpreted referee recommendations differently depending on the author’s reputation in the scientific community.”
They found that amongst authors receiving negative reviews—which should indicate poorly done studies or poorly written papers—well-known authors were more likely to be published anyway. Authors who were less well-known tended to have their papers rejected based on negative reviews.
“Bias was more prominent when review scores were relatively low, which suggested that more important authors more easily escaped a presumably deserved rejection when they did not submit brilliant papers.”
Bravo and the researchers have several explanations for why this publication bias might occur. They suggest that journal editors might believe that well-known authors would be more likely to be cited by other authors—thus increasing their journal’s impact factor. Additionally, editors might have more confidence in well-known authors and believe that the reviewers are simply less knowledgeable. However, both these factors run counter to the purpose of blind peer reviews.
The current study was limited to computer science journals, but the authors suggest that the results may apply to scientific journals in general. Publication bias and author reputation, they suggest, are common pitfalls regardless of field.
Previous research has identified similar biases across fields. For instance, in a study last year, researchers found that journal editors were more likely to handle manuscripts from authors with whom they had previously collaborated. In addition, manuscripts from prior collaborators tended to move through review more quickly.
Ultimately, Bravo and the other researchers suggest that more attention needs to be paid to reducing publication bias from all sources. However, according to the authors, believing that publication can be objective is “naïve.”
Bravo, G., Farjam, M., Grimaldo, F., Birukou, A., & Squazzoni, F. (2018). Hidden connections: Network effects on editorial decisions in four computer science journals. Journal of Informetrics, 12(1), 101-112. https://doi.org/10.1016/j.joi.2017.12.002