Course corrections needed in the peer review publishing process

In early 2020, as the world grappled with the COVID-19 pandemic, scientists everywhere were scrambling to characterize the novel pathogen. Research results were generated at an astonishing rate: in the first six months of the pandemic, nearly 61,000 items had been published!

The flip side was that the flood of submissions completely overwhelmed the peer review process, especially at medical journals. In their rush to publish the next big discovery about the virus, journals apparently cut corners.

In June 2020, The Lancet retracted a widely publicized paper on hydroxychloroquine as a treatment for COVID-19. Then the NEJM, arguably the most prestigious medical journal on the planet, followed suit and retracted an article describing the cardiovascular effects of COVID-19 in people with hypertension who were being treated with a class of drugs called ACE inhibitors. Both articles shared lead authors and were based on a completely fraudulent dataset. How could such scientific fraud go unnoticed by the editors and reviewers of the two most prestigious medical journals on the planet?
Publishing an article in the NEJM or The Lancet is the holy grail for most medical professionals. Retractions at such a crucial time have shaken the medical community and society at large. Trust in medical research has been fractured, perhaps irreparably.

With the medical community reeling from this disaster and acrimonious debates raging over the reliability of published medical research, Dr. Venkatesh Madhugiri decided to turn his attention to the issue. Dr. Madhugiri is an academic neurosurgeon and clinician-scientist at the National Institute of Mental Health and Neurosciences (NIMHANS), India's premier neuroscience institute. He questioned the veracity of published clinical research as a whole and, more specifically, in the context of his own specialty.

“Retractions are not a new phenomenon, of course. But the process of reporting a suspicious article, investigating the data concerns, and retracting the article if the suspicions are confirmed usually takes months or even years. With the COVID papers, the data fraud was detected in record time. This may have been because the entire medical community was not only publishing but also reviewing COVID papers on a war footing. But what happens in ‘peacetime’, when fraudulent papers could potentially go unnoticed?” he asks.

Retractions

High-impact journals (like The Lancet and the NEJM) are considered the gold standard of medical publishing and should, in theory, have very low retraction rates. Dr. Madhugiri and his team decided to compare retractions in neurosurgery journals with those in high-impact medical journals. They also included anesthesiology journals in the comparison.

“Anesthesiology as a specialty has a reputation for having the highest number of retractions of any clinical field, thanks to the malpractice of a few researchers. For example, Dr. Yoshitaka Fujii holds the record for the greatest number of retractions by a single author across all scientific disciplines – a whopping 182 fraudulent papers,” noted Dr. Amrutha Bindu Nagella, assistant professor of anesthesiology at the Sapthagiri Institute of Medical Sciences in Bangalore and a collaborator of Dr. Madhugiri.

The anesthesiology journals had a retraction rate of 2.6 retractions per 1,000 published articles, much higher than that of the other groups: high-impact journals had a retraction rate of 0.75 per 1,000, and neurosurgery journals 0.66 per 1,000. Clearly, different disciplines retract articles at different rates. But many questions remained unanswered. Why were these flawed articles retracted in the first place, and how were they cited and disseminated in the wider medical community?
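Before turning to those questions, a quick back-of-the-envelope sketch of what these per-1,000 rates mean. The counts below are hypothetical, chosen only so that the resulting rates echo the figures quoted above; they are not the study's actual data.

```python
# Illustrative sketch: a retraction rate expressed per 1,000 published
# articles. All counts here are made up for demonstration purposes.

def retraction_rate_per_1000(retractions: int, articles_published: int) -> float:
    """Number of retractions per 1,000 published articles."""
    return retractions / articles_published * 1000

# Hypothetical counts chosen to reproduce the rates quoted in the text.
print(retraction_rate_per_1000(26, 10_000))   # anesthesiology journals -> 2.6
print(retraction_rate_per_1000(75, 100_000))  # high-impact journals    -> 0.75
print(retraction_rate_per_1000(66, 100_000))  # neurosurgery journals   -> 0.66
```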

The team then analyzed retractions in neurosurgery in more detail – 191 retracted articles published across 108 journals. They categorized the retracted articles by the reason for retraction and analyzed their citation trends. Their findings were alarming, to put it mildly: author misconduct or data fraud accounted for more than two-thirds of all retractions in neurosurgery.

The time between publication and retraction of flawed articles could stretch to 21 years! Although the median time to retraction was 16.2 months, it varied significantly between groups; high-impact journals, for example, took longer to retract flawed articles. Worryingly, the team also found that the time to retraction was longer when articles were retracted for misconduct or data fraud than for honest errors.

Even more concerning was the finding that the articles continued to be read and widely circulated even after retraction – half of all citations received by the retracted articles accrued after the retraction! This is deeply problematic, as simply retracting flawed papers does not seem to erase their effects.

“The exponential explosion of medical literature is bound to create ever-increasing opportunities for error, data fabrication, and outright fraud. The sad reality is that many of these will remain perfect crimes. Niche journals seem particularly vulnerable, but the damage to public health and to trust in science is greater when the rot occurs in high-visibility journals,” said Dr. Gopalakrishnan, professor of neurosurgery at the prestigious Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), emphasizing the need for regular curation of published research.

The tip of the iceberg

Pre-publication peer review is the gatekeeper of quality and truthfulness. But how effective is this process? Remember that the COVID-related retractions happened at high-impact journals during a period of heightened scrutiny. If fraud could slip past reviewers even there, what happens to articles in low-impact, low-visibility journals?

“Peer review is the entry checkpoint that judges the relevance and internal consistency of a research paper. It is simply not equipped to detect data errors or statistical manipulation unless reviewers are given the underlying experimental data. A backlog of papers awaiting peer review (as during the pandemic) can completely overwhelm this process. The quality checkpoint then moves further downstream – to after publication,” said Dr. Subeikshanan Venkatesan, a member of the research team and a contributing author of the study.

In their third and final paper, the authors sought to delve deeper into the world of flawed papers. Using a formula described by Cokol et al., they estimated that the proportion of potentially retractable articles was about 1% of all articles published in the neurosurgery and high-impact journal groups. That is still a very large number, considering that the total count of published articles runs into the millions.

Dr. Madhugiri and his team had to think outside the box to make sense of this murky world of fraudulent publishing. They developed two new parameters: the retraction gap and the proportion of true articles in a journal. The retraction gap is simply the proportion of compromised articles that remain unretracted; it represents the failure of post-publication review to detect fraudulent articles. The proportion of true articles, as the name suggests, is the proportion of articles in a journal presumed to contain no errors. It represents the success of the peer review process, in that these articles have stood the test of post-publication review.
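A minimal sketch of how these two parameters might be computed follows. The counts are made up; the real values come from the authors' dataset, and the numbers below are chosen only to be consistent with the figures quoted in this article (roughly 1% of articles potentially retractable, roughly 96% of those never retracted).

```python
# Illustrative sketch of the two parameters described above, using
# hypothetical counts. "Compromised" articles are those estimated to be
# potentially retractable.

def retraction_gap(compromised: int, retracted: int) -> float:
    """Fraction of compromised articles that remain unretracted:
    a measure of how much post-publication review misses."""
    return (compromised - retracted) / compromised

def true_article_proportion(total_published: int, compromised: int) -> float:
    """Fraction of published articles presumed free of serious error:
    a measure of how well pre-publication peer review works."""
    return (total_published - compromised) / total_published

# Hypothetical numbers consistent with the figures quoted in the text.
total, compromised, retracted = 100_000, 1_000, 40
print(retraction_gap(compromised, retracted))       # -> 0.96 (a 96% gap)
print(true_article_proportion(total, compromised))  # -> 0.99
```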

The retraction gap turned out to be incredibly high in both the neurosurgery and high-impact journal groups – around 96%. In other words, only a very small fraction of the flawed articles that escape detection during peer review are later identified as bad apples and retracted through post-publication review.

Neurosurgery journals had both a higher proportion of true articles and a higher retraction gap compared with high-impact journals. This implies that a strong peer review process is the mainstay of quality control in neurosurgery and similar subspecialty journals with limited audiences. The higher retraction gap implies that post-publication review was not sufficient to detect all potentially retractable articles in these niche journals. This is to be expected, since journals dedicated to a single discipline have much less visibility than high-impact journals that cater to a much wider audience.

Only a tiny fraction of potentially flawed articles is actually detected and retracted. (Graphic credits: article authors)

“While we assume enough time and resources can be spared to address these issues, the awareness and will needed to actually effect change may not be widespread,” noted Dr. Akshat Dutt, another member of the research group and one of the contributing authors of the retraction series.

Nevertheless, the authors offer useful pointers for the fight against corruption within science. Dr. Madhugiri believes that almost anyone – medical students, residents in training, academic peers, and even those outside the field – can critically inspect and analyze a research paper with a little training. The plethora of free online resources, such as the Retraction Watch database, PubPeer, etc., facilitates this academic spring cleaning.

To improve the peer review process, journals and publishers should consider appointing more subspecialty-specific associate editors and expanding the pool of available reviewers. Making published articles freely available and mandating data deposition are good first steps toward improving post-publication review, which would in turn narrow the retraction gap. Ultimately, science is by nature self-correcting, and a little nudge goes a long way.
