
Crisis in science communications -- Curt Rice


Science research: three problems that point to a communications crisis

Retractions are exploding, the replicability of research is diminishing, and our measure of journal quality is farcical, argues Curt Rice

[Image: Has the communication of scientific research reached a crisis point? Photograph: Alamy]
Traditional scientific communication directly threatens the quality of scientific research. Today's system is unreliable – or worse. Scholarly publishing regularly gives the highest status to research that is most likely to be wrong. This system determines the trajectory of a scientific career, and the longer we stick with it, the more likely it is to deteriorate further.
Think these are strong claims? They, and the problems described below, are grounded in research recently presented by Björn Brembs from the University of Regensburg and Marcus Munafò of the University of Bristol in Deep impact: unintended consequences of journal rank.

Retraction rates

Retraction is one possible response to discovering that something is wrong with a published scientific article. When it works well, journals publish a statement identifying the reason for the retraction.
Retraction rates have increased tenfold in the past decade after many years of stability. According to a recent paper in the Proceedings of the National Academy of Sciences, two-thirds of all retractions follow from scientific misconduct: fraud, duplicate publication and plagiarism.
More disturbing is the finding that the most prestigious journals have the highest rates of retraction, and that fraud and misconduct are greater sources of retraction in these journals than in less prestigious ones.
Among articles that are not retracted, there is evidence that the most visible journals publish less reliable (in other words, less replicable) research results than lower-ranking journals. This may be due to a preference among prestigious journals for more spectacular or novel findings, a phenomenon known as publication bias.

The decline effect

One cornerstone of the quality control system in science is replicability – research results should be described so carefully that others can obtain them by following the same procedure. Yet journals are generally not interested in publishing mere replications, leaving this particular quality control measure with low status, however important it may be, for example in studying potential new medicines.
When studies are reproduced, the resulting evidence is often weaker than in the original study. Brembs and Munafò review research leading them to claim that "the strength of evidence for a particular finding often declines over time."
In a fascinating piece entitled The truth wears off, the New Yorker offers an interpretation of the decline effect: the most likely explanation is an obvious one, regression to the mean. As an experiment is repeated, that is, an early statistical fluke gets cancelled out. Yet it is exactly the spectacular nature of statistical flukes that increases the odds of getting published in a high-prestige journal.
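To see how publication bias and regression to the mean can produce a decline effect on their own, consider a minimal simulation sketch. The effect size, noise level, and publication threshold below are illustrative assumptions, not figures from the studies discussed:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed real effect size
NOISE_SD = 0.5      # assumed measurement noise per study
THRESHOLD = 0.8     # only "spectacular" initial results get published

def run_study():
    """One study: the true effect plus random noise."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# Simulate many independent initial studies.
initial = [run_study() for _ in range(100_000)]

# Publication bias: only the most striking results appear in print.
published = [e for e in initial if e > THRESHOLD]

# Each published finding is then replicated once, with fresh noise.
replications = [run_study() for _ in published]

print(f"Mean published effect:   {statistics.mean(published):.2f}")
print(f"Mean replication effect: {statistics.mean(replications):.2f}")
print(f"True effect:             {TRUE_EFFECT:.2f}")
```

The published results cluster far above the true effect, because only flukes cross the threshold; the replications, free of that selection, fall back toward the true value – a "decline" that requires no misconduct at all.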

The politics of prestige

One approach to measuring the importance of a journal is to count how many times scientists cite its articles; this is the intuition behind impact factor. Publishing in journals with high impact factors feeds job offers, grants, awards, and promotions. A high impact factor also enhances the popularity – and profitability – of a journal, and journal editors and publishers work hard to increase them, primarily by trying to publish what they believe will be the most important papers.
However, impact factor can also be illegitimately manipulated. For example, the actual calculation of impact factor involves dividing the total number of citations in recent years by the number of articles published in the journal in the same period. But what is an article? Do editorials count? What about reviews, replies or comments?
By negotiating to exclude some pieces from the denominator in this calculation, publishers can increase the impact factor of their journals. In 'The impact factor game', the editors of the peer-reviewed open access journal PLoS Medicine describe the negotiations determining their impact factor. Their impact factor could have been anywhere from 4 to 11; an impact factor in the 30s is extremely high, while most journals are under 1. In other words, 4 to 11 is a significant range. This process led the editors to "conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive".
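A back-of-the-envelope calculation shows how sensitive the metric is to that negotiation. The sketch below uses the standard two-year definition of impact factor; the citation and item counts are invented purely to reproduce a 4-to-11 spread like the one PLoS Medicine reported:

```python
# Two-year impact factor: citations received this year to items
# published in the previous two years, divided by the number of
# "citable items" published in those two years.
# All figures below are invented for illustration.

citations = 2200               # citations in year Y to years Y-1 and Y-2

research_articles = 200        # unambiguously citable items
editorials_and_comments = 350  # items whose status is negotiable

# If everything counts in the denominator:
if_broad = citations / (research_articles + editorials_and_comments)

# If editorials, replies and comments are excluded:
if_narrow = citations / research_articles

print(f"Impact factor, broad denominator:  {if_broad:.1f}")   # 4.0
print(f"Impact factor, narrow denominator: {if_narrow:.1f}")  # 11.0
```

The same journal and the same citations yield an impact factor anywhere from 4 to 11, depending entirely on what counts as an "article".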

A crisis for science?

I believe the problems discussed here are a crisis for science and the institutions that fund and carry out research. We have a system for communicating results in which the need for retraction is exploding, the replicability of research is diminishing, and the most standard measure of journal quality is becoming a farce. Indeed, the ranking of journals by impact factor is at the heart of all three of these problems. Brembs and Munafò conclude that the system is so broken it should be abandoned.
Getting past this crisis will require both systemic and cultural changes. Citations of individual articles can be a good indicator of quality, but the excellence of individual articles does not correlate with the impact factor of the journals in which they are published. Once we have convinced ourselves of that, we must face the consequences it has for the evaluation processes essential to the construction of careers in science, and we must push forward nascent alternatives such as Google Scholar.
Politicians have a legitimate need to impose accountability, and while the ease of counting – something, anything – makes it tempting for them to infer quality from quantity, it doesn't take much reflection to realize that this is a stillborn strategy. As long as we believe that research represents one of the few true hopes for moving society forward, then we have to face this crisis. It will be challenging, but there is no other choice.
Curt Rice is vice president for research and development at the University of Tromsø – this is an edited version of an article first published on his blog. Follow him on Twitter @curtrice
