Qualitatively, it is relatively easy to define a good researcher as one who publishes many good papers. Quantifying that output is more complicated, however, since publications can be measured in several different ways. In the past few years, several metrics have been proposed that gauge an individual’s scientific caliber based on the quantity and quality of the individual’s peer-reviewed publications. However, most of these metrics assume that all authors contribute equally when a paper has multiple authors. In a new study, researchers argue that this assumption introduces bias, and they propose a new metric that accounts for the relative contributions of all coauthors, offering a more rational way to capture a researcher’s scientific impact.
The researchers, Jonathan Stallings et al., have published their paper “Determining scientific impact using a collaboration index” in a recent issue of PNAS.
“Since we all have credit cards, it goes without saying that measuring credit is important in daily life,” corresponding author Ge Wang, the Clark & Crossan Endowed Chair Professor in the Department of Biomedical Engineering at Rensselaer Polytechnic Institute in Troy, New York, told Phys.org. “How to measure intellectual credit is a hot topic, but a way has been missing to individualize scientific impact rigorously for teamwork such as a joint peer-reviewed publication. Our recent PNAS paper provides an axiomatic answer to this fundamental question.”
Currently, one of the most common measures of an individual’s scientific impact is the H-index, which reflects both a researcher’s number of publications and the number of citations per publication (a measure of the publication’s quality). Specifically, a scientist has a value h if h of their papers have at least h citations each, and their other papers are less frequently cited. The H-index does not account for the possibility that some collaborators may have contributed more than others on a paper. There are also situations where the H-index falls short. For example, when a researcher has only a few publications but they are highly cited, the researcher’s h value is capped by the small number of publications, regardless of their high quality.
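The H-index definition above is simple enough to sketch in a few lines of code (a minimal illustration, not code from the study; the function name and sample citation counts are hypothetical):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Sort citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # The paper at position `rank` needs at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher with citation counts [10, 8, 5, 4, 3] has h = 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))   # → 4

# The shortcoming noted above: three papers with 100+ citations
# each still yield only h = 3, capped by the publication count.
print(h_index([150, 120, 100]))    # → 3
```

Note how the second example illustrates the cap: no matter how heavily those three papers are cited, h cannot exceed the number of publications.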
Read more at: Phys.org