Scientific civility and academic performance
Posted on: 15 May 2024
Preprint posted on 5 January 2024
Categories: scientific communication and education
Background
Scientific research is extremely competitive, with some scientists comparing their career advancement to that of an Olympic athlete or a professional artist. From a pool of thousands, only a few make it to the top. It is therefore quite important to ensure we are measuring scientific contribution, productivity and excellence in the best possible way. Although established and commonly used metrics for researcher and journal assessment exist, scientists are striving for more holistic ways to evaluate their careers, publications and other scientific contributions.
Established metrics are used for different purposes: to evaluate candidates applying for research group leader or professorship positions, to score researchers applying for grant funding, or to award research prizes. The Hirsch index (H-index), the most commonly used researcher metric, is the largest number X such that an author has X publications that have each been cited at least X times. Therefore, the more you publish and the more citations you accumulate, the higher your H-index. The Journal Impact Factor (JIF), the most commonly used journal metric, measures the mean number of citations received in a given year by the articles a journal published over the preceding two (or five) years. While these metrics are often central to researcher evaluation, they have been criticized for being too simple or quantity-focused to capture the different aspects of a researcher's contribution to a scientific field or to wider society. For instance, 28.5% of tweets mentioning the H-index over a 15-year period expressed negative views about its use [1]. Criticisms range from its bias towards established researchers and its sensitivity to self-citation to its focus on the number, rather than the impact, of citations. And while the JIF was designed to measure journal impact, it has also, controversially, been used to evaluate the scientific impact of individual authors [2]. Attempts to reform scientific evaluation have been formalized in the Declaration on Research Assessment, or DORA (https://sfdora.org/), signed by over 20,000 individuals and over 3,000 institutions, demonstrating worldwide support for rethinking the ways in which we value and evaluate scientists and their contributions.
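To make the H-index definition concrete, here is a minimal Python sketch of the calculation; the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Largest h such that the author has h papers each cited at least h times."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Invented citation counts for one author's six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (the 4th-ranked paper has only 3 < 4 citations)
```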
Figure 1. A measure of scientific civility. The C score, a newly developed metric for quantifying researchers' collaborative efforts, measures the size, persistence, geographic distribution and emergence of their peer networks.
Key findings
What does the C score stand for?
While scientists are traditionally seen as lone geniuses, current science depends far more on teamwork and the sharing of knowledge and expertise than many imagine. Aiming to evaluate researchers' collaborative efforts, Camacho and colleagues have developed a measure of "scientific civility" termed the C score, where "C" stands for Civility, Cooperation, Collaboration, or a Combination of those. The metric, which ranges from 0 to 1, includes a quantitative component and a teamwork component; the teamwork component is further divided into geographic (global reach), temporal (longevity) and emergent (new collaborations) parts. Altogether, the C score measures the number of publications, as well as how global, long-lasting and current a researcher's collaborations are.
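The preprint's exact formula is not given in this highlight, so the snippet below is only a hypothetical sketch: it assumes each component has already been normalized to the range 0 to 1 and simply averages them. The function name and the equal weighting are my illustration, not the authors' actual method:

```python
def c_score(quantity, geographic, temporal, emergent):
    """Hypothetical sketch: average four components, each assumed to be
    pre-normalized to [0, 1], into a single score in [0, 1].
    The preprint's actual weighting and normalization may differ."""
    components = (quantity, geographic, temporal, emergent)
    assert all(0.0 <= c <= 1.0 for c in components)
    return sum(components) / len(components)

# A prolific researcher whose collaborations are mostly local and short-lived:
print(c_score(quantity=0.9, geographic=0.2, temporal=0.3, emergent=0.4))  # 0.45
```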
How does it compare to the H-index?
To quantitatively assess the newly developed score, the preprint authors first compared the C scores and H-indices of tenure-track professors from the top five Schools of Public Health in the US. The two metrics were strongly positively correlated (ρ = 0.7640). Rather than being associated with career length, the C score was positively associated with career rank (Assistant, Associate or Full Professor). When comparing academics of equal career rank, the C score revealed no significant gender differences, suggesting it could be used for career assessment without introducing gender bias. Interestingly, unlike the H-index, which is cumulative, a researcher's C score exhibited dynamic behavior: it could rise, fall and stagnate over the course of a career, reflecting differing engagement in collaborations at different career stages.
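The reported association is a rank correlation; for readers who want to reproduce this kind of comparison, it is typically computed with SciPy as below. The six data points are made up for illustration:

```python
from scipy.stats import spearmanr

# Invented H-indices and C scores for six researchers:
h_indices = [12, 30, 7, 45, 22, 16]
c_scores = [0.35, 0.72, 0.20, 0.81, 0.48, 0.55]

rho, p_value = spearmanr(h_indices, c_scores)
print(f"Spearman rho = {rho:.4f} (p = {p_value:.4f})")
```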
Collaboration bolsters the impact of underprivileged scientists
Access to resources is a strong determinant of scientific productivity and impact [3]. To evaluate whether collaboration (C score) fosters the scientific productivity and impact (H-index) of researchers from underprivileged backgrounds, Camacho and colleagues compared the two metrics for a panel of scientists from middle-income countries elected to the American Academy of Microbiology. They found a strong positive correlation (ρ = 0.8084), implying that collaboration can boost the careers of scientists with limited access to resources.
Gender differences exist, but do they matter?
They next stratified C scores into three classes: C-I (< 0.3), C-II (0.3–0.8) and C-III (> 0.8). The number of unique first and last co-authors (FL authors) in a researcher's network correlated strongly and positively with the C score. The authors then asked whether there are gender disparities in the number of unique FL authors, and whether these affect productivity. Interestingly, they found that females within a given C-class tend to publish with fewer unique FL authors than males, without this negatively impacting their productivity, as measured by the number of publications. Additionally, as a scientist's C-class rose, so did the size of their research teams and the number of countries to which they had established research connections.
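For reference, the class bands translate directly into a small lookup. Note that the highlight does not say which class the boundary values 0.3 and 0.8 fall into, so the boundary handling below is my assumption:

```python
def c_class(score):
    """Map a C score (0-1) to the bands used in the preprint:
    C-I (< 0.3), C-II (0.3-0.8), C-III (> 0.8).
    Assignment at exactly 0.3 and 0.8 is an assumption."""
    if score < 0.3:
        return "C-I"
    if score <= 0.8:
        return "C-II"
    return "C-III"

print([c_class(s) for s in (0.15, 0.45, 0.92)])  # ['C-I', 'C-II', 'C-III']
```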
Tweaking reward systems – a key to better science?
Altogether, the authors point out that the current complexities of the research landscape necessitate cross-disciplinary solutions and that researchers benefit from synergies reflecting diverse knowledge and skill sets. At the same time, they acknowledge that the productivity-focused reward system results in an unhealthy, hypercompetitive culture, creating incentives to cheat and undermining scientific integrity [4]. Assuming that reward systems have the power to strongly affect human behavior, they advise promotion and award committees to take an integrative approach and reward collaboration, collegiality and openness.
C score: it reveals a lot, but not everything
The authors acknowledge the limitations of the C score: it is not adjusted for appointment year, it cannot distinguish collaborations with positive motivations from those with negative ones, nor does it allocate fractional credit in multi-author papers, a drawback the H-index has also been criticized for in the past [5]. Additionally, the C score cannot recognize "salami" publications [6], nor does it take publication impact (citations) into account. Nevertheless, the authors highlight its many strengths and encourage its use in the integrative assessment of an individual's academic performance.
What I like about the preprint / why I think it is important
Aside from drawing attention to the overwhelmingly positive impacts that collaboration has on both researcher careers and scientific progress in general, this preprint opens the door to a wider discussion on career assessment in science and beyond. I find it important that scientists, especially early-career researchers, not only perform and analyze experiments, but are also involved in creating and rethinking science policy. I also hope that this kind of research inspires decision-makers to consider the pitfalls of our current approaches to measuring scientific success and to think of ways to improve them.
Bibliography
1. Thelwall, M. & Kousha, K. Researchers' attitudes towards the h-index on Twitter 2007–2020: criticism and acceptance. Scientometrics 126, 5361–5368 (2021).
2. Scully, C. & Lodge, H. Impact factors and their significance; overrated or misused? British Dental Journal 198, 391–393 (2005).
3. Ramesh Babu, A. & Singh, Y. P. Determinants of research productivity. Scientometrics 43, 309–329 (1998).
4. Sanderson, K. Science's fake-paper problem: high-profile effort will tackle paper mills. Nature 626, 17–18 (2024).
5. Koltun, V. & Hafner, D. The h-index is no longer an effective correlate of scientific reputation. PLoS One 16, e0253397 (2021).
6. Ding, D., Nguyen, B., Gebel, K., Bauman, A. & Bero, L. Duplicate and salami publication: a prevalence study of journal policies. Int. J. Epidemiol. 49, 281–288 (2020).
doi: Pending