
Scientific civility and academic performance

Emma Camacho, Quigly Dragotakes, Isabella Hartshorn, Arturo Casadevall, Daniel L Buccino

Posted on: 15 May 2024

Preprint posted on 5 January 2024

A new researcher metric takes collaborative networks into account

Selected by Ivan Mikicic

Background

Scientific research is extremely competitive, with some scientists comparing their career advancement to that of an Olympic athlete or a professional artist. From a pool of thousands, only a few make it to the top. It is therefore important that we measure scientific contribution, productivity and excellence in the best possible way. Although established and commonly used metrics for researcher and journal assessment exist, scientists are striving for more holistic ways to evaluate their careers, publications and other scientific contributions.

Established metrics serve different purposes: evaluating candidates for research group leader or professorship positions, scoring researchers applying for grant funding, or awarding research prizes. The Hirsch index (H-index), the most commonly used researcher metric, is the largest number X of an author's publications that have each been cited at least X times. Therefore, the more you publish, and the more citations you accrue, the higher the H-index. The Journal Impact Factor (JIF), the most commonly used journal metric, measures the mean number of citations received in a given year by the articles a journal published in the preceding two (or five) years. While these metrics are often central to researcher evaluation, they have been criticized as too simple or too quantity-focused to capture the different ways researchers contribute to a scientific field or to wider society. For instance, 28.5% of tweets mentioning the H-index over a 15-year period expressed negative views about its use [1]. The criticism ranges from its bias towards established researchers and its sensitivity to self-citation to its focus on the number, rather than the impact, of citations. And while the JIF was designed to measure journal impact, it has also, controversially, been used to evaluate the scientific impact of individual authors [2]. Attempts to reform scientific evaluation have been formalized in the Declaration on Research Assessment, or DORA (https://sfdora.org/), signed by over 20,000 individuals and over 3,000 institutions, demonstrating worldwide support for rethinking the ways in which we value and evaluate scientists and their contributions.
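
To make these two definitions concrete, here is a minimal sketch in Python. The function names and sample numbers are illustrative, not taken from the preprint or any bibliometric library.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least
    h citations each (Hirsch's definition)."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def journal_impact_factor(citations_this_year, items_prev_two_years):
    """2-year JIF: mean citations received in a given year by the
    items the journal published in the preceding two years."""
    return citations_this_year / items_prev_two_years

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4:
print(h_index([10, 8, 5, 4, 3]))        # 4
# 600 citations to 200 citable items -> JIF of 3.0:
print(journal_impact_factor(600, 200))  # 3.0
```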


Figure 1. A measure of scientific civility. The C score, a newly developed metric for scoring researchers' collaborative efforts, measures the size, persistence, geographic distribution and emergence of their peer networks.

Key findings

What does the C score stand for?

While scientists are traditionally seen as lone geniuses, modern science depends far more on teamwork and the sharing of knowledge and expertise than many imagine. Aiming to evaluate researchers' collaborative efforts, Camacho and colleagues have developed a measure of "scientific civility" termed the C score, where "C" stands for Civility, Cooperation, Collaboration, or a Combination of those. The metric ranges from 0 to 1 and combines a quantitative (productivity) component with a teamwork component; teamwork is in turn divided into geographic (global reach), temporal (longevity) and emergent (new collaborations) subcomponents. Altogether, the C score reflects the number of publications as well as how global, long-lasting and current a researcher's collaborations are.
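
The preprint specifies how these components are weighted and normalized; that formula is not reproduced in this highlight, so the sketch below is only a toy illustration of how a 0-to-1 composite of one productivity term and three teamwork terms might be assembled. The component names and the equal weighting are my assumptions, not the authors' actual C score.

```python
def c_score_sketch(productivity, geographic, temporal, emergent):
    """Toy composite of the four components described above.
    Inputs are assumed pre-normalized to [0, 1]; equal weighting
    is an illustrative choice, NOT the preprint's actual formula."""
    components = (productivity, geographic, temporal, emergent)
    return sum(components) / len(components)

# High output but few international or new collaborations
# lands mid-range on this toy version:
print(round(c_score_sketch(0.9, 0.2, 0.6, 0.1), 2))  # 0.45
```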

How does it compare to the H-index?

To quantitatively assess the newly developed score, the preprint authors first compared the C scores and H-indices of tenure-track professors from the top five Schools of Public Health in the US. The two metrics were strongly positively correlated (ρ = 0.7640). Rather than being associated with career length, the C score was positively associated with career rank (Assistant, Associate or Full Professor). When comparing academics of equal career rank, the C score revealed no significant gender differences, suggesting it could be used for career assessment without introducing a gender bias. Interestingly, unlike the cumulative H-index, a researcher's C score exhibited dynamic behavior: it could rise, fall and stagnate throughout a career, reflecting differing engagement in collaborations at different career stages.
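
For readers who want to run this kind of comparison on their own data, the rank correlation reported above (Spearman's ρ) is a one-liner with scipy. The paired metric values below are invented for illustration.

```python
from scipy.stats import spearmanr

# Hypothetical H-index / C score pairs for six researchers
h_indices = [12, 30, 7, 45, 22, 15]
c_scores = [0.35, 0.72, 0.20, 0.55, 0.88, 0.41]

rho, p_value = spearmanr(h_indices, c_scores)
print(f"Spearman rho = {rho:.4f} (p = {p_value:.4f})")
```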

Collaboration bolsters impact of under-privileged scientists

Access to resources is a strong determinant of scientific productivity and impact [3]. To evaluate whether collaboration (C score) fosters the scientific productivity and impact (H-index) of researchers from under-privileged backgrounds, Camacho and colleagues compared the two metrics for a panel of scientists from middle-income countries elected to the American Academy of Microbiology. They found a strong positive correlation (ρ = 0.8084), suggesting that collaboration can boost the careers of scientists with limited access to resources.

Gender differences exist, but do they matter?

The authors next stratified C scores into three classes: C-I (< 0.3), C-II (0.3-0.8) and C-III (> 0.8). The number of unique first and last co-authors (FL authors) in a researcher's network correlated strongly and positively with the C score. The authors wondered whether there are gender disparities in the number of unique FL authors, and whether these affect productivity. Interestingly, they found that females within a given C-class tend to publish with fewer unique FL authors than males, without this negatively impacting their productivity as measured by the number of publications. Additionally, as a scientist's C-class rose, so did the size of their research teams and the number of countries with which they had established research connections.
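
Since the class boundaries are given explicitly, mapping a score to its class is a direct translation. The function name is mine, and I assume the boundary values 0.3 and 0.8 fall in C-II.

```python
def c_class(score):
    """Map a C score to the preprint's three classes:
    C-I (< 0.3), C-II (0.3-0.8), C-III (> 0.8)."""
    if score < 0.3:
        return "C-I"
    if score <= 0.8:
        return "C-II"
    return "C-III"

print([c_class(s) for s in (0.12, 0.45, 0.91)])
# ['C-I', 'C-II', 'C-III']
```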

Tweaking reward systems – a key to better science?

Altogether, the authors point out that the current complexities of the research landscape necessitate cross-disciplinary solutions and that researchers benefit from synergies reflecting diverse knowledge and skill sets. On the other hand, they acknowledge that the productivity-focused reward system results in an unhealthy hypercompetitive culture, creating incentives to cheat and undermining scientific integrity [4]. Assuming that reward systems have the power to strongly affect human behavior, they advise promotion and award committees to take an integrative approach and reward collaboration, collegiality and openness.

C score: it reveals a lot, but not everything

The authors acknowledge the limitations of the C score: it is not adjusted for appointment year, it cannot distinguish collaborations with positive motivations from those with negative ones, nor does it allocate fractional credit on multi-author papers, a drawback for which the H-index has also been criticized in the past [5]. Additionally, the C score cannot recognize "salami" publications [6], nor does it take publication impact (citations) into account. Nevertheless, the authors highlight its multiple strengths and encourage its use in the integrative assessment of an individual's academic performance.

What I like about the preprint / why I think it is important

Aside from drawing attention to the overwhelmingly positive impact that collaboration has on both researcher careers and scientific progress in general, this preprint opens the door to a wider discussion on career assessment in science and beyond. I find it important that scientists, especially early-career researchers, not only perform and analyze experiments, but are also involved in creating and rethinking science policy. I also hope that this kind of research inspires decision makers to consider the pitfalls of our current approaches to measuring scientific success and to think of ways to improve them.

Bibliography

  1. Thelwall, M. & Kousha, K. Researchers’ attitudes towards the h-index on Twitter 2007–2020: criticism and acceptance. Scientometrics 126, 5361–5368 (2021).
  2. Scully, C. & Lodge, H. Impact factors and their significance; overrated or misused? British Dental Journal 198, 391–393 (2005).
  3. Ramesh Babu, A. & Singh, Y. P. Determinants of research productivity. Scientometrics 43, 309–329 (1998).
  4. Sanderson, K. Science’s fake-paper problem: high-profile effort will tackle paper mills. Nature 626, 17–18 (2024).
  5. Koltun, V. & Hafner, D. The h-index is no longer an effective correlate of scientific reputation. PLoS One 16, e0253397 (2021).
  6. Ding, D., Nguyen, B., Gebel, K., Bauman, A. & Bero, L. Duplicate and salami publication: a prevalence study of journal policies. Int. J. Epidemiol. 49, 281–288 (2020).

Tags: academic age, academic publishing, author-level metrics, bibliometrics, citation impact, collaborative writing, h-index, impact factor, journal ranking, metascience, network science, scientific collaboration network, scientometrics

doi: Pending


Author's response

Emma Camacho shared

Q1: Assessment of research productivity is often criticized for emphasizing publication quantity over quality. What influenced your decision to include a number-of-publications component ('Research Output per Year', ROY) in the C score, but not a citation-count component? I suppose this was to ensure distinction from, and orthogonality with, the H-index? And since both the H-index and the C score depend on publication number, isn't a strong positive correlation between them a given?

The rationale for using ROY as a key component of scientific performance instead of the number of citations is driven by a principle of inclusion. Citations are intimately associated with the visibility of the research work, which on most occasions is obtained by publishing in high-impact journals. Publishing costs in top-tier journals are not affordable for most scientists working outside developed countries. To put this in context, in Latin America the research budget for a lab can range between 5,000 and 40,000 USD per year, and expenses for research-grade plastic consumables, chemical reagents, and life-science reagents and kits in general are 3-5 times higher than in developed countries due to shipping and handling costs. Publishing costs in open-access journals are high: in PLOS journals, for example, they range from 1,500 to 3,000 USD, while in high-impact journals such as Nature, publishing fees exceed 10,000 USD.

Regarding your second question (since both the H-index and the C score depend on publication number, isn't a strong positive correlation between them a given?): this is not necessarily the case. A scientist's academic performance encompasses much more than making discoveries and accumulating citations; it also includes teaching, mentoring, reviewing, and serving on academic committees, among many other roles. The H-index and the C score analyze a scientist's performance from two different and complementary perspectives.

The C score, regardless of the number of citations an article receives, values all original research contributions along with the scientist's mentorship and collegiality, recognizing their ability to nurture and foster the growth of their collaborative network. The H-index, on the other hand, focuses solely on the impact of an author's work, disregarding collaborative efforts. The collaborative network (CN) component of the C score measures sustained, long-standing contributions (productivity) within a timeframe to capture the scientist's mentorship. By employing this approach, we evaluate both prolific productivity and the scientist's impact on their sphere of influence (the expansion of their collaborative network). To illustrate this concept further, two individuals with the same H-index may have different C scores due to their collaborative patterns: individuals primarily engaged in solo-authored work typically have lower C scores than those involved in multi-author collaborations.

Q2: I assume this is a common question in scientometrics, but how do you infer direction of causality? Are scientists more successful because they collaborate, or do they collaborate more because they are successful?

The C score determines academic performance. Scientific collaboration is a means to an end, and the definition of success is based on circumstantial needs. For some individuals, success may mean securing an academic position; for others, peer recognition and winning awards; and for others still, uplifting each other's abilities to create a better society. Developing collaborative efforts has multiple benefits, ranging from enhanced visibility and access to state-of-the-art resources to increased productivity; yet productivity, in terms of research articles, and the relevance of collaborative efforts vary across scientific fields. Regarding the direction of causality, we agree with your premise that it can be difficult to tease out whether scientists are more successful because they collaborate or collaborate more because they are successful. In fact, for many scientists, collaboration and success may be interlinked. One way to approach the direction of causality is to consider the temporal dimension: scientists must achieve some level of success first in order to collaborate. If this is the case, then collaboration becomes an amplifier of success.

Q3: Criteria for academic advancement are often based on “hard numbers”, as they are easy to measure and seemingly leave less room for assessment bias. This could lead to desirable leadership traits being disregarded, such as selflessness, empathy, cooperativity, honesty, lack of bias, inclusivity or trust. In my opinion, the C score is a step in the right direction in trying to measure these neglected factors. Do you have additional suggestions on how to improve criteria for academic advancement beyond already established and commonly used metrics?

Thanks, we also think that the C score provides an integral approach for assessing academic performance. As we mentioned earlier and in our article, other forms of scholarship, such as books for the social sciences and humanities, patents for physics, or conference proceedings for computer science, could account more accurately for ROY than research articles.

Q4: Did you consider developing an online tool which would enable scientists to easily calculate C scores?

Yes, we are working on the required steps to move along with this project.

