What you need to know about research metrics

By Michaela Zottler | 08/12/2024

Whether it is the h-index, the journal impact factor or citation counts: research metrics follow researchers throughout their careers.

Metrics definition

What are research metrics? Bibliometrics is the use of statistical methods to measure the performance of scientific publications. The various metrics that result from this are generally referred to as research metrics. It should be noted that this means quantitative methods are being used in an attempt to assess quality.

Types of metrics

There are three types of metrics, depending on the level of application: metrics at the author, journal or article level.

Author-level metrics are citation metrics that attempt to measure the impact or performance of individual researchers or groups of researchers. A well-known example of such a metric is the h-index.

Journal-level metrics attempt to measure the impact or quality of entire journals. An example of this is the Journal Impact Factor, which is based on the Web of Science database and is therefore only available for journals indexed there. A counterpart to the Journal Impact Factor is the CiteScore, which in turn only evaluates journals indexed in the Scopus database.
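To make the idea behind such journal-level figures concrete, here is a minimal Python sketch of a two-year impact factor: citations received in a given year to a journal's items from the two preceding years, divided by the number of items published in those years. This is illustrative only; the function name and data layout are invented for the example, and the official Journal Impact Factor additionally involves editorial decisions about which items count as citable.

```python
def journal_impact_factor(citations_to_year, items_in_year, year):
    """Simplified two-year impact factor for `year`.

    citations_to_year[y] -- citations received in `year` to the
                            journal's items published in year y
    items_in_year[y]     -- number of items the journal published in year y
    """
    cites = citations_to_year[year - 1] + citations_to_year[year - 2]
    items = items_in_year[year - 1] + items_in_year[year - 2]
    return cites / items

# Hypothetical journal: 150 citations to its 2023 items and 120 to its
# 2022 items, with 60 and 50 items published in those years.
jif = journal_impact_factor({2023: 150, 2022: 120}, {2023: 60, 2022: 50}, 2024)
print(f"{jif:.2f}")  # 2.45
```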

Metrics at the article level attempt to measure the impact and also the usage of individual articles, for example via citation and download counts.

Use cases for metrics

Research metrics are used in a variety of scenarios: for the promotion of researchers, for the distribution of research funding, or for university rankings. Researchers themselves also use them actively, for example to decide which journal is best suited to publishing their research; journals with a high impact factor are often chosen. Whether this is a relevant criterion for choosing a journal varies from discipline to discipline. However, the pressure to publish in a journal with a high impact factor is increasingly viewed critically.

Disadvantages and criticism

As mentioned at the beginning, research metrics attempt to assess the quality of research performance using quantitative methods. However, this does not guarantee that quality is adequately assessed in its entirety.

The h-index is an example of this. It combines the number of publications with their citations: an h-index of 10 means that 10 of the researcher's publications have received at least 10 citations each. At the beginning of a research career, the h-index will therefore necessarily be low, since it can never exceed the number of articles published, even if those articles are of high quality. More senior researchers with more publications can accordingly reach a higher h-index.
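The definition translates directly into a few lines of code. The following Python sketch (an illustration, not any database's official implementation) sorts the citation counts in descending order and finds the largest h for which the h-th paper still has at least h citations:

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 2 times: four papers have at least
# 4 citations each, but there are no five papers with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 2]))  # 4
```

Note how the example reflects the point above: a researcher with only five papers can never have an h-index above 5, however often those papers are cited.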

Moreover, the metrics are not comparable across disciplines. Each field of research has its own publication culture and therefore its own citation behaviour. In addition, not only do different disciplines cite differently, but different document types are also cited at different rates: reviews, for example, are cited more often than primary studies.

In some cases, inappropriate metrics are used for evaluation, for example when the impact factor of a journal is used to evaluate individual researchers or articles. This figure only provides an assessment of the journal as a whole; it says nothing about the individual articles or the authors who publish in it.

Another problem is excessive self-citation, which can distort the metrics. When whole groups of researchers cite each other excessively, with the sole aim of achieving higher citation figures, this is known as a citation cartel.
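To illustrate how much self-citation can distort a metric, the following sketch recomputes the h-index after filtering out citations that share an author with the cited paper. This is deliberately simplified: treating any shared author as a self-citation is only one possible definition, and the data structure here is hypothetical.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def external_citation_counts(papers):
    """Per-paper citation counts, ignoring citations that share an
    author with the cited paper (one simple notion of self-citation)."""
    counts = []
    for paper in papers:
        own = set(paper["authors"])
        counts.append(sum(1 for citing in paper["citing_author_sets"]
                          if not own & set(citing)))
    return counts

# Hypothetical researcher "A": the raw counts give h = 2, but once
# citations involving "A" are excluded, only one external citation
# remains and the h-index drops to 1.
papers = [
    {"authors": ["A"], "citing_author_sets": [["A"], ["A", "B"], ["C"]]},
    {"authors": ["A"], "citing_author_sets": [["A"], ["A"]]},
]
print(h_index([3, 2]))                            # 2
print(h_index(external_citation_counts(papers)))  # 1
```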

Responsible use of metrics

Because of these weaknesses, it is important to use research metrics responsibly. This may mean using not just one metric but several, to get a more complete picture. In addition, it is advisable to involve experts in the evaluation, i.e. a qualitative assessment, in order to capture research performance more holistically.

There are also initiatives that advocate the responsible use of metrics in science, such as the Leiden Manifesto or DORA (San Francisco Declaration on Research Assessment). DORA's central demand is the abandonment of the Journal Impact Factor and other journal-level metrics as quality indicators.

In addition, DORA contains recommendations for researchers and all other actors in the scientific publication process who want to contribute to a change in research assessment. These include, for example, using content-based assessments in funding, hiring, tenure and promotion committees instead of only looking at figures. For recommendation and review letters, DORA recommends using a range of metrics to demonstrate an article's importance, rather than relying on just one.

Links

Leiden Manifesto
DORA

Michaela Zottler is a librarian at TU Graz. She helps researchers and students find literature and is happy to answer questions about scientific publications.