
Bibliometrics is the quantitative study of publications and can be applied to any type of research output, author, or institution.

Metrics used in the academic context are usually classified as traditional or alternative metrics. The following is not an exhaustive list and covers only the most commonly used metrics. Bear in mind that all metrics have their own strengths and weaknesses; therefore, they should be used with great care (see the limitations of metrics below).

 

Basics of metrics

Traditional metrics

Traditional metrics are citation based. A citation occurs when an author formally acknowledges another work in their paper. Citation-based metrics count the number of times publications have been cited by other papers indexed in a specific database.

  • Article-level metrics: The aggregate number of citations that a research output has received, usually called the citation count of the research output. 
  • Author-level metrics: Metrics that aim to measure both an author’s productivity (number of research outputs) and research impact (citation count). The h-index is a commonly used example of this type of metric. 
  • Journal-level metrics: Metrics that measure the average number of citations to articles published in a journal within a specific period. Many journal-level metrics are calculated by different databases; the most prominent is the Journal Impact Factor (JIF), which is part of the Journal Citation Reports. Please note that journal-level metrics are not recommended for demonstrating the quality of an individual paper.
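The h-index mentioned above has a simple definition: it is the largest number h such that the author has at least h papers cited at least h times each. A minimal sketch in Python (the function name and citation counts are illustrative, not tied to any particular database):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    h = 0
    # Walk the citation counts from most to least cited; paper i (1-based)
    # contributes to the h-index only if it has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3] has an h-index of 4:
# four papers each have at least 4 citations, but not five papers with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that two authors with very different citation profiles can share the same h-index, which is one reason the limitations discussed below matter.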
Alternative metrics

Alternative metrics (aka altmetrics) is an umbrella term for all metrics that are not based on citations. With the development of digital platforms and data collection technologies, new ways to share research outputs and track their usage are now available. For example, the view/download count of a publication in an institutional repository, or the mention of a journal article on a social media platform or in a policy document, can now be traced. These indicators can be used to understand the impact of publications beyond traditional metrics by looking at applied, societal, economic or governmental uses of research outputs. 

Importance of metrics
  • Bibliometrics can be used to measure the impact of publications of an individual researcher, department, or institution (see ‘Limitations of metrics’ for caveats) on academia, society, economy, or policy.  
  • Metrics are useful for narrowing down the literature when conducting a literature review.
  • Metrics are used in national or international research assessment exercises such as the Research Excellence Framework (REF) and university rankings.
  • Funding, promotion or research position applications may require the inclusion of bibliometrics (in a responsible way). Therefore, metrics are used to allocate resources.
  • Metrics can offer valuable insights into emerging topics both in academia and professional practice. 
  • They are also important in analysing publication, citation, and collaboration trends within academic disciplines. Bibliometrics are therefore often used to produce research in their own right. 
  • Metrics can be also valuable in understanding the effects of various scholarly communication practices, such as the relationship between open access and citations (e.g. some studies show that open access articles are likely to attract more citations than paywalled papers). 
Limitations of metrics

Although bibliometrics is extensively used in the scholarly world and offers valuable insights into publications, it also has significant limitations that may impact research and the cultivation of a better research culture. As a general rule, never fully trust indicators, always provide context when using them, and avoid relying on a single indicator to assess researchers, departments, or institutions.

  • The number of citations is counted within a database. Therefore, the citation count of a research output may differ from one database to another. 
  • Citations usually occur some time after the work is published, and in some subject fields this may take up to a few years. The length of an academic’s career can therefore make a significant difference to the number of citations. 
  • Citations are mostly derived from journal articles or conference proceedings. Therefore, citations from books, chapters, or other types of works may not be counted.
  • Publication and citation practices usually differ from one discipline to another. For example, articles in medicine tend to be published and cited more frequently than papers in the arts and humanities. Therefore, it is important to refrain from making comparisons between different academic fields.
  • The author’s background and gender may have an impact on citation count. Some studies of citation patterns suggest that gender and racial biases exist in citation networks.    
  • Self-citations can be misused to inflate the number of citations. 
  • Non-English research outputs may not be picked up in the citation count. Research is performed across the world in all languages, yet the dominant language for research publishing is English. This can exclude non-English language research from being made discoverable and cited by other readers.  
  • Not all citations signify positive impact. An article can be highly cited because it is controversial, satirical or its claims are being challenged. 
  • Citation practice can vary from one publication type to another. Review articles are usually cited more than original research papers, so citations must never be used to compare articles of different types. Because of the abovementioned limitations of citations, the h-index should not be used to compare researchers or to demonstrate impact; a higher h-index does not signify the quality of individual research papers. 
  • Journal-based metrics (e.g. the Journal Impact Factor) have significant weaknesses and should never be used to signify impact, since they do not reflect the quality of an individual research output. Their calculation is also flawed: a journal’s impact factor is mostly driven by a very few highly cited papers. 
  • Academic work is multi-dimensional, covering a broad range of topics such as research, teaching, administration etc. Therefore, there is no panacea for putting all these dimensions into a single metric.
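The point about journal-based metrics can be illustrated with a small, hypothetical example. The JIF is essentially a mean: citations received in one year to items the journal published in the previous two years, divided by the number of citable items. Because citation distributions are highly skewed, that mean says little about a typical article. The citation counts below are invented purely for illustration:

```python
from statistics import mean, median

# Hypothetical citation counts for 10 articles a journal published
# within the two-year JIF window (invented numbers, not real data).
citations = [120, 45, 3, 2, 2, 1, 1, 0, 0, 0]

jif_style_average = mean(citations)   # what a JIF-style mean reflects
typical_article = median(citations)   # what a typical article receives

print(jif_style_average)  # 17.4
print(typical_article)    # 1.5
```

Here two outliers push the JIF-style average to 17.4, while the median article is cited fewer than two times, which is why a journal's impact factor cannot stand in for the impact of any individual paper in it.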

Alternative metrics are still emerging and should be used with great care. They have limitations of their own:

  • Not everything on the internet is counted. The data aggregators are usually highly selective. They include or exclude some social media platforms, news outlets, or blog post sites for one reason or another. 
  • A paper may not be identified even if it is mentioned on social media. Most data aggregators only count mentions when the paper is shared with a DOI or another unique identifier.  
  • Social media platforms, news outlets, or blog sites can be unpredictable. They can change their business, discontinue services, or limit the access of third parties to the platform which may affect the number of mentions. 
  • Social media platforms are vulnerable to gaming. The number of mentions can be inflated in one way or another. 
  • Not all mentions point to positive impact. There could be negative comments about the paper mentioned on the platform.  
  • Most of the alternative metrics are still not field normalised.  

 

Because of their limitations, metrics should be used with caution and should not be considered the only way to assess impact. Both quantitative and qualitative indicators should be used together to assess research performance.