Although bibliometrics is widely used in the scholarly world and offers valuable insights into publications, it has significant limitations that can distort research assessment and hinder the cultivation of a healthy research culture. As a general rule, never fully trust indicators, always provide context when using them, and avoid relying on a single indicator to assess researchers, departments, or institutions.
- Citations are counted within a given database, so the citation count of a research output may differ from one database to another, depending on each database's coverage.
- Citations usually accrue some time after a work is published, and in some subject fields this lag can last several years. The length of an academic's career can therefore make a significant difference to citation counts.
- Citation databases mostly index journal articles and conference proceedings, so citations made in books, chapters, or other types of output may not be counted.
- Publication and citation practices differ from one discipline to another. For example, articles in medicine tend to be published and cited more frequently than papers in the arts and humanities, so comparisons across academic fields should be avoided.
- The author’s background and gender may have an impact on citation count. Some studies of citation patterns suggest that gender and racial biases exist in citation networks.
- Self-citations can be misused to inflate the number of citations.
- Non-English research outputs may not be picked up in citation counts. Research is carried out worldwide in many languages, yet the dominant language of research publishing is English, which can prevent non-English research from being discovered and cited.
- Not all citations signify positive impact. An article can be highly cited because it is controversial or satirical, or because its claims are being challenged.
- Citation practice varies from one publication type to another. Review articles are usually cited more often than original research papers, so citations should never be used to compare articles of different types.
- Because of the limitations of citations described above, the h-index should not be used to compare researchers or to demonstrate impact. A higher h-index does not indicate the quality of individual research papers, and two very different publication records can share the same h-index (see the sketch after this list).
- Journal-based metrics (e.g. the Journal Impact Factor) have significant weaknesses and should never be used to signify impact, since they say nothing about the quality of an individual research output. Their calculation is also flawed: a journal's impact factor is mostly driven by a small number of highly cited papers (the toy calculation after this list illustrates this skew).
- Academic work is multi-dimensional, covering research, teaching, administration, and more; no single metric can capture all of these dimensions.
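To make the h-index concrete: it is the largest number h such that an author has h papers with at least h citations each. A minimal sketch in Python (the function name and sample citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still clears the threshold
        else:
            break      # list is sorted, so no later paper can qualify
    return h

# Two very different records share the same h-index of 3:
print(h_index([50, 40, 30, 3, 2, 1]))  # three highly cited papers -> 3
print(h_index([4, 4, 3, 3, 2, 1]))     # uniformly modest citations -> 3
```

As the example shows, the h-index flattens very different citation profiles into the same number, which is one reason it should not be read as a measure of quality.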
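Similarly, the Journal Impact Factor is simply an average: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items published in those years. A toy calculation with invented numbers shows how a handful of highly cited papers can dominate the figure:

```python
# Invented numbers: a journal published 100 citable items in 2022-2023.
# Citations received in 2024 to those items:
citations = [900, 850] + [1] * 98   # two blockbuster papers, 98 quiet ones

jif = sum(citations) / len(citations)
print(f"Impact factor: {jif:.2f}")   # 18.48
print(f"Median paper: {sorted(citations)[len(citations) // 2]} citation(s)")  # 1
```

Here an impact factor of 18.48 says almost nothing about a typical paper in the journal, which receives a single citation.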
Alternative metrics are still emerging and should be used with great care. They have limitations of their own:
- Not everything on the internet is counted. Data aggregators are highly selective, including or excluding particular social media platforms, news outlets, or blog sites for various reasons.
- A paper may not be identified even when it is mentioned on social media. Most data aggregators only count mentions that include a DOI or another unique identifier.
- Social media platforms, news outlets, and blog sites can be unpredictable. They can change their business model, discontinue services, or restrict third-party access to their data, any of which can affect mention counts.
- Social media platforms are vulnerable to gaming: the number of mentions can be inflated in one way or another.
- Not all mentions point to positive impact; a paper may be mentioned in order to criticise it.
- Most alternative metrics are still not field-normalised (the sketch below shows what field normalisation involves).
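For context, field-normalised citation indicators divide an output's citation count by the average citation count of comparable outputs from the same field, year, and document type, so that a score of 1.0 means "cited at the world average". A minimal sketch, with an invented baseline table and simplified to field and year only:

```python
# Hypothetical average citations per paper, keyed by (field, publication year).
# Real baselines also account for document type and use curated field schemes.
FIELD_BASELINE = {
    ("medicine", 2020): 25.0,
    ("history", 2020): 2.0,
}

def normalised_citation_score(citations, field, year):
    """Citations relative to the field/year average; 1.0 = world average."""
    return citations / FIELD_BASELINE[(field, year)]

print(normalised_citation_score(20, "medicine", 2020))  # 0.8: below average
print(normalised_citation_score(4, "history", 2020))    # 2.0: twice the average
```

Alternative metrics generally lack this kind of baseline, so raw mention counts cannot be compared fairly across fields.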