Journal bibliometrics

Journal bibliometrics are measures of the attention a journal receives in terms of citations and provide a way to compare journals based on this. While these measures may help inform your journal choice, they are not necessarily an indicator of quality.

Many of these metrics are not normalised to correct for different citation patterns in different disciplines and should not be used to compare journals from different fields.

The links below will show you how to find impact factors and other journal metrics:

Finding a Journal Impact Factor

Journal impact factors are perhaps the most familiar journal bibliometric.  They are produced by Clarivate Analytics (previously Thomson Reuters) and can be found in the database Journal Citation Reports.

The impact factor reflects the mean frequency of citation of the articles in a journal.  An impact factor of 4 for a particular journal in a particular year means that the articles published in that journal during the previous two years were cited, on average, 4 times during that year.
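As a toy illustration of the calculation above (the numbers are hypothetical, not any real journal's data), the two-year impact factor is simply citations received divided by citable items published:

```python
# Toy illustration of the two-year journal impact factor calculation
# (hypothetical numbers, not any real journal's data).

# Citable items the journal published in the two preceding years,
# e.g. 150 articles in year one plus 170 in year two:
items_published = 150 + 170

# Citations received during the measurement year by those items:
citations_received = 1280

impact_factor = citations_received / items_published
print(f"Impact factor: {impact_factor:.2f}")  # 1280 / 320 = 4.00
```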

Impact factors can be found for science and social science journals, but not for arts and humanities journals.

Impact factors should not be used to compare journals across research fields, because no account is taken of differences in citation patterns between fields.  They can, however, be useful for comparing journals in the same subject area.

An impact factor for a journal may have little meaning on its own, but you can find the rank and quartile of a journal in its subject area, based on its impact factor.  This presentation produced by Thomson Reuters shows how impact factors are calculated, how to find them and how to compare journals by impact factor, rank and quartile.

Impact factors can also be found for an individual journal through the database Web of Science.  When you find an article of interest, click on the title of the journal to see the bibliometrics for that journal.

Finding a journal CiteScore, SJR or SNIP

CiteScore is a relatively new metric from Elsevier, launched in December 2016.  It is calculated in a similar way to a journal impact factor (although there are some differences) but from a different set of data.  CiteScores may exist for journals which do not have an impact factor, as many more journals are covered.  Some journals fare very differently in the two systems – this can be due to the CiteScore calculation including items which are traditionally less cited, such as editorials and letters.  As with impact factors, CiteScore should not be used to compare journals across research fields because no account is taken of differences in citation patterns.  CiteScore percentiles are available and carry more meaning than the number itself when comparing journals, especially across subject fields.

The calculation of a journal’s Source Normalised Impact per Paper (SNIP), developed by the Centre for Science and Technology Studies (CWTS) at the University of Leiden, takes into consideration the citation potential of the journal in its subject field.  It is therefore helpful if you wish to compare journals across disciplines.

SCImago Journal Rank (SJR), developed by the SCImago research group, is another journal ranking metric, but it accounts for both the number of citations received by a journal and the importance or prestige of the journals those citations come from.
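The idea of weighting citations by the prestige of the citing journal can be sketched with a toy calculation.  This is only an illustration of the principle (hypothetical journals and citation counts; the real SJR algorithm is considerably more elaborate):

```python
# Toy sketch of a prestige-weighted citation score, in the spirit of SJR
# (hypothetical journals and citation counts; the real SJR algorithm is
# considerably more elaborate).

# citations[a][b] = number of citations journal a gives to journal b
citations = {
    "A": {"B": 10, "C": 5},
    "B": {"A": 2, "C": 1},
    "C": {"A": 8, "B": 4},
}
journals = list(citations)

# Start with equal prestige and iterate: a journal's new prestige is the
# prestige-weighted sum of the citations it receives, renormalised so the
# scores sum to 1.
prestige = {j: 1.0 / len(journals) for j in journals}
for _ in range(50):
    new = {
        j: sum(prestige[src] * cites.get(j, 0)
               for src, cites in citations.items())
        for j in journals
    }
    total = sum(new.values())
    prestige = {j: v / total for j, v in new.items()}

# A citation from a high-prestige journal now counts for more than a
# citation from a low-prestige one.
print(prestige)
```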

All these metrics can be found at https://www.scopus.com/sources or through the “Sources” button when using the library database Scopus.

Other journal metrics

The 5-year-impact factor, journal immediacy index, Eigenfactor score and other journal metrics are available from Journal Citation Reports.

You may also wish to look at Google Scholar metrics and check out the ‘Top publications’ lists, which can be viewed by subject category. This Google Scholar Metrics help page explains the metrics that are available.

Please make sure you read about and understand the limitations of any of these metrics which you decide to use.

Criticisms of impact factors and other journal metrics

Some of the criticisms that are directed at impact factors are described below:

  • Because the calculation generates an arithmetic mean for frequency of citation, impact factors are skewed upwards by very popular and highly cited papers
    Various research outputs have described the skewed distribution of citations.  For example:
    Callaway, E. (2016). Beat it, impact factor! Publishing elite turns against controversial metric. Nature News, July 8. doi:10.1038/nature.2016.20224
  • Retracted articles are not subtracted from the calculation, nor does the measure take account of the fact that articles may be cited in order to correct or dispute them
  • It is possible for journals to boost their impact factors by encouraging authors to cite other articles from the same journal
  • Authors can boost the impact factor of journals in which their work appears by citing their own papers in later published research
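The first criticism above is easy to see with a small hypothetical example: a couple of highly cited papers can pull the mean citation count well above what a typical article in the journal receives (the citation counts below are invented for illustration):

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles in one journal:
# most receive few citations, two are cited heavily.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 40, 64]

print(f"Mean:   {mean(citations):.1f}")    # 12.0 (pulled up by the two outliers)
print(f"Median: {median(citations):.1f}")  # 2.5 (closer to a typical article)
```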

Similar criticisms can be directed at other bibliometrics, and there are many articles in the literature about their limitations. Below is a link to one example paper which may be of interest.
Bornmann, L., Marx, W., Gasparyan, A.Y. & Kitas, G.D. (2011). Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatology International, 32(7), 1861-1867. doi:10.1007/s00296-011-2276-1

Spurious journal metrics

Please be wary of impact factors and bibliometric measures from sources of questionable validity. The databases Journal Citation Reports, Web of Science and Scopus, as described on these pages, are the established sources of journal bibliometrics.

For more information, please see this article: Gutierrez, F. R.S., Beall, J. & Forero, D. A. (2015). Spurious alternative impact factors: the scale of the problem from an academic perspective. Bioessays, 37, 474-476. doi:10.1002/bies.201500011

Using journal bibliometrics wisely

There is much discussion of the merits and limitations of journal bibliometrics, in the literature and more widely, and it is important to use these measures wisely.  Be sure you understand the limitations of any bibliometrics you use, and use them for appropriate purposes. For example:

  • it is recommended to use a number of different metrics rather than relying on just one
  • it is important to use qualitative analysis in your judgements of journals, not just the quantitative measures offered by bibliometrics
  • it is considered by many to be illegitimate to use journal bibliometrics to evaluate the quality of particular articles published in those journals

You may wish to read the Library Research Support Team’s approach to responsible use of metrics and to consider the advice in the Leiden Manifesto and the Metric Tide report (pp. 134-135).