Yoshihiro Yonekawa, MD
Mass Eye & Ear / Boston Children’s
RETINA Roundup Editor
Journal impact factors (JIFs) are numbers calculated to represent “how good” academic journals are. In this article, we’re going to dig deeper and determine whether “how good” is really what JIFs measure. In any case, JIFs are numbers that authors and publishers alike keep an eye on and use.
So how do we use these numbers? Authors may consider a journal’s JIF when deciding where to submit their latest and greatest paper. Some universities look at the impact factors of the journals their faculty publish in when promotions are being considered. The same goes for some academic job applications and grant reviews. There are also rankings of journals based on JIF, and publishers take these seriously. The JIF is therefore a big deal.
However, as with most metrics that try to place a number on quality, there are controversies and limitations. Please read on to find out how these numbers are calculated, in order to get a good sense of what they actually mean.
How Impact Factors Are Calculated
The JIF is the average number of times that the papers a journal published in the previous 2 years are cited in the assessment year. Clarivate Analytics (formerly part of Thomson Reuters) publishes this metric annually. They take the total number of citations that those papers receive in the assessment year and divide it by the number of papers published in the prior 2 years.
For example, the 2017 JIF for our journal RETINA is 3.700. This means that papers published in RETINA in 2015 and 2016 were, on average, cited 3.700 times in 2017.
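The arithmetic is simple enough to write out. The sketch below uses made-up citation and article counts, chosen only to reproduce a 3.700 figure; they are not RETINA’s actual numbers.

```python
# Hypothetical counts -- not RETINA's actual figures, just an illustration
# of the 2-year JIF arithmetic for assessment year 2017.
citations_in_2017 = 1850        # citations received in 2017 by 2015-2016 papers
citable_items_2015_2016 = 500   # papers published in 2015 and 2016 combined

jif_2017 = citations_in_2017 / citable_items_2015_2016
print(f"2017 two-year JIF: {jif_2017:.3f}")  # -> 3.700
```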
This is slightly different from the 5-year JIF, which is the mean number of citations generated in the assessment year by papers published in the previous 5 years. It therefore assesses the journal over a longer period of time and is likely to show less year-to-year variance. In general, though, when people discuss JIFs, they are referring to the 2-year JIF.
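The 5-year JIF is the same ratio computed over a wider window. A minimal sketch that generalizes the calculation to any window, again with hypothetical counts:

```python
def impact_factor(citations_in_assessment_year: int, citable_items_in_window: int) -> float:
    """Citations received in the assessment year by papers from the window,
    divided by the number of citable items published in that window."""
    return citations_in_assessment_year / citable_items_in_window

# Hypothetical counts: papers published 2012-2016, cited during 2017.
print(round(impact_factor(4200, 1200), 3))  # -> 3.5
```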
So the JIF is a useful metric for getting a sense of the “citability” of journals in a field – not necessarily “how good” they are. Let’s see what precautions we need to take when looking at these numbers.
Controversies
Calculating the JIF is theoretically very simple, as shown above. But in practice, there are many issues that need to be accounted for:
The Type of Papers Published
Some journals publish only review papers. Review articles usually don’t present the newest and most creative findings, but they tend to get cited more. Therefore, these journals often get a bump in their JIFs.
Seminal papers that advance our field generate many citations. But so do controversial, or even fraudulent, papers, because they are often discussed.
So a high JIF does not necessarily indicate the quality of the papers being published. It’s all about citability.
Is the Journal Indexed?
Digital publishing has made creating a new journal something you can do from a single laptop. Recent years have seen dozens of new journals appear in our field, some great, some not so great, and some even predatory. It takes several years for a journal to be indexed, so newer journals will not have JIFs from Clarivate. Some newer, unindexed journals calculate “theoretical” JIFs, but these numbers are not official, and the exact methodology is uncertain.
Can’t Compare Across Fields
Ophthalmology journals can only be compared to ophthalmology journals, with the precautions stated above. Different fields have different citation and authorship tendencies, making comparison of JIF across disciplines problematic.
For example, basic science papers tend to have many more co-authors than clinical papers. Since authors have a natural propensity to cite themselves, papers with more authors usually generate more self-citations.
Another example is that some disciplines, such as mathematics and physics, may be more likely to cite older papers. Since the JIF only counts citations to papers published in the previous 2 years, these older classics are not part of the equation.
An Average Is an Average
The JIF is the average number of citations per paper for the journal as a whole. Therefore, a few papers that are cited often will skew the JIF upward. This is commonly the case, so a journal’s JIF may not predict how well your newly submitted paper will perform.
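To illustrate with made-up numbers: in the hypothetical citation counts below, two heavily cited papers pull the mean (which is what the JIF reflects) well above what the typical paper in the journal receives.

```python
from statistics import mean, median

# Hypothetical citation counts for 10 papers in a journal's 2-year window:
# most are cited a handful of times, two are cited heavily.
citations_per_paper = [0, 1, 1, 2, 2, 3, 3, 4, 25, 40]

print(mean(citations_per_paper))    # 8.1 -> the JIF-style average
print(median(citations_per_paper))  # 2.5 -> what the typical paper receives
```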
The Bottom Line
Impact factors are something we like to talk about, and they have become very influential metrics, for better or worse. The JIF is a measure of the citability of recent papers, but it may not be a true indicator of quality, nor a reliable predictor that any given paper will be cited often.
I think we should use JIFs as a guide to how actively a journal’s recent papers are cited, but not necessarily as the sole measure of “how good” the journal is.
Journals are much more than just a number.
Hope this was helpful, and best of luck on your paper submissions!