A Novel Journal Evaluation Metric that Adjusts the Impact Factors across Different Subject Categories

  • Pyo, Sujin (Department of Industrial Engineering, Seoul National University)
  • Lee, Woojin (Department of Industrial Engineering, Seoul National University)
  • Lee, Jaewook (Department of Industrial Engineering, Seoul National University)
  • Received : 2016.02.26
  • Accepted : 2016.03.09
  • Published : 2016.03.30

Abstract

During the last two decades, the impact factor has been widely used as a journal evaluation metric that differentiates the influence of a specific journal from that of other journals. However, the impact factor does not provide a reliable basis for comparing journals in different subject categories. For example, journals in biology and the general sciences tend to receive higher impact factors than journals in traditional engineering and the social sciences. This study first analyzes trends in the time series of the impact factors of the journals listed in the Journal Citation Reports over the last decade. It then proposes new journal evaluation metrics that adjust impact factors across different subject categories. The proposed metrics can provide a consistent measure that mitigates the differences in impact factors among subject categories. Based on the experimental results, we recommend the most reliable and appropriate metric for evaluating journals in a way that is less dependent on the characteristics of their subject categories.
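The abstract does not state the adjustment formula itself, so the sketch below is not the paper's proposed metric; it only illustrates two common baselines for within-category adjustment of impact factors: dividing a journal's impact factor by the mean impact factor of its subject category (mean-based normalization) and standardizing it to a z-score within the category. The journal names, categories, and impact factor values are hypothetical.

    from statistics import mean, stdev

    # Hypothetical journal records: (name, JCR subject category, impact factor).
    journals = [
        ("Journal A", "Cell Biology", 9.2),
        ("Journal B", "Cell Biology", 6.1),
        ("Journal C", "Industrial Engineering", 2.3),
        ("Journal D", "Industrial Engineering", 1.4),
    ]

    # Collect the impact factors of all journals in each subject category.
    by_category = {}
    for _, category, impact_factor in journals:
        by_category.setdefault(category, []).append(impact_factor)

    # Per-category mean and standard deviation (requires >= 2 journals per category).
    stats = {c: (mean(v), stdev(v)) for c, v in by_category.items()}

    for name, category, impact_factor in journals:
        cat_mean, cat_std = stats[category]
        mean_normalized = impact_factor / cat_mean        # mean-based normalization
        z_score = (impact_factor - cat_mean) / cat_std    # z-score normalization
        print(f"{name}: mean-normalized={mean_normalized:.2f}, z-score={z_score:.2f}")

Either baseline puts journals from citation-rich and citation-poor categories on a comparable scale, which is the kind of cross-category consistency the proposed metrics aim for.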
