References
- Jang, Youngjin, Kwon, Oh-Woog, & Kim, Harksoo (2020). Passage re-ranking model for information retrieval based machine reading comprehension. Conference of Computing Science and Engineering, 410-412.
- Kim, HanJoon, Noh, Joonho, & Chang, Jae-Young (2012). A new re-ranking technique based on concept-network profiles for personalized web search. The Journal of The Institute of Internet, Broadcasting and Communication, 12(2), 69-76. https://doi.org/10.7236/JIWIT.2012.12.2.6
- Kim, HongRyul & Lee, Too-Young (1999). A study on relevance criteria of retrieved documents according to the research stage. Conference of the Korean Society for Information Management, 5-8.
- Kim, SeonWook & Yang, Kiduk (2022). Topic model augmentation and extension method using LDA and BERTopic. Journal of the Korean Society for Information Management, 39(3), 99-132. https://doi.org/10.3743/KOSIM.2022.39.3.099
- Lee, Seung-Wook, Song, Young-In, & Rim, Hae-Chang (2008). An opinionated document retrieval system based on hybrid method. Journal of the Korean Society for Information Management, 25(4), 115-129. https://doi.org/10.3743/KOSIM.2008.25.4.115
- Park, JungAh & Sohn, YoungWoo (2009). User-centered relevance judgement model for information retrieval. The Korean Society For Emotion & Sensibility, 12(4), 489-500.
- Anker, M. S., Hadzibegovic, S., Lena, A., & Haverkamp, W. (2019). The difference in referencing in Web of Science, Scopus, and Google Scholar. ESC Heart Failure, 6(6), 1291-1312. https://doi.org/10.1002/ehf2.12583
- Bar-Ilan, J. (2008). Which h-index? A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257-271. https://doi.org/10.1007/s11192-008-0216-y
- Birkle, C., Pendlebury, D. A., Schnell, J., & Adams, J. (2020). Web of Science as a data source for research on scientific and scholarly activity. Quantitative Science Studies, 1(1), 363-376. https://doi.org/10.1162/qss_a_00018
- Buckley, C., Salton, G., & Allan, J. (1993, March). The SMART information retrieval project. In Proceedings of the Workshop on Human Language Technology, 392.
- Claveau, V. (2021, December). Neural text generation for query expansion in information retrieval. In IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 202-209. https://doi.org/10.1145/3486622.3493957
- Cooper, W. S. (1971). A definition of relevance for information retrieval. Information Storage and Retrieval, 7(1), 19-37. https://doi.org/10.1016/0020-0271(71)90024-6
- Cronin, B. (1982). Norms and functions in citation: the view of journal editors and referees in psychology. Social Science Information Studies, 2, 65-78. https://doi.org/10.1016/0143-6236(82)90001-1
- Gao, Q., Huang, X., Dong, K., Liang, Z., & Wu, J. (2022). Semantic-enhanced topic evolution analysis: a combination of the dynamic topic model and word2vec. Scientometrics, 127(3), 1543-1563. https://doi.org/10.1007/s11192-022-04275-z
- Garfield, E. (1964). "Science citation index"-a new dimension in indexing. Science, 144(3619), 649-654. https://doi.org/10.1126/science.144.3619.649
- Gupta, V., Chinnakotla, M., & Shrivastava, M. (2018, November). Retrieve and re-rank: a simple and effective IR approach to simple question answering over knowledge graphs. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), 22-27. https://doi.org/10.18653/v1/W18-5504
- Harman, D. (1988, May). Towards interactive query expansion. In Proceedings of the 11th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 321-331. https://doi.org/10.1145/62437.62469
- Harman, D. (1992, June). Relevance feedback revisited. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1-10. https://doi.org/10.1145/133160.133167
- Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102
- Ioannakis, G., Koutsoudis, A., Pratikakis, I., & Chamzas, C. (2017). RETRIEVAL-an online performance evaluation tool for information retrieval methods. IEEE Transactions on Multimedia, 20(1), 119-127. https://doi.org/10.1109/TMM.2017.2716193
- Jain, S., Seeja, K. R., & Jindal, R. (2021). A fuzzy ontology framework in information retrieval using semantic query expansion. International Journal of Information Management Data Insights, 1(1), 100009. https://doi.org/10.1016/j.jjimei.2021.100009
- Jiang, Z., Tang, R., Xin, J., & Lin, J. (2021, November). How does BERT rerank passages? An attribution analysis with information bottlenecks. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 496-509. https://doi.org/10.18653/v1/2021.blackboxnlp-1.39
- Lancaster, F. W. (1979). Information Retrieval Systems: Characteristics, Testing, and Evaluation. New York: Wiley.
- Lv, Y. & Zhai, C. (2010, July). Positional relevance model for pseudo-relevance feedback. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 579-586. https://doi.org/10.1145/1835449.1835546
- Maglaughlin, K. L. & Sonnenwald, D. H. (2002). User perspectives on relevance criteria: a comparison among relevant, partially relevant, and not relevant judgments. Journal of the American Society for Information Science and Technology, 53(5), 327-342. https://doi.org/10.1002/asi.10049
- Martin-Martin, A., Thelwall, M., Orduna-Malea, E., & Delgado Lopez-Cozar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations' COCI: a multidisciplinary comparison of coverage via citations. Scientometrics, 126(1), 871-906. https://doi.org/10.1007/s11192-020-03690-4
- Mizzaro, S. (1998). How many relevances in information retrieval? Interacting with Computers, 10(3), 303-320. https://doi.org/10.1016/S0953-5438(98)00012-5
- Natsev, A., Haubold, A., Tesic, J., Xie, L., & Yan, R. (2007, September). Semantic concept-based query expansion and re-ranking for multimedia retrieval. In Proceedings of the 15th ACM International Conference on Multimedia, 991-1000. https://doi.org/10.1145/1291233.1291448
- Pereira, M., Etemad, E., & Paulovich, F. (2020, March). Iterative learning to rank from explicit relevance feedback. In Proceedings of the 35th Annual ACM Symposium on Applied Computing, 698-705. https://doi.org/10.1145/3341105.3374002
- Rivas, A. R., Iglesias, E. L., & Borrajo, L. (2014). Study of query expansion techniques and their application in the biomedical information retrieval. The Scientific World Journal, 2014. https://doi.org/10.1155/2014/132158
- Rocchio, J. (1971). Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, 313-323.
- Rovira, C., Codina, L., Guerrero-Sole, F., & Lopezosa, C. (2019). Ranking by relevance and citation counts, a comparative study: Google Scholar, Microsoft Academic, WoS and Scopus. Future Internet, 11(9), 202. https://doi.org/10.3390/fi11090202
- Salton, G. & Lesk, M. E. (1965). The SMART automatic document retrieval systems-an illustration. Communications of the ACM, 8(6), 391-398. https://doi.org/10.1145/364955.364990
- Salton, G. & McGill, M. J. (1983). Introduction to Modern Information Retrieval. New York: McGraw-Hill.
- Salton, G., Wong, A., & Yang, C. S. (1975). A vector space model for automatic indexing. Communications of the ACM, 18(11), 613-620. https://doi.org/10.1145/361219.361220
- Smith, L. C. (1981). Citation Analysis. Library Trends, 30(1), 83-106. https://hdl.handle.net/2142/7190
- Soboroff, I. (2021). Overview of TREC 2021. In 30th Text REtrieval Conference. Gaithersburg, Maryland. Available: https://trec.nist.gov/pubs/trec30/papers/Overview-2021.pdf
- Spink, A., Greisdorf, H., & Bateman, J. (1998). From highly relevant to not relevant: examining different regions of relevance. Information Processing & Management, 34(5), 599-621. https://doi.org/10.1016/S0306-4573(98)00025-9
- Taylor, A. (2012). User relevance criteria choices and the information search process. Information Processing & Management, 48(1), 136-153. https://doi.org/10.1016/j.ipm.2011.04.005
- Van Gysel, C. & de Rijke, M. (2018, June). Pytrec_eval: an extremely fast python interface to trec_eval. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 873-876. https://doi.org/10.1145/3209978.3210065
- Van Raan, A. F. (2005). For your citations only? Hot topics in bibliometric analysis. Measurement: Interdisciplinary Research and Perspectives, 3(1), 50-62. https://doi.org/10.1207/s15366359mea0301_7
- Voorhees, E. M. & Harman, D. K. (Eds.) (2005). TREC: Experiment and Evaluation in Information Retrieval (Vol. 63). Cambridge: MIT Press. Available: http://aclanthology.lst.uni-saarland.de/J06-4008.pdf
- Wang, X., Yang, H., Zhao, L., Mo, Y., & Shen, J. (2021, July). RefBERT: compressing BERT by referencing to pre-computed representations. In 2021 International Joint Conference on Neural Networks (IJCNN), 1-8. IEEE. https://doi.org/10.1109/IJCNN52387.2021.9534402
- Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, June). A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013), 25-54. https://doi.org/10.48550/arXiv.1304.6480
- Xu, J. & Croft, W. B. (2017, August). Query expansion using local and global document analysis. In ACM SIGIR Forum, 51(2), 168-175. https://doi.org/10.1145/3130348.3130364
- Zheng, Z., Hui, K., He, B., Han, X., Sun, L., & Yates, A. (2020). BERT-QE: contextualized query expansion for document re-ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, 4718-4728. https://doi.org/10.18653/v1/2020.findings-emnlp.424