References
- Bae, Seongho, Ku, Xyle, Park, Chanbong, & Kim, Jungsu (2020). A latent topic modeling approach for subject summarization of research on the military art and science in South Korea. Korean Journal of Military Art and Science, 76(2), 181-216. https://doi.org/10.31066/kjmas.2020.76.2.008
- Choi, Yongseok & Lee, Kong Joo (2020). Performance analysis of Korean morphological analyzer based on transformer and BERT. Journal of Korean Institute of Information Scientists and Engineers, 47(8), 730-741. https://doi.org/10.5626/JOK.2020.47.8.730
- Choi, Yunsoo & Choi, Sung-Pil (2019). A study on patent literature classification using distributed representation of technical terms. Journal of the Korean Society for Library and Information Science, 53(2), 179-199. https://doi.org/10.4275/KSLIS.2019.53.2.179
- Electronics and Telecommunications Research Institute (2019). KorBERT. Available: https://aiopen.etri.re.kr/service_dataset.php
- Hwang, Sangheum & Kim, Dohyun (2020). BERT-based classification model for Korean documents. Journal of Society for e-Business Studies, 25(1), 203-214. https://doi.org/10.7838/jsebs.2020.25.1.203
- Kim, Hae-Chan-Sol, An, Dae-Jin, Yim, Jin-Hee, & Rieh, Hae-Young (2017). A study on automatic classification of record text using machine learning. Journal of the Korean Society for Information Management, 34(4), 321-344. https://doi.org/10.3743/KOSIM.2017.34.4.321
- Kim, Pan-Jun (2016). An analytical study on performance factors of automatic classification based on machine learning. Journal of the Korean Society for Information Management, 33(2), 33-59. https://doi.org/10.3743/KOSIM.2016.33.2.033
- Kim, Pan-Jun (2018). An analytical study on automatic classification of domestic journal articles based on machine learning. Journal of the Korean Society for Information Management, 35(2), 37-62. https://doi.org/10.3743/KOSIM.2018.35.2.037
- Kim, Pan-Jun (2019). An analytical study on automatic classification of domestic journal articles using random forest. Journal of the Korean Society for Information Management, 36(2), 57-77. https://doi.org/10.3743/KOSIM.2019.36.2.057
- Lee, Chi-Hoon, Lee, Yeon-Ji, & Lee, Dong-Hee (2020). A study of fine tuning pre-trained Korean BERT for question answering performance development. Journal of Information Technology Services, 19(5), 83-91. https://doi.org/10.9716/KITS.2020.19.5.083
- Lee, Sang-Woo, Kwon, Jung-Hyok, Kim, Nam, Choi, Hyung-Do, & Kim, Eui-Jik (2020). Research category classification for scientific literature on human health risk of electromagnetic fields. The Journal of Korean Institute of Electromagnetic Engineering and Science, 31(10), 839-842. https://doi.org/10.5515/KJKIEES.2020.31.10.839
- Lee, Soobin, Kim, Seongdeok, Lee, Juhee, Ko, Youngsoo, & Song, Min (2021). Building and analyzing panic disorder social media corpus for automatic deep learning classification model. Journal of the Korean Society for Information Management, 38(2), 153-172. https://doi.org/10.3743/KOSIM.2021.38.2.153
- National Research Foundation of Korea (2016). The classification table of academic research fields. Available: https://www.nrf.re.kr/biz/doc/class/view?menu_no=323
- Park, Kyu Hwon & Jeong, Young-Seob (2021). Korean daily conversation topics classification using KoBERT. Proceedings of Korea Computer Congress 2021, 1735-1737.
- Seong, So-yun, Choi, Jae-yong, & Kim, Kyoung-chul (2019). A study on improved comments generation using transformer. Journal of Korea Game Society, 19(5), 103-113. https://doi.org/10.7583/JKGS.2019.19.5.103
- Shim, Jaekwoun (2021). A study on automatic classification of profanity sentences of elementary school students using BERT. Journal of Creative Information Culture, 7(2), 91-98. https://doi.org/10.32823/jcic.7.2.202105.91
- Song, Euiseok & Kim, Namgyu (2021). Transformer-based text summarization using pre-trained language model. Management & Information Systems Review, 40(4), 31-47. https://doi.org/10.29214/DAMIS.2021.40.4.002
- Yuk, Jee Hee & Song, Min (2018). A study of research on methods of automated biomedical document classification using topic modeling and deep learning. Journal of the Korean Society for Information Management, 35(2), 63-88. https://doi.org/10.3743/KOSIM.2018.35.2.063
- Yun, Hee Seung & Jung, Jason J. (2021). Automated fact checking model using efficient transformer. Journal of the Korea Institute of Information and Communication Engineering, 25(9), 1275-1278. https://doi.org/10.6109/jkiice.2021.25.9.1275
- Asim, M. N., Ghani, M. U., Ibrahim, M. A., Mahmood, W., Dengel, A., & Ahmed, S. (2021). Benchmarking performance of machine and deep learning-based methodologies for Urdu text document classification. Neural Computing and Applications, 33, 5437-5469. https://doi.org/10.1007/s00521-020-05321-8
- Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. https://arxiv.org/abs/1810.04805
- El-Alami, F., El Alaoui, S. O., & Nahnahi, N. E. (2021). Contextual semantic embeddings based on fine-tuned AraBERT model for Arabic text multi-class categorization. Journal of King Saud University - Computer and Information Sciences, 2021, 1-7. https://doi.org/10.1016/j.jksuci.2021.02.005
- Hikmah, A., Adi, S., & Sulistiyono, M. (2020). The best parameter tuning on RNN layers for Indonesian text classification. Proceedings of the 2020 3rd International Seminar on Research of Information Technology and Intelligent Systems, 94-99. https://doi.org/10.1109/ISRITI51436.2020.9315425
- Okur, H. I. & Sertbas, A. (2021). Pretrained neural models for Turkish text classification. Proceedings of the 2021 6th International Conference on Computer Science and Engineering, 174-179. https://doi.org/10.1109/UBMK52708.2021.9558878
- Peters, M. E., Neumann, M., Iyyer, M., & Gardner, M. (2018). Deep contextualized word representations. https://arxiv.org/abs/1802.05365