Acknowledgement
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2020R1A6A1A03040583).
References
- P. Kherwa and P. Bansal, "Topic Modeling: A Comprehensive Review," EAI Endorsed Transactions on Scalable Information Systems, 2019. http://dx.doi.org/10.4108/eai.13-7-2018.159623
- J. Qiang, Z. Qian, Y. Li, Y. Yuan and X. Wu, "Short Text Topic Modeling Techniques, Applications, and Performance: A Survey," IEEE Transactions on Knowledge and Data Engineering, Vol. 34, No. 3, pp. 1427-1445, 2022. http://dx.doi.org/10.1109/TKDE.2020.2992485
- W. S. El-Kassas, C. R. Salama, A. A. Rafea, and H. K. Mohamed, "Automatic Text Summarization: A Comprehensive Survey," Expert Systems with Applications, Vol. 165, 113679, 2021. https://doi.org/10.1016/j.eswa.2020.113679
- H. Yoo, R. C. Park, and K. Chung, "IoT-Based Health Big-Data Process Technologies: A Survey," KSII Transactions on Internet and Information Systems, Vol. 15, No. 3, pp. 974-992, 2021. https://doi.org/10.3837/tiis.2021.03.009
- B. Jeon and K. Chung, "CutPaste-Based Anomaly Detection Model using Multi Scale Feature Extraction in Time Series Streaming Data," KSII Transactions on Internet and Information Systems, Vol. 16, No. 8, 2022. http://dx.doi.org/10.3837/tiis.2022.08.018
- J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," Proceedings of NAACL-HLT, Vol. 1, pp. 4171-4186, 2019. http://dx.doi.org/10.18653/v1/N19-1423
- M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension," Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.703
- H. Yoo and K. Chung, "Deep Learning-based Evolutionary Recommendation Model for Heterogeneous Big Data Integration," KSII Transactions on Internet and Information Systems, Vol. 14, No. 9, pp. 3730-3744, 2020. https://doi.org/10.3837/tiis.2020.09.009
- D. O'Callaghan, D. Greene, J. Carthy, and P. Cunningham, "An Analysis of the Coherence of Descriptors in Topic Modeling," Expert Systems with Applications, Vol. 42, pp. 5645-5657, 2015. https://doi.org/10.1016/j.eswa.2015.02.055
- AI Hub, [Online]. Available: https://aihub.or.kr/, 2023.
- KoBART, [Online]. Available: https://github.com/SKT-AI/KoBART, 2023.
- H. Jelodar, Y. Wang, C. Yuan, X. Feng, X. Jiang, Y. Li, and L. Zhao, "Latent Dirichlet Allocation (LDA) and Topic Modeling: Models, Applications, a Survey," Multimedia Tools and Applications, Vol. 78, pp. 15169-15211, 2019. https://doi.org/10.1007/s11042-018-6894-4
- K. Park, J. Lee, S. Jang, and D. Jung, "An Empirical Study of Tokenization Strategies for Various Korean NLP Tasks," Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pp. 133-142, 2020. https://doi.org/10.48550/arXiv.2010.02534