Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science and ICT) (No. RS-2023-00241142).
References
- H. Kil, "How to realize rhetorical irony in Korean," Studies in Humanities, Vol.13, pp.1-35, 2005.
- C. Turban and U. Kruschwitz, "Tackling irony detection using ensemble classifiers," Proceedings of the Thirteenth Language Resources and Evaluation Conference, 2022.
- J. Sarzynska-Wawer et al., "Detecting formal thought disorder by deep contextualized word representations," Psychiatry Research, Vol.304, Art. no. 114135, 2021.
- J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
- A. Radford and K. Narasimhan, "Improving Language Understanding by Generative Pre-Training," 2018.
- A. Vaswani et al., "Attention is all you need," Advances in Neural Information Processing Systems, Vol.30, 2017.
- M. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar, and T. Koshiba, "Bangla-BERT: Transformer-based efficient model for transfer learning and language understanding," IEEE Access, Vol.10, pp.91855-91870, 2022. https://doi.org/10.1109/ACCESS.2022.3197662
- A. Arnold, R. Nallapati, and W. W. Cohen, "A comparative study of methods for transductive transfer learning," Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007), 2007.
- O. Habimana, Y. Li, R. Li, X. Gu, and Y. Peng, "A multi-task learning approach to improve sentiment analysis with explicit recommendation," 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
- T. B. Brown et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, Vol.33, pp.1877-1901, 2020.
- L. Ouyang et al., "Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, Vol.35, pp.27730-27744, 2022.
- L. Loukas, I. Stogiannidis, P. Malakasiotis, and S. Vassos, "Breaking the bank with ChatGPT: Few-shot text classification for finance," arXiv preprint arXiv:2308.14634, 2023.
- A. Baruah, K. Das, F. Barbhuiya, and K. Dey, "Context-aware sarcasm detection using BERT," Proceedings of the Second Workshop on Figurative Language Processing, 2020.
- P. Golazizian, B. Sabeti, S. A. A. Asli, Z. Majdabadi, O. Momenzadeh, and R. Fahmi, "Irony detection in Persian language: A transfer learning approach using emoji prediction," Proceedings of the Twelfth Language Resources and Evaluation Conference, 2020.
- M. Kosterin, I. Paramonov, and N. Lagutina, "Automatic irony and sarcasm detection in Russian sentences: Baseline methods," 2023 33rd Conference of Open Innovations Association (FRUCT), 2023.
- Y. Kuratov and M. Arkhipov, "Adaptation of deep bidirectional multilingual transformers for Russian language," arXiv preprint arXiv:1905.07213, 2019.
- A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, "Learning word vectors for sentiment analysis," Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011.
- K. J. Lee, S. Bang, and J. E. Kim, "Korean irony corpus construction," Language and Information, Vol.27, No.1, pp.19-36, 2023. https://doi.org/10.29403/LI27.1.2
- I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," arXiv preprint arXiv:1711.05101, 2017.