Acknowledgment
This paper was supported by the 2023 University Innovation Support Project of Semyung University.
References
- Sahu, G., "Multimodal speech emotion recognition and ambiguity resolution", arXiv preprint arXiv:1904.06022, 2019. doi: 10.48550/arXiv.1904.06022
- Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., ... & Farhan, L., "Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions", Journal of Big Data, Vol. 8, pp. 1-74, 2021. doi: 10.1186/s40537-021-00444-8
- Yu, Y., Si, X., Hu, C., & Zhang, J., "A review of recurrent neural networks: LSTM cells and network architectures", Neural Computation, Vol. 31, No. 7, pp. 1235-1270, 2019. doi: 10.1162/neco_a_01199
- Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., ... & Narayanan, S. S., "IEMOCAP: Interactive emotional dyadic motion capture database", Language Resources and Evaluation, Vol. 42, pp. 335-359, 2008. doi: 10.1007/s10579-008-9076-6
- Tzirakis, P., Trigeorgis, G., Nicolaou, M. A., Schuller, B. W., & Zafeiriou, S., "End-to-End Multimodal Emotion Recognition Using Deep Neural Networks", IEEE Journal of Selected Topics in Signal Processing, Vol. 11, No. 8, pp. 1301-1309, 2017. doi: 10.1109/JSTSP.2017.2764438
- Kim, J. H. & Lee, S. P., "Multi-Modal Emotion Recognition Using Speech Features and Text Embedding", Trans. Korean Inst. Electr. Eng., Vol. 70, pp. 108-113, 2021. doi: 10.5370/kiee.2021.70.1.108
- Ranganathan, H., Chakraborty, S., & Panchanathan, S., "Multimodal emotion recognition using deep learning architectures", 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1-9, 2016. doi: 10.1109/WACV.2016.7477679
- Liu, W., Qiu, J. L., Zheng, W. L., & Lu, B. L., "Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition", IEEE Transactions on Cognitive and Developmental Systems, Vol. 14, No. 2, pp. 715-729, 2021. doi: 10.1109/TCDS.2021.3071170
- Jo, C. Y. & Jung, H. J., "Multimodal Emotion Recognition System using Face Images and Multidimensional Emotion-based Text", The Journal of Korean Institute of Information Technology, Vol. 21, No. 5, pp. 39-47, 2023. doi: 10.14801/jkiit.2023.21.5.39
- Lee, S. J., Seo, J. Y. & Choi, J. H., "The Effect of Interjection in Conversational Interaction with the AI Agent: In the Context of Self-Driving Car", The Journal of the Convergence on Culture Technology, Vol. 8, No. 1, pp. 551-563, 2022. doi: 10.17703/JCCT.2022.8.1.551
- Yoon, S., Byun, S. & Jung, K., "Multimodal Speech Emotion Recognition Using Audio and Text", 2018 IEEE Spoken Language Technology Workshop (SLT), Athens, Greece, pp. 112-118, 2018. doi: 10.1109/SLT.2018.8639583
- Devlin, J., Chang, M. W., Lee, K., & Toutanova, K., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", Proceedings of NAACL-HLT, Vol. 1, pp. 4171-4186, 2019. doi: 10.18653/v1/N19-1423