Acknowledgement
This research was supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean government (Ministry of Trade, Industry and Energy) (P0012746, HRD Program for Industrial Innovation, 2023), and was also conducted as part of the Metaverse Convergence Graduate School supported by the Ministry of Science and ICT (MSIT) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) (IITP-2023-RS-2022-00156318).
References
- Sanders, Timothy, and Paul Cairns. "Time perception, immersion and music in videogames," Proceedings of HCI 2010 24, 160-167, 2010.
- Weiss, Karl, Taghi M. Khoshgoftaar, and DingDing Wang. "A survey of transfer learning," Journal of Big Data 3.1, 1-40, 2016. https://doi.org/10.1186/s40537-016-0043-6
- Russell, James A. "A circumplex model of affect," Journal of Personality and Social Psychology 39.6, 1161-1178, 1980.
- Noh, K. J., and H. Jeong. "KEMDy20," https://nanum.etri.re.kr/share/kjnoh/KEMDy20?lang=ko_KR
- Bang, Na-Mo, Heui-Yeen Yeen, Jee-Hyun Lee, and Myoung-Wan Koo. "MMM: Multi-modal Emotion Recognition in conversation with MLP-Mixer," Proceedings of the Korean Institute of Information Scientists and Engineers (KIISE) Conference, 2288-2290, 2022.
- Kim, June-Woo, Dong-Hyun Kim, Ju-Seong Do, and Ho-Young Jung. "Strategies of utilizing pre-trained text and speech model-based feature representation for multi-modal emotion recognition," Proceedings of the Korean Institute of Information Scientists and Engineers (KIISE) Conference, 2282-2284, 2022.
- Baevski, Alexei, et al. "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems 33, 12449-12460, 2020.
- Conneau, Alexis, et al. "Unsupervised cross-lingual representation learning for speech recognition," arXiv preprint arXiv:2006.13979, 2020.
- Soleymani, M., A. Aljanaki, and Y. Yang. "DEAM: MediaEval database for emotional analysis in music," http://cvml.unige.ch/databases/DEAM/, 2016.