Acknowledgement
This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Education) in 2018 and by the Research Grant of Kwangwoon University in 2021 (NRF-2018R1D1A1B07041783).
References
- M. Schedl, H. Zamani, C.-W. Chen, Y. Deldjoo, and M. Elahi, "Current challenges and visions in music recommender systems research," Int. J. Multimed. Inf. Retr. 7, 95-116 (2018). https://doi.org/10.1007/s13735-018-0154-2
- J. Lee, S. Shin, D. Jang, S.-J. Jang, and K. Yoon, "Music recommendation system based on usage history and automatic genre classification," Proc. IEEE Int. Conf. Consum. Electron. 134-135 (2015).
- L. S. Chen, T. S. Huang, T. Miyasato, and R. Nakatsu, "Multimodal human emotion/expression recognition," Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition, 134-135 (1998).
- D. Ayata, Y. Yaslan, and M. E. Kamasak, "Emotion based music recommendation system using wearable physiological sensors," IEEE Trans. Consum. Electron. 64, 196-203 (2018). https://doi.org/10.1109/tce.2018.2844736
- J. X. Chen, D. M. Jiang, and Y. N. Zhang, "A hierarchical bidirectional GRU model with attention for EEG-based emotion classification," IEEE Access, 7, 118530-118540 (2019). https://doi.org/10.1109/access.2019.2936817
- F. E. Harrell, Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis (Springer, New York, 2015), pp. 359-387.
- I. J. Ding and N. W. Zheng, "Classification of restlessness level by deep learning of visual geometry group convolution neural network with acoustic speech and visual face sensor data for smart care applications," Sensors and Materials, 32, 2329-2341 (2020). https://doi.org/10.18494/SAM.2020.2881
- Y. Ma, Y. Hao, M. Chen, J. Chen, P. Lu, and A. Kosir, "Audio-visual emotion fusion (AVEF): A deep efficient weighted approach," Inf. Fusion, 46, 184-192 (2019). https://doi.org/10.1016/j.inffus.2018.06.003