Acknowledgement
This work was supported by the BK21 FOUR program of the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 4120200913638). This work was also supported by an NRF grant funded by the Korea government (Ministry of Science, ICT and Future Planning) in 2018 (No. 2018R1A1A3A04078934).
References
- Alcamo, J. (2008). Chapter six: The SAS approach: Combining qualitative and quantitative knowledge in environmental scenarios. Developments in Integrated Environmental Assessment, 2, 123-150. DOI: 10.1016/S1574-101X(08)00406-7
- Davitz, J. R. (1964). The communication of emotional meaning. Oxford, England: McGraw Hill.
- Jang, K., & Kim, T. (2005). The pragmatic elements concerned with the sounds of utterance. Korean Semantics, 18, 175-196.
- Jones, C. M., & Jonsson, I. M. (2005). Automatic recognition of affective cues in the speech of car drivers to allow appropriate responses. In Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future (pp. 1-10), Narrabundah, Australia, November 2005. DOI: 10.5555/1108368.1108397
- Jones, C. M., & Jonsson, I. M. (2007). Performance analysis of acoustic emotion recognition for in-car conversational interfaces. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction (pp. 411-420), Berlin, Heidelberg. DOI: 10.1007/978-3-540-73281-5_44
- Kepuska, V. Z., & Klein, T. B. (2009). A novel wakeup-word speech recognition system, wake-up-word recognition task, technology and evaluation. Nonlinear Analysis: Theory, Methods & Applications, 71(12), e2772-e2789. DOI: 10.1016/j.na.2009.06.089
- Kim, Y., Kim, T., Kim, G., Jeon, H., & Suk, H. J. (2020). Hi Kia~, hi... kia..., HI KIA!! In Proceedings of the Fall Conference of the Korean Society for Emotion and Sensibility (pp. 21-22), Daejeon, Korea.
- Nass, C., Jonsson, I. M., Harris, H., Reaves, B., Endo, J., Brave, S., & Takayama, L. (2005). Improving automotive safety by pairing driver emotion and car voice emotion. In CHI '05 Extended Abstracts on Human Factors in Computing Systems (pp. 1973-1976), Portland, Oregon, USA, April 2-7, 2005. DOI: 10.1145/1056808.1057070
- Nordström, H., & Laukka, P. (2019). The time course of emotion recognition in speech and music. The Journal of the Acoustical Society of America, 145(5), 3058-3074. DOI: 10.1121/1.5108601
- Ogilvy, J. (2011). Facing the Fold: Essays on Scenario Planning (pp. 11-29). Devon: Triarchy Press.
- Park, J., Park, J., & Sohn, J. (2013). Acoustic parameters for induced emotion categorizing and dimensional approach. Science of Emotion and Sensibility, 16(1), 117-124.
- Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161-1178. DOI: 10.1037/h0077714
- Schuller, B., Lang, M., & Rigoll, G. (2006). Recognition of spontaneous emotions by speech within automotive environment. In Proceedings of the German Annual Conference of Acoustics, Braunschweig, Germany, March 2006.
- Swain, M., Routray, A., & Kabisatpathy, P. (2018). Databases, features and classifiers for speech emotion recognition: a review. International Journal of Speech Technology, 21(1), 93-120. DOI: 10.1007/s10772-018-9491-z
- Voicebot.ai. (2020). In-car voice assistant consumer adoption report. Retrieved from https://voicebot.ai/wp-content/uploads/2020/02/in_car_voice_assistant_consumer_adoption_report_2020_voicebot.pdf
- Wiegand, G., Mai, C., Holländer, K., & Hussmann, H. (2019). InCarAR: A design space towards 3D augmented reality applications in vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 1-13), Utrecht, Netherlands. DOI: 10.1145/3342197.3344539