• Title/Summary/Keyword: Speech Emotion Recognition (음성 감정인식)

139 search results

Building Living Lab for Acquiring Behavioral Data for Early Screening of Developmental Disorders

  • Kim, Jung-Jun;Kwon, Yong-Seop;Kim, Min-Gyu;Kim, Eun-Soo;Kim, Kyung-Ho;Sohn, Dong-Seop
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.8
    • /
    • pp.47-54
    • /
    • 2020
  • Developmental disorders are impairments of the brain and/or central nervous system; they refer to disorders of brain function that affect language, communication skills, perception, sociality, and so on. In diagnosing developmental disorders, behavioral responses, such as expressing emotions in appropriate situations, are among the observable indicators of whether an individual has a disorder. However, diagnosis by observation allows subjective evaluation that can lead to erroneous conclusions. This research presents a technological environment and data acquisition system for AI-based screening of autism disorder. The environment was built around the activities of two screening protocols, the Autism Diagnostic Observation Schedule (ADOS) and Behavior Development Screening for Toddler (BeDevel). The interactions between therapist and child during screening are fully recorded. The software proposed in this research was designed to support recording, monitoring, and data tagging for training AI algorithms.

Speech Emotion Recognition using Feature Selection and Fusion Method (특징 선택과 융합 방법을 이용한 음성 감정 인식)

  • Kim, Weon-Goo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.8
    • /
    • pp.1265-1271
    • /
    • 2017
  • In this paper, a speech-parameter fusion method is studied to improve the performance of a conventional emotion recognition system. For this purpose, the combination of the cepstrum parameters and the various pitch parameters used in conventional emotion recognition systems that shows the best performance is selected. The pitch parameters were generated from the pitch of speech using numerical and statistical methods. Performance was evaluated on an emotion recognition system based on a Gaussian mixture model (GMM) to select the pitch parameters that performed best in combination with the cepstrum parameters; sequential feature selection was used as the selection method. In an experiment distinguishing four emotions (normal, joy, sadness, and anger), fifteen of the total 56 pitch parameters were selected and showed the best recognition performance when fused with cepstrum and delta-cepstrum coefficients. This represents a 48.9% reduction in error compared with an emotion recognition system using only pitch parameters.
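
The sequential feature selection used above can be sketched as a greedy forward search: starting from an empty set, repeatedly add the single feature whose inclusion most improves a wrapped classifier's accuracy. The sketch below is illustrative, not the paper's implementation: it substitutes a leave-one-out nearest-class-mean classifier and synthetic data for the GMM and the real pitch/cepstrum parameters.

```python
import random

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-class-mean classifier on a feature subset."""
    correct = 0
    for i in range(len(X)):
        dists = {}
        for c in set(y):
            rows = [X[j] for j in range(len(X)) if y[j] == c and j != i]
            mean = [sum(r[f] for r in rows) / len(rows) for f in feats]
            dists[c] = sum((X[i][f] - m) ** 2 for f, m in zip(feats, mean))
        if min(dists, key=dists.get) == y[i]:
            correct += 1
    return correct / len(X)

def sequential_forward_selection(X, y, n_select):
    """Greedily add the feature that most improves the wrapped classifier's accuracy."""
    selected, remaining = [], list(range(len(X[0])))
    while len(selected) < n_select and remaining:
        best = max(remaining, key=lambda f: loo_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic data: feature 0 separates the two "emotions"; features 1-2 are noise.
rng = random.Random(0)
y = [0, 1] * 10
X = [[2.0 * c + rng.gauss(0, 0.1), rng.gauss(0, 1), rng.gauss(0, 1)] for c in y]
sel = sequential_forward_selection(X, y, 2)
print(sel)  # feature 0 is picked first
```

Because each candidate feature is scored with the full classifier in the loop, the wrapper approach is expensive but directly optimizes recognition accuracy, which is why it suits selecting 15 of 56 pitch parameters.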

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • Kim, Weon-Goo (김원구)
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.470-473
    • /
    • 2004
  • In this paper, speech feature parameters for emotion recognition from the speech signal are compared. For this purpose, a corpus of emotional speech, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the mean, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with and without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector-quantization-based system was used for speaker- and context-independent emotion recognition. Results showed that the vector-quantization-based recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
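
A vector-quantization-based recognizer of the kind evaluated above can be sketched as: train one codebook per emotion by clustering that emotion's training frames (k-means here), then label an utterance with the emotion whose codebook quantizes its frames with the lowest average distortion. This is a minimal illustration with 2-D synthetic "feature frames" standing in for MFCC vectors, not the paper's system.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means returning the k codebook centroids."""
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            clusters[j].append(v)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if the cluster emptied out
                centers[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers

def distortion(frame, codebook):
    """Quantization error of one frame: distance to its nearest codeword."""
    return min(sum((a - b) ** 2 for a, b in zip(frame, c)) for c in codebook)

def classify(frames, codebooks):
    """Pick the emotion whose codebook quantizes the frames with least total error."""
    return min(codebooks,
               key=lambda e: sum(distortion(f, codebooks[e]) for f in frames))

# Synthetic training frames: each emotion occupies a different region of feature space.
rng = random.Random(1)
train = {
    "joy":   [[rng.gauss(0, 0.5), rng.gauss(0, 0.5)] for _ in range(50)],
    "anger": [[rng.gauss(5, 0.5), rng.gauss(5, 0.5)] for _ in range(50)],
}
codebooks = {emo: kmeans(frames, k=4, seed=2) for emo, frames in train.items()}

utterance = [[rng.gauss(5, 0.5), rng.gauss(5, 0.5)] for _ in range(10)]
print(classify(utterance, codebooks))  # "anger"
```

Because the decision sums per-frame distortions, the recognizer is independent of utterance length and needs no temporal model, which keeps it speaker- and context-independent.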


VR Companion Animal Communion System for Pet Loss Syndrome (펫로스 증후군을 위한 VR 반려동물 교감 시스템)

  • Choi, Hyeong-Mun;Moon, Mikyeong;Lee, Gun-ho
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.563-564
    • /
    • 2021
  • As the number of households with companion animals grows, the number of owners suffering from pet loss syndrome after the loss of an animal is also increasing. To help heal pet loss syndrome, owners need to be able to meet their companion animal, even if only virtually, and to speak and act with it as they used to, so that they can gradually say goodbye. This paper describes a system in which an owner can directly commune, through VR, with a 3D-modeled companion animal. By letting owners speak and act with a departed companion animal as in daily life, the system helps them gradually achieve emotional catharsis.


The AI Promotion Strategy of Korea Defense for the AI Expansion in Defense Domain (국방분야 인공지능 저변화를 위한 대한민국 국방 인공지능 추진전략)

  • Lee, Seung-Mok;Kim, Young-Gon;An, Kyung-Soo
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.2
    • /
    • pp.59-73
    • /
    • 2021
  • Recently, artificial intelligence has spread rapidly, becoming popularized and expanding into sectors such as voice-recognition personal services, and major countries have established AI promotion strategies; in South Korea's defense domain, however, AI's influence remains low despite the geopolitical situation with North Korea. This paper presents a total of six strategies for promoting AI in South Korea's defense domain, including establishing roadmaps, securing manpower, installing an AI infrastructure, and strengthening cooperation among stakeholders, in order to increase the impact of defense AI and promote it successfully. These suggestions are expected to establish a foundation for expanding the base of artificial intelligence in defense.

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.557-561
    • /
    • 2021
  • Metadata has become an essential element for recommending video content to users. However, it is generated manually by video content providers. In this paper, a method for automatically generating metadata is studied to replace the existing manual input method. In addition to the emotion-tag extraction of the authors' previous study, a method is examined for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using a ResNet34 artificial neural network in a transfer-learning setup, and the language of the speakers in the movie was detected through speech recognition. Through this, the possibility of automatically generating metadata through artificial intelligence was confirmed.
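
The pipeline described, a spectrogram-based genre classifier plus speech-recognition language detection feeding a single metadata record, can be sketched at the integration level as below. The model calls are stubbed out, and the language-to-country mapping is an illustrative assumption, not the paper's method.

```python
# Illustrative language-to-country-of-production mapping (an assumption for the sketch).
LANG_TO_COUNTRY = {"ko": "South Korea", "en": "United States", "ja": "Japan"}

def generate_metadata(audio, genre_model, language_detector):
    """Combine a genre classifier (e.g. a spectrogram-based ResNet34) and a
    language detector (e.g. speech-recognition language ID) into one record."""
    genre = genre_model(audio)
    lang = language_detector(audio)
    return {
        "genre": genre,
        "language": lang,
        "country": LANG_TO_COUNTRY.get(lang, "unknown"),
    }

# Stub models stand in for the trained networks.
meta = generate_metadata(
    audio=None,
    genre_model=lambda a: "action",
    language_detector=lambda a: "ko",
)
print(meta)  # {'genre': 'action', 'language': 'ko', 'country': 'South Korea'}
```

Keeping the two models behind plain callables means either one can be retrained or swapped without touching the metadata-assembly step.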

Framework Switching of Speaker Overlap Detection System (화자 겹침 검출 시스템의 프레임워크 전환 연구)

  • Kim, Hoinam;Park, Jisu;Cha, Shin;Son, Kyung A;Yun, Young-Sun;Park, Jeon Gue
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.1
    • /
    • pp.101-113
    • /
    • 2021
  • In this paper, we introduce a speaker overlap detection system and examine the process of converting an existing system to a different artificial intelligence framework. Speaker overlap occurs when two or more speakers speak at the same time during a conversation; it can degrade performance in speech recognition and speaker recognition, so much research is conducted on detecting it to prevent that degradation. Recently, as applications of artificial intelligence have increased, there is growing demand for switching between AI frameworks. However, performance degradation is often observed when switching, due to the unique characteristics of each framework, which makes the switch difficult. This paper explains the process of converting a Keras-based speaker overlap detection system into a PyTorch-based system and considers its components. After the conversion, the PyTorch-based system showed better performance than the existing Keras-based speaker overlap detection system, so this work has value as a fundamental study on systematic framework conversion.
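
One concrete pitfall in this kind of port is weight layout: Keras stores a Dense kernel as (in_features, out_features), while torch.nn.Linear stores its weight as (out_features, in_features), so a transfer must transpose. The pure-Python sketch below (illustrative; the paper does not give its conversion code) shows the transposing transfer and checks that both forward passes agree.

```python
def keras_dense_forward(x, kernel, bias):
    """Keras Dense: y = x @ kernel + bias, with kernel shaped (in, out)."""
    n_out = len(kernel[0])
    return [sum(x[i] * kernel[i][j] for i in range(len(x))) + bias[j]
            for j in range(n_out)]

def keras_dense_to_torch_linear(kernel, bias):
    """torch.nn.Linear stores weight as (out, in), so transpose the Keras kernel."""
    weight = [list(col) for col in zip(*kernel)]
    return weight, list(bias)

def torch_linear_forward(x, weight, bias):
    """torch.nn.Linear: y = x @ weight.T + bias, with weight shaped (out, in)."""
    return [sum(w * a for w, a in zip(row, x)) + b
            for row, b in zip(weight, bias)]

kernel = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # (in=3, out=2), Keras layout
bias = [0.5, -0.5]
x = [1.0, 0.0, -1.0]

weight, b = keras_dense_to_torch_linear(kernel, bias)
print(keras_dense_forward(x, kernel, bias))  # [-3.5, -4.5]
print(torch_linear_forward(x, weight, b))    # [-3.5, -4.5]
```

Verifying layer-by-layer numerical agreement like this, before retraining, separates layout bugs from the genuine framework differences (initializers, padding conventions, default epsilons) that cause the performance gaps the paper discusses.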

Applying Social Strategies for Breakdown Situations of Conversational Agents: A Case Study using Forewarning and Apology (대화형 에이전트의 오류 상황에서 사회적 전략 적용: 사전 양해와 사과를 이용한 사례 연구)

  • Lee, Yoomi;Park, Sunjeong;Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility
    • /
    • v.21 no.1
    • /
    • pp.59-70
    • /
    • 2018
  • With breakthroughs in speech recognition technology, conversational agents have become pervasive through smartphones and smart speakers. Speech recognition accuracy has reached a human level, but agents still show limitations in understanding the underlying meaning or intention of words and in following long conversations. Accordingly, users experience various errors when interacting with conversational agents, which can negatively affect the user experience. In addition, for smart speakers whose main interface is voice, a lack of system feedback and transparency has been reported as a main issue during use. There is therefore a strong need for research on how users can better understand the capabilities of conversational agents and mitigate negative emotions in error situations. In this study, we applied two social strategies, "forewarning" and "apology", to a conversational agent and investigated how these strategies affect users' perceptions of the agent in breakdown situations. For the study, we created a series of demo videos of a user interacting with a conversational agent. After watching the demo videos, participants evaluated in an online survey how much they liked and trusted the agent. Responses from a total of 104 participants were analyzed, and the results were contrary to our expectations from the literature. Forewarning gave users a negative impression of the agent, especially of its reliability, and an apology in a breakdown situation did not affect users' perceptions. In follow-up in-depth interviews, participants explained that they perceived the smart speaker as a machine rather than a human-like entity, and that for this reason the social strategies did not work. These results show that social strategies should be applied according to the perceptions users hold toward agents.

A Human-Robot Interaction Entertainment Pet Robot (HRI 엔터테인먼트 애완 로봇)

  • Lee, Heejin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.2
    • /
    • pp.179-185
    • /
    • 2014
  • In this paper, a quadruped walking pet robot for human-robot interaction, a robot controller implemented as a smartphone application, and a home smart-control system using sensor information provided by the robot are described. The robot has 20 degrees of freedom and carries various sensors, including a Kinect sensor, an infrared sensor, a 3-axis motion sensor, a temperature/humidity sensor, a gas sensor, and a graphic LCD module. We propose algorithms for robot entertainment: a walking algorithm for the robot, motion and voice recognition algorithms using the Kinect sensor, an emotional expression algorithm, a smartphone application algorithm for remote control of the robot, and a home smart-control algorithm for controlling home appliances. The experiments in this paper show that the proposed algorithms, applied to the pet robot, smartphone, and computer, operate well.