• Title/Summary/Keyword: Emotion Recognition (감정 인식)

Emotion Recognition using Pitch Parameters of Speech (음성의 피치 파라메터를 사용한 감정 인식)

  • Lee, Guehyun; Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.3, pp.272-278, 2015
  • This paper studied various parameter extraction methods using pitch information of speech for the development of an emotion recognition system. For this purpose, pitch parameters were extracted from a Korean speech database containing various emotions, using statistical information and numerical analysis techniques. A GMM-based emotion recognition system was used to compare the performance of the pitch parameters, and a sequential feature selection method was used to select the parameters showing the best emotion recognition performance. Experiments on recognizing four emotions achieved a 63.5% recognition rate using a combination of 15 of the 56 pitch parameters. Experiments on detecting the presence of emotion achieved an 80.3% recognition rate using a combination of 14 parameters.
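
The combination of per-class GMM scoring and greedy feature selection can be sketched as below. This is an illustrative reconstruction in scikit-learn, not the authors' code; the feature matrix `X` of pitch parameters per utterance and the labels `y` are assumed inputs.

```python
# Minimal sketch: one GMM per emotion class, plus sequential forward selection.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_classify(X_train, y_train, X_test, n_components=4):
    """Fit one GMM per emotion class; predict by maximum log-likelihood."""
    classes = np.unique(y_train)
    models = {c: GaussianMixture(n_components=n_components,
                                 covariance_type='diag',
                                 random_state=0).fit(X_train[y_train == c])
              for c in classes}
    scores = np.stack([models[c].score_samples(X_test) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]

def sequential_forward_selection(X_tr, y_tr, X_va, y_va, max_feats=15):
    """Greedily add the pitch parameter that most improves validation accuracy."""
    selected, remaining, best_acc = [], list(range(X_tr.shape[1])), 0.0
    while remaining and len(selected) < max_feats:
        accs = []
        for f in remaining:
            cols = selected + [f]
            pred = gmm_classify(X_tr[:, cols], y_tr, X_va[:, cols])
            accs.append(np.mean(pred == y_va))
        best = int(np.argmax(accs))
        if accs[best] <= best_acc:   # stop when no candidate improves accuracy
            break
        best_acc = accs[best]
        selected.append(remaining.pop(best))
    return selected, best_acc
```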

Emotion Recognition and Expression using Facial Expression (얼굴표정을 이용한 감정인식 및 표현 기법)

  • Ju, Jong-Tae; Park, Gyeong-Jin; Go, Gwang-Eun; Yang, Hyeon-Chang; Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2007.04a, pp.295-298, 2007
  • In this paper, features for four basic emotions (joy, sadness, anger, surprise) are extracted and recognized from human facial expressions, and the results are used to implement an emotion expression system. First, Principal Component Analysis (PCA) transforms the high-dimensional image feature data into low-dimensional feature data; Linear Discriminant Analysis (LDA) is then applied to extract more discriminative feature vectors. The emotion is recognized from these vectors, and the recognition result is passed to a facial expression system to express the emotion.
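
The PCA-then-LDA feature extraction described above can be sketched with scikit-learn as follows. The classifier stage and the component counts are assumptions, since the abstract does not specify them.

```python
# Sketch of the PCA -> LDA pipeline on flattened face images X with labels y.
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

pipeline = Pipeline([
    ('pca', PCA(n_components=50)),                        # high-dim image -> low-dim features
    ('lda', LinearDiscriminantAnalysis(n_components=3)),  # at most n_classes - 1 = 3 axes
    ('clf', KNeighborsClassifier(n_neighbors=3)),         # assumed classifier stage
])
# pipeline.fit(X_train, y_train); pipeline.predict(X_test)
```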

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi; Myung Jin Lim; Ju Hyun Shin
    • Smart Media Journal, v.12 no.9, pp.81-88, 2023
  • Recently, online communication has increased with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images, and research on multimodal emotion recognition that combines these modalities is currently very active. Among them, emotion recognition from speech data is attracting attention as a means of understanding emotions through both acoustic and linguistic information, but in most cases emotions are recognized from a single speech feature value. Because a variety of emotions coexist in a conversation, however, a method for recognizing multiple emotions is needed. In this paper, we therefore propose a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and accounts for the passage of time in order to recognize complex, inherent emotions.
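
A hedged sketch of the general idea, not the paper's model: fixed-length speech features feed a regressor that outputs one continuous score per emotion. The MFCC features and the gradient-boosting backbone are illustrative assumptions.

```python
# Multi-emotion regression sketch: one continuous intensity per emotion.
import numpy as np
import librosa
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import GradientBoostingRegressor

def utterance_features(path, sr=16000):
    """Mean/std MFCC vector as a fixed-length utterance representation."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Y has one continuous column per emotion (e.g., joy, sadness, anger, ...).
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
# model.fit(np.stack([utterance_features(p) for p in paths]), Y)
```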

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan; Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea, v.40 no.5, pp.515-522, 2021
  • It is hard to prepare sufficient training data for speech emotion recognition due to the difficulty of emotion labeling. In this paper, we apply transfer learning from a transformer-based model trained on large-scale speech recognition data to improve the performance of speech emotion recognition. In addition, we propose a method to utilize context information without decoding, via multi-task learning with speech recognition. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, which shows that the proposed method is effective in improving speech emotion recognition performance.
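
The overall recipe can be sketched as below, assuming a wav2vec2-style encoder pre-trained on speech recognition from Hugging Face `transformers`. The specific checkpoint, head designs, and loss weighting are assumptions, not the paper's exact setup.

```python
# Sketch: transfer learning from an ASR-pretrained transformer encoder, with
# an emotion head (main task) and a CTC head (auxiliary ASR task).
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class MultiTaskSER(nn.Module):
    def __init__(self, n_emotions=4, vocab_size=32,
                 pretrained="facebook/wav2vec2-base-960h"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(pretrained)  # transfer learning
        hidden = self.encoder.config.hidden_size
        self.emotion_head = nn.Linear(hidden, n_emotions)  # main task
        self.ctc_head = nn.Linear(hidden, vocab_size)      # auxiliary ASR task

    def forward(self, input_values):
        h = self.encoder(input_values).last_hidden_state   # (B, T, H)
        emotion_logits = self.emotion_head(h.mean(dim=1))  # utterance-level pooling
        ctc_logits = self.ctc_head(h)                      # frame-level, for CTC loss
        return emotion_logits, ctc_logits

# Training would combine, e.g., cross-entropy(emotion) + lambda * CTC(asr).
```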

Emotion Recognition System based on Upper Body Tracking (상반신 추적 기술 기반 감정 인식 시스템)

  • Oh, Jihun; Yu, Sunjin; Lee, Minkyu; Lim, Wootaek; Ahn, ChungHyun; Lee, Sangyoun
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2013.06a, pp.284-285, 2013
  • Color and depth images are captured with a Kinect, and the person and the person's skeleton are detected. Once the skeleton is detected, a face region of interest is built around the head position, enabling user identification through efficient face detection. Through skeleton detection and tracking, gestures were defined for four emotions, and we tested whether the defined emotion is recognized when the corresponding gesture is performed. In the experiments, the success rate of gesture-based emotion recognition was 86-88%, and this gesture recognition needs to be fused with other emotion recognition methods.
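
Since the paper defines its own gestures, the following is only a hypothetical illustration of mapping skeleton joint positions to emotion gestures; every joint name and threshold here is an invented assumption.

```python
# Hypothetical gesture rules over Kinect-style skeleton joints.
def classify_gesture(joints):
    """joints: dict of joint name -> (x, y, z), with y increasing upward."""
    head_y = joints["head"][1]
    if joints["hand_left"][1] > head_y and joints["hand_right"][1] > head_y:
        return "joy"        # both hands raised above the head
    if (joints["hand_left"][1] < joints["hip_center"][1]
            and joints["hand_right"][1] < joints["hip_center"][1]):
        return "sadness"    # both hands dropped below the hips
    if abs(joints["hand_right"][0] - joints["shoulder_right"][0]) > 0.5:
        return "anger"      # arm thrust far out to the side (illustrative)
    return "surprise"       # fallback for the remaining defined gesture
```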

Emotional States Recognition of Text Data Using Hidden Markov Models (HMM을 이용한 채팅 텍스트로부터의 화자 감정상태 분석)

  • 문현구; 장병탁
    • Proceedings of the Korean Information Science Society Conference, 2001.10b, pp.127-129, 2001
  • We propose an emotion recognition system that analyzes an input sentence and outputs the transitions of its emotional states according to predefined categories. Unlike our previous approach, which used a Naive Bayes algorithm, the new system uses a Hidden Markov Model (HMM). An HMM is well suited to recovering the transitions of the hidden states behind phenomena generated from a particular distribution, so under the assumption that a single sentence can express several emotions, it is a natural algorithm for emotion recognition. This paper outlines the HMM-based emotion recognition system and shows improved experimental results compared with the previous version.
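
The core idea, decoding a sequence of hidden emotion states from observed text features with an HMM, can be illustrated with a small Viterbi decoder. The states, observation symbols, and all probabilities below are toy assumptions, not the paper's parameters.

```python
# Toy HMM: hidden emotion states emit observed word categories; Viterbi
# decoding recovers the most likely emotion-state transitions.
import numpy as np

states = ["neutral", "joy", "anger"]
obs_symbols = {"greeting": 0, "praise": 1, "insult": 2}

start = np.log([0.6, 0.2, 0.2])
trans = np.log([[0.7, 0.2, 0.1],   # rows: from-state, cols: to-state
                [0.3, 0.6, 0.1],
                [0.3, 0.1, 0.6]])
emit = np.log([[0.6, 0.2, 0.2],    # rows: state, cols: observation symbol
               [0.2, 0.7, 0.1],
               [0.1, 0.1, 0.8]])

def viterbi(obs):
    v = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans          # score[from, to]
        back.append(scores.argmax(axis=0))   # best predecessor for each state
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for b in reversed(back):                 # backtrack to the start
        path.append(int(b[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([obs_symbols[w] for w in ["greeting", "insult", "insult"]]))
```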

Design for Mood-Matched Music Based on Deep Learning Emotion Recognition (딥러닝 감정 인식 기반 배경음악 매칭 설계)

  • Chung, Moonsik; Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference, 2021.11a, pp.834-836, 2021
  • We design a system that accurately classifies human emotions through multimodal emotion recognition and matches music to the recognized emotion. For multimodal emotion recognition, emotions are classified using the IEMOCAP (Interactive Emotional Dyadic Motion Capture) dataset, and a system is built that matches music to the mood of the classified emotion. Through a system that improves accuracy over unimodal emotion recognition, we study music matching suited to the emotional mood of videos containing text, speech, and facial expressions.
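
The design can be illustrated with a late-fusion sketch followed by a mood-to-music lookup; the emotion set, fusion weights, and playlist names are all assumptions for illustration only.

```python
# Late fusion of per-modality emotion probabilities, then music matching.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]
MOOD_PLAYLIST = {"happy": "upbeat_pop", "sad": "calm_piano",
                 "angry": "slow_ambient", "neutral": "lofi_mix"}

def fuse(p_text, p_audio, p_face, weights=(0.4, 0.3, 0.3)):
    """Weighted late fusion of text, speech, and facial-expression predictions."""
    p = sum(w * np.asarray(probs)
            for w, probs in zip(weights, (p_text, p_audio, p_face)))
    return EMOTIONS[int(np.argmax(p))]

emotion = fuse([0.7, 0.1, 0.1, 0.1], [0.5, 0.2, 0.2, 0.1], [0.6, 0.2, 0.1, 0.1])
print(emotion, "->", MOOD_PLAYLIST[emotion])
```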

Emotion Recognition of Korean and Japanese using Facial Images (얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교)

  • Lee, Dae-Jong; Ahn, Ui-Sook; Park, Jang-Hwan; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.2, pp.197-203, 2005
  • In this paper, we propose emotion recognition from facial images to effectively design human interfaces. The facial database consists of six basic human emotions, happiness, sadness, anger, surprise, fear, and dislike, which are known to be common emotions regardless of nation and culture. Emotion recognition is performed on the facial images after applying the discrete wavelet transform, and feature vectors are then extracted with PCA and LDA. Experimental results show that happiness, sadness, and anger are recognized better than surprise, fear, and dislike. In particular, Japanese subjects show lower performance for the dislike emotion. In general, recognition rates for Korean subjects are higher than for Japanese subjects.
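
A minimal sketch of the described pipeline using PyWavelets and scikit-learn is shown below; the wavelet choice ('haar'), the use of the approximation subband, and the component counts are assumptions.

```python
# Sketch: 2-D DWT on each face image, then PCA and LDA on the LL subband.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(img):
    """Keep the approximation (LL) subband of a single-level 2-D DWT."""
    ll, (lh, hl, hh) = pywt.dwt2(img, 'haar')
    return ll.ravel()

def extract_features(X_imgs, y):
    """X_imgs: 2-D grayscale face arrays; y: one of the six basic emotions."""
    X = np.stack([wavelet_features(im) for im in X_imgs])
    X = PCA(n_components=50).fit_transform(X)
    return LinearDiscriminantAnalysis(n_components=5).fit_transform(X, y)  # 6 classes -> <= 5 axes
```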

Multi-Dimensional Emotion Recognition Model of Counseling Chatbot (상담 챗봇의 다차원 감정 인식 모델)

  • Lim, Myung Jin; Yi, Moung Ho; Shin, Ju Hyun
    • Smart Media Journal, v.10 no.4, pp.21-27, 2021
  • Recently, the importance of counseling has been growing due to the "Corona Blue" caused by COVID-19, and with the increase of non-face-to-face services, research on chatbots as a new counseling medium is being actively conducted. In non-face-to-face counseling through a chatbot, accurately understanding the client's emotions is most important. However, since there is a limit to recognizing emotions only from the sentences the client writes, the dimensional emotions embedded in those sentences must be recognized for more accurate emotion recognition. Therefore, in this paper, we propose a multi-dimensional emotion recognition model: after correcting the original data according to its characteristics, word vectors obtained by training a Word2Vec model and sentence-level VAD (Valence, Arousal, Dominance) values are learned with a deep learning algorithm. Comparing three deep learning models to verify the usefulness of the proposed model, the attention model performed best with an R-squared of 0.8484.
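
One plausible shape for such a model, attention pooling over Word2Vec vectors followed by a three-output regressor, is sketched below in PyTorch; the architecture details are assumptions, not the paper's exact model.

```python
# Attention-weighted pooling over word vectors -> sentence VAD regression.
import torch
import torch.nn as nn

class AttentionVADRegressor(nn.Module):
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.attn = nn.Linear(emb_dim, 1)      # scores each word vector
        self.regressor = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))              # -> (valence, arousal, dominance)

    def forward(self, word_vecs):              # (B, T, emb_dim) Word2Vec inputs
        weights = torch.softmax(self.attn(word_vecs), dim=1)  # (B, T, 1)
        sentence = (weights * word_vecs).sum(dim=1)           # attention pooling
        return self.regressor(sentence)

model = AttentionVADRegressor()
vad = model(torch.randn(2, 10, 300))  # two sentences of ten words each
```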

Real-time emotion analysis service with big data-based user face recognition (빅데이터 기반 사용자 얼굴인식을 통한 실시간 감성분석 서비스)

  • Kim, Jung-Ah; Park, Roy C.; Hwang, Gi-Hyun
    • Journal of the Institute of Convergence Signal Processing, v.18 no.2, pp.49-54, 2017
  • In this paper, we use a face database to detect human emotion in real time. Although human emotions are defined universally, actual emotional perception depends on the subjective judgment of the observer, so judging human emotions with computer image processing technology requires sophisticated techniques. To recognize an emotion, the human face must first be detected accurately, and the emotion then recognized from the detected face. In this paper, faces are detected and matched against the Cohn-Kanade Database, one of the standard face databases.
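
The face-detection step can be sketched with OpenCV's Haar cascade as below; the subsequent matching against Cohn-Kanade reference data is only stubbed, since the abstract does not detail it.

```python
# Face detection sketch: crop face regions for later database matching.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def detect_faces(frame):
    """Return grayscale face crops found in a BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [gray[y:y + h, x:x + w]
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5)]
```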
