• Title/Summary/Keyword: 감정 인식 (emotion recognition)


Development of Facial Image Based Emotion Recognition System (얼굴 영상 기반 감정 인식 시스템 개발)

  • Kim M. H.;Joo Y. H.;Park J. B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.04a / pp.433-436 / 2005
  • Although emotion recognition is a technology needed in many areas of society, it remains an unsolved problem because of the difficulty of the recognition process. Emotion recognition from facial images in particular has many possible applications, so the need for its development keeps growing. A system that recognizes emotion from facial images is a composite system employing a wide variety of techniques, and designing one therefore requires research into facial image analysis, feature vector extraction, and pattern recognition, among others. In this paper, we propose a new emotion recognition system built on previously studied facial image techniques. The proposed system recognizes emotion using a fuzzy-theory-based fuzzy classifier suited to emotion analysis. An evaluation database was constructed to assess the proposed system, and its performance was evaluated on it.

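The abstract names a fuzzy-theory-based classifier but gives no details. Below is a minimal, purely illustrative sketch of one such classifier, assuming hand-picked facial-geometry features (`mouth_open`, `brow_raise`) and a hypothetical triangular-membership rule base; none of these specifics come from the paper:

```python
# Minimal fuzzy-rule emotion classifier sketch (features, membership
# parameters, and rules are illustrative placeholders, not the paper's).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzify(features):
    # Hypothetical normalized facial-geometry measurements in [0, 1].
    mouth, brow = features["mouth_open"], features["brow_raise"]
    return {
        "mouth_low":  tri(mouth, -0.5, 0.0, 0.5),
        "mouth_high": tri(mouth,  0.5, 1.0, 1.5),
        "brow_low":   tri(brow,  -0.5, 0.0, 0.5),
        "brow_high":  tri(brow,   0.5, 1.0, 1.5),
    }

# Hypothetical rule base: antecedent memberships -> emotion.
RULES = [
    (("mouth_high", "brow_low"),  "happiness"),
    (("mouth_high", "brow_high"), "surprise"),
    (("mouth_low",  "brow_high"), "anger"),
    (("mouth_low",  "brow_low"),  "sadness"),
]

def classify(features):
    mu = fuzzify(features)
    # Rule strength = min of antecedents; winner-take-all defuzzification.
    scores = {emo: min(mu[a] for a in ants) for ants, emo in RULES}
    return max(scores, key=scores.get), scores

label, scores = classify({"mouth_open": 0.8, "brow_raise": 0.2})
print(label, scores)
```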

Emotional Human Body Recognition by Using Extraction of Human Body from Image (인간의 움직임 추출을 이용한 감정적인 행동 인식 시스템 개발)

  • Song, Min-Kook;Park, Jin-Bae;So, Je-Yoon;Joo, Young-Hoon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.11a / pp.348-351 / 2006
  • Although image-based emotion recognition is increasingly needed in many areas of society, it remains an unsolved problem because of the difficulty of the recognition process. Emotion recognition based on human movement has many possible applications, so the need for its development keeps growing. A system that recognizes emotion from images is a composite system employing a wide variety of techniques. In this paper, we propose a new emotion recognition system based on previously studied movement extraction methods. The proposed system recognizes emotion using a classifier identified through a hidden Markov model. An evaluation database was constructed to assess the proposed system, and the performance of the proposed emotion recognition system was verified on it.

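A common reading of a classifier "identified through a hidden Markov model" is one HMM per emotion with maximum-likelihood selection. Here is a minimal sketch under that assumption, using the hmmlearn package and synthetic stand-ins for per-frame movement features; model sizes and features are guesses, not the paper's:

```python
# One-HMM-per-emotion classification sketch (assumes hmmlearn; features
# and model sizes are illustrative, not taken from the paper).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "anger", "surprise"]

def synthetic_sequences(offset, n_seq=20, T=30, dim=4):
    """Stand-in for per-frame body-movement feature vectors."""
    return [offset + rng.normal(size=(T, dim)) for _ in range(n_seq)]

# Train one HMM per emotion on that emotion's sequences.
models = {}
for i, emo in enumerate(EMOTIONS):
    seqs = synthetic_sequences(offset=float(i))
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[emo] = m

def classify(seq):
    # Pick the emotion whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda e: models[e].score(seq))

test = synthetic_sequences(offset=2.0, n_seq=1)[0]  # drawn near "anger"
print(classify(test))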

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.150-155 / 2004
  • In this paper, an emotion recognition method using speech signals is presented. Six basic human emotions, including happiness, sadness, anger, surprise, fear, and dislike, are investigated. The proposed recognizer has a codebook, constructed using the wavelet transform, for each emotional state. Here, we first assess the emotional state at each filterbank and then obtain the final recognition from a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each of whom spoke a sentence three times for each of the six emotional states. The proposed method improved the recognition rate by more than 5% over previous works.
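
A hedged sketch of the general idea, per-emotion codebooks over wavelet subbands with a majority vote across bands, assuming PyWavelets and scikit-learn; the features, codebook sizes, and decision rule here are illustrative, not the paper's exact design:

```python
# Per-emotion wavelet codebooks with a majority-vote decision
# (assumes PyWavelets and scikit-learn; all parameters are illustrative).
import numpy as np
import pywt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "anger"]
N_LEVELS = 4  # wavelet decomposition levels standing in for "filterbanks"

def band_features(signal):
    # One small feature vector per subband: log energy and mean magnitude.
    coeffs = pywt.wavedec(signal, "db4", level=N_LEVELS)
    return [np.array([np.log(np.sum(c ** 2) + 1e-9), np.mean(np.abs(c))])
            for c in coeffs]

def train(utts_by_emotion):
    # One codebook per (emotion, subband), built by k-means.
    books = {}
    for emo, utts in utts_by_emotion.items():
        per_band = zip(*[band_features(u) for u in utts])
        books[emo] = [KMeans(n_clusters=2, n_init=10).fit(np.vstack(b))
                      for b in per_band]
    return books

def classify(signal, books):
    votes = []
    for b, feat in enumerate(band_features(signal)):
        # Per-band decision: the emotion with the nearest codeword.
        dist = {e: np.min(np.linalg.norm(books[e][b].cluster_centers_ - feat,
                                         axis=1)) for e in books}
        votes.append(min(dist, key=dist.get))
    return max(set(votes), key=votes.count)  # multi-decision: majority vote

# Toy signals standing in for emotional utterances.
data = {e: [rng.normal(scale=1.0 + i, size=2048) for _ in range(10)]
        for i, e in enumerate(EMOTIONS)}
books = train(data)
print(classify(rng.normal(scale=3.0, size=2048), books))
```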

Recognition of Emotional states in speech using combination of Unsupervised Learning with Supervised Learning (비감독 학습과 감독학습의 결합을 통한 음성 감정 인식)

  • Bae, Sang-Ho;Lee, Jang-Hoon;Kim, Hyun-jung;Won, Il-Young
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.391-394 / 2011
  • Automatically recognizing a user's emotion is an important element of providing user-centered services. Humans recognize a single emotion by dividing it into diverse sub-categories, but when recognizing emotion through machine learning, treating an emotion as a single value makes good performance hard to achieve. This paper therefore presents an emotion recognition model combining unsupervised and supervised learning. The core of the proposed model is to use unsupervised learning to divide one emotion into diverse sub-emotions, as humans do, and then to improve performance by applying supervised learning to the emotions classified this way.
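
A minimal sketch of this cluster-then-classify idea, assuming scikit-learn and synthetic features; the sub-emotion count `K` and the logistic-regression classifier are placeholders, since the abstract does not name the concrete algorithms:

```python
# Split each labeled emotion into sub-clusters (unsupervised), train on
# the sub-labels (supervised), then map predictions back to the parent
# emotion (assumes scikit-learn; features are synthetic stand-ins).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "anger"]
K = 2  # sub-emotions per emotion (illustrative)

# Toy feature vectors per emotion.
X_by_emo = {e: rng.normal(loc=3 * i, size=(60, 8))
            for i, e in enumerate(EMOTIONS)}

X, sub_y, sub_to_emo = [], [], {}
for e, Xe in X_by_emo.items():
    km = KMeans(n_clusters=K, n_init=10).fit(Xe)  # unsupervised sub-labels
    for k in range(K):
        sub_to_emo[f"{e}/{k}"] = e
    X.append(Xe)
    sub_y += [f"{e}/{k}" for k in km.labels_]
X = np.vstack(X)

clf = LogisticRegression(max_iter=1000).fit(X, sub_y)  # supervised step

def predict_emotion(x):
    # Classify into a sub-emotion, then report the parent emotion.
    return sub_to_emo[clf.predict(x.reshape(1, -1))[0]]

print(predict_emotion(rng.normal(loc=3.0, size=8)))  # near "sadness"
```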

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.12 no.6 / pp.83-94 / 2011
  • A main research issue in affective computing is to give a machine the ability to recognize a person's emotion and to react to it properly. Efforts in that direction have mainly focused on facial and oral cues for reading emotions; postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of posture features that play a role in affective expression. To do so, affective postures from human subjects are first collected using a motion capture system, and emotional features in posture are then described with spatial ones. Through standard statistical techniques, we verified that there is a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. Discriminant analysis is used to build affective posture predictive models and to measure the saliency of the proposed set of posture features in discriminating between six basic emotional states. The proposed features and models are evaluated using the correlation between the actor and observer posture sets. Quantitative experimental results show that the proposed set of features discriminates well between emotions and that the built predictive models perform well.
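
For concreteness, a small discriminant-analysis sketch over posture features, assuming scikit-learn; the ten "spatial" features here are invented stand-ins for the paper's motion-capture measurements:

```python
# Discriminant analysis over posture feature vectors for six emotions
# (assumes scikit-learn; data is synthetic, for illustration only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "disgust"]

# Hypothetical spatial posture features per sample (e.g., joint angles,
# limb extension, body openness), synthesized here.
X = np.vstack([rng.normal(loc=i, scale=1.5, size=(40, 10)) for i in range(6)])
y = np.repeat(EMOTIONS, 40)

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())

# Feature saliency can be read off the discriminant coefficient magnitudes.
lda.fit(X, y)
print("per-feature weight norms:", np.linalg.norm(lda.coef_, axis=0).round(2))
```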

Robust Speech Recognition using Vocal Tract Normalization for Emotional Variation (성도 정규화를 이용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo;Bang, Hyun-Jin
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.773-778 / 2009
  • This paper studied training methods less affected by emotional variation, for the development of a robust speech recognition system. For this purpose, the effect of emotional variation on the speech signal was studied using a speech database containing various emotions. The performance of a speech recognition system trained on speech containing no emotion deteriorates when the test speech contains emotion, because of the emotional mismatch between the test and training data. In this study, it is observed that the speaker's vocal tract length is affected by emotional variation, and that this effect is one of the reasons the performance of the speech recognition system degrades. In this paper, a vocal tract normalization method is used to develop a speech recognition system robust to emotional variation. Experimental results from isolated word recognition using HMMs showed that the vocal tract normalization method reduced the error rate of the conventional recognition system by 41.9% when emotional test data was used.
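
A bare-bones sketch of the frequency-warping step behind vocal tract normalization: real systems typically warp mel filterbank edges and pick the warp factor by maximum likelihood against the acoustic model, whereas this toy version rescales a magnitude spectrum and matches a reference by least squares:

```python
# Toy VTLN-style frequency warping (illustrative; not the paper's system).
import numpy as np

def warp_spectrum(mag, alpha):
    """Resample a magnitude spectrum onto a frequency axis scaled by alpha
    (alpha > 1 stretches the spectrum, shifting formants upward)."""
    f = np.arange(len(mag))
    return np.interp(f, f * alpha, mag)

def pick_alpha(mag, reference, grid=np.linspace(0.88, 1.12, 13)):
    # Stand-in for the usual per-speaker maximum-likelihood warp search:
    # choose the warp that best matches a reference spectrum.
    return min(grid, key=lambda a: np.sum((warp_spectrum(mag, a) - reference) ** 2))

# Demo: one "formant" at bin 80 vs. the same formant shifted ~10%,
# mimicking an emotion-induced vocal tract length change.
f = np.arange(257)
neutral = np.exp(-0.5 * ((f - 80) / 10.0) ** 2)
emotional = np.exp(-0.5 * ((f - 88) / 10.0) ** 2)

print("estimated warp factor:", pick_alpha(emotional, neutral))  # ~0.90
```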

Speech Emotion Recognition Framework on Smartphone Environment (스마트폰환경에서 음성기반 감정인식 프레임워크)

  • Bang, Jae Hun;Lee, Sungyoung;Jung, Taechung
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.254-256 / 2013
  • Existing speech-based emotion recognition techniques recognize emotion using hundreds of features on PCs with ample computing power, an approach that does not take into account the smartphone environment, where computing power is tightly constrained. This paper proposes a speech feature extraction technique that respects the smartphone's limited computing power, together with an efficient speech-based emotion recognition framework built on a server-client design.
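
A sketch of the server side of such a split, assuming Flask and scikit-learn; the endpoint, feature vector, and placeholder model are invented for illustration, as the abstract does not specify the protocol:

```python
# Server side of a client-server emotion recognizer: the phone extracts a
# compact feature vector locally and posts it; the heavier classifier runs
# here (assumes Flask and scikit-learn; everything below is illustrative).
import numpy as np
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Placeholder model fitted on synthetic features; a real deployment would
# load a classifier trained offline on labeled speech data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=3 * i, size=(50, 6)) for i in range(3)])
y = ["neutral"] * 50 + ["happiness"] * 50 + ["anger"] * 50
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/emotion", methods=["POST"])
def emotion():
    # The client sends a small feature vector (e.g., pitch and energy
    # statistics) rather than raw audio, keeping on-device cost low.
    feats = np.array(request.get_json()["features"]).reshape(1, -1)
    return jsonify({"emotion": str(model.predict(feats)[0])})

if __name__ == "__main__":
    app.run(port=5000)
```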

A Training Method for Emotionally Robust Speech Recognition using Frequency Warping (주파수 와핑을 이용한 감정에 강인한 음성 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.4 / pp.528-533 / 2010
  • This paper studied training methods less affected by emotional variation, for the development of a robust speech recognition system. For this purpose, the effects of emotional variation on the speech signal and the speech recognition system were studied using a speech database containing various emotions. The performance of a speech recognition system trained on speech containing no emotion deteriorates when the test speech contains emotion, because of the emotional mismatch between the test and training data. In this study, it is observed that the speaker's vocal tract length is affected by emotional variation, and that this effect is one of the reasons the performance of the speech recognition system degrades. In this paper, a training method that covers these speech variations is proposed to develop an emotionally robust speech recognition system. Experimental results from isolated word recognition using HMMs showed that the proposed method reduced the error rate of the conventional recognition system by 28.4% when emotional test data was used.
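
A toy sketch of warp-based multi-condition training: neutral templates are stored at several warp factors so the recognizer also covers emotion-induced spectral shifts. The paper trains HMMs; a nearest-template stand-in is used here purely for illustration:

```python
# Multi-condition training via frequency warping (illustrative stand-in
# for the paper's HMM training; all signals and parameters are toy data).
import numpy as np

def warp(mag, alpha):
    # Linearly rescale the frequency axis (alpha > 1 shifts content upward).
    f = np.arange(len(mag))
    return np.interp(f, f * alpha, mag)

f = np.arange(128)
def spectrum(center):
    # Toy one-formant magnitude spectrum.
    return np.exp(-0.5 * ((f - center) / 8.0) ** 2)

WORDS = {"word_a": 40, "word_b": 80}   # formant position per word
ALPHAS = [0.94, 1.0, 1.06]             # warps covering expected variation

# Multi-condition training set: every word template stored at several warps.
templates = {(w, a): warp(spectrum(c), a)
             for w, c in WORDS.items() for a in ALPHAS}

def recognize(x):
    # Nearest-template decision; the word label ignores the warp used.
    key = min(templates, key=lambda k: np.sum((templates[k] - x) ** 2))
    return key[0]

# An "emotional" token of word_a with its formant shifted ~6% upward.
print(recognize(spectrum(40 * 1.06)))  # -> word_a
```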

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.18B no.5 / pp.305-314 / 2011
  • Research into the visual perception of human emotion for recognizing intention has traditionally focused on emotions in facial expressions. Researchers have recently turned to the more challenging field of emotional expression through body posture or activity. The proposed work approaches recognition of basic emotional categories from body postures using a neural model that applies the visual perception findings of neurophysiology. In keeping with information-processing models of the visual cortex, this work constructs a biologically plausible hierarchy of neural detectors that can discriminate six basic emotional states from static views of the associated body postures. The proposed model, which is tolerant to parameter variations, demonstrates its potential in an evaluation against human test subjects on a set of body postures of activities.
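
A toy forward pass in the spirit of the described detector hierarchy, with Gaussian-tuned units over joint measurements, pooling over body parts, and a linear readout; all weights and dimensions are random placeholders, not the paper's model:

```python
# Toy "hierarchy of tuned detectors" forward pass (shapes and flow only;
# every weight here is a random placeholder).
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "disgust"]

def gaussian_tuning(x, centers, width=0.5):
    """Each unit fires most when its preferred local feature value appears."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

posture = rng.uniform(-1, 1, size=12)   # e.g., 12 joint angles (toy input)
centers = np.linspace(-1, 1, 8)         # preferred values of 8 detectors

s1 = gaussian_tuning(posture, centers)  # (12 joints x 8 detectors)
c1 = s1.max(axis=0)                     # pool over body parts -> (8,)
W = rng.normal(size=(len(EMOTIONS), 8)) # placeholder readout weights
scores = W @ c1
print(EMOTIONS[int(np.argmax(scores))])
```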

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services / v.13 no.5 / pp.9-19 / 2012
  • A virtual human used for HCI in digital contents expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal modalities in emotion perception. To implement an emotional virtual human, computational engine models have to consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodality on the design of emotion-expressing virtual humans. First, the relative impacts of different modalities are analyzed by exploring emotion recognition across modalities for a virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to recognizing basic emotion categories, as well as the valence and activation dimensions. Measurements are also carried out on the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence of the virtual human's facial and postural expression facilitates perception of emotion categories; that categorical recognition is influenced by the facial expression modality; and that the postural modality is preferred for judging the level of the activation dimension. These results will be used in the implementation of an animation engine system and behavior synchronization for emotion-expressing virtual humans.