• Title/Summary/Keyword: Emotion Recognition and Expression

Dynamic Emotion Classification through Facial Recognition (얼굴 인식을 통한 동적 감정 분류)

  • 한우리;이용환;박제호;김영섭
    • Journal of the Semiconductor & Display Technology, Vol. 12, No. 3, pp. 53-57, 2013
  • Human emotions are expressed in various ways: through language, facial expressions, and gestures. The facial expression in particular carries a great deal of information about human emotion. Such vague emotions appear not as a single emotion but as a combination of several. This paper proposes an emotional-expression algorithm, based on the Active Appearance Model (AAM) and a fuzzy k-Nearest Neighbor (fuzzy k-NN) classifier, that labels facial expressions in a way consistent with such vague human emotion. Applying the Mahalanobis distance to the class centers, the method determines the degree of inclusion between a sample and each class, and this inclusion level expresses the intensity of each emotion. The resulting system can recognize complex, mixed emotions using the fuzzy k-NN classifier.
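
As a rough illustration of the classification step described above, here is a minimal Python sketch that computes soft class memberships from Mahalanobis distances to class centers, in the spirit of a fuzzy k-NN with inclusion levels; the 2-D features, class statistics, and fuzzifier value are hypothetical stand-ins for the paper's AAM-derived features.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance from sample x to one class center."""
    d = x - mean
    return np.sqrt(d @ cov_inv @ d)

def fuzzy_memberships(x, class_means, class_cov_invs, m=2.0):
    """Soft membership of x in each emotion class, inversely weighted
    by Mahalanobis distance (fuzzifier m > 1, as in fuzzy c-means)."""
    dists = np.array([mahalanobis(x, mu, ci)
                      for mu, ci in zip(class_means, class_cov_invs)])
    inv = 1.0 / np.maximum(dists, 1e-9) ** (2.0 / (m - 1.0))
    return inv / inv.sum()  # memberships sum to 1 and act as emotion intensities

# Hypothetical 2-D feature clusters for three emotion classes.
rng = np.random.default_rng(0)
classes = [rng.normal(loc, 0.5, size=(50, 2)) for loc in ([0, 0], [3, 0], [0, 3])]
means = [c.mean(axis=0) for c in classes]
cov_invs = [np.linalg.inv(np.cov(c.T)) for c in classes]
print(fuzzy_memberships(np.array([1.0, 0.5]), means, cov_invs))
```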

Emotion Recognition of Facial Expression using the Hybrid Feature Extraction (혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식)

  • 변광섭;박창현;심귀보
    • Proceedings of the Korean Institute of Electrical Engineers (KIEE) 2004 Symposium, Information and Control Section, pp. 132-134, 2004
  • Humans recognize each other's emotions compositely, from various cues such as the face, voice, and gestures. Among these, the face reveals emotional expression most clearly, and humans express and recognize emotion through complex and varied facial features. This paper proposes a hybrid feature extraction method for recognizing emotions from facial expressions. The hybrid extraction imitates the human emotion-recognition system by combining geometry-based feature extraction with a color-distribution histogram; by extracting many features of a facial expression, it performs emotion recognition robustly.
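
For illustration, a minimal sketch of what such a hybrid feature vector could look like, concatenating geometric landmark distances with a color-distribution histogram; the landmark set, histogram binning, and normalization are assumptions rather than the paper's exact design.

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise distances between facial landmarks (shape (k, 2)),
    e.g. eye corners, brow points, and mouth corners."""
    feats = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            feats.append(np.linalg.norm(landmarks[i] - landmarks[j]))
    return np.array(feats)

def color_histogram(face_patch, bins=16):
    """Normalized per-channel intensity histogram of a face region
    (face_patch: H x W x 3 array of 8-bit color values)."""
    hists = [np.histogram(face_patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(face_patch.shape[-1])]
    hists = np.concatenate(hists).astype(float)
    return hists / hists.sum()

def hybrid_features(landmarks, face_patch):
    """Concatenate geometric and color-distribution features."""
    return np.concatenate([geometric_features(landmarks),
                           color_histogram(face_patch)])
```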

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • 이용환;김영섭
    • Journal of the Semiconductor & Display Technology, Vol. 16, No. 4, pp. 79-85, 2017
  • Understanding human emotion is highly important in the interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize human emotion is facial-expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detecting facial areas with a skin-color model and feature maps, (2) fitting Bezier curves on the eye map and mouth map, and (3) classifying the emotion by the Hausdorff distance between characteristic curves. To estimate the performance of the implemented system, we evaluated the success ratio on an emotional face-image database commonly used in the field of facial analysis. The experiments show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.
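
The curve-matching step can be illustrated with a short sketch: sample a cubic Bezier curve fitted to an eye or mouth region, then label the expression by the nearest emotion template under the Hausdorff distance. The control points and templates here are hypothetical.

```python
import numpy as np

def bezier_points(ctrl, n=50):
    """Sample a cubic Bezier curve given 4 control points (shape (4, 2))."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = ctrl
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def classify_emotion(curve_pts, templates):
    """Pick the emotion whose template curve is nearest in Hausdorff
    distance (templates: dict mapping emotion name -> point set)."""
    return min(templates, key=lambda e: hausdorff(curve_pts, templates[e]))
```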

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • 이동훈;심귀보
    • Journal of Institute of Control, Robotics and Systems, Vol. 13, No. 4, pp. 371-376, 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from a facial image, using the motion and position of many facial features. A tracking algorithm recognizes a moving user from the mobile robot, and a facial-region detection algorithm removes the skin color of the hands and the background outside the facial region from the user image. After normalization, which enlarges or reduces the image according to the distance of the detected facial region and rotates it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm then enables the robot to recognize the user's emotion. We use a multilayer perceptron, a form of artificial neural network (ANN), as the pattern recognizer, and the backpropagation (BP) algorithm for learning. The emotion the robot recognizes is expressed on a graphic LCD: two coordinates change according to the emotion output of the ANN, and the parameters of the avatar's facial elements (eyes, eyebrows, mouth) change with those coordinates. Through this system, the robot expresses complex human emotion as an avatar on the LCD.
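
A compact sketch of the recognizer described above: a multilayer perceptron trained with backpropagation on facial-feature vectors. The input size, hidden size, and training data are hypothetical, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes: 20 facial-feature values in, 4 emotion classes out.
W1, b1 = rng.normal(0, 0.1, (20, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 4)), np.zeros(4)

def forward(X):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)  # softmax over emotions

def train_step(X, y_onehot, lr=0.1):
    """One backpropagation step with cross-entropy loss."""
    global W1, b1, W2, b2
    h, p = forward(X)
    dz = (p - y_onehot) / len(X)                # gradient at the output
    dW2, db2 = h.T @ dz, dz.sum(axis=0)
    dh = (dz @ W2.T) * (1 - h ** 2)             # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Hypothetical batch: 8 samples, labels one-hot over 4 emotions.
X = rng.normal(size=(8, 20))
y = np.eye(4)[rng.integers(0, 4, 8)]
for _ in range(100):
    train_step(X, y)
```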

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • 김경태;최재영
    • Journal of Korea Multimedia Society, Vol. 20, No. 8, pp. 1175-1186, 2017
  • Human emotion recognition is one of the promising applications in the era of artificial super intelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the so-called weighted soft voting classification (WSVC) algorithm. In WSVC, a number of classifiers are first constructed using different and multiple feature representations. Next, these classifiers generate a recognition result (a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, the soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of the given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". The proposed WSVC was tested in extensive comparative experiments on the CK+ FER database, and its feasibility was demonstrated by comparison with recently developed FER algorithms.
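
A minimal sketch of the combination step under stated assumptions: per-frame soft votes are fused with weights built from the two quality cues the abstract names. Treating the frame weight as the product of intensity and frontalness is an assumption, not the paper's exact formula.

```python
import numpy as np

def weighted_soft_voting(frame_probs, intensities, frontalness):
    """Combine per-frame class probabilities for one face sequence.

    frame_probs: (T, C) soft-voting outputs, one row per frame
    intensities: (T,) estimated peak-expression intensity per frame
    frontalness: (T,) estimated frontal-pose degree per frame
    """
    quality = np.asarray(intensities) * np.asarray(frontalness)
    w = quality / quality.sum()                 # normalized frame weights
    fused = (w[:, None] * np.asarray(frame_probs)).sum(axis=0)
    return fused.argmax(), fused                # predicted class, fused scores

# Hypothetical 3-frame sequence over 4 emotion classes.
probs = [[0.1, 0.6, 0.2, 0.1],
         [0.2, 0.5, 0.2, 0.1],
         [0.4, 0.3, 0.2, 0.1]]
print(weighted_soft_voting(probs, [0.9, 0.7, 0.2], [1.0, 0.8, 0.5]))
```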

Children's Emotion Recognition, Emotion Expression, and Social Interactions According to Attachment Styles (애착 유형에 따른 아동의 정서인식, 정서표현 및 상호작용)

  • 최은실
    • Korean Journal of Child Studies, Vol. 33, No. 2, pp. 55-68, 2012
  • The goals of this study were to examine how children's recognition of various emotions, emotion expression, and social interactions among their peers differed according to their attachment styles. A total of 65 three- to five-year-old children completed both attachment story-stem doll plays and a standard emotion recognition task. Trained observers documented the valence of the children's emotion expression and their social interactions among peers in the classroom. Consistent with attachment theory, children categorized as secure in the doll play were more likely to express positive emotions than children categorized as avoidant, and children categorized as avoidant were more likely to express neutral emotions among their peers than children categorized as secure or anxious. The findings contribute to the attachment literature by documenting how attachment security plays a crucial role in expressing positive emotions in ordinary situations, and by demonstrating how different attachment styles are associated with qualitatively different patterns of emotion processing, especially in the expression of emotions.

A study on the enhancement of emotion recognition through facial expression detection in user's tendency (사용자의 성향 기반의 얼굴 표정을 통한 감정 인식률 향상을 위한 연구)

  • 이종식;신동희
    • Science of Emotion and Sensibility, Vol. 17, No. 1, pp. 53-62, 2014
  • Although technology for recognizing human emotion has many application areas, the difficulty of emotion recognition has left it an unsolved problem. Human emotion can largely be recognized from video and from speech, and much research is under way on image-based methods, speech-based methods, and methods that use both. In particular, emotion recognition from facial images, the most universal way human emotion is expressed, is being studied actively. Until now, however, many differences and errors have arisen depending on the user's environment and the user's adaptation. To raise the emotion recognition rate, this paper proposes a mechanism that understands and analyzes the user's inner tendencies and uses them to improve the accuracy of emotion recognition. By analyzing these inner tendencies and applying them to the emotion recognition system, the approach reduces errors in recognizing emotion from facial expressions and improves the recognition rate, offering a particular improvement for users whose facial expressions are weak or who are sparing in expressing their emotions.

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea, Vol. 31, No. 3, pp. 427-435, 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no previous study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period lasting 1 min and an emotion period lasting 1-3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial-expression features, and the forehead, nose, and cheeks as emotional-state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% across the four emotions when using both facial-expression and emotional-state features. The accuracy decreased slightly but significantly to 56.7% when using only facial-expression features, and was 40.2% when using only emotional-state features. Conclusion: Facial-expression features are essential for emotion recognition, but emotional-state features are also important for classifying the emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
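
For illustration, a short sketch of the linear-discriminant-analysis step on region-wise temperature-difference features, using scikit-learn; the data below are randomly generated placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder data: temperature change (emotion minus baseline) in six
# facial regions (eyes, mouth, glabella, forehead, nose, cheeks) for
# 231 participants, each labeled with one of four induced emotions.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 0.3, size=(231, 6))   # delta-T features, degrees C
y = rng.integers(0, 4, size=231)          # anger, fear, boredom, neutral

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("Mean classification accuracy:", scores.mean())
```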

Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • 이용환;김흥준
    • Journal of the Semiconductor & Display Technology, Vol. 18, No. 1, pp. 97-101, 2019
  • Understanding and classifying human emotion are important tasks in interaction with human-machine communication systems. This paper proposes a novel emotion recognition method that extracts facial keypoints with an Active Appearance Model and classifies them with a proposed classification model of the facial features, enabling it to understand and classify human emotion. The appearance model yields an expression of the variations, which the proposed classification model evaluates according to changes in the human facial expression. The method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we assessed the success ratio on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs emotion recognition effectively compared with existing schemes.
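
A minimal sketch of one way the keypoint-variation features could feed a simple classifier; the displacement normalization and the nearest-prototype rule are assumptions, since the abstract does not fully specify the classification model.

```python
import numpy as np

def expression_variation(keypoints, neutral):
    """Flatten per-keypoint displacement from the neutral face,
    normalized by inter-ocular distance. keypoints: (k, 2);
    assumes the first two rows are the eye centers."""
    scale = np.linalg.norm(neutral[0] - neutral[1])
    return ((keypoints - neutral) / scale).ravel()

def classify_emotion(variation, prototypes):
    """Nearest-prototype rule over the four emotions
    (prototypes: dict emotion name -> mean variation vector)."""
    return min(prototypes,
               key=lambda e: np.linalg.norm(variation - prototypes[e]))
```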

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems, Vol. 17, No. 4, pp. 754-771, 2021
  • In the task of continuous-dimension emotion recognition, the parts that highlight emotional expression are not the same in each mode, and different modes influence the estimated emotional state differently. Therefore, this paper studies the fusion of the two most important modes in emotion recognition, voice and visual expression, and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio signal and the video signal, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression features with the audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparison algorithms.
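
A toy sketch of the attention-based fusion step: each modality's features are projected into a shared space, scored, and mixed with softmax attention weights. The dimensions, single scoring vector, and tanh projections are assumptions; the paper's actual model uses an improved AlexNet for the visual branch and its own multimodal attention design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(audio_feat, visual_feat, Wa, Wv, w_score):
    """Fuse audio and visual feature vectors with attention weights:
    project each modality to a shared space, score it, and weight its
    contribution by the softmax over the two scores."""
    ha = np.tanh(Wa @ audio_feat)               # project audio features
    hv = np.tanh(Wv @ visual_feat)              # project visual features
    alpha = softmax(np.array([w_score @ ha, w_score @ hv]))
    return alpha[0] * ha + alpha[1] * hv        # attended joint representation

# Hypothetical dimensions: 40-d audio, 256-d visual, 64-d shared space.
rng = np.random.default_rng(2)
Wa, Wv = rng.normal(0, 0.1, (64, 40)), rng.normal(0, 0.1, (64, 256))
w_score = rng.normal(0, 0.1, 64)
fused = attention_fusion(rng.normal(size=40), rng.normal(size=256), Wa, Wv, w_score)
print(fused.shape)  # (64,)
```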