• Title/Summary/Keyword: Facial Emotion Expression

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun; Park, Soo-Jin; Han, Kwang-Hee; Ghim, Hei-Rhee; Cho, Kyung-Ja
    • Science of Emotion and Sensibility, v.10 no.1, pp.113-125, 2007
  • The experimental studies described in this paper investigate how the face, eye, and mouth areas of dynamic and static facial expressions affect emotion recognition. Using seven-second displays, Experiment 1 tested basic emotions and Experiment 2 tested complex emotions. The results of both experiments showed that dynamic facial expressions yield higher emotion recognition than static ones, and that for dynamic images the eye area supports higher recognition than the mouth area. These results suggest that dynamic properties should be considered in facial-expression research on both basic and complex emotions. However, the properties of each emotion must also be considered, because not every emotion benefited equally from dynamic presentation. Furthermore, the study indicates that the facial area that conveys an emotional state most accurately depends on the particular emotion.

Recognition and Generation of Facial Expression for Human-Robot Interaction (로봇과 인간의 상호작용을 위한 얼굴 표정 인식 및 얼굴 표정 생성 기법)

  • Jung Sung-Uk; Kim Do-Yoon; Chung Myung-Jin; Kim Do-Hyoung
    • Journal of Institute of Control, Robotics and Systems, v.12 no.3, pp.255-263, 2006
  • In the last decade, face analysis (e.g., face detection, face recognition, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial emotion mimicry system that can recognize human facial expressions and also reproduce the recognized expression. To recognize human facial expressions in real time, we propose a facial expression classification method performed by weak classifiers obtained from new rectangular feature types. In addition, we generate artificial facial expressions using the developed robotic system, whose design is based on biological observation. Finally, experimental results on facial expression recognition and generation demonstrate the validity of our robotic system.
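
A minimal sketch of the classification stage this abstract describes: rectangular (Haar-like) features computed from an integral image, fed to a boosted ensemble of decision stumps standing in for the paper's weak classifiers. The feature layout, window sizes, and synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier  # decision stumps are the default weak learner

def integral_image(img):
    # Summed-area table: any rectangle sum then costs four lookups.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Pixel sum over rows r0..r1-1 and columns c0..c1-1.
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

def haar_features(img, cell=8):
    # Two-rectangle features (left minus right, top minus bottom) on a grid.
    ii = integral_image(img.astype(np.float64))
    h, w = img.shape
    feats = []
    for r in range(0, h - cell, cell):
        for c in range(0, w - cell, cell):
            feats.append(rect_sum(ii, r, c, r + cell, c + cell // 2)
                         - rect_sum(ii, r, c + cell // 2, r + cell, c + cell))
            feats.append(rect_sum(ii, r, c, r + cell // 2, c + cell)
                         - rect_sum(ii, r + cell // 2, c, r + cell, c + cell))
    return np.array(feats)

# Hypothetical training set: 64x64 face crops with expression labels.
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = rng.integers(0, 4, 200)          # e.g. neutral/happy/angry/surprised

X = np.stack([haar_features(im) for im in images])
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:5]))
```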

Automatic facial expression generation system of vector graphic character by simple user interface (간단한 사용자 인터페이스에 의한 벡터 그래픽 캐릭터의 자동 표정 생성 시스템)

  • Park, Tae-Hee; Kim, Jae-Ho
    • Journal of Korea Multimedia Society, v.12 no.8, pp.1155-1163, 2009
  • This paper proposes an automatic facial expression generation system for vector graphic characters using a Gaussian process model. The proposed method extracts the main feature vectors from twenty-six facial expression data of a character, redefined based on Russell's model of internal emotional states. Using the SGPLVM, a Gaussian process latent variable model, we find low-dimensional feature data from the extracted high-dimensional feature vectors and learn a probability distribution function (PDF). The parameters of the PDF are estimated by maximizing the likelihood of the learned expression data and are then used to select desired facial expressions in a two-dimensional space in real time. Simulation results confirm that the proposed facial expression generation tool works on small facial expression datasets and can generate various facial expressions without prior knowledge of the relation between facial expressions and emotions.
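
A minimal GPLVM sketch of the core idea (not the paper's SGPLVM implementation): learn a 2-D latent space for a small set of expression feature vectors by maximizing the Gaussian process marginal likelihood. The data shapes, RBF kernel settings, and random initialization (PCA is more common in practice) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, D, Q = 26, 20, 2                  # 26 expressions, D-dim features, 2-D latent
Y = rng.standard_normal((N, D))      # placeholder for extracted feature vectors
Y -= Y.mean(axis=0)

def rbf(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    # RBF kernel matrix over latent points, plus noise on the diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2) + noise * np.eye(len(X))

def neg_log_lik(x_flat):
    # Negative GP marginal log-likelihood of Y given latent inputs X.
    X = x_flat.reshape(N, Q)
    K = rbf(X)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * D * logdet + 0.5 * np.sum(Y * np.linalg.solve(K, Y))

x0 = 0.1 * rng.standard_normal(N * Q)        # random latent initialization
res = minimize(neg_log_lik, x0, method="L-BFGS-B")
X_latent = res.x.reshape(N, Q)               # 2-D coordinates per expression
print(X_latent[:3])
```

In the paper's setting, a point picked on this learned 2-D space at run time would be mapped back through the GP to synthesize an expression; here the sketch stops at learning the latent layout.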

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.4, pp.562-567, 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction), much as it is in HCI (Human-Computer Interaction). From facial expressions, a system can produce reactions that correspond to the user's emotional state, and service agents such as intelligent robots can infer which services to offer. This article addresses expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are conveyed most strongly by the eyes and mouth; to recognize emotion from a facial image, we therefore need to extract features such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is a commonly used method for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of its initial model parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain reconstruction parameters for a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the image, and compute the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. After several iterations we obtain a model matched to the facial feature outline, and its features are finally fed to a Bayesian Network to recognize the facial emotion.
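
A minimal sketch of the final classification stage: binary Action Unit activations mapped to one of Ekman's six emotions. A naive Bayes model (the simplest Bayesian network, with the emotion node as parent of every AU node) stands in for the paper's network; the AU subset and the synthetic detections are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
AUS = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU15", "AU26"]   # illustrative subset

rng = np.random.default_rng(0)
X = rng.integers(0, 2, (300, len(AUS)))    # placeholder AAM-derived AU detections
y = rng.integers(0, len(EMOTIONS), 300)    # placeholder emotion labels

model = BernoulliNB().fit(X, y)

# P(emotion | observed AUs) for a face with AU6 + AU12 active (a smile pattern).
probe = np.array([[0, 0, 0, 1, 1, 0, 0]])
posterior = model.predict_proba(probe)[0]
print({e: round(p, 3) for e, p in zip(EMOTIONS, posterior)})
```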

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.16 no.10, pp.957-962, 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features with a feature extraction algorithm and then classify the emotional state with a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing non-rigid objects such as faces and facial expressions. A Bayesian Network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction combines the AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and interpret the temporal phases of facial expressions in image sequences. The recognition results can then drive biofeedback-based rehabilitation for the emotionally impaired.
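
A minimal sketch of temporal classification over feature sequences: one Gaussian HMM per emotion, chosen by maximum likelihood. An HMM is the simplest Dynamic Bayesian Network, so it stands in here for the paper's DBNs; the feature dimension, the three states (loosely, onset/apex/offset phases), and the synthetic sequences are all assumptions. Requires `pip install hmmlearn`.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "surprise"]
DIM = 8                                       # per-frame AAM/FACS feature size

models = {}
for i, emo in enumerate(EMOTIONS):
    # Placeholder training sequences, offset per emotion so they differ.
    seqs = [rng.standard_normal((30, DIM)) + i for _ in range(10)]
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(np.vstack(seqs), [len(s) for s in seqs])
    models[emo] = m

test_seq = rng.standard_normal((30, DIM)) + 1   # should score best as "sadness"
scores = {emo: m.score(test_seq) for emo, m in models.items()}
print(max(scores, key=scores.get))
```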

Facial Color Control based on Emotion-Color Theory (정서-색채 이론에 기반한 게임 캐릭터의 동적 얼굴 색 제어)

  • Park, Kyu-Ho; Kim, Tae-Yong
    • Journal of Korea Multimedia Society, v.12 no.8, pp.1128-1141, 2009
  • Graphical expressiveness is continuously improving, spurred by the astonishing growth of the game technology industry. Despite such improvements, users still demand a more natural gaming environment and truer reflections of human emotions. In real life, people can read a person's mood from facial color as well as facial expression; interactive facial color in game characters therefore provides a deeper level of reality. In this paper we propose a facial color adaptive technique that combines an emotional model based on human emotion theory, emotional color-expression patterns drawn from animation content, and an emotional reaction speed function based on human personality theory, as opposed to past methods that expressed emotion through blood flow, pulse, or skin temperature. Experiments show that facial color expression driven by the proposed adaptive technique, together with the animation-derived color patterns, is effective in conveying character emotions. Moreover, the proposed technique can be applied not only to 2D games but to 3D games as well.
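
A minimal sketch of the idea as described in the abstract: blend a character's facial color toward an emotion's target color at a rate set by a personality-dependent reaction-speed function. The color table, the exponential-smoothing update, and the personality factor are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical emotion -> target RGB mapping, in the spirit of animation conventions.
EMOTION_COLOR = {
    "neutral": np.array([224.0, 190.0, 170.0]),
    "anger":   np.array([230.0, 90.0, 80.0]),    # flushed red
    "fear":    np.array([200.0, 205.0, 220.0]),  # pale, bluish
    "joy":     np.array([245.0, 200.0, 185.0]),  # warm glow
}

def reaction_rate(personality, base=3.0):
    # Personality in [0, 1]: excitable characters shift color faster.
    return base * (0.25 + 0.75 * personality)

def step_face_color(current, emotion, personality, dt):
    # Exponential approach to the target color; rate depends on personality.
    target = EMOTION_COLOR[emotion]
    alpha = 1.0 - np.exp(-reaction_rate(personality) * dt)
    return current + alpha * (target - current)

color = EMOTION_COLOR["neutral"].copy()
for _ in range(60):                              # one second at 60 fps
    color = step_face_color(color, "anger", personality=0.8, dt=1 / 60)
print(color.round())                             # approaching the anger target
```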

Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo; Kim, Woo-Hyun; Lee, Won-Hyong; Chung, Myung-Jin
    • The Journal of Korea Robotics Society, v.4 no.4, pp.281-288, 2009
  • Many robots have now evolved to imitate human social skills, making sociable interaction with humans possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interaction is accompanied by emotion, much like human-human interaction, so robot emotional expression is very important for humans. This is particularly true of facial expressions, which play an important role among the non-verbal forms of communication. In this paper, we introduce a method for creating lifelike robot facial expressions by varying the affect values that constitute the robot's emotions within emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.
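
A minimal sketch of one reading of the abstract: keep a robot's face "alive" by letting its affect values drift randomly while staying inside the boundary of the current emotion in a 2-D (valence, arousal) affect space. The circular boundaries, drift scale, and update loop are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical emotion centers and boundary radii in (valence, arousal).
BOUNDARIES = {
    "happiness": (np.array([0.7, 0.5]), 0.20),
    "sadness":   (np.array([-0.6, -0.4]), 0.20),
    "surprise":  (np.array([0.2, 0.8]), 0.15),
}

def step_affect(affect, emotion, drift=0.05):
    # Random walk, projected back inside the emotion's boundary circle.
    center, radius = BOUNDARIES[emotion]
    affect = affect + rng.normal(0.0, drift, size=2)
    offset = affect - center
    dist = np.linalg.norm(offset)
    if dist > radius:
        affect = center + offset * (radius / dist)
    return affect

affect = BOUNDARIES["happiness"][0].copy()
for _ in range(100):                  # affect wanders but stays "happy"
    affect = step_affect(affect, "happiness")
print(affect)                         # drive the face actuators/simulator from this
```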

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice, v.7 no.2, pp.32-39, 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of this discriminating capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of the original expression image, which contains facial appearance only, the expression image with facial geometry visualization is used as input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented, aiming to utilize the geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that a CNN using facial geometry visualization clearly outperforms a conventional CNN using facial appearance only.
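
A minimal sketch of the input idea: overlay facial geometry (here, landmark points burned into a second channel) on the face image before feeding a small CNN, so appearance and geometry are learned together. The landmark source, the drawing style, and the network size are assumptions, not the paper's exact design.

```python
import numpy as np
import torch
import torch.nn as nn

def visualize_geometry(face, landmarks):
    # face: (H, W) grayscale in [0, 1]; landmarks: (N, 2) pixel coordinates.
    geo = np.zeros_like(face)
    for x, y in landmarks.astype(int):
        geo[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = 1.0
    return np.stack([face, geo])            # (2, H, W): appearance + geometry

class SmallFERNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, n_classes),   # sized for 48x48 inputs
        )

    def forward(self, x):
        return self.net(x)

rng = np.random.default_rng(0)
face = rng.random((48, 48)).astype(np.float32)   # placeholder face crop
pts = rng.uniform(4, 44, (68, 2))                # placeholder 68 landmarks
x = torch.from_numpy(visualize_geometry(face, pts)).unsqueeze(0)
print(SmallFERNet()(x).shape)                    # torch.Size([1, 7])
```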

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.8 no.2, pp.105-110, 2008
  • Humans recognize emotion by fusing information from another person's speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion the way humans do, from combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from the speech signal and the facial image, and propose a multimodal method that fuses the two recognition results. Emotion recognition on both the speech signal and the facial image uses Principal Component Analysis (PCA), and the fusion combines the two results with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the facial image or the speech signal alone.
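
A minimal sketch of decision fusion with an S-type fuzzy membership function. The standard fuzzy S-function is used here; the abstract does not give the paper's exact breakpoints or fusion rule, so the parameters (a, b), the averaging step, and the per-modality scores below are assumptions.

```python
import numpy as np

def s_membership(x, a=0.2, b=0.8):
    # Standard fuzzy S-function: 0 below a, 1 above b, smooth S-curve between.
    x = np.asarray(x, float)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

# Hypothetical per-emotion confidence scores from the two PCA-based recognizers.
speech_scores = np.array([0.10, 0.55, 0.70, 0.20, 0.15])
face_scores   = np.array([0.15, 0.60, 0.40, 0.30, 0.10])

# Map raw scores to fuzzy memberships, then fuse (here: a simple average).
fused = (s_membership(speech_scores) + s_membership(face_scores)) / 2
print(EMOTIONS[int(np.argmax(fused))], fused.round(3))
```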

Facial expression recognition based on pleasure and arousal dimensions (쾌 및 각성차원 기반 얼굴 표정인식)

  • 신영숙; 최광남
    • Korean Journal of Cognitive Science, v.14 no.4, pp.33-42, 2003
  • This paper presents a new system for facial expression recognition based on the dimension model of internal states. Facial expression information is extracted in three steps. First, a Gabor wavelet representation extracts the edges of face components. Second, sparse features of facial expressions are extracted from neutral faces using the fuzzy C-means (FCM) clustering algorithm, and third, from expression images using the Dynamic Model (DM). Finally, facial expressions are recognized on the dimension model of internal states using a multi-layer perceptron. The two-dimensional structure of emotion shows that it is possible to recognize not only facial expressions related to basic emotions but also expressions of various other emotions.
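
A minimal sketch of the pipeline's endpoints: a Gabor filter bank for edge features and a multi-layer perceptron mapping them to continuous pleasure and arousal values. The FCM/Dynamic-Model sparse-feature step is omitted, and the filter parameters, pooling, and training data are assumptions. Requires `pip install opencv-python scikit-learn`.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def gabor_features(img, n_orientations=4):
    # Filter responses pooled to their mean magnitude per orientation.
    feats = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        kern = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                  lambd=8.0, gamma=0.5)
        resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
        feats.append(np.abs(resp).mean())
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))             # placeholder face images
targets = rng.uniform(-1, 1, (100, 2))         # (pleasure, arousal) labels

X = np.stack([gabor_features(im) for im in images])
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, targets)
print(mlp.predict(X[:3]))                      # predicted (pleasure, arousal)
```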
