Facial Emotion Expression

Effects of the facial expression's presenting type and areas on emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun; Kim, Hyuk; Han, Kwang-Hee
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.1393-1400 / 2006
  • As technology for measuring and expressing emotion advances, the need for research on facial expressions, which are culturally universal, is growing. Most facial expression studies to date have centered on static photographs of faces. In reality, however, people infer emotional states from subtle changes in expression and movements of the facial muscles rather than from a single snapshot. This study set out to show that dynamic facial expressions convey emotional states more effectively than static ones, and to compare the emotion recognition effects of the eyes and the mouth in dynamic expressions. Facial expressions matching 15 adjectives were presented as video clips and as still photographs at three levels: the whole face, the eyes only, and the mouth only. Measured by the accuracy of emotion judgments, video clips produced significantly higher recognition than still photographs at all three levels, indicating that dynamic facial expressions convey more internal information. Recognition accuracy also differed significantly in the order whole face, eyes, mouth; negative emotions were recognized better from the eyes, and positive emotions better from the mouth. Emotion recognition from the eyes and the mouth thus varies along the positive-negative dimension of emotion.

The Effect of Cognitive Movement Therapy on Emotional Rehabilitation for Children with Affective and Behavioral Disorder Using Emotional Expression and Facial Image Analysis (감정표현 표정의 영상분석에 의한 인지동작치료가 정서·행동장애아 감성재활에 미치는 영향)

  • Byun, In-Kyung; Lee, Jae-Ho
    • The Journal of the Korea Contents Association / v.16 no.12 / pp.327-345 / 2016
  • The purpose of this study was to carry out a cognitive movement therapy program for children with affective and behavioral disorders, grounded in neuroscience, psychology, motor learning, muscle physiology, biomechanics, human motion analysis, and movement control, and to quantify the characteristics of expressions and gestures as facial expressions change with emotional state. We could observe the problematic expressions of children with affective disorders and estimate the effectiveness of the movement therapy program from the changes in their facial expressions. By quantifying the emotion and behavior therapy analysis and the kinematic analysis, this converged measurement and analytic approach to human development can be expected to accumulate data for early detection and treatment of developmental disorders. The results of this study could therefore be applied more broadly to people with disabilities, the elderly, and the sick as well as to children.

A Study on Character's Emotional Appearance in Distinction Focused on 3D Animation "Inside Out" (3D 애니메이션 "인사이드 아웃" 분석을 통한 감성별 캐릭터 외형특징 연구)

  • Ahn, Duck-ki; Chung, Jean-Hun
    • Journal of Digital Convergence / v.15 no.2 / pp.361-368 / 2017
  • This study analyzes how characters' appearances are differentiated by emotional change, connecting the visual forms of psychology with character development in the 3D animation industry. To that end, it examines the five emotion characters of Pixar's animation Inside Out to demonstrate the psychological effects on a character's visual appearance. Building on previous research, the study analyzes visual representations of both emotional facial expression and emotional color expression using Paul Ekman's and Robert Plutchik's research on basic human emotions. The purpose of this study is to present a visual guideline for emotional characters' appearance, drawing on the variety of human expression, for differentiated character development in animation production.

Discrimination between spontaneous and posed smile: Humans versus computers (자발적 웃음과 인위적 웃음 간의 구분: 사람 대 컴퓨터)

  • Eom, Jin-Sup; Oh, Hyeong-Seock; Park, Mi-Sook; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.16 no.1 / pp.95-106 / 2013
  • The study compares the accuracy of humans and of a computer algorithm in discriminating spontaneous smiles from posed smiles. Subjects performed two tasks: judgment of single pictures and judgment by pair comparison. In the single-picture task, pictures of smiling facial expressions were presented one at a time, and subjects judged whether each smile was spontaneous or posed; in the pair-comparison task, two kinds of smiles from the same person were presented simultaneously, and subjects selected the spontaneous one. The algorithm used eight kinds of facial features. A discriminant function was fit by stepwise linear discriminant analysis (SLDA) on approximately 50% of the pictures, and the remaining pictures were classified with that function. SLDA's accuracy was higher than the humans' on the single-picture task and on the pair comparison alike; none of the 20 subjects exceeded SLDA's accuracy. The facial feature that contributed most effectively to SLDA was the angle of the inner eye corner, a measure of the openness of the eyes, which corresponds to AU 6 in Ekman's FACS. The humans' low accuracy in classifying the two kinds of smiles appears to reflect their insufficient use of the information coming from the eyes.
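
As a rough illustration of the procedure this abstract describes (eight geometric features, a roughly 50/50 split, stepwise linear discriminant analysis), here is a minimal Python sketch. The placeholder data and the use of scikit-learn's forward selection are assumptions standing in for the authors' SLDA setup, not their implementation.

```python
# Minimal sketch of stepwise LDA for spontaneous- vs. posed-smile
# discrimination. Random placeholder data stands in for the paper's
# eight measured facial features (e.g., inner-eye-corner angle).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))      # 8 facial features per smile image
y = rng.integers(0, 2, size=200)   # 1 = spontaneous, 0 = posed

# Roughly half the pictures fit the discriminant function; the rest test it.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

lda = LinearDiscriminantAnalysis()
# Forward stepwise selection approximates the stepwise part of SLDA.
sel = SequentialFeatureSelector(lda, n_features_to_select=4, direction="forward")
sel.fit(X_tr, y_tr)

lda.fit(sel.transform(X_tr), y_tr)
print(f"hold-out accuracy: {lda.score(sel.transform(X_te), y_te):.2f}")
```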

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B / v.18B no.5 / pp.305-314 / 2011
  • Research on the visual perception of human emotion for recognizing intention has traditionally focused on facial expressions; researchers have recently turned to the more challenging problem of reading emotion from body posture or activity. The proposed work recognizes basic emotional categories from body postures using a neural model that applies the visual perception findings of neurophysiology. In keeping with information-processing models of the visual cortex, it constructs a biologically plausible hierarchy of neural detectors that can discriminate six basic emotional states from static views of the associated body postures. The model is tolerant to parameter variations, and its feasibility is demonstrated by evaluating it against human test subjects on a set of body postures.
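
The paper's biologically grounded detector hierarchy is considerably richer than anything shown here, but a small multilayer network conveys the layered feature-detector idea; the feature layout, sizes, and data below are all illustrative assumptions.

```python
# Toy stand-in for a hierarchy of neural detectors mapping static
# body-posture descriptors to 6 basic emotion categories. Two hidden
# layers play the intermediate detector stages; placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))    # posture descriptors (e.g., joint angles)
y = rng.integers(0, 6, size=300)  # 6 basic emotion labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)
clf.fit(X, y)
print(clf.predict(X[:5]))
```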

An Efficient Study of Emotion Inference in USN Computing (USN 컴퓨팅에서 효율적인 감성 추론 연구)

  • Yang, Dong-Il; Kim, Young-Gyu; Jeong, Yeon-Man
    • Journal of the Korea Society of Computer and Information / v.14 no.1 / pp.127-134 / 2009
  • Recently, much research has been done on ubiquitous computing models, in Korea as well as in other advanced countries. Ubiquitous computing is a computing environment unbounded by time and space: computers are embedded in artifacts, devices, and the surroundings, so people can stay connected anywhere and at any time. To recognize a user's emotion, facial expression together with temperature, humidity, weather, and lighting factors is used to build an ontology. The Web Ontology Language (OWL) is adopted to implement the ontology, and Jena serves as the emotional inference engine. The context-awareness service infrastructure suggested in this research can be divided into several modules by function.
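
The paper uses OWL with the Java-based Jena engine; as a language-neutral sketch of the same idea, the snippet below builds a few context triples (facial expression, weather, and so on) with Python's rdflib and applies a single hand-written rule via SPARQL. The namespace, property names, and the rule itself are invented for illustration and are not the paper's ontology.

```python
# Sketch: context triples plus one inference rule. rdflib's SPARQL stands
# in for Jena's rule engine; all URIs and the rule are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

CTX = Namespace("http://example.org/context#")
g = Graph()

user = CTX.user1
g.add((user, RDF.type, CTX.User))
g.add((user, CTX.facialExpression, Literal("smile")))
g.add((user, CTX.weather, Literal("sunny")))
g.add((user, CTX.temperature, Literal(22)))

# A Jena rule like (?u facialExpression "smile") -> (?u emotion "joy")
# is approximated with a SPARQL pattern over the same triples.
q = """
SELECT ?u WHERE {
  ?u ctx:facialExpression "smile" .
  ?u ctx:weather "sunny" .
}
"""
for row in g.query(q, initNs={"ctx": CTX}):
    g.add((row.u, CTX.emotion, Literal("joy")))

print(g.value(user, CTX.emotion))  # -> "joy"
```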

Emotion Recognition and Expression using Facial Expression (얼굴표정을 이용한 감정인식 및 표현 기법)

  • Ju, Jong-Tae; Park, Gyeong-Jin; Go, Gwang-Eun; Yang, Hyeon-Chang; Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.295-298 / 2007
  • This paper extracts and recognizes features of four basic emotions (joy, sadness, anger, surprise) from human facial expressions and uses the results to implement an emotion expression system. Principal Component Analysis (PCA) first converts the high-dimensional image feature data into low-dimensional feature data; Linear Discriminant Analysis (LDA) is then applied to extract more efficient feature vectors, the emotion is recognized, and the recognized result is passed to a facial expression system that expresses the emotion.
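
The PCA-then-LDA pipeline this abstract describes maps directly onto standard library calls; a minimal scikit-learn sketch follows, with random placeholder arrays standing in for the face image data.

```python
# PCA reduces high-dimensional face images, then LDA extracts
# discriminative features for the four basic emotions, as in the paper.
# The image size and data here are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 1024))  # flattened 32x32 face images
y = rng.integers(0, 4, size=120)  # 0=joy, 1=sadness, 2=anger, 3=surprise

model = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.predict(X[:5]))       # recognized emotions feed the expression system
```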

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.39-57 / 2012
  • Recently, with the introduction of high-tech equipment, much attention has been concentrated on interactive exhibits, which can multiply an exhibition's effect through interaction with the audience; an interactive exhibition space also makes it possible to measure a variety of audience reactions. Among these reactions, this research uses the changes of facial features that can be collected in an interactive exhibition space, developing an artificial neural network-based model that predicts the audience's response by measuring how the facial features change when the audience is stimulated from a non-excited state; the audience's emotional state is represented with a valence-arousal model. The research proposes an overall framework of six steps. The first step collects data for modeling; the data were gathered from participants in the 2012 Seoul DMC Culture Open and used in the experiments. The second step extracts 64 facial features from the collected data and compensates the feature values. The third step generates the independent and dependent variables of the neural network model. The fourth step selects, with statistical techniques, the independent variables that affect the dependent variable. The fifth step builds the neural network model and trains it using the train and test sets. The sixth step validates the model's prediction performance on a validation data set. The proposed model was compared with a statistical prediction model; although the data set contained much noise, it outperformed a multiple regression model. If such a prediction model were used in a real exhibition, countermeasures and services appropriate to the audience's reactions could be provided: for instance, when the audience's arousal toward an exhibit is low, the system could recommend other preferred content or use light or sound to draw attention to the exhibit. Future exhibitions could then be planned to satisfy diverse audience preferences, fostering a personalized environment in which visitors concentrate on the exhibits. The proposed model still shows low prediction accuracy, for two reasons. First, the data cover diverse visitors at real exhibitions, so an optimized experimental environment was difficult to control and the collected data carry much noise, lowering accuracy; further data collection will be conducted in a more controlled environment, along with further research to raise the model's accuracy. Second, changes of facial expression alone are unlikely to suffice for extracting audience emotions; combining facial expression with other responses, such as sound and audience behavior, should give better results.
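
To make the framework's modeling steps concrete, here is a hedged sketch of a small network regressing valence and arousal from 64 facial-feature deltas; the data, network size, and split are placeholders, not the DMC Culture Open dataset or the authors' architecture.

```python
# Steps 2-6 in miniature: feature matrix -> train/test split ->
# neural network -> held-out evaluation. Placeholder data throughout.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 64))         # 64 compensated facial-feature changes
y = rng.uniform(-1, 1, size=(400, 2))  # [valence, arousal] targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=3)
net.fit(X_tr, y_tr)
print("held-out R^2:", net.score(X_te, y_te))
```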

Emotional Expression Technique using Facial Recognition in User Review (사용자 리뷰에서 표정 인식을 이용한 감정 표현 기법)

  • Choi, Wongwan; Hwang, Mansoo; Kim, Neunghoe
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.23-28 / 2022
  • Today, the online market has grown rapidly with the development of digital platforms and the pandemic. Unlike the offline market, the nature of the online market prompts users to check online reviews, and several prior studies have established that reviews significantly influence users' purchase intentions. With current review-writing methods, however, it is hard for other users to grasp the writer's emotions, which are conveyed only through elements such as tone and word choice, and a writer who wants to emphasize something must tediously bold passages or change their colors by hand. This paper therefore proposes a technique that detects the user's emotions through camera-based facial expression recognition, automatically assigns a color to each emotion by drawing on existing research on emotion and color, and applies the colors according to the user's intention.
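
The final step of the proposed technique, tinting review text by detected emotion, reduces to a lookup table plus markup. The sketch below uses an invented emotion-color table (the paper derives its colors from prior emotion-color research) and assumes some upstream recognizer has already produced the emotion label.

```python
# Sketch: color a review by the emotion recognized from the writer's face.
# The color table is illustrative, not the paper's mapping.
EMOTION_COLORS = {
    "joy": "#FFD400",
    "anger": "#D62828",
    "sadness": "#3A6EA5",
    "surprise": "#7B2D8B",
}

def colorize_review(text: str, emotion: str) -> str:
    """Wrap review text in an HTML span tinted by the detected emotion."""
    color = EMOTION_COLORS.get(emotion, "#000000")
    return f'<span style="color:{color}">{text}</span>'

print(colorize_review("Arrived fast and works great!", "joy"))
```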

Analysis of facial expression recognition (표정 분류 연구)

  • Son, Nayeong; Cho, Hyunsun; Lee, Sohyun; Song, Jongwoo
    • The Korean Journal of Applied Statistics / v.31 no.5 / pp.539-554 / 2018
  • Effective interaction between user and device is considered an important capability of IoT devices. Some applications must recognize human facial expressions in real time and judge them accurately in order to respond correctly to situations, so much research on facial image analysis has been conducted to build more accurate and faster recognition systems. In this study, we constructed an automatic facial expression recognition system in two steps, a face recognition step and a classification step, and compared various models built on different feature sets: pixel information, landmark coordinates, Euclidean distances among landmark points, and arctangent angles. We found a fast and efficient prediction model that uses only 30 principal components of the face landmark information. Among the prediction models we applied, which included linear discriminant analysis (LDA), random forests, support vector machines (SVM), and bagging, the SVM model gave the best result; the LDA model gave the second-best accuracy but fits and predicts faster than SVM and the other methods. Finally, we compared our method with the Microsoft Azure Emotion API and a convolutional neural network (CNN); our method gives a very competitive result.
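
The best-performing configuration this abstract reports (pairwise landmark distances, 30 principal components, an SVM) has a compact equivalent in scikit-learn. In the sketch below the landmark coordinates are random placeholders, and the 68-point layout is an assumption rather than the paper's stated landmark set.

```python
# Pairwise Euclidean distances between facial landmarks -> 30 principal
# components -> SVM, mirroring the paper's best setup. Placeholder data.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(4)
landmarks = rng.normal(size=(150, 68, 2))  # 68 (x, y) landmarks per face
y = rng.integers(0, 7, size=150)           # expression labels

# Each face becomes its vector of 68*67/2 = 2278 pairwise distances.
X = np.array([pdist(face) for face in landmarks])

model = make_pipeline(PCA(n_components=30), SVC(kernel="rbf"))
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")  # a sketch, not an evaluation
```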