• Title/Summary/Keyword: Facial emotion


The Congruent Effects of Gesture and Facial Expression of Virtual Character on Emotional Perception: What Facial Expression is Significant? (가상 캐릭터의 몸짓과 얼굴표정의 일치가 감성지각에 미치는 영향: 어떤 얼굴표정이 중요한가?)

  • Ryu, Jeeheon;Yu, Seungbeom
    • The Journal of the Korea Contents Association / v.16 no.5 / pp.21-34 / 2016
  • In designing and developing a virtual character, it is important to correctly deliver target emotions generated by the combination of facial expression and gesture. The purpose of this study is to examine the effect of congruence or incongruence between gesture and facial expression on the perception of target emotions. Four emotions were applied: joy, sadness, fear, and anger. The results showed that sadness was incorrectly perceived: it was perceived as anger instead. Sadness is easily confused when facial expression and gesture are presented simultaneously, whereas for the other emotions the intended expressions were correctly perceived. The overall evaluation of the virtual character's emotional expression was significantly low when a joyful gesture was combined with a sad facial expression. These results suggest that emotional gesture is the more influential channel for correctly delivering target emotions to users. The study also suggests that social cues such as the gender or age of the virtual character should be studied further.

Brain Activation to Facial Expressions Among Alcoholics (알코올 중독자의 얼굴 표정 인식과 관련된 뇌 활성화 특성)

  • Park, Mi-Sook;Lee, Bae Hwan;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.20 no.4 / pp.1-14 / 2017
  • The purpose of this study was to investigate the neural substrates of facial expression recognition among alcoholics using functional magnetic resonance imaging (fMRI). Abstinent inpatient alcoholics (n=18 males) and demographically similar social drinkers (n=16 males) participated in the study. The participants viewed pictures from the Japanese Female Facial Expression Database (JAFFE) and evaluated the intensity of the facial expressions. The alcoholics showed reduced activation in limbic areas, including the amygdala and hippocampus, while recognizing the emotional facial expressions compared to the nonalcoholic controls. On the other hand, the alcoholics showed greater brain activation than the controls in the left lingual (BA 19)/fusiform gyrus, the left middle frontal gyrus (BA 8/9/46), and the right superior parietal lobule (BA 7) during the viewing of emotional faces. In sum, specific brain regions associated with the recognition of facial expressions among alcoholics were identified. These findings could be used in developing interventions for alcoholism.

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.111.1-111 / 2002
  • The system consists of three parts. (1) The first is the image acquisition part. (2) The second creates the vector image and processes the obtained facial image: the facial area is located from skin color by first finding the skin-color region with the highest weight from the eigenface, which consists of eigenvectors, and a vector image in eigenface space is then created from the obtained facial area. (3) The third is the recognition module.
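
The pipeline above is the classic eigenface approach. A minimal sketch of that computation, assuming flattened grayscale face crops and nearest-neighbor matching in eigenspace (the paper's exact pipeline, including the skin-color weighting step, may differ):

```python
import numpy as np

def fit_eigenfaces(faces, k=20):
    """faces: (n_samples, h*w) array of flattened grayscale face crops."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal axes of face space: the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """Represent one face by its coefficients in eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, gallery, labels, mean, eigenfaces):
    """Label an unknown face by its nearest neighbor in eigenspace."""
    q = project(face, mean, eigenfaces)
    coeffs = np.array([project(g, mean, eigenfaces) for g in gallery])
    return labels[int(np.argmin(np.linalg.norm(coeffs - q, axis=1)))]
```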


A Literature Review for an Emotion Evaluation Protocol Based on Skin Temperature for Home Appliances (피부온을 기반으로 한 가전제품의 감성 평가 프로토콜 수립을 위한 문헌 조사)

  • Jeon, Eun-Jin;Lee, Seung-hoon;Kim, Hee-Eun;You, Hee-Cheon
    • Fashion & Textile Research Journal / v.22 no.2 / pp.240-249 / 2020
  • This study reviews studies that used skin temperature in order to establish a skin-temperature-based emotion evaluation protocol for home appliances. A survey of skin temperature evaluation papers was conducted in five stages: (1) keyword search, (2) title screening, (3) abstract screening, (4) full paper screening, and (5) relevance evaluation. The selected papers were reviewed for purpose, participant recruitment criteria, number of participants, apparatus, procedure, measures, analysis methods, and major findings. Thermistor sensors and thermography are used to measure skin temperature. Skin temperature sensors are attached to 4-10 locations on the body, and the mean skin temperature is calculated by Ramanathan's 4-point or Hardy and DuBois's 7-point method. The semantic differential (SD) method and thermography measuring facial surface temperature have been used for emotion evaluation. The SD method provides a set of adjective pairs related to a product and evaluates changes in emotion from the use of the product. The facial surface region to be analyzed is defined in the thermal image, and temperature changes before and after the evaluation are analyzed. The evaluation items for home appliances include form, color, material, aesthetics, satisfaction, novelty, convenience, pleasantness, and excellence. Many existing emotion studies using skin temperature do not apply both physiological and psychological methods. This study provides basic data for establishing a skin-temperature-based emotion evaluation protocol by examining the literature on skin temperature and emotion evaluation.
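
Both weighting schemes named above are standard in the thermophysiology literature. A minimal sketch of the mean skin temperature (MST) calculation, using the site weights as they are commonly cited (these should be verified against the original Ramanathan and Hardy-DuBois sources):

```python
# Commonly cited site weights for weighted mean skin temperature (MST).
RAMANATHAN_4PT = {"chest": 0.3, "upper_arm": 0.3, "thigh": 0.2, "calf": 0.2}
HARDY_DUBOIS_7PT = {"forehead": 0.07, "chest": 0.35, "forearm": 0.14,
                    "hand": 0.05, "thigh": 0.19, "calf": 0.13, "foot": 0.07}

def mean_skin_temperature(temps_c, weights):
    """temps_c maps measurement site -> skin temperature in deg C."""
    return sum(weights[site] * t for site, t in temps_c.items())

# Example: MST from four thermistor readings.
readings = {"chest": 34.1, "upper_arm": 33.2, "thigh": 32.8, "calf": 31.9}
print(round(mean_skin_temperature(readings, RAMANATHAN_4PT), 2))
```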

Accurate Visual Working Memory under a Positive Emotional Expression in Face (얼굴표정의 긍정적 정서에 의한 시각작업기억 향상 효과)

  • Han, Ji-Eun;Hyun, Joo-Seok
    • Science of Emotion and Sensibility / v.14 no.4 / pp.605-616 / 2011
  • The present study examined memory accuracy for faces with positive, negative, and neutral emotional expressions to test whether emotional content can affect visual working memory (VWM) performance. Participants remembered a set of face pictures whose facial expressions were randomly assigned from pleasant, unpleasant, and neutral emotional categories. The participants' task was to report the presence or absence of an emotion change in the faces by comparing the remembered set against a set of test faces displayed after a short delay. Change detection accuracies for the pleasant, unpleasant, and neutral face conditions were compared under two memory exposure durations, 500 ms and 1000 ms. At 500 ms, accuracy in the pleasant condition was higher than in both the unpleasant and neutral conditions; the difference disappeared when the duration was extended to 1000 ms. The results indicate that a positive facial expression can improve VWM accuracy relative to negative or neutral expressions, especially when there is not enough time to form durable VWM representations.
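
A minimal sketch of the accuracy analysis this design implies: trials are grouped by emotion condition and exposure duration, and mean change-detection accuracy is computed per cell. The trial records here are placeholders, not the study's data:

```python
from collections import defaultdict

# (condition, exposure duration in ms, response correct?) per trial.
trials = [("pleasant", 500, True), ("neutral", 500, False),
          ("unpleasant", 1000, True), ("pleasant", 1000, True)]

accuracy = defaultdict(list)
for condition, duration_ms, correct in trials:
    accuracy[(condition, duration_ms)].append(correct)

for (condition, duration_ms), outcomes in sorted(accuracy.items()):
    print(f"{condition} @ {duration_ms} ms: {sum(outcomes) / len(outcomes):.2f}")
```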


Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack the skilled manpower to analyze video, machine learning and artificial intelligence are actively used to assist. Demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has therefore also grown rapidly. However, object detection and tracking faces many conditions that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Action and emotion detection models built on top of object detection and tracking consequently also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed system uses single-linkage hierarchical clustering for Re-ID together with processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, delivers near real-time processing performance, and prevents tracking failures caused by object departure and re-emergence, occlusion, and so on. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object detected by the tracking model in every frame and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify an object whose track was lost. Through this process, the same object can be re-tracked when it re-appears after leaving the scene or after an occlusion, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object bounding box queue and a feature queue, which reduce RAM requirements while maximizing GPU memory throughput, and the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model achieves real-time performance even in a costly environment that also performs action and facial emotion detection, without trading accuracy for speed via simple metrics. The practical implication is that industrial fields that require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed system. With its high re-tracking accuracy and processing performance, the system can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted on the MOT Challenge dataset used by many international conferences. We will investigate the cases that the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan to apply this model to datasets in various fields related to intelligent video analysis.
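
A minimal sketch of the two mechanisms named above: single-linkage matching of a lost detection against each track's accumulated appearance vectors, and IoF read as the fraction of the face box that falls inside the person box. The distance threshold, integer track IDs, and this reading of IoF are assumptions; the paper's exact formulations may differ:

```python
import numpy as np

def reassign_id(query_vec, tracks, threshold=0.4):
    """tracks: track_id (int) -> list of past appearance feature vectors."""
    best_id, best_dist = None, np.inf
    for track_id, feats in tracks.items():
        # Single linkage: distance to the closest member of the track.
        d = min(np.linalg.norm(query_vec - f) for f in feats)
        if d < best_dist:
            best_id, best_dist = track_id, d
    if best_id is not None and best_dist <= threshold:
        tracks[best_id].append(query_vec)   # re-identified: extend the track
        return best_id
    new_id = max(tracks, default=0) + 1     # unmatched: open a new track
    tracks[new_id] = [query_vec]
    return new_id

def iof(face_box, person_box):
    """Intersection over Face: share of the face box inside the person box.
    Boxes are (x1, y1, x2, y2)."""
    fx1, fy1, fx2, fy2 = face_box
    px1, py1, px2, py2 = person_box
    ix = max(0.0, min(fx2, px2) - max(fx1, px1))
    iy = max(0.0, min(fy2, py2) - max(fy1, py1))
    face_area = (fx2 - fx1) * (fy2 - fy1)
    return (ix * iy) / face_area if face_area > 0 else 0.0
```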

Facial Expression Research according to Arbitrary Changes in Emotions through Visual Analytic Method (영상분석법에 의한 자의적 정서변화에 따른 표정연구)

  • Byun, In-Kyung;Lee, Jae-Ho
    • The Journal of the Korea Contents Association / v.13 no.10 / pp.71-81 / 2013
  • Facial expressions shape an individual's image, and the ability to interpret emotion from facial expressions is at the core of human relations; recognizing emotion through facial expression is important enough to change attitudes and decisions between individuals within social relations. Children with unstable attachment development, seniors, autistic groups, children with ADHD, and groups with depression have shown low performance on facial expression recognition tasks, and active intervention with such groups holds promise for prevention and therapeutic effects for psychological disorders. The quantified figures showing detailed changes in the positions of the lips, eyes, and cheeks suggest possible applications in diverse fields such as human sensibility ergonomics, Korean culture and art contents, therapeutic and educational applications for overcoming psychological disorders, and non-verbal communication for overcoming cultural differences in a globalizing multicultural society.

The affective components of facial beauty (아름다운 얼굴의 감성적 특징)

  • 김한경;박수진;정찬섭
    • Science of Emotion and Sensibility / v.7 no.1 / pp.23-28 / 2004
  • In this paper, we investigated the affective components of facial beauty. In Study 1, a factor analysis of affective evaluations of the faces showed that about 65% of the variance was explained by only two factors, named 'sharp' and 'soft'. In Study 2, the correlation between facial beauty and the affective evaluations was analyzed, and the correlation between facial beauty and the sharp factor was significant. In Study 3, we created new images by morphing and warping the faces: 'average', 'high-ranked', and 'exaggerated'. Participants rated the 'high-ranked' face as more beautiful than the 'average' face, and the 'exaggerated' face as more beautiful than the 'high-ranked' face. Ratings of affective words on the faces showed that the 'average' face was related to a 'soft' impression, the 'high-ranked' face to a 'sharp' impression, and the 'exaggerated' face may carry a double impression. These results may support the directional hypothesis of facial beauty.


Emotion Recognition Using Tone and Tempo Based on Voice for IoT (IoT를 위한 음성신호 기반의 톤, 템포 특징벡터를 이용한 감정인식)

  • Byun, Sung-Woo;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.1 / pp.116-121 / 2016
  • In the Internet of Things (IoT) area, research on recognizing human emotion has been increasing recently. Generally, multi-modal features such as facial images, bio-signals, and voice signals are used for emotion recognition. Among these, voice signals are the most convenient to acquire. This paper proposes an emotion recognition method using tone and tempo based on voice. For this, we built voice databases from broadcast media contents. Emotion recognition tests were carried out on tone and tempo features extracted from the voice databases. The results show a noticeable improvement in accuracy compared to conventional methods using only pitch.
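
A rough sketch of this kind of feature extraction with librosa. The paper's own tone and tempo definitions are not given here, so frame-wise YIN pitch stands in for "tone" and an onset-based tempo estimate for "tempo"; the input file name is hypothetical:

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical input file

f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)   # frame-wise pitch in Hz
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # onset-based tempo proxy

# A crude feature vector: pitch level, pitch variability, speaking tempo.
features = np.array([np.mean(f0), np.std(f0), float(tempo)])
print(features)
```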

Analysis and Synthesis of Facial Expression using Base Faces (기준얼굴을 이용한 얼굴표정 분석 및 합성)

  • Park, Moon-Ho;Ko, Hee-Dong;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.8 / pp.827-833 / 2000
  • Facial expression is an effective tool for expressing human emotion. In this paper, a facial expression analysis method based on base faces and their blending ratios is proposed. Seven base faces were chosen as axes for describing and analyzing an arbitrary facial expression: surprise, fear, anger, disgust, happiness, sadness, and expressionless. Each facial expression was built by fitting a generic 3D facial model to a facial image. Two comparable methods, genetic algorithms and simulated annealing, were used to search for the blending ratios of the base faces. The usefulness of the proposed method for facial expression analysis was demonstrated by the facial expression synthesis results.
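
A minimal simulated-annealing sketch of the blending-ratio search described above. The base-face parameter vectors, target, and cooling schedule are illustrative assumptions; the paper's face representation and the genetic-algorithm variant are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
base_faces = rng.normal(size=(7, 30))               # 7 base faces, 30 params each
target = 0.6 * base_faces[4] + 0.4 * base_faces[5]  # a happiness/sadness mix

def cost(w):
    """Reconstruction error of the blended face against the target."""
    return np.linalg.norm(w @ base_faces - target)

w = np.full(7, 1 / 7)                               # start from a uniform blend
temp = 1.0
for _ in range(5000):
    cand = np.clip(w + rng.normal(scale=0.05, size=7), 0, None)
    cand /= cand.sum()                              # ratios stay non-negative, sum to 1
    delta = cost(cand) - cost(w)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        w = cand                                    # accept improvements, sometimes worse
    temp *= 0.999                                   # geometric cooling

print(np.round(w, 2))                               # recovered blending ratios
```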
