• Title/Summary/Keyword: Facial emotion


A Pilot Study on Evoked Potentials by Visual Stimulation of Facial Emotion in Different Sasang Constitution Types (얼굴 표정 시각자극에 따른 사상 체질별 유발뇌파 예비연구)

  • Hwang, Dong-Uk;Kim, Keun-Ho;Lee, Yu-Jung;Lee, Jae-Chul;Kim, Myoyung-Geun;Kim, Jong-Yeol
    • Journal of Sasang Constitutional Medicine / v.22 no.1 / pp.41-48 / 2010
  • 1. Objectives: There have been a few attempts to diagnose Sasang Constitution using EEG, but the topic has not been studied intensively. For practical diagnosis, the EEG characteristics of each constitution should be established first. It has recently been shown that Sasang Constitution may be related to harm avoidance and novelty seeking in temperament and character profiles. Based on this finding, we propose a visual stimulation method to evoke an EEG response that may discriminate between constitutional groups, and we use event-related potentials to characterize the EEG of each constitutional group. 2. Methods: We used facial visual stimulation to probe the EEG characteristics of each constitutional group. To characterize the sensitivity and latency of the response, we added several levels of noise to the facial images. Six male subjects (2 Taeeumin, 2 Soyangin, 2 Soeumin) participated; all were healthy and in their twenties. To remove artifacts and slow modulation, EOG-contaminated data were discarded and the remaining data were renormalized. A normalized event-related potential method was used to extract stimulation-related components. 3. Results: The Oz channel confirmed that facial-image-processing components were extracted. At lower noise levels, components related to the visual stimulation were clearly visible in the Oz, Pz, and Cz channels. The Pz and Cz channels showed differences among the three constitutional groups, peaking around 200 ms, and a moderate noise level appeared most appropriate for diagnosis. 4. Conclusions: Visual stimulation with facial emotion appears to be a good candidate for evoking differences in EEG response between constitutional groups. The observed differences may imply that emotion processing has distinct latencies and sensitivities in each constitutional group, and this distinction might be related to the temperament profiles of the constitutional groups.
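
A minimal sketch of the stimulus-locked averaging this abstract describes, in Python with NumPy. The sampling rate, epoch window, and EOG rejection threshold are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

FS = 250                       # sampling rate in Hz (assumed)
PRE, POST = 0.2, 0.8           # epoch window: 200 ms before to 800 ms after onset

def extract_erp(eeg, eog, onsets, eog_thresh_uv=75.0):
    """Average stimulus-locked epochs after EOG-based artifact rejection.

    eeg    : 1-D array, one EEG channel (e.g., Oz) in microvolts
    eog    : 1-D array, simultaneous EOG channel in microvolts
    onsets : sample indices of stimulus onsets
    """
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = []
    for t in onsets:
        if t - pre < 0 or t + post > len(eeg):
            continue                              # epoch falls outside the recording
        seg, eog_seg = eeg[t - pre:t + post], eog[t - pre:t + post]
        if np.ptp(eog_seg) > eog_thresh_uv:
            continue                              # reject eye-movement contamination
        # Baseline-correct against the pre-stimulus mean, then normalize by the
        # epoch's standard deviation to suppress slow amplitude modulation
        # (one reading of the abstract's "renormalization").
        seg = seg - seg[:pre].mean()
        epochs.append(seg / seg.std())
    return np.mean(epochs, axis=0)                # the ERP waveform for this channel
```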

Representation of Facial Expressions of Different Ages: A Multidimensional Scaling Study (다양한 연령의 얼굴 정서 표상: 다차원척도법 연구)

  • Kim, Jongwan
    • Science of Emotion and Sensibility / v.24 no.3 / pp.71-80 / 2021
  • Previous studies using facial expressions have identified valence and arousal as the two core dimensions of affective space. However, it remains unknown whether this two-dimensional structure is consistent across ages. This study investigated affective dimensions using six facial expressions (angry, disgusted, fearful, happy, neutral, and sad) at three ages (young, middle-aged, and old). Whereas several previous studies asked participants to rate the subjective similarity between pairs of facial expressions directly, we collected indirect measures by asking participants to decide whether a pair of stimuli conveyed the same emotion. Multidimensional scaling showed that the "angry-disgusted" and "sad-disgusted" pairs were similar at all three ages. In addition, the "angry-sad," "angry-neutral," "neutral-sad," and "disgusted-fearful" pairs were similar at old age. When the two faces in a pair reflected the same emotion, "sad" was recognized least accurately at old age, suggesting that the ability to recognize "sad" decreases with age. This study suggests that the general two-dimensional core structure is robust across all age groups, with the exception of specific emotions.
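
The analysis pipeline lends itself to a short sketch: indirect same/different judgments are aggregated into a dissimilarity matrix and embedded in two dimensions with multidimensional scaling. The placeholder data and scikit-learn settings below are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

emotions = ["angry", "disgusted", "fearful", "happy", "neutral", "sad"]
n = len(emotions)

# p_same[i, j]: proportion of trials where the pair (i, j) was judged to
# convey the same emotion (random placeholder values, symmetrized).
rng = np.random.default_rng(0)
a = rng.random((n, n))
p_same = (a + a.T) / 2
np.fill_diagonal(p_same, 1.0)

dissimilarity = 1.0 - p_same          # more "same" judgments -> more similar
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for emo, (x, y) in zip(emotions, coords):
    print(f"{emo:>10}: ({x:+.2f}, {y:+.2f})")  # valence/arousal-like axes
```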

The Relationship between Physically Disabled Persons' Participation in Exercise, Heart Rate Variability (HRV), and Facial Expression Recognition (지체장애인의 운동참여와 심박변이도(HRV), 표정정서인식력과의 관계)

  • Kim, Dong hwan;Baek, Jae keun
    • 재활복지 / v.20 no.3 / pp.105-124 / 2016
  • This study aims to verify the causal relationships among physically disabled persons' participation in exercise, heart rate variability (HRV), and facial expression recognition. To this end, 139 physically disabled persons were recruited by purposive sampling. After visiting stadiums and club facilities where sporting events were held and explaining the purpose of the research in detail, we measured heart rate variability and facial emotion recognition only in those who agreed to participate. Means, standard deviations, correlations, and a structural equation model were computed from the measurements, with the following results. The quantity of exercise positively affected both sympathetic and parasympathetic activity of the autonomic nervous system. Exercise history had a positive influence on the LF/HF ratio and a negative influence on parasympathetic activity. Sympathetic activity had a positive effect on recognition of happiness, while the quantity of exercise had a negative influence on recognition of sadness. These findings are discussed in terms of the mechanisms linking the autonomic nervous system to facial expression recognition in physically disabled persons.
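
Since the study's predictors include the LF/HF ratio, a brief sketch of how that ratio is conventionally computed from R-R intervals may help. The band limits follow common HRV practice (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz); the resampling rate and Welch settings are assumptions, not the study's analysis parameters.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs_resample=4.0):
    """LF/HF ratio from a sequence of R-R intervals in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                     # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)  # regular time grid
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)  # evenly resampled tachogram
    f, psd = welch(rr_even - rr_even.mean(), fs=fs_resample,
                   nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)                # low-frequency band
    hf_band = (f >= 0.15) & (f < 0.40)                # high-frequency band
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf / hf    # higher values suggest relative sympathetic dominance
```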

Facial Point Classifier using Convolutional Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Nowadays many people are interested in facial expressions and human behavior; among them, human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning. Facial feature point detection algorithms are important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because images differ in size, color, brightness, and other conditions. We therefore propose an algorithm that combines the cascade facial feature point detector with a convolutional neural network whose structure is based on Yann LeCun's LeNet-5. As input to the network, we used the color and gray outputs of the cascade facial feature point detector, resized to 32×32; in addition, the gray images were converted to the YUV format. The gray and color images together form the input to the convolutional neural network. We then classified about 1,200 test images of subjects. The results show that the proposed method is more accurate than the cascade facial feature point detector alone, because the network corrects the detector's results.
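
The abstract pins down the architecture (LeNet-5 style, 32×32 inputs), so a minimal PyTorch sketch follows. The channel count (3, for YUV) and the number of output classes are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn

class LeNet5Style(nn.Module):
    """LeNet-5-like classifier for 32x32 facial-region patches."""

    def __init__(self, in_channels=3, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(), nn.AvgPool2d(2),                 # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(), nn.AvgPool2d(2),                 # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),                 # e.g., eye/nose/mouth/other
        )

    def forward(self, x):            # x: (batch, 3, 32, 32) YUV patches
        return self.classifier(self.features(x))

logits = LeNet5Style()(torch.randn(8, 3, 32, 32))       # -> (8, 4)
```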

Computer-Based Training Program to Facilitate Learning of the Relationship between Facial-Based and Situation-Based Emotions and Prosocial Behaviors

  • Takezawa, Tomohiro;Ogoshi, Sakiko;Ogoshi, Yasuhiro;Mitsuhashi, Yoshinori;Hiratani, Michio
    • Industrial Engineering and Management Systems / v.11 no.2 / pp.142-147 / 2012
  • Individuals with autism spectrum disorders (ASD) have difficulty inferring other people's feelings from their facial expressions and/or from situational cues, and are therefore less able to respond with prosocial behavior. We developed a computer-based training program to teach the connection between facial-based or situation-based emotions and prosocial behavioral responses. An 8-year-old boy with ASD participated in the study. In the program, he was trained to identify persons in need of help and appropriate prosocial responses using novel photo-based scenarios. When he misidentified emotions from photographs of another's face, the program highlighted the parts of the face that effectively communicate emotion. To increase the likelihood that he would learn a generalized repertoire of emotional understanding, multiple examples of emotional expressions and situations were provided. When he misidentified persons expressing a need for help, or failed to identify appropriate helping behaviors, role playing was used to help him appreciate the state of mind of a person in need of help. The training produced increases in prosocial behaviors during a laboratory task that required collaborative work, and his homeroom teacher, using a behavioral rating scale, reported that he understood others' emotions and situations better than before training. These findings indicate that the effects of the training were not limited to the artificial experimental situation but carried over to his school life.

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.924-942 / 2020
  • Facial expressions vary considerably across persons because of psychological factors, whereas facial actions are comparatively stable because of the fixed anatomical structure of the face. Improving action unit recognition therefore facilitates facial expression recognition and provides a sound basis for mental state analysis. It remains a challenging task, however, and recognition accuracy is limited, because the muscle movements involved are tiny and the resulting facial actions are correspondingly subtle. Taking into account that muscle movements influence one another when a person expresses an emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt a 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple action units around the brows, nose, and mouth, the regions that contribute most to emotional expression, using their co-occurrence relationships as a constraint. Experiments were conducted on the public CASME dataset and its variant, CASME2. The results show that the proposed AU co-occurrence constraint 3DCNN-based AU recognition approach outperforms current approaches, demonstrating the effectiveness of exploiting AU relationships in AU recognition.
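
One way to realize a co-occurrence constraint like the one this abstract describes is to add a penalty pulling the network's predicted pairwise AU co-activation toward an empirical co-occurrence matrix. The PyTorch sketch below is one plausible formulation under that assumption, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def au_loss(logits, targets, cooc, lam=0.1):
    """logits, targets: (batch, n_aus) per-AU outputs and binary labels;
    cooc[i, j]: empirical probability that AUs i and j are active together,
    estimated from the training labels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    p = torch.sigmoid(logits)
    # Predicted pairwise co-activation, averaged over the batch.
    pred_cooc = (p.unsqueeze(2) * p.unsqueeze(1)).mean(dim=0)
    # Penalize deviation of predicted co-activation from the empirical matrix.
    return bce + lam * F.mse_loss(pred_cooc, cooc)

# Example with 8 AUs: a 3DCNN backbone (not shown) would produce the logits.
logits = torch.randn(4, 8)
targets = torch.randint(0, 2, (4, 8)).float()
cooc = targets.t() @ targets / len(targets)       # toy empirical co-occurrence
loss = au_loss(logits, targets, cooc)
```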

Affective interaction with emotion-expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.37-47 / 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, while subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest money or credit in one of several projects, and their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that behave reciprocally or non-reciprocally while displaying socially affective facial expressions, and the total money or credit a subject earns is contingent on the partner's choices. I observed that subjects' appraisals of interactions with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game produced increased autonomic and somatic responses, and that these responses could be observed in physiological signals and facial expressions in real time. For assessing user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured by a web camera. After all trials, subjects answered questions evaluating how much the interactions with the virtual agents affected their appraisals.

A Study on Utilization of Facial Recognition-based Emotion Measurement Technology for Quantifying Game Experience (게임 경험 정량화를 위한 안면인식 기반 감정측정 기술 활용에 대한 연구)

  • Kim, Jae Beom;Jeong, Hong Kyu;Park, Chang Hoon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.9 / pp.215-223 / 2017
  • Various methods for making games interesting are used during development. Because the experiential aspects of a game are difficult to measure and analyze, developers usually measure and analyze only what is easy to quantify, which is a clear limitation given how important the game experience is. This study proposes a system that recognizes the face of a game user and measures emotional changes from the recognized information, in order to quantify the experience of a user who is playing the game. The system recognizes emotions from the user's face and records them in real time. The recorded data include timestamps, figures related to the progress of the game, and numerical values for the emotions recognized from the face. Using these data, it is possible to judge what kind of emotion the game induces in the user at a given point in time. The numerical data on the experiential aspects recorded by this system are expected to help developers build games that match their intentions.
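
A minimal sketch of the recording loop this abstract describes: per-frame emotion scores from the player's face are logged with timestamps and frame indices. `recognize_emotions` is a hypothetical stand-in for whichever facial-emotion engine such a system would use; only OpenCV's webcam capture is a real API here.

```python
import csv
import time
import cv2

def recognize_emotions(frame):
    # Hypothetical placeholder: a real system would run a facial-expression
    # model here and return per-emotion confidence scores for this frame.
    return {"happy": 0.0, "angry": 0.0, "sad": 0.0, "surprised": 0.0}

cap = cv2.VideoCapture(0)                        # player-facing webcam
with open("emotion_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "frame", "happy", "angry", "sad", "surprised"])
    for frame_idx in range(1800):                # e.g., one minute at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        s = recognize_emotions(frame)
        writer.writerow([time.time(), frame_idx,
                         s["happy"], s["angry"], s["sad"], s["surprised"]])
cap.release()
```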

Attentional Bias to Emotional Stimuli and Effects of Anxiety on the Bias in Neurotypical Adults and Adolescents

  • Mihee Kim;Jejoong Kim;So-Yeon Kim
    • Science of Emotion and Sensibility / v.25 no.4 / pp.107-118 / 2022
  • Humans can rapidly detect and respond to dangerous elements in their environment, and this generally manifests as an attentional bias toward threat. Past studies have reported that this attentional bias is affected by anxiety level; other studies, however, have argued that children and adolescents show attentional bias to threatening stimuli regardless of their anxiety levels. Few studies have directly compared the two age groups in terms of attentional bias to threat, and most previous studies have focused on attentional capture and the early stages of attention without investigating subsequent attentional holding by the stimuli. In this study, we investigated both attentional bias patterns (attentional capture and holding) with respect to negative emotional stimuli in neurotypical adults and adolescents, and we also examined the effects of anxiety level on attentional bias. In adults, the abrupt onset of a distractor delayed attentional capture of the target regardless of distractor type (angry or neutral face), while it had no effect on attentional holding. In adolescents, on the other hand, only the angry-face distractor lengthened reaction times for detecting the target. Regarding anxiety, state anxiety showed a significant positive correlation with attentional capture by a face distractor in adults but not in adolescents. Overall, this is the first study to investigate developmental tendencies in attentional bias to negative facial emotion in both adults and adolescents, providing novel evidence on attentional bias to threat at different ages. Our results can be applied to understanding the attentional mechanisms of people with emotion-related developmental disorders, as well as typical development.

The facial expression generation of vector graphic character using the simplified principal component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions of a vector graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined on the basis of Russell's internal emotion states. From this, we find the principal component vectors that have the largest effect on the character's facial features and expressions, and we use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weights applied to the character's features and expressions. The method saves considerable memory and creates intermediate expressions with little computation, so the performance of a character generation system can be improved substantially in web, mobile, and game services that require real-time control.
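
The approach reduces expression vectors with principal component analysis and interpolates weights on the leading components; a minimal NumPy sketch follows. The nine expression vectors here are random placeholders, and the dimensionality and component count are assumptions, not the paper's character data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((9, 40))            # 9 expressions x 40 control-point coordinates

mean = X.mean(axis=0)
# PCA via SVD of the centered expression matrix.
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:3]                # keep the components with the largest effect

def make_expression(weights):
    """Synthesize a face vector from weights on the leading components."""
    return mean + np.asarray(weights) @ components

# Intermediate expressions: linearly interpolate between two weight vectors.
neutral, delighted = np.zeros(3), np.array([2.0, -1.0, 0.5])
for a in np.linspace(0.0, 1.0, 5):
    face = make_expression((1 - a) * neutral + a * delighted)
```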