• Title/Summary/Keyword: Facial emotion

P3 Elicited by the Positive and Negative Emotional Stimuli (긍정적, 부정적 정서 자극에 의해 유발된 P3)

  • An, Suk-Kyoon;Lee, Soo-Jung;NamKoong, Kee;Lee, Chang-Il;Lee, Eun;Kim, The-Hoon;Roh, Kyo-Sik;Choi, Hye-Won;Park, Jun-Mo
    • Korean Journal of Psychosomatic Medicine / v.9 no.2 / pp.143-152 / 2001
  • Objectives: The aim of this study was to determine whether the P3 elicited by negative emotional stimuli differs from that elicited by positive stimuli. Methods: We measured event-related potentials, in particular the P3 elicited by facial photographs, in 12 healthy subjects. Subjects were instructed to feel and respond to rare target facial photographs embedded in frequent non-target checkerboards. Results: We found that the amplitude of the P3 elicited by negative emotional photographs was significantly larger than that elicited by positive stimuli. Conclusion: These findings suggest that the P3 elicited by facial stimuli may be used as a psychophysiological variable of emotional processing.

Development of facial recognition application for automation logging of emotion log (감정로그 자동화 기록을 위한 표정인식 어플리케이션 개발)

  • Shin, Seong-Yoon;Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.737-743 / 2017
  • The intelligent life-log system proposed in this paper identifies and records a myriad of everyday-life information about the occurrence of various events (when, where, with whom, what, and how), that is, a wide variety of contextual information involving person, scene, age, emotion, relation, state, location, moving route, and so on, attaching a unique tag to each piece of information so that users can access it quickly and easily. Context awareness generates and classifies information on a per-tag basis using auto-tagging and biometric recognition technology and builds a situation-information database. In this paper, we developed an active-modeling method and an application that recognizes neutral and smiling expressions from lip lines to automatically record emotion information.
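
As a rough illustration of the lip-line idea, here is a minimal sketch, not the authors' implementation: it assumes three lip landmarks (left mouth corner, lower-lip center, right mouth corner) and an illustrative threshold, and classifies a face as neutral or smiling from the upward curvature of the lip line.

```python
# A minimal sketch of lip-line smile classification (not the paper's code).
# The landmark format and the threshold are assumptions for illustration.
import numpy as np

def smile_score(lip_points: np.ndarray) -> float:
    """Estimate lip-line curvature from three landmarks.

    lip_points: array of shape (3, 2) holding (x, y) for the left mouth
    corner, the lower-lip center, and the right mouth corner, in image
    coordinates (y grows downward).
    """
    left, center, right = lip_points
    corner_y = (left[1] + right[1]) / 2.0
    width = np.linalg.norm(right - left)
    # Positive when the corners sit above the lip center, i.e. the lip
    # line curves upward as in a smile; normalized by mouth width.
    return float((center[1] - corner_y) / width)

def classify_expression(lip_points: np.ndarray, threshold: float = 0.08) -> str:
    return "smile" if smile_score(lip_points) > threshold else "neutral"

# Toy usage with made-up landmark coordinates.
landmarks = np.array([[40.0, 105.0], [60.0, 112.0], [80.0, 104.0]])
print(classify_expression(landmarks))  # -> "smile"
```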

The effects of the usability of products on user's emotions - with emphasis on suggestion of methods for measuring user's emotions expressed while using a product -

  • Jeong, Sang-Hoon
    • Archives of Design Research / v.20 no.2 s.70 / pp.5-16 / 2007
  • The main objective of our research is to analyze users' emotional changes while using a product, in order to reveal the influence of usability on human emotions. In this study we used three methods to extract emotional words that can come up during user interaction with a product and that reveal emotional changes. In the end, we extracted 88 emotional words for measuring the emotions users express while using products, and we grouped the 88 words into six categories using factor analysis. These six categories were found to be users' representative emotions expressed while using products. We expect the emotional words and representative emotions extracted in this study to serve as the subjective evaluation data required to measure users' emotional changes while using a product. We also propose effective methods for measuring these emotions in an environment that is natural and accessible for the field of design, using the emotion mouse and the Eyegaze. An examinee performs several tasks with the emotion mouse on a mobile-phone simulator shown on a computer monitor connected to the Eyegaze. During testing, the emotion mouse senses the user's EDA and PPG and transmits the data to the computer, the Eyegaze records changes in pupil size, and a video camera records the user's facial expressions. After each test, the user evaluates his or her own emotional changes using the emotional words extracted in the study above. We aim to evaluate satisfaction with the product's usability and compare it with the actual experimental results. Through continued studies based on this research, we hope to supply a basic framework for the development of interfaces that take users' emotions into account.
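
As a rough illustration of the physiological side of this setup, here is a minimal sketch, not the authors' pipeline, of turning raw EDA and PPG traces like those the emotion mouse transmits into two simple features: mean skin-conductance level and heart rate. The sampling rate and the signals are illustrative assumptions.

```python
# A minimal sketch of EDA/PPG feature extraction (not the paper's pipeline).
import numpy as np
from scipy.signal import find_peaks

def eda_mean_level(eda: np.ndarray) -> float:
    """Mean skin-conductance level (microsiemens) over the task window."""
    return float(np.mean(eda))

def ppg_heart_rate(ppg: np.ndarray, fs: float = 100.0) -> float:
    """Estimate heart rate in beats per minute from PPG pulse peaks."""
    # Require peaks at least 0.4 s apart (max ~150 bpm) to skip noise.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return float("nan")
    beat_intervals = np.diff(peaks) / fs          # seconds between beats
    return 60.0 / float(np.mean(beat_intervals))  # beats per minute

# Toy usage: a synthetic 72-bpm pulse wave and a flat EDA trace.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)   # 1.2 Hz pulse -> 72 bpm
eda = np.full_like(t, 4.2)          # constant 4.2 uS
print(eda_mean_level(eda), ppg_heart_rate(ppg, fs))
```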

A Study on The Expression of Digital Eye Contents for Emotional Communication (감성 커뮤니케이션을 위한 디지털 눈 콘텐츠 표현 연구)

  • Lim, Yoon-Ah;Lee, Eun-Ah;Kwon, Jieun
    • Journal of Digital Convergence / v.15 no.12 / pp.563-571 / 2017
  • The purpose of this paper is to establish emotional expression factors for digital eye contents that can be applied to digital environments. Emotions applicable to a smart doll are derived, and we suggest guidelines for the expressive factors of each emotion. For this paper, first, we studied the concepts and characteristics of emotional expression shown in the eyes, as found in publications, animation, and live video. Second, we identified six emotions (Happy, Angry, Sad, Relaxed, Sexy, Pure) and extracted the corresponding emotional expression factors. Third, we analyzed the extracted factors to establish guidelines for the emotional expression of digital eyes. As a result, this study found that the factors that distinguish and represent each emotion fall into four categories: eye shape, gaze, iris size, and effect. These can be used to enhance emotional communication in digital contents such as animations, robots, and smart toys.
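
To make the four-category guideline concrete, here is a minimal sketch of how such expression factors might be encoded per emotion; the specific factor values are illustrative assumptions, not the paper's published guideline.

```python
# A sketch encoding the paper's four expression-factor categories
# (eye shape, gaze, iris size, effect) per emotion. Values are made up.
from dataclasses import dataclass

@dataclass
class EyeExpression:
    eye_shape: str    # contour of the upper/lower lids
    gaze: str         # gaze direction
    iris_size: float  # relative iris scale (1.0 = neutral)
    effect: str       # overlay effect such as tears or sparkle

EXPRESSION_GUIDELINE = {
    "Happy":   EyeExpression("curved lower lid", "forward", 1.0, "sparkle"),
    "Angry":   EyeExpression("lowered inner brow", "forward", 0.9, "none"),
    "Sad":     EyeExpression("drooping outer lid", "downward", 1.0, "tears"),
    "Relaxed": EyeExpression("half-closed", "forward", 1.0, "none"),
    "Sexy":    EyeExpression("narrowed", "sideways", 1.1, "highlight"),
    "Pure":    EyeExpression("wide open", "upward", 1.2, "glisten"),
}

print(EXPRESSION_GUIDELINE["Happy"])
```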

Arithmetic Fluctuation Effect affected by Induced Emotional Valence (유발된 정서가에 따른 계산 요동의 효과)

  • Kim, Choong-Myung
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.2 / pp.185-191 / 2018
  • This study examined the type and extent of interference between an induced emotion and a subsequent arithmetic operation. The experiment was carried out to determine the influence of the induced emotions (anger, joy, and sorrow) and stimulus types (picture and sentence) on the cognitive processing load that may block interactions among the components of working memory. The subjects were 32 undergraduates of similar age and education who were instructed to attend to the induced emotion by imitating the facial expression and to make correct decisions during a remainder-calculation task. In the results, the stimulus types did not exhibit any difference, but there was a significant difference among the induced emotion types: response times were slower for the positive emotion (the joy condition) than for the other emotions (anger and sorrow). To determine which phase the slower responses were associated with, the error rates and delayed-correct-response rates for the emotion types were analyzed. Delayed responses in the joy condition were identified with a difference in error rate for sentence-inducing stimuli and with a difference in delayed-correct-response rate for picture-inducing stimuli. These findings not only suggest that induced positive emotion increased response time compared with negative emotions, but also imply that picture-inducing stimuli readily produce arithmetic fluctuation whereas sentence-inducing stimuli result in arithmetic failure.

Effects of the facial expression's presenting type and areas on emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun;Kim, Hyuk;Han, Kwang-Hee
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.1393-1400 / 2006
  • As technology for measuring and representing emotion advances, the need for research on facial expressions, which are culturally universal, is growing. Most facial expression studies to date have centered on static photographs of faces. In reality, however, people infer emotional states from subtle changes of expression and movements of the facial muscles rather than from a single static expression. This study aimed to show that dynamic facial expressions convey emotional states more effectively than static ones, and to compare the emotion-recognition effects of the eyes and the mouth in dynamic expressions. Facial expressions matching 15 adjectives were presented as video clips and as still photographs at three levels: the whole face, the eyes, and the mouth. Measuring the accuracy of emotion judgments showed that at all three levels video clips produced significantly higher emotion recognition than still photographs, indicating that dynamic facial expressions carry more internal information. Emotion-recognition performance also differed significantly in the order whole face, eyes, mouth, and negative emotions were recognized better from the eyes while positive emotions were recognized better from the mouth. Emotion recognition from the eyes and the mouth thus varies along the positive-negative dimension of emotion.

GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model

  • Ahn, Hyunchul;Kim, Seongjin;Kim, Jae Kyeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.6 / pp.2056-2069 / 2014
  • In order to implement interactive and personalized Web services properly, it is necessary to understand the tangible and intangible responses of users and to recognize their emotional states. Recently, some studies have attempted to build emotional state estimation models based on facial expressions. Most of these studies have applied multiple regression analysis (MRA), artificial neural networks (ANN), or support vector regression (SVR) as the prediction algorithm, but the prediction accuracies have been relatively low. In order to improve the prediction performance of the emotion prediction model, we propose a novel SVR model that is optimized using a genetic algorithm (GA). Our proposed algorithm, GASVR, is designed to optimize the kernel parameters and the feature subsets of SVRs in order to predict the levels of two aspects of users' emotions: valence and arousal. In order to validate the usefulness of GASVR, we collected a real-world data set of facial responses and emotional states via a survey. We applied GASVR and other algorithms, including MRA, ANN, and conventional SVR, to the data set. Finally, we found that GASVR outperformed all of the comparative algorithms in the prediction of valence and arousal levels.
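
As a rough illustration of the GASVR idea, here is a minimal sketch, not the authors' implementation: a small genetic algorithm searches jointly over an RBF SVR's kernel parameters (C, gamma) and a binary feature-subset mask, scoring each candidate by cross-validated prediction error. The data set, parameter ranges, and GA settings are illustrative assumptions.

```python
# A minimal GA-over-SVR sketch (not the paper's code): joint search over
# kernel parameters and a feature mask, scored by cross-validated error.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                  # stand-in facial features
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=120)  # stand-in valence

def fitness(genome):
    """Higher is better: negative MSE of an SVR built from the genome."""
    log_c, log_gamma, mask = genome
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return -np.inf                          # empty feature set is invalid
    model = SVR(kernel="rbf", C=10 ** log_c, gamma=10 ** log_gamma)
    scores = cross_val_score(model, X[:, cols], y, cv=3,
                             scoring="neg_mean_squared_error")
    return scores.mean()

def random_genome():
    return [rng.uniform(-1, 3), rng.uniform(-4, 0), rng.integers(0, 2, 10)]

def mutate(genome):
    log_c, log_gamma, mask = genome
    mask = mask.copy()
    mask[rng.integers(0, mask.size)] ^= 1       # flip one feature bit
    return [log_c + rng.normal(scale=0.3),
            log_gamma + rng.normal(scale=0.3), mask]

# Tiny (mu + lambda)-style GA loop: keep the best half, mutate to refill.
population = [random_genome() for _ in range(12)]
for generation in range(15):
    population.sort(key=fitness, reverse=True)
    population = population[:6] + [mutate(g) for g in population[:6]]

best = max(population, key=fitness)
print("best CV neg-MSE:", fitness(best), "features:", np.flatnonzero(best[2]))
```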

The Uncanny Valley Effect for Celebrity Faces and Celebrity-based Avatars (연예인 얼굴과 연예인 기반 아바타에서의 언캐니 밸리)

  • Jung, Na-ri;Lee, Min-ji;Choi, Hoon
    • Science of Emotion and Sensibility / v.25 no.1 / pp.91-102 / 2022
  • As activities in virtual spaces become more common, virtual human agents such as avatars are increasingly used in place of people, but the uncanny valley effect, in which people feel discomfort when they see artifacts that look nearly human, is an obstacle. In this study, we explored the uncanny valley effect for celebrity avatars. We manipulated the degree of atypicality by adjusting eye size in photos of celebrities, ordinary people, and their avatars, and measured the intensity of the uncanny valley effect. The uncanny valley effect for celebrities and celebrity avatars appeared to be stronger than the effect for ordinary people. This result is consistent with previous findings that more robust facial representations are formed for familiar faces, making it easier to detect facial changes. For the real faces of celebrities and ordinary people, as in previous studies, the higher the degree of atypicality, the greater the uncanny valley effect; this result was not found for the avatar stimuli, however. This high tolerance for atypicality in avatars seems to arise from cartoon characters' tendency to have exaggerated facial features such as the eyes, nose, and mouth. These results suggest that efforts to reduce the uncanny valley are necessary in virtual-space services that use celebrity avatars.

The analysis of parameters and affection(Gamsung) for facial types of Korean females in twenties (한국인 20대 여성 얼굴의 수치 및 감성 구조 분석)

  • 박수진;김한경;한재현;이정원;김종일;송경석;정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2001.05a / pp.74-81 / 2001
  • The face is a highly complex visual stimulus that has its own dedicated processing region in the inferotemporal (IT) cortex (Bruce, Desimone, & Gross, 1981; Rolls, 1992). This study was conducted to extract the physical features that make up this complex stimulus, to analyze faces in terms of their metric structure, and to link that structure to an affective (Gamsung) space. To this end, we first defined 36 features and feature relations inside the face. To classify facial outlines, we also marked 14 feature points along the facial contour and measured the distance from the tip of the nose to each of these points. Because basic face size differs from person to person, the length values among these features were normalized by facial width or facial height. We then performed a principal component analysis (PCA) with the 36 internal facial features and five facial outline types as input, and posited a five-dimensional space based on the five resulting factor scores. Representative faces covering this space were selected, excluding regions where such faces can hardly be assumed to exist, and a face corresponding to the mean was added, yielding 30 representative face types in total. A preliminary affect rating was conducted on the selected faces, and the representative faces were placed in a two-dimensional affective space.
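
As a rough sketch of the measurement-to-space pipeline described above (not the authors' code), the following normalizes length-type facial measures by face width, runs a principal component analysis, and treats five factor scores as coordinates in a five-dimensional face space; the feature matrix is random stand-in data.

```python
# A minimal sketch of the PCA face-space pipeline (stand-in data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_faces = 100
raw = rng.normal(loc=50.0, scale=8.0, size=(n_faces, 36))   # 36 facial measures
face_width = rng.normal(loc=140.0, scale=10.0, size=(n_faces, 1))

# Normalize length-type measures by each face's width so that overall
# face size does not dominate the components.
features = raw / face_width

pca = PCA(n_components=5)
factor_scores = pca.fit_transform(features)     # (n_faces, 5) coordinates
print("explained variance ratios:", pca.explained_variance_ratio_)

# The face closest to the origin of the 5-D space plays the role of the
# "mean" face added to the representative set in the paper.
mean_face_idx = int(np.argmin(np.linalg.norm(factor_scores, axis=1)))
print("index of most average face:", mean_face_idx)
```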

Emotional Recognition System Using Eigenfaces (Eigenface를 이용한 인간의 감정인식 시스템)

  • Joo, Young-Hoon;Lee, Sang-Yun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.2 / pp.216-221 / 2003
  • Emotion recognition is a topic on which little research has been done to date. This paper proposes a new method that can recognize human emotion from facial images by using eigenspace. To do so, first, we obtain the face image by using skin color from the original color image acquired by a CCD color camera. Second, we obtain the weight vector by projecting the face image into eigenspace. We then propose a method for finding each person's identity and emotion from the weights of the projected image. Finally, we show the practical applicability of the proposed method through experiments.
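
In the spirit of this eigenspace approach, here is a minimal eigenface sketch, not the authors' implementation: flatten grayscale face crops, learn an eigenspace with PCA, project faces to weight vectors, and classify a new face by the nearest labeled weight vector. The image size and the toy data are illustrative assumptions.

```python
# A minimal eigenface-style emotion classifier sketch (toy data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Stand-in training data: 40 flattened 32x32 face crops with emotion labels.
train_faces = rng.random((40, 32 * 32))
train_labels = np.array(["happy", "sad", "angry", "surprise"] * 10)

pca = PCA(n_components=20)
train_weights = pca.fit_transform(train_faces)   # eigenspace weight vectors

def recognize_emotion(face: np.ndarray) -> str:
    """Project a flattened face into eigenspace and return the label of
    the nearest training weight vector (1-nearest-neighbor)."""
    weights = pca.transform(face.reshape(1, -1))
    distances = np.linalg.norm(train_weights - weights, axis=1)
    return str(train_labels[int(np.argmin(distances))])

query = rng.random(32 * 32)                      # a new flattened face crop
print(recognize_emotion(query))
```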