• Title/Abstract/Keyword: Expression of Anger


로봇 감정의 강도를 표현하기 위한 LED 의 색과 깜빡임 제어 (Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity)

  • 김민규;이희승;박정우;조수훈;정명진
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2008년도 학술대회 1부 / pp.547-552 / 2008
  • Robots will become ever closer to people, and human-robot interaction will therefore become more active. Because intuitive means of communication are essential for such interaction, research on enabling robots to express emotion through facial expressions has been actively pursued. Facial expressions have mainly been used so far, but expressing the intensity of an emotion as humans do requires means beyond the face alone. A robot can express emotional intensity through arm gestures, movement, sound, and color. This paper studies color and blinking so that LEDs can be used for this purpose. Although the relationship between color and emotion has been studied extensively, the lack of quantitative data makes it difficult to implement on an actual robot. In this paper, we determine colors and blinking periods that effectively express the six basic emotions (anger, sadness, disgust, surprise, happiness, fear) and survey the perceived emotional intensity using an avatar. As a result, for sadness, disgust, and anger, color and blinking increased the perceived intensity of the emotion. For fear, happiness, and surprise, color and blinking had little effect on emotion recognition; this could be improved by revising the colors or blinking patterns assigned to those emotions.

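A minimal sketch of the emotion-to-LED mapping idea described in the abstract above. The colors, blink periods, and intensity scaling are hypothetical placeholders, not the paper's measured values.

```python
# Illustrative sketch only: the paper determines colors and blink periods
# empirically via surveys; the values below are hypothetical placeholders.

# emotion -> (assumed RGB color, assumed blink period in seconds)
EMOTION_LED = {
    "anger":     ((255, 0, 0),   0.3),
    "sadness":   ((0, 0, 255),   2.0),
    "disgust":   ((0, 128, 0),   1.0),
    "surprise":  ((255, 255, 0), 0.5),
    "happiness": ((255, 128, 0), 1.5),
    "fear":      ((128, 0, 128), 0.4),
}

def led_state(emotion: str, t: float, intensity: float = 1.0):
    """Return the LED color at time t, scaled by emotional intensity (0..1).

    The LED blinks with a 50% duty cycle at the emotion's blink period;
    brightness is scaled by the requested intensity (an assumption here).
    """
    color, period = EMOTION_LED[emotion]
    on = (t % period) < (period / 2)        # simple square-wave blinking
    scale = intensity if on else 0.0
    return tuple(int(c * scale) for c in color)

# Example: sample the "anger" LED over one second at 10 Hz
print([led_state("anger", t / 10) for t in range(10)])
```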

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / Vol. 21, No. 2E / pp.98-104 / 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained them to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voices, we used prosodic parameters such as pitch signals, energy, and their derivatives, which were then trained with an HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, and these feature parameters were then trained with an NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions showed better performance than either of the two isolated sets of parameters. The simulation results were also compared with human questionnaire results.
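A minimal sketch of the two-channel idea in this abstract, assuming hmmlearn and scikit-learn as stand-ins for the authors' own HMM/NN implementations; feature extraction (pitch, energy, thermal/visible image features) is replaced by random placeholder arrays.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(0)

# --- Voice channel: one HMM per emotion over prosodic feature sequences ---
voice_hmms = {}
for k, emo in enumerate(EMOTIONS):
    # placeholder training data: 20 utterances x 30 frames x 3 features
    seqs = [rng.normal(loc=k, size=(30, 3)) for _ in range(20)]
    hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    hmm.fit(np.vstack(seqs), [len(s) for s in seqs])
    voice_hmms[emo] = hmm

# --- Face channel: a neural network over image feature vectors ---
face_X = np.vstack([rng.normal(loc=k, size=(40, 16)) for k in range(4)])
face_y = np.repeat(EMOTIONS, 40)
face_nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(face_X, face_y)

def recognize(voice_seq, face_feat):
    """Combine per-emotion HMM log-likelihoods with NN class probabilities."""
    voice_scores = np.array([voice_hmms[e].score(voice_seq) for e in EMOTIONS])
    voice_prob = np.exp(voice_scores - voice_scores.max())
    voice_prob /= voice_prob.sum()
    face_prob = face_nn.predict_proba([face_feat])[0]
    # align NN class order with EMOTIONS before combining the two channels
    face_prob = np.array([face_prob[list(face_nn.classes_).index(e)] for e in EMOTIONS])
    return EMOTIONS[int(np.argmax(voice_prob * face_prob))]

print(recognize(rng.normal(loc=2, size=(30, 3)), rng.normal(loc=2, size=16)))
```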

<도가니>의 참여적인 관객성을 위한 재현전략 (Representation Strategy for Participatory Spectatorship in Silence)

  • 계운경
    • 한국콘텐츠학회논문지 / Vol. 14, No. 9 / pp.85-92 / 2014
  • This paper examines the representational strategies through which <Silence> (Dogani) secured its popular appeal. The film's 'evocation of collective memory', 'classical narrative', and 'cinematic self-reflexivity' are the key factors that angered spectators, gathered them in the cyber public sphere, and, further, made 'participatory spectatorship' possible. However, the devices that allowed the film to maximize the spectators' anger also raise ethical controversy. This paper aims to analyze why <Silence> was able to elicit 'participatory spectatorship' through its various representational strategies.

딥러닝 기반의 얼굴영상에서 표정 검출에 관한 연구 (Detection of Face Expression Based on Deep Learning)

  • 원철호;이법기
    • 한국멀티미디어학회논문지 / Vol. 21, No. 8 / pp.917-924 / 2018
  • Recently, studies using LBP and SVM have been carried out as one of the image-based approaches to facial emotion recognition. LBP, introduced by Ojala et al., is widely used in the field of image recognition due to its high discriminative power, robustness to illumination change, and simple operation. In addition, CS (Center-Symmetric)-LBP, a modified form of LBP, is widely used for face recognition. In this paper, we propose a method to detect four facial expressions, namely neutral (expressionless), happiness, surprise, and anger, using a deep neural network. The validity of the proposed method is verified in terms of accuracy. Based on the existing LBP feature parameters, it was confirmed that the method using the deep neural network is superior to the method using the AdaBoost and SVM classifiers.
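A minimal sketch of an LBP-feature plus neural-network pipeline of the kind this abstract describes, assuming scikit-image's LBP and scikit-learn's MLP as stand-ins for the paper's feature extraction and deep network; the four-class setup follows the abstract, but the data is synthetic placeholder data.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

CLASSES = ["neutral", "happiness", "surprise", "anger"]
P, R = 8, 1          # 8 neighbors, radius 1
N_BINS = P + 2       # 'uniform' LBP yields P + 2 distinct codes

def lbp_histogram(gray):
    """Uniform LBP code histogram used as the feature vector for one face."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

rng = np.random.default_rng(1)
# placeholder "face" images: 30 random 64x64 grayscale images per class
images = [(rng.random((64, 64)) * 255).astype(np.uint8)
          for _ in CLASSES for _ in range(30)]
X = np.array([lbp_histogram(img) for img in images])
y = np.repeat(CLASSES, 30)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000).fit(X, y)
print(clf.predict([lbp_histogram((rng.random((64, 64)) * 255).astype(np.uint8))]))
```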

유아가 인식하는 부정적 정서와 반응 (Young Children's Perceptions and Responses to Negative Emotions)

  • 정윤희;김희진
    • 아동학회지 / Vol. 23, No. 2 / pp.31-47 / 2002
  • In this study, the perceptions and responses of 136 kindergarten children from middle-SES families were recorded in one-to-one interviews about the causes of, reasons for expressing, and responses to negative emotions. Results showed that children perceived the causes of anger and sadness as 'interpersonal events' and perceived the cause of fear to be 'fantasy/scary events'. The children tended not to express their negative emotions because they expected negative responses from their peers and mothers, but when they did, they expressed their negative emotions to their mothers rather than to peers. Children responded to the negative emotions of their peers with 'problem-solving focused strategies', but they responded to their mothers' negative emotions with passive strategies, such as 'emotion-focused response' and 'avoidance'.


아다부스트 학습과 비정방형 Differential LBP를 이용한 얼굴영상 특징분석 (Face Image Analysis using Adaboost Learning and Non-Square Differential LBP)

  • 임길택;원철호
    • 한국멀티미디어학회논문지 / Vol. 19, No. 6 / pp.1014-1023 / 2016
  • In this study, we present a non-square Differential LBP operation that describes micro-patterns well in the horizontal and vertical components. We propose a way to construct an LBP operation with various directional components as well as the diagonal component. To verify the validity of the proposed operation, Differential LBP was evaluated with respect to accuracy, sensitivity, and specificity for facial expression classification. In the accuracy comparison, the proposed LBP operation obtains better results than the Square LBP and LBP-CS operations. The proposed Differential LBP also outperforms the two previous methods in the sensitivity and specificity indicators for 'Neutral', 'Happiness', 'Surprise', and 'Anger', confirming the superiority of Differential LBP.
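A rough sketch of a differential-LBP-style code built from horizontal and vertical neighbor differences. The paper's exact non-square Differential LBP definition is not given in the abstract, so this is an illustrative interpretation, not the authors' operator.

```python
import numpy as np

def differential_lbp(gray, dx=1, dy=1):
    """4-bit code per pixel from signed horizontal/vertical differences.

    Bits encode whether the differences to the right/left (horizontal) and
    below/above (vertical) neighbors are non-negative.
    """
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    c = g[dy:h-dy, dx:w-dx]                          # center region
    right, left = g[dy:h-dy, 2*dx:], g[dy:h-dy, :w-2*dx]
    down, up = g[2*dy:, dx:w-dx], g[:h-2*dy, dx:w-dx]
    code = ((right - c >= 0).astype(np.uint8)
            | ((c - left >= 0).astype(np.uint8) << 1)
            | ((down - c >= 0).astype(np.uint8) << 2)
            | ((c - up >= 0).astype(np.uint8) << 3))
    codes[dy:h-dy, dx:w-dx] = code
    return codes

# Example on a tiny synthetic gradient image
img = np.arange(25, dtype=float).reshape(5, 5)
print(differential_lbp(img))
```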

웨이보 인기뉴스에 관한 감정표현에 영향을 미치는 요인 - '중국 산시성 린펀시 반점 붕괴 사건'을 중심으로 - (Influencing Factors on the Emotional Expression in Weibo Hot News - Focusing on 'Restaurant Collapse in Linfen City, Shanxi Province' -)

  • 륙치금;남인용
    • 한국콘텐츠학회논문지 / Vol. 21, No. 5 / pp.105-117 / 2021
  • This study examined the factors influencing the emotional expressions found in comments on the hot news item 'Restaurant Collapse in Linfen City, Shanxi Province' posted on Sina Weibo. The results were as follows. First, emotional expression differed by gender: women expressed stronger anger, disappointment, sadness, and blame than men. Second, the intensity of emotional expression was significantly higher among users in the eastern region than among those in the central and western regions. Third, the larger a user's Weibo count (the total number of posts in which the user commented and expressed emotion), the stronger the emotional expression. Fourth, unverified users expressed disappointment and sadness more strongly than verified users. This study is meaningful in that, by examining the factors influencing emotional expression in the formation of online public opinion in China, it enables comparison with Western social networks such as Twitter and Facebook, and in that it applied big-data analysis methods to online news analysis.

마음챙김 명상과 이정변기요법을 이용한 공황장애 그룹치료 효과에 대한 임상적 고찰 (A Study on the Clinical Effects of Group Therapy for Panic Disorder Patients Based on Mindfulness & Li-Gyeung-Byun-Qi Therapy)

  • 이성용;유소정;최성열;유영수;강형원
    • 동의신경정신과학회지 / Vol. 25, No. 4 / pp.319-332 / 2014
  • Objectives: This study was conducted to evaluate the clinical effects of group therapy for panic disorder patients based on Mindfulness & Li-Gyeung-Byun-Qi therapy. Methods: The FFMQ, BDI, STAI, STAXI, panic attacks, anticipatory anxiety, and subjective improvement of three panic disorder patients were compared pre- and post-treatment when given Mindfulness & Li-Gyeung-Byun-Qi therapy. Results: 1) After the patient in case 1 underwent 5 weeks of group therapy for panic disorder, the mindfulness meditation score was slightly improved, anxiety and depression were significantly decreased, and expression of anger was also improved. In addition, panic attacks and anticipatory anxiety became more stable in the objective evaluation, while 'extreme improvement' was shown in the subjective evaluation. 2) After the patients in cases 2 and 3 underwent 5 weeks of group therapy for panic disorder, mindfulness meditation scores were slightly improved, anxiety and depression were significantly decreased, and expression of anger was also improved. In addition, panic attacks and anticipatory anxiety became more stable in the objective evaluation, while 'moderate improvements' were shown in the subjective evaluation. Conclusions: Based on these cases, group therapy for panic disorder utilizing Mindfulness & Li-Gyeung-Byun-Qi therapy was effective in maintaining meditation and controlling emotions such as anxiety, depression, and anger. Therefore, expansion of clinical utilization through the standardization of a group therapy program for panic disorder is needed. Furthermore, a comparative study of the effects of previous cognitive programs for panic disorder according to an objectified and standardized manual is needed in the future.

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol. 31, No. 3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion using facial thermal images. Background: Facial thermal images have two advantages compared to visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images are changed not only by facial expression but also by emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period. The baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. Eyes, mouth, and glabella were selected as facial expression features, and forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas were significantly decreased during the emotional experience, and the changes differed significantly by the kind of emotion. The result of linear discriminant analysis for emotion recognition showed that the correct classification percentage for the four emotions was 62.7% when using both facial expression features and emotional state features. The accuracy was slightly but significantly lower, at 56.7%, when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important to classify the emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
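A minimal sketch of the discriminant-analysis comparison described in this abstract, assuming scikit-learn's LDA; the temperature-difference features below are synthetic placeholders, not the study's measured data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

EMOTIONS = ["anger", "fear", "boredom", "neutral"]
rng = np.random.default_rng(2)
n_per_class = 50

# facial-expression features: eyes, mouth, glabella temperature differences
expr = np.vstack([rng.normal(loc=k * 0.2, scale=1.0, size=(n_per_class, 3))
                  for k in range(4)])
# emotional-state features: forehead, nose, cheek temperature differences
state = np.vstack([rng.normal(loc=k * 0.1, scale=1.0, size=(n_per_class, 3))
                   for k in range(4)])
y = np.repeat(EMOTIONS, n_per_class)

# compare the three feature sets the abstract reports accuracies for
for name, X in [("expression + state", np.hstack([expr, state])),
                ("expression only", expr),
                ("state only", state)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```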

3D 아바타 동작의 선택 제어를 통한 감정 표현 (Emotional Expression through the Selection Control of Gestures of a 3D Avatar)

  • 이지혜;진영훈;채영호
    • 한국CDE학회논문집 / Vol. 19, No. 4 / pp.443-454 / 2014
  • In this paper, an intuitive method for emotional expression by a 3D avatar is presented. Through selection control of the 3D avatar's motions, communication that is easier to use and more intuitive than emoticons becomes possible. Twelve different avatar emotions are classified into positive emotions (cheering, being impressed, joy, welcoming, fun, pleasure) and negative emotions (anger, jealousy, wrath, frustration, sadness, loneliness). Combinations with lower-body motions are used to represent twelve additional emotions: amusement, joyousness, surprise, enthusiasm, gladness, excitement, sulking, discomfort, irritation, embarrassment, anxiety, and sorrow. To obtain realistic human postures, motion capture data in BVH format are used, and the synthesis of the BVH file data is implemented by applying the proposed emotional expression rules for the 3D avatar.
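A minimal sketch of the selection-control idea in this abstract: each base emotion maps to an upper-body BVH clip, and combining it with a lower-body clip yields an additional emotion. The clip file names and pairings are hypothetical placeholders, not the paper's actual rules or motion data.

```python
BASE_EMOTIONS = {
    # base emotion -> upper-body motion clip (hypothetical BVH file names)
    "joy": "upper_joy.bvh",
    "anger": "upper_anger.bvh",
    "sadness": "upper_sadness.bvh",
}

COMBINED_EMOTIONS = {
    # (base emotion, lower-body clip) -> derived additional emotion
    ("joy", "lower_bounce.bvh"): "excitement",
    ("anger", "lower_stomp.bvh"): "irritation",
    ("sadness", "lower_slump.bvh"): "sorrow",
}

def select_motion(emotion):
    """Return the (upper, lower) BVH clips to play for the requested emotion."""
    if emotion in BASE_EMOTIONS:
        return BASE_EMOTIONS[emotion], None
    for (base, lower), derived in COMBINED_EMOTIONS.items():
        if derived == emotion:
            return BASE_EMOTIONS[base], lower
    raise KeyError(f"no motion rule defined for {emotion!r}")

print(select_motion("anger"))       # -> ('upper_anger.bvh', None)
print(select_motion("irritation"))  # -> ('upper_anger.bvh', 'lower_stomp.bvh')
```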