• Title/Summary/Keyword: Arousal-Valence

Multidimensional Affective model-based Multimodal Complex Emotion Recognition System using Image, Voice and Brainwave (다차원 정서모델 기반 영상, 음성, 뇌파를 이용한 멀티모달 복합 감정인식 시스템)

  • Oh, Byung-Hun; Hong, Kwang-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2016.04a / pp.821-823 / 2016
  • This paper proposes a multimodal complex emotion recognition system based on a multidimensional affective model, using image, voice, and EEG. Features extracted from the user's facial image, voice, and brainwaves are scored by mapping them to explicit response levels on the multidimensional affective model (Arousal, Valence, Dominance), known in psychology and cognitive science as the affective components that constitute human emotion. The resulting scores are then mapped onto the three-dimensional emotion model, so that the system recognizes not only human emotions (single and complex) but also their intensity.
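
A minimal sketch of the kind of scoring-and-mapping step this abstract describes: per-modality scores are fused into a point in the 3-D Arousal-Valence-Dominance space, the nearest emotion prototype gives the label, and the distance from the neutral origin is read as intensity. The fusion weights, prototype coordinates, and score values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical per-modality AVD scores in [-1, 1] (face, voice, EEG).
scores = {
    "face":  np.array([0.6, 0.4, 0.1]),   # (arousal, valence, dominance)
    "voice": np.array([0.5, 0.2, 0.0]),
    "eeg":   np.array([0.7, 0.3, 0.2]),
}
weights = {"face": 0.4, "voice": 0.3, "eeg": 0.3}  # assumed fusion weights

# Weighted fusion into a single point in AVD space.
point = sum(w * scores[m] for m, w in weights.items())

# Assumed prototype coordinates for a few basic emotions.
prototypes = {
    "happy":   np.array([0.5, 0.8, 0.4]),
    "angry":   np.array([0.8, -0.6, 0.6]),
    "sad":     np.array([-0.4, -0.7, -0.5]),
    "relaxed": np.array([-0.5, 0.6, 0.2]),
}

# Nearest prototype gives the dominant emotion; a second, nearly as
# close prototype would indicate a complex (mixed) emotion.
dists = {name: np.linalg.norm(point - p) for name, p in prototypes.items()}
emotion = min(dists, key=dists.get)
intensity = np.linalg.norm(point)  # distance from the neutral origin

print(f"emotion={emotion}, intensity={intensity:.2f}")
```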

Cognitive and Emotional Structure of a Robotic Game Player in Turn-based Interaction

  • Yang, Jeong-Yean
    • International journal of advanced smart convergence / v.4 no.2 / pp.154-162 / 2015
  • This paper focuses on how cognitive and emotional structures affect humans during long-term interaction. We design an interaction around a turn-based game, the Chopstick Game, in which two agents play with numbers using their fingers. While the human and the robot agent alternate turns, the human user engages with the game and learns new winning strategies from the robot agent. The conventional valence-arousal space is applied to design the emotional interaction. For the robotic system, we implement finger gesture recognition and emotional behaviors designed for a three-dimensional virtual robot. Experimental tests verify the validity of the proposed schemes, and the effect of the emotional interaction is discussed.
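
For orientation, a minimal sketch of the turn logic in the standard Chopsticks finger game; the paper does not spell out its rule variant, so the mod-5 "dead hand" rule used here is an assumption.

```python
# Minimal Chopsticks game state: each player has two hands, a hand
# with 0 fingers is "dead", and counts wrap modulo 5 (assumed variant).
hands = {"human": [1, 1], "robot": [1, 1]}

def attack(attacker, a_hand, defender, d_hand):
    """Attacker taps one of the defender's hands, adding fingers to it."""
    if hands[attacker][a_hand] == 0 or hands[defender][d_hand] == 0:
        raise ValueError("dead hands cannot attack or be attacked")
    hands[defender][d_hand] = (hands[defender][d_hand]
                               + hands[attacker][a_hand]) % 5

def loser():
    """A player loses when both hands are dead."""
    for player, h in hands.items():
        if h[0] == 0 and h[1] == 0:
            return player
    return None

attack("human", 0, "robot", 1)   # human's left hand taps robot's right
print(hands, loser())
```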

A Study on the analyzation method of EEG adapting Dataset (Dataset을 활용한 뇌파 데이터 분석 방법에 관한 연구)

  • Lee, HyunJu; Shin, DongIl; Shin, DongKyoo
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.995-997 / 2014
  • EEG is among the most actively studied biosignals in recent years. This study performed a data analysis experiment using the DEAP Dataset, an open emotional EEG dataset. The DEAP Dataset comprises 32 recordings, each with 32 channels. In preprocessing, noise was removed with a digital IIR (Infinite Impulse Response) filter, and the LMS (Least Mean Squares) algorithm was used to remove electrooculogram (EOG) artifacts. Emotions were classified into four categories on the Valence-Arousal plane, and an SVM (Support Vector Machine), a pattern recognition algorithm, was used for the classification experiment. The SVM achieved accuracy in the 70% range, higher than previous experimental results.
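
A minimal sketch of the pipeline this abstract outlines (IIR band-pass filtering, then SVM classification of the four Valence-Arousal quadrants), using scipy and scikit-learn. The filter settings, log-variance features, and random data and labels are placeholder assumptions, and the LMS-based EOG removal step is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 128  # DEAP's preprocessed sampling rate

# Placeholder stand-in for DEAP trials: (n_trials, n_channels, n_samples).
eeg = rng.standard_normal((128, 32, fs * 4))

# IIR band-pass filter (Butterworth; the 4-45 Hz band is an assumed choice).
b, a = butter(4, [4, 45], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg, axis=-1)

# Simple band-power-like features: log-variance per channel.
features = np.log(filtered.var(axis=-1))

# Quadrant labels on the Valence-Arousal plane (0..3), random here.
labels = rng.integers(0, 4, size=len(features))

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```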

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Emotion recognition technology is an essential part of human personality analysis. Existing approaches defined personality characteristics through surveys, but communication often cannot take place without considering emotions, so emotion recognition technology is essential for communication and has been adopted in many other fields as well. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions; this study employed two sensor types. The existing binary (High/Low) arousal-valence scheme was subdivided into four levels per dimension to classify emotions in more detail. Signal characteristics were then extracted using a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are commonly used to learn 2-D images, 1-D sensor data served as the input in this paper. Finally, the proposed emotion recognition system was evaluated on measurements from actual sensors.
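
A minimal sketch of a 1-D CNN over raw sensor windows that outputs the sixteen classes implied by four arousal levels times four valence levels. The layer sizes, channel count, and window length are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Emotion1DCNN(nn.Module):
    """1-D CNN: sensor channels in, 16 emotion classes out
    (4 arousal levels x 4 valence levels)."""
    def __init__(self, in_channels=2, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Example forward pass on a dummy 2-channel, 512-sample window.
model = Emotion1DCNN()
logits = model(torch.randn(8, 2, 512))
print(logits.shape)  # torch.Size([8, 16])
```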

A Music Retrieval Scheme based on Fuzzy Inference on Musical Mood and Emotion (음악 무드와 감정의 퍼지 추론을 기반한 음악 검색 기법)

  • Jun, Sang-Hoon; Rho, Seung-Min; Hwang, Een-Jun
    • Proceedings of the Korea Information Processing Society Conference / 2008.05a / pp.51-53 / 2008
  • With the spread of digital audio and web streaming driven by advances in audio compression technology, users can now access music information easily. Accordingly, there is a growing need not only for easier and more efficient music retrieval methods but also for the ability to retrieve music appropriate to the user's situation. This paper proposes a system that uses a database classified by musical features and retrieves suitable music by analyzing the user's emotion. To process the user's emotional input efficiently, the system applies Thayer's 2D emotional space and handles the two inputs of the Valence-Arousal model. To define the IF-THEN rules of the Fuzzy Inference System used to find the most suitable music, existing linguistically defined findings from research on musical emotion were applied, and the system was designed to retrieve first the music most similar to the inferred result. A user survey was conducted to validate the implemented system.
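
For orientation, a minimal Mamdani-style sketch of fuzzy IF-THEN inference over Valence-Arousal inputs on Thayer's 2-D space, with hand-rolled triangular memberships. The membership functions, mood labels, and rules here are illustrative assumptions, not the system's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mood_degrees(valence, arousal):
    """Fuzzy IF-THEN rules on Thayer's 2-D space (inputs in [-1, 1]);
    rule strength = min of antecedent memberships (Mamdani-style)."""
    low_v, high_v = tri(valence, -2, -1, 0.2), tri(valence, -0.2, 1, 2)
    low_a, high_a = tri(arousal, -2, -1, 0.2), tri(arousal, -0.2, 1, 2)
    return {
        "exuberant": min(high_v, high_a),  # IF valence high AND arousal high
        "content":   min(high_v, low_a),
        "anxious":   min(low_v, high_a),
        "depressed": min(low_v, low_a),
    }

# Rank moods for a mildly positive, highly aroused input; a retrieval
# step would then return tracks tagged with the strongest mood first.
print(sorted(mood_degrees(0.4, 0.8).items(), key=lambda kv: -kv[1]))
```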

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim; Won Kuk Park; Il Young Choi
    • Asia pacific journal of information systems / v.27 no.1 / pp.38-53 / 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. In the multimodal case using facial and body expressions, many studies have relied on normal cameras, which generally produce only two-dimensional images and therefore provide limited information. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research should support the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.
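
A minimal sketch, under placeholder assumptions, of a feed-forward network fusing webcam facial features with Kinect body features; the feature dimensions, the four-class label set, and the random data are all invented for illustration and are not the paper's model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Placeholder features: 68 facial-landmark values from a webcam and
# 60 joint coordinates (20 Kinect joints x 3-D), concatenated per sample.
face = rng.standard_normal((300, 68))
body = rng.standard_normal((300, 60))
X = np.hstack([face, body])
y = rng.integers(0, 4, size=300)   # assumed 4 emotion classes

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=1).fit(X, y)
print("training accuracy:", clf.score(X, y))
```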

The effect of LED lighting hues on the rating and recognition of affective stimulus (LED 조명색상이 정서자극의 평정과 재인에 미치는 효과)

  • Pak, Hyen-Sou; Lee, Chan-Su; Jang, Ja-Soon
    • Science of Emotion and Sensibility / v.14 no.3 / pp.371-384 / 2011
  • Three experiments examined how LED lighting hues influence the rating and recognition of affective stimuli. In Experiments 1 and 2, IAPS affective pictures were used, and an affective rating task (valence and arousal) and a recognition memory task were conducted under red, green, blue, and white LED lighting in Experiment 1 and under cyan, magenta, yellow, and white lighting in Experiment 2. In Experiment 3, affective words were used and the same two tasks were conducted under red, green, blue, and white LED lighting. In the affective rating tasks, when primary hues (RGB) were used, red LED lighting elicited excitement on the arousal dimension and green LED lighting evoked pleasantness on the valence dimension. When secondary hues (CMY) were used, magenta and cyan showed similar but weaker response patterns compared with red and green. In the recognition memory task, responses to picture stimuli presented under green and cyan lighting tended to be slightly faster than under the other conditions, but the difference was not significant. In Experiment 3, however, recognition memory responses to affective words presented under green lighting were significantly faster. These results indicate that warm colors such as red and magenta elicit unpleasantness or excitement while cool colors such as green and cyan evoke pleasantness or relaxation, and that primary hues provoke stronger positive or negative affect than secondary hues. In particular, the recognition memory result of Experiment 3 suggests that green LED lighting may benefit memory performance for linguistic rather than visual stimuli.

Developing Korean Affect Word List and It's Application (정서가, 각성가 및 구체성 평정을 통한 한국어 정서단어 목록 개발)

  • Hong, Youngji; Nam, Ye-eun; Lee, Yoonhyoung
    • Korean Journal of Cognitive Science / v.27 no.3 / pp.377-406 / 2016
  • Current lists of Korean emotion words either do not consider word frequency or include only emotion-expressing words such as 'joy' while disregarding emotion-inducing words such as 'heaven'. In addition, none of the current lists contains the concreteness level of the emotional words. The present study therefore aimed to develop a new Korean affect word list that addresses these limitations. In Experiment 1, valence, arousal, and concreteness ratings of 450 Korean emotion-expressing and emotion-inducing nouns were collected from 399 participants. In Experiment 2, an emotional Stroop task was performed with the newly developed word list to test its usefulness. The results showed clear congruency effects between emotional words and emotional faces: response times increased and errors were more frequent when the emotions of the words and faces did not match than when they matched. These results suggest that the new Korean affect word list can be effectively adopted in studies examining the influence of various aspects of emotion.

Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services (지능형 전시 서비스 구현을 위한 멀티모달 감정 상태 추정 모형)

  • Lee, Kichun; Choi, So Yun; Kim, Jae Kyeong; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.1-14 / 2014
  • Both researchers and practitioners are showing increased interest in interactive exhibition services. Interactive exhibition services are designed to respond directly to visitor reactions in real time, so as to fully engage visitors' interest and enhance their satisfaction. To implement an effective interactive exhibition service, it is essential to adopt intelligent technologies that can accurately estimate a visitor's emotional state from responses to the exhibited stimuli. Most studies so far have attempted to estimate the human emotional state by gauging either facial expressions or audio responses. Recent research, however, suggests that a multimodal approach using multiple responses simultaneously may lead to better estimation. In this context, we propose a new multimodal emotional state estimation model that uses various responses, including facial expressions, gestures, and movements measured by the Microsoft Kinect sensor. To handle the large amount of sensory data effectively, we use stratified sampling-based MRA (multiple regression analysis) as the estimation method. To validate the model, we collected 602,599 responses with 274 variables, together with emotional state data, from 15 people. Applied to this data set, the model estimated the levels of valence and arousal within a 10~15% error range. Since the proposed model is simple and stable, we expect it to be applied not only in intelligent exhibition services, but also in other areas such as e-learning and personalized advertising.
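
A minimal sketch of stratified sampling followed by multiple regression on valence and arousal targets, using pandas and scikit-learn. The strata (one per subject), sample sizes, features, and random data are illustrative assumptions standing in for the paper's Kinect response log.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Placeholder sensor log: many rows, a stratum id (per subject here),
# a few Kinect-derived features, and valence/arousal targets.
n_rows = 100_000
df = pd.DataFrame({
    "subject": rng.integers(0, 15, size=n_rows),
    "feat1": rng.standard_normal(n_rows),
    "feat2": rng.standard_normal(n_rows),
    "valence": rng.standard_normal(n_rows),
    "arousal": rng.standard_normal(n_rows),
})

# Stratified sampling: an equal-size sample from each subject keeps
# the regression from being dominated by over-represented strata.
sample = df.groupby("subject").sample(n=1000, random_state=0)

# One multiple regression per target dimension.
X = sample[["feat1", "feat2"]]
model_v = LinearRegression().fit(X, sample["valence"])
model_a = LinearRegression().fit(X, sample["arousal"])
print(model_v.coef_, model_a.coef_)
```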

Representation of Facial Expressions of Different Ages: A Multidimensional Scaling Study (다양한 연령의 얼굴 정서 표상: 다차원척도법 연구)

  • Kim, Jongwan
    • Science of Emotion and Sensibility / v.24 no.3 / pp.71-80 / 2021
  • Previous studies using facial expressions have identified valence and arousal as the two core dimensions of affective space. However, it remains unknown whether this two-dimensional structure is consistent across ages. This study investigated affective dimensions using six facial expressions (angry, disgusted, fearful, happy, neutral, and sad) at three ages (young, middle-aged, and old). Earlier studies typically asked participants to rate the subjective similarity between pairs of facial expressions directly; here, we collected indirect measures by asking participants to decide whether the two stimuli in a pair conveyed the same emotion. Multidimensional scaling showed that the "angry-disgusted" and "sad-disgusted" pairs are similar at all three ages. In addition, the "angry-sad," "angry-neutral," "neutral-sad," and "disgusted-fearful" pairs were similar at old age. When the two faces in a pair reflected the same emotion, "sad" was recognized least accurately at old age, suggesting that the ability to recognize sadness declines with age. These findings suggest that the general two-dimensional core structure is robust across age groups, with the exception of specific emotions.
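
A minimal sketch of deriving a dissimilarity matrix from same/different judgments and embedding it with metric MDS via scikit-learn; the judgment counts are simulated stand-ins, and the axis interpretation step is left to inspection, as in the study.

```python
import numpy as np
from sklearn.manifold import MDS

emotions = ["angry", "disgusted", "fearful", "happy", "neutral", "sad"]
rng = np.random.default_rng(3)

# Dissimilarity between two expressions = proportion of "different"
# responses across participants (simulated counts out of 30 trials).
n = len(emotions)
diss = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        different = rng.integers(5, 30)      # "different" judgments
        diss[i, j] = diss[j, i] = different / 30.0

# Two-dimensional embedding; the axes are then interpreted as
# valence and arousal by inspecting the resulting configuration.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diss)
for emotion, (x, y) in zip(emotions, coords):
    print(f"{emotion:10s} {x:+.2f} {y:+.2f}")
```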