• Title/Summary/Keyword: Valence-Arousal Model


GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model

  • Ahn, Hyunchul; Kim, Seongjin; Kim, Jae Kyeong
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.6, pp.2056-2069, 2014
  • In order to implement interactive and personalized Web services properly, it is necessary to understand the tangible and intangible responses of the users and to recognize their emotional states. Recently, some studies have attempted to build emotional state estimation models based on facial expressions. Most of these studies have applied multiple regression analysis (MRA), artificial neural networks (ANN), and support vector regression (SVR) as the prediction algorithm, but the prediction accuracies have been relatively low. In order to improve the prediction performance of the emotion prediction model, we propose a novel SVR model that is optimized using a genetic algorithm (GA). Our proposed algorithm, GASVR, is designed to optimize the kernel parameters and the feature subsets of SVRs in order to predict the levels of two aspects of the users' emotions: valence and arousal. To validate the usefulness of GASVR, we collected a real-world data set of facial responses and emotional states via a survey, and applied GASVR and other algorithms, including MRA, ANN, and conventional SVR, to the data set. We found that GASVR outperformed all of the comparative algorithms in predicting the valence and arousal levels.
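
A minimal sketch of the GASVR idea described above, in Python, assuming synthetic facial-feature data and a simple real-coded genetic algorithm that evolves an RBF-kernel SVR's C and gamma together with a binary feature mask; the paper's actual chromosome encoding, fitness function, and GA settings are not reproduced here.

```python
# Hypothetical GASVR-style sketch: a GA searches SVR kernel parameters and a
# feature subset that minimize cross-validated error on a valence (or arousal) target.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                               # stand-in for 20 facial-feature values
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=200)   # stand-in for a valence level

N_FEAT = X.shape[1]

def decode(chrom):
    """Chromosome layout: [log10 C, log10 gamma, feature-mask genes]."""
    return 10 ** chrom[0], 10 ** chrom[1], chrom[2:] > 0.5

def fitness(chrom):
    C, gamma, mask = decode(chrom)
    if not mask.any():
        return -np.inf
    model = SVR(kernel="rbf", C=C, gamma=gamma)
    # Negative MSE from 3-fold cross-validation serves as the GA fitness signal.
    return cross_val_score(model, X[:, mask], y,
                           scoring="neg_mean_squared_error", cv=3).mean()

def random_chrom():
    return np.concatenate([rng.uniform(-1, 3, 1),            # log10 C
                           rng.uniform(-4, 1, 1),            # log10 gamma
                           rng.integers(0, 2, N_FEAT).astype(float)])

pop = [random_chrom() for _ in range(20)]
for _ in range(15):                                          # a few GA generations
    scores = np.array([fitness(c) for c in pop])
    parents = [pop[i] for i in np.argsort(scores)[::-1][:10]]
    children = []
    while len(children) < 10:
        a, b = rng.choice(10, 2, replace=False)
        cut = int(rng.integers(1, N_FEAT + 1))               # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        mut = rng.random(child.shape) < 0.1                  # mutation
        child[mut] += rng.normal(scale=0.3, size=mut.sum())
        child[2:] = np.clip(child[2:], 0, 1)
        children.append(child)
    pop = parents + children

C, gamma, mask = decode(max(pop, key=fitness))
print(f"best C={C:.3g}, gamma={gamma:.3g}, features kept={int(mask.sum())}")
```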

Emotion Detection Model based on Sequential Neural Networks in Smart Exhibition Environment (스마트 전시환경에서 순차적 인공신경망에 기반한 감정인식 모델)

  • Jung, Min Kyu; Choi, Il Young; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.109-126, 2017
  • Many studies on emotion detection are in progress for various kinds of intelligent services. In particular, in the exhibition field, studies on recognizing emotion at a particular moment have been conducted in order to provide personalized experiences to the audience, even though facial expressions change as time passes. The aim of this paper is therefore to build a model that predicts the audience's emotion from the changes in facial expressions while watching an exhibit. The proposed model is based on both a sequential neural network and the Valence-Arousal model. To validate the usefulness of the proposed model, we performed an experiment comparing it with a standard neural-network-based model. The results confirmed that the proposed model, which considers the time sequence, had better prediction accuracy.
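
A minimal sketch, in Python/Keras, of a sequential (recurrent) network that maps a time series of facial-feature vectors to Valence-Arousal values; the number of features, sequence length, and layer sizes are illustrative assumptions rather than the paper's architecture.

```python
# Hypothetical sequential emotion model: an LSTM summarizes how facial features
# change over time and regresses the audience's valence and arousal.
import numpy as np
import tensorflow as tf

T, D = 30, 64                                          # 30 time steps of 64 facial features (assumed)
X = np.random.rand(500, T, D).astype("float32")        # placeholder feature sequences
y = np.random.rand(500, 2).astype("float32")           # placeholder [valence, arousal] labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, D)),
    tf.keras.layers.LSTM(32),                          # captures the temporal order of expressions
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),                          # predicted valence and arousal
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))                 # e.g. [[valence, arousal]]
```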

A Novel Method for Modeling Emotional Dimensions using Expansion of Russell's Model (러셀 모델의 확장을 통한 감정차원 모델링 방법 연구)

  • Han, Eui-Hwan; Cha, Hyung-Tai
    • Science of Emotion and Sensibility, v.20 no.1, pp.75-82, 2017
  • We propose a novel method for modeling emotional dimensions by expanding Russell's (1980) emotional dimensions (Circumplex Model). The Circumplex Model represents emotional words on two axes (Arousal, Valence). However, other researchers have argued that the location of a word in Russell's model, expressed as a single point, cannot represent its exact position. Consequently, it is difficult to apply this model in engineering fields (such as the science of emotion and sensibility, human-computer interaction, and ergonomics). Therefore, we propose a new modeling method that expresses an emotional word not as a single point but as a region. We conducted a survey to obtain actual data and derived equations based on the ellipse formula to represent each emotional region. Furthermore, we applied ANEW and IAPS, which are commonly used in many studies, to our emotional model using a pattern recognition algorithm. Using our method, we could solve the problems with Russell's model, and our model is easily applicable to the field of engineering.
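
The abstract does not reproduce the derived equations, but a rotated-ellipse region in the Valence-Arousal plane is the generic form such a model can take; as an assumed illustration, an emotional word $w$ with survey-estimated center $(v_w, a_w)$, semi-axes $\alpha_w, \beta_w$, and orientation $\theta_w$ would occupy

$$E_w = \left\{ (v,a) \;:\; \frac{\big((v-v_w)\cos\theta_w + (a-a_w)\sin\theta_w\big)^2}{\alpha_w^2} + \frac{\big(-(v-v_w)\sin\theta_w + (a-a_w)\cos\theta_w\big)^2}{\beta_w^2} \le 1 \right\},$$

so that a measured $(v,a)$ point can be assigned to every word whose region contains it, rather than only to the single nearest point.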

Multi-Dimensional Emotion Recognition Model of Counseling Chatbot (상담 챗봇의 다차원 감정 인식 모델)

  • Lim, Myung Jin; Yi, Moung Ho; Shin, Ju Hyun
    • Smart Media Journal, v.10 no.4, pp.21-27, 2021
  • Recently, the importance of counseling has been increasing due to the Corona Blue caused by COVID-19. In addition, with the increase of non-face-to-face services, research on chatbots, which have changed the counseling medium, is being actively conducted. In non-face-to-face counseling through a chatbot, it is most important to accurately understand the client's emotions. However, since there is a limit to recognizing emotions only from the sentences written by the client, it is necessary to recognize the dimensional emotions embedded in the sentences for more accurate emotion recognition. Therefore, in this paper, we propose a multi-dimensional emotion recognition model: after correcting the original data according to its characteristics, word vectors are generated by training a Word2Vec model, and sentence-level VAD (Valence, Arousal, Dominance) values are then learned from these vectors using a deep learning algorithm. To verify the usefulness of the proposed model, we compared three deep learning models; the attention-based model showed the best performance, with an R-squared of 0.8484.
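
A minimal sketch, in Python/Keras, of a sentence-level VAD regressor with an attention layer over Word2Vec-style embeddings; the vocabulary size, dimensions, and the specific use of self-attention are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical multi-dimensional (VAD) emotion regressor: token embeddings
# (which would be initialized from a trained Word2Vec model) pass through a
# self-attention layer and are pooled into a sentence vector that predicts
# valence, arousal, and dominance.
import numpy as np
import tensorflow as tf

VOCAB, EMB, MAXLEN = 5000, 100, 40
X = np.random.randint(1, VOCAB, size=(800, MAXLEN))    # placeholder token-id sentences
y = np.random.rand(800, 3).astype("float32")           # placeholder [V, A, D] targets

tokens = tf.keras.layers.Input(shape=(MAXLEN,))
emb = tf.keras.layers.Embedding(VOCAB, EMB)(tokens)
att = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=32)(emb, emb)
sent = tf.keras.layers.GlobalAveragePooling1D()(att)
vad = tf.keras.layers.Dense(3)(sent)                   # predicted valence, arousal, dominance

model = tf.keras.Model(tokens, vad)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```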

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems, v.18 no.1, pp.39-57, 2012
  • Recently, due to the introduction of high-tech equipment, much attention has been concentrated on interactive exhibits that can double the exhibition effect through interaction with the audience. In addition, a variety of audience reactions can be measured in an interactive exhibition. Among the various audience reactions, this research uses the changes of facial features that can be collected in an interactive exhibition space. This research develops an artificial neural network-based prediction model to predict the response of the audience by measuring the change of facial features when the audience is given a stimulus from a non-excited state. To represent the emotional state of the audience, this research uses a Valence-Arousal model. The research suggests an overall framework composed of six steps. The first step collects data for modeling; the data were collected from people who participated in the 2012 Seoul DMC Culture Open and were used for the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of an artificial neural network model. The fourth step extracts the independent variables that affect the dependent variable using statistical techniques. The fifth step builds an artificial neural network model and performs a learning process using the training and test sets. The sixth and final step validates the prediction performance of the artificial neural network model using the validation data set. The proposed model was compared with a statistical predictive model to see whether it had better performance. As a result, although the data set in this experiment contained much noise, the proposed model showed better results than a multiple regression analysis model. If this prediction model of audience reaction were used in a real exhibition, it would be possible to provide responses and services appropriate to the audience's reaction to the exhibits. Specifically, if the audience's arousal toward an exhibit is low, actions to increase arousal can be taken, for instance recommending other preferred contents or using light or sound to draw attention to the exhibit. In other words, when planning future exhibitions, it would be possible to design them to satisfy various audience preferences, and a personalized environment that helps visitors concentrate on the exhibits can be fostered. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors of real exhibitions, so it was difficult to control for an optimal experimental environment; the collected data therefore contained much noise, which lowered accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and research to increase the prediction accuracy of the model will be carried out. Second, using changes of facial expression alone is thought to be insufficient for extracting audience emotions; combining facial expressions with other responses, such as voice and audience behavior, would yield better results.
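
A minimal sketch, in Python, of steps three through six of the framework above, using synthetic stand-ins for the 64 compensated facial features; the statistical variable selection and network settings shown are illustrative assumptions.

```python
# Hypothetical pipeline: select the facial features most associated with the
# arousal target, train an ANN on a train/test split, and validate on held-out data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))                                        # 64 facial-feature changes (stand-in)
arousal = X[:, :5].mean(axis=1) + rng.normal(scale=0.2, size=300)     # dependent variable (stand-in)

# Step 4: statistical selection of the independent variables that matter.
selector = SelectKBest(f_regression, k=10).fit(X, arousal)
X_sel = selector.transform(X)

# Step 5: build the artificial neural network and learn from a train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, arousal, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

# Step 6: validate prediction performance on the held-out set.
print("held-out R^2:", round(r2_score(y_te, ann.predict(X_te)), 3))
```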

Analysis of Personality and Emotion State Model Based on Multiple-Valued Automata (다치오토마타를 이용한 개성 및 감성상태 모델의 해석)

  • 손창식; 정환묵
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2003.09b, pp.173-176, 2003
  • This paper proposes a personality and emotional state model by applying the existing multiple-valued automata model. The existing multiple-valued automaton is separated into two automata: one is used to model the emotional state and the other to model personality. The user's internal emotion and personality are constructed on the basis of emotional states defined in the Valence-Arousal space, and by composing the relationship between the two separated multiple-valued automata according to their definitions, the paper shows that the emotional state and personality can potentially be modeled simultaneously in a single automaton model.
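
As a loose illustration only: the paper's multiple-valued automata are not specified in the abstract, so the Python sketch below uses a plain finite-state machine over Valence-Arousal quadrants coupled with a personality trait that biases its transitions; all states, inputs, and rules are assumptions made up for the example.

```python
# Hypothetical coupling of an emotional-state automaton (Valence-Arousal quadrants)
# with a personality trait that alters how the state reacts to stimuli.
EMOTION_TRANSITIONS = {
    ("pleased-calm", "exciting"): "pleased-aroused",
    ("pleased-aroused", "soothing"): "pleased-calm",
    ("pleased-calm", "unpleasant"): "displeased-calm",
    ("displeased-calm", "pleasant"): "pleased-calm",
    ("displeased-calm", "exciting"): "displeased-aroused",
    ("displeased-aroused", "soothing"): "displeased-calm",
}

def step(emotion: str, personality: str, stimulus: str) -> str:
    nxt = EMOTION_TRANSITIONS.get((emotion, stimulus), emotion)   # stay put if no rule matches
    # Personality bias: an "optimistic" trait recovers to a pleasant state on any pleasant stimulus.
    if personality == "optimistic" and nxt.startswith("displeased") and stimulus == "pleasant":
        nxt = "pleased-calm"
    return nxt

state = "pleased-calm"
for stimulus in ["unpleasant", "exciting", "pleasant"]:
    state = step(state, "optimistic", stimulus)
    print(stimulus, "->", state)
```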

Improvement of a Context-aware Recommender System through User's Emotional State Prediction (사용자 감정 예측을 통한 상황인지 추천시스템의 개선)

  • Ahn, Hyunchul
    • Journal of Information Technology Applications and Management, v.21 no.4, pp.203-223, 2014
  • This study proposes a novel context-aware recommender system, which is designed to recommend items according to the customer's responses to the previously recommended item. Specifically, our proposed system predicts the user's emotional state from his or her responses (such as facial expressions and movements) to the previously recommended item, and then recommends items that are similar to the previous one when the emotional state is estimated as positive. If the customer's emotional state regarding the previously recommended item is regarded as negative, the system recommends items whose characteristics are opposite to the previous item. Our proposed system consists of two sub-modules: (1) an emotion prediction module and (2) a responsive recommendation module. The emotion prediction module contains the emotion prediction model that predicts a customer's arousal level, a physiological and psychological state of being awake or reactive to stimuli, using the customer's reaction data including facial expressions and body movements, which can be measured using Microsoft's Kinect sensor. The responsive recommendation module generates a recommendation list using the results from the emotion prediction module: if a customer shows a high level of arousal toward the previously recommended item, the module recommends the items that are most similar to the previous item; otherwise, it recommends the items that are most dissimilar to the previous one. To validate the performance and usefulness of the proposed recommender system, we conducted an empirical validation. In total, 30 undergraduate students participated in the experiment, and 100 trailers of Korean movies released from 2009 to 2012 were used as the items for recommendation. For the experiment, we manually constructed a Korean movie trailer DB containing fields such as release date, genre, director, writer, and actors. To check whether the recommendation using customers' responses outperforms the recommendation using their demographic information, we compared the two. The performance of the recommendation was measured using two metrics: satisfaction and arousal levels. Experimental results showed that the recommendation using customers' responses (i.e., our proposed system) outperformed the recommendation using their demographic information with statistical significance.
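
A minimal sketch, in Python, of the responsive recommendation rule described above: when the predicted arousal toward the previous item is high, the most similar items are recommended, otherwise the most dissimilar ones; the item vectors and the arousal threshold are illustrative assumptions.

```python
# Hypothetical responsive recommendation: rank movie trailers by cosine similarity
# to the previously shown trailer and flip the ranking when predicted arousal is low.
import numpy as np

rng = np.random.default_rng(2)
item_vectors = rng.random((100, 8))          # stand-in for encoded genre/director/actor features

def recommend(prev_item: int, predicted_arousal: float, k: int = 5, threshold: float = 0.5):
    v = item_vectors[prev_item]
    sims = item_vectors @ v / (np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(v))
    order = np.argsort(sims)                  # ascending similarity
    order = order[order != prev_item]         # never re-recommend the same trailer
    # High arousal -> most similar items; low arousal -> most dissimilar items.
    return order[::-1][:k] if predicted_arousal >= threshold else order[:k]

print(recommend(prev_item=3, predicted_arousal=0.8))   # similar trailers
print(recommend(prev_item=3, predicted_arousal=0.2))   # dissimilar trailers
```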

Research on Emotion Evaluation using Autonomic Response (자율신경계 반응에 의한 감성 평가 연구)

  • 황민철; 장근영; 김세영
    • Science of Emotion and Sensibility, v.7 no.3, pp.51-56, 2004
  • Arousal level has been well characterized by autonomic responses. However, it is often questioned whether entire emotions, including both valence and arousal levels, can be completely described by autonomic responses alone. This study aims to find autonomic physiological parameters that can be used for emotion evaluation. Fifteen undergraduate students were asked to watch eight video clips from diverse movies and comedy shows in order to experience emotions. The subjectively experienced emotions were grouped by three factors. A two-dimensional emotion model with pleasant-unpleasant and arousal-non-arousal factors was mapped to three physiological responses (GSR, PPG, SKT). The results suggest that PPG and GSR may be used as arousal indices, while SKT may serve as a pleasantness index. The complex relation of physiological responses to emotional experiences is also discussed.
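
As a small assumed illustration of the reported tendency (GSR and PPG tracking arousal, SKT tracking pleasantness), the Python sketch below maps baseline-normalized signal deviations to a quadrant of the two-dimensional emotion model; the normalization and thresholds are not the study's analysis.

```python
# Hypothetical quadrant read-out from autonomic signals, following the tendency
# reported above: GSR deviation as an arousal index, SKT deviation as a pleasantness index.
def emotion_quadrant(gsr_z: float, skt_z: float) -> str:
    """gsr_z / skt_z: z-scored deviations from the person's neutral baseline (assumed preprocessing)."""
    arousal = "arousal" if gsr_z > 0 else "non-arousal"
    valence = "pleasant" if skt_z > 0 else "unpleasant"
    return f"{valence} / {arousal}"

print(emotion_quadrant(gsr_z=1.2, skt_z=-0.4))   # -> "unpleasant / arousal"
```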

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim; Won Kuk Park; Il Young Choi
    • Asia Pacific Journal of Information Systems, v.27 no.1, pp.38-53, 2017
  • As sensor technologies and image processing technologies make collecting information on users' behavior easy, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Specifically, many multimodal studies using facial and body expressions have relied on normal cameras and therefore used limited information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.
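
A minimal sketch, in Python/Keras, of the kind of fusion such a model implies: one input branch for webcam-derived facial features, another for Kinect-derived bodily features, concatenated and regressed to Valence-Arousal values. Feature sizes, the fusion by simple concatenation, and the regression target are illustrative assumptions.

```python
# Hypothetical multimodal ANN: facial and bodily feature vectors are fused by
# concatenation and mapped to valence and arousal estimates.
import numpy as np
import tensorflow as tf

face = tf.keras.layers.Input(shape=(64,), name="facial_features")   # e.g. landmark-based values (assumed)
body = tf.keras.layers.Input(shape=(30,), name="bodily_features")   # e.g. Kinect joint angles (assumed)
fused = tf.keras.layers.Concatenate()([face, body])
hidden = tf.keras.layers.Dense(32, activation="relu")(fused)
out = tf.keras.layers.Dense(2, name="valence_arousal")(hidden)

model = tf.keras.Model([face, body], out)
model.compile(optimizer="adam", loss="mse")
model.fit([np.random.rand(200, 64).astype("float32"),
           np.random.rand(200, 30).astype("float32")],
          np.random.rand(200, 2).astype("float32"), epochs=3, verbose=0)
```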

Multimedia Contents Recommendation Method using Mood Vector in Social Networks (소셜네트워크에서 분위기 벡터를 이용한 멀티미디어 콘텐츠 추천 방법)

  • Moon, Chang Bae; Lee, Jong Yeol; Kim, Byeong Man
    • Journal of Korea Society of Industrial Information Systems, v.24 no.6, pp.11-24, 2019
  • The purchasing tendency of web information buyers is shifting from cost-effectiveness to cost-satisfaction. This tendency also appears in the recommendation of multimedia contents, some of which are folksonomy-based recommendation services using mood. However, these services have the problem of not considering synonyms. Some studies have addressed this problem by defining the 12 moods of Thayer's model as AV (Arousal and Valence) values, but the recommendation performance is lower than that of a keyword-based method at recall level 0.1. In this paper, we propose a method based on the mood vectors of multimedia contents, which solves the synonym problem while maintaining the same performance as the keyword-based method even at recall level 0.1. For performance analysis, we compare the proposed method with an existing AV-value-based method and a keyword-based method; the results show that the proposed method outperforms both.
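
A minimal sketch, in Python, of mood-vector matching as described above: each content item carries a vector of weights over a fixed set of mood dimensions, and recommendation ranks items by cosine similarity to the query's mood vector, so synonymous mood tags that load on the same dimension still match. The mood dimensions and weights are illustrative assumptions, not the paper's data.

```python
# Hypothetical mood-vector recommendation: cosine similarity between a query
# mood vector and each item's mood vector ranks the candidates.
import numpy as np

MOODS = ["exuberant", "calm", "anxious", "depressed"]      # stand-ins for Thayer-style moods

library = {
    "clip_a": np.array([0.9, 0.1, 0.0, 0.0]),
    "clip_b": np.array([0.1, 0.8, 0.1, 0.0]),
    "clip_c": np.array([0.0, 0.1, 0.7, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query_vec, k=2):
    ranked = sorted(library.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# "cheerful" and its synonym "joyful" would both map to a vector weighting the exuberant dimension.
print(recommend(np.array([0.8, 0.2, 0.0, 0.0])))           # -> ['clip_a', 'clip_b']
```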