• Title/Abstract/Keyword: Facial Emotion Expression


An Emotion Recognition and Expression Method using Facial Image and Speech Signal (음성 신호와 얼굴 표정을 이용한 감정인식 및 표현 기법)

  • Ju, Jong-Tae; Mun, Byeong-Hyeon; Seo, Sang-Uk; Jang, In-Hun; Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.333-336 / 2007
  • In this paper, speech signals and facial images, the modalities most widely used in the emotion recognition field, are used to recognize four emotions (joy, sadness, anger, surprise), and the recognition results obtained from each modality are fused using a multimodal technique. For emotion recognition from facial images, feature vectors are extracted with Principal Component Analysis (PCA); for speech signals, acoustic features that exclude linguistic characteristics are used. The extracted features are fed into separate neural networks to classify patterns by emotion, and the recognized results drive an emotion expression system that expresses the corresponding emotion.

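A minimal sketch of the PCA-plus-neural-network pipeline described in the entry above, assuming flattened grayscale face images and the four emotion classes; the random data, component count, and network size are illustrative placeholders, not the authors' settings.

```python
# Sketch: PCA (eigenface-style) feature extraction + a small neural network classifier.
# Data, dimensions, and hyperparameters below are illustrative, not from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["joy", "sadness", "anger", "surprise"]

# Placeholder data: N flattened 64x64 grayscale face images and integer labels.
X = np.random.rand(200, 64 * 64)
y = np.random.randint(0, len(EMOTIONS), size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Project face images onto the leading principal components (feature vectors).
pca = PCA(n_components=50, whiten=True).fit(X_train)
X_train_pca, X_test_pca = pca.transform(X_train), pca.transform(X_test)

# Classify each PCA feature vector into one of the four emotions.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train_pca, y_train)
print("held-out accuracy:", clf.score(X_test_pca, y_test))
```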

Problem Inference System of Interactive Digital Contents Based on Visitor Facial Expression and Gesture Recognition (관람객 얼굴 표정 및 제스쳐 인식 기반 인터렉티브 디지털콘텐츠의 문제점 추론 시스템)

  • Kwon, Do-Hyung; Yu, Jeong-Min
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.375-377 / 2019
  • This paper proposes a system that infers problems in interactive digital content based on recognition of visitors' facial expressions and gestures. Taking the behavior patterns shown from the moment a visitor experiences the content until moving to another location as the criterion, problems are classified into four categories. For this classification, three facial expressions and five gestures that visitors may show while experiencing the content were defined. In the experiments, the AdaBoost algorithm was used to detect faces and hands from the input video, and a detection model was built by retraining MobileNet v1 to detect facial expressions and gestures. By inferring the problems of interactive digital content, this work can enable user-centered design in future content improvement and production and promote the creation of high-quality content.

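A rough sketch of the detection pipeline described in the entry above: OpenCV's AdaBoost-trained Haar cascade for face detection, followed by a retrained MobileNet classifier. The input image path, class count, and fine-tuning details are assumptions for illustration, not the authors' configuration.

```python
# Sketch: Haar cascade (AdaBoost) face detection + MobileNet transfer learning.
# The image path, class count, and training setup are illustrative assumptions.
import cv2
import tensorflow as tf

# 1) Detect faces with OpenCV's bundled AdaBoost-trained Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("visitor_frame.jpg")          # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# 2) Retrain (fine-tune) MobileNet on the expression/gesture classes.
NUM_CLASSES = 8                                   # e.g. 3 expressions + 5 gestures
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                            # retrain only the new head first
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown
```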

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.142-151 / 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) techniques was presented to secure the reliability of machine-learning-based sentiment analysis and prediction. The applicability of the proposed model was tested and described using the IMDB dataset. The approach has the advantage that it can explain, from various perspectives, how the data affect the model's prediction results. In applications of sentiment analysis such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, it becomes possible to gain the system users' trust by presenting more specific, evidence-based analysis results.
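The entry above does not name a specific explanation library, so the sketch below shows one common way to realize the idea: a TF-IDF plus logistic-regression sentiment classifier on IMDB-style reviews, explained per prediction with LIME. The tiny inline dataset and all parameter choices are illustrative.

```python
# Sketch: explaining a sentiment classifier's prediction with LIME.
# The two-review "dataset" and all settings are placeholders.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["This movie was wonderful and moving.",
         "A dull, predictable plot and wooden acting."]
labels = [1, 0]                                   # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "The acting was wooden but the story was wonderful.",
    clf.predict_proba, num_features=5)
print(explanation.as_list())                      # word-level evidence for the prediction
```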

Implementation of Real Time Facial Expression and Speech Emotion Analyzer based on Haar Cascade and DNN (Haar Cascade와 DNN 기반의 실시간 얼굴 표정 및 음성 감정 분석기 구현)

  • Yu, Chan-Young; Seo, Duck-Kyu; Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.33-36 / 2021
  • This paper proposes emotion analyzers based on human facial expressions and voice. The proposed analyzers organize seven facial expressions with distinct characteristics into separate classes and use a modified DNN model. Data augmentation was applied to the speech data to enlarge the training set, and to prevent overfitting during training, a callback function was used so that training stops early once optimal performance is reached. The facial-expression emotion model trained to a validation loss of 0.94 and a validation accuracy of 0.66, and the speech emotion model to a validation loss of 0.89 and a validation accuracy of 0.65; model tests using the OpenCV library produced stable results.

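A brief sketch of the overfitting guard mentioned in the entry above: a Keras EarlyStopping callback that halts training and restores the best weights once the validation loss stops improving. The toy model, data shapes, and patience value are assumptions, not the authors' configuration.

```python
# Sketch: early stopping via a Keras callback, as described in the paper above.
# Model architecture, data, and patience are illustrative placeholders.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 7                                    # seven facial-expression classes
X = np.random.rand(500, 48 * 48)                   # placeholder flattened face crops
y = np.random.randint(0, NUM_CLASSES, size=500)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(48 * 48,)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop when val_loss stops improving and keep the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```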

Analysis of Visual Attention in Negative Emotional Expression Emoticons using Eye-Tracking Device (시선추적 장치를 활용한 부정적 감정표현 이모티콘의 시각적 주의집중도 분석)

  • Park, Minhee; Kwon, Mahnwoo; Hwang, Mikyung
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1580-1587 / 2021
  • Currently, the development and sale of various emoticons have given users a wider range of choices, but a systematic and specific understanding of how actual users perceive and use emoticons is lacking. This study therefore investigated actual users' subjective perception of, and visual attention to, negative emotional expression emoticons through a survey and an eye-tracking experiment. First, the subjective perception analysis showed that emoticons are frequently used because their appearance matters and because they can express various emotions in a fun and interesting way. In particular, emoticons expressing negative emotions are often used because they allow negative emotions to be conveyed indirectly through varied and concretely rendered visual elements. Next, the eye-tracking experiment showed that, in the negative emotional expression emoticons, visual attention was concentrated on the large, visually emphasized elements expressing emotion, and that attention fell not only on the facial expression but also on the physical behavioral responses and the language used to express emotion. These results can serve as basic data for understanding users' perception and use of increasingly diverse emoticons. In addition, for the long-term growth and vitalization of the emoticon industry, continuous research is needed to understand the various emotions of real users and to develop differentiated emoticons that maximize the empathic effect appropriate to each situation.

Arithmetic Fluctuation Effect affected by Induced Emotional Valence (유발된 정서가에 따른 계산 요동의 효과)

  • Kim, Choong-Myung
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.2 / pp.185-191 / 2018
  • This study examined the type and extent of interruption between induced emotion and a succeeding arithmetic operation. The experiment was carried out to determine the influence of the induced emotions (anger, joy, and sorrow) and stimulus types (picture and sentence) on the cognitive process load that may block the interactions among the constituents of working memory. The study subjects were 32 undergraduates who were similar with respect to age and education and were specifically instructed to attend to the induced emotion by imitating the facial expression and to make correct decisions during the remainder calculation task. In the results, the stimulus types did not exhibit any difference, but there was a significant difference among the induced emotion types. The difference was observed as slower response time for positive emotion (joy condition) compared with the other emotions (anger and sorrow). More specifically, error and delayed correct response rates for the emotion types were analysed to determine which phase the slower response was associated with. Delayed responses in the joy condition induced by sentence stimuli were identified with the error rate difference, and those induced by picture stimuli with the delayed correct response rate. These findings not only suggest that induced positive emotion increased response time compared with negative emotions, but also imply that picture-inducing stimuli readily produce arithmetic fluctuation whereas sentence-inducing stimuli result in arithmetic failure.

Moderating Effects of User Gender and AI Voice on the Emotional Satisfaction of Users When Interacting with a Voice User Interface (음성 인터페이스와의 상호작용에서 AI 음성이 성별에 따른 사용자의 감성 만족도에 미치는 영향)

  • Shin, Jong-Gyu; Kang, Jun-Mo; Park, Yeong-Jin; Kim, Sang-Ho
    • Science of Emotion and Sensibility / v.25 no.3 / pp.127-134 / 2022
  • This study sought to identify the voice user interface (VUI) design parameters that evoke positive user emotions. Six VUI design parameters that could affect emotional user satisfaction were considered, and the moderating effects of user gender and the design parameters were analyzed to determine the appropriate conditions for user satisfaction when interacting with the VUI. An interactive VUI system that could modify the six parameters was implemented using the Wizard of Oz experimental method. User emotions were assessed from the users' facial expression data, which were then converted into a valence score. Frequency analysis and a chi-square test showed statistically significant moderating effects of user gender and AI voice. These results imply that it is beneficial to consider the user's gender when designing voice-based interactions. As general guidelines for future VUI designs, adult/male/high-tone voices are recommended for male users and adult/female/mid-tone voices for female users. Future analyses that consider various human factors will be able to assess human-AI interactions more delicately from a UX perspective.
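A small sketch of the kind of frequency analysis described in the entry above: a chi-square test of independence between user gender and whether the facial-expression-derived valence indicated satisfaction. The contingency counts are hypothetical, not the study's data.

```python
# Sketch: chi-square test of independence (gender x satisfied/unsatisfied).
# The contingency table below is hypothetical, not the study's data.
from scipy.stats import chi2_contingency

#                 satisfied  unsatisfied
observed = [[34, 16],        # male participants
            [22, 28]]        # female participants

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```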

Emotional Contagion as an Eliciting Factor of Altruistic Behavior: Moderating Effects by Culture (이타행동의 유발요인으로서 정서전염: 문화변인의 조절효과)

  • Jungsik Kim; Wan-Suk Gim
    • Korean Journal of Culture and Social Issue / v.13 no.2 / pp.55-76 / 2007
  • This study investigated the relationship between emotional contagion and altruistic behaviors and also examined the moderating effect of self-construals (independent and interdependent self) in this relationship. It was hypothesized that the emotional expression of people in need would be caught by others through automatic mimicry, that the emotional information would be internalized through the facial-feedback process, and that the transferred emotion would eventually result in a motive for altruistic behavior. In Study 1, participants watched a video clip about a disabled student reporting difficulties in school life but showing facial expressions opposite to the content of the message, in order to separate emotional contagion from empathy. Participants' decision to participate in voluntary work for the disabled student was measured. As a result, it was found that the more participants experienced emotional contagion, the more they participated in altruistic behaviors. Study 2 measured the vulnerability to emotional contagion, actual experiences of altruistic behaviors, and self-construals. The results of hierarchical regression showed that interdependent self moderated the influence of emotional contagion on altruistic behaviors, whereas independent self moderated the relationship in the opposite direction. The implications of emotion and altruistic behaviors in the human evolutionary process are discussed.

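A compact sketch of the hierarchical (moderated) regression described in the entry above, using statsmodels: altruistic behavior is regressed on emotional contagion first, then the self-construal moderator and the interaction term are added. The variable names and generated data are illustrative stand-ins for the study's measures.

```python
# Sketch: hierarchical regression with a moderation (interaction) term.
# Column names and the generated data are illustrative, not the study's measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "contagion": rng.normal(size=200),             # vulnerability to emotional contagion
    "interdependent": rng.normal(size=200),        # interdependent self-construal
})
df["altruism"] = (0.4 * df["contagion"]
                  + 0.3 * df["contagion"] * df["interdependent"]
                  + rng.normal(size=200))

# Step 1: main predictor only; Step 2: add the moderator and interaction.
step1 = smf.ols("altruism ~ contagion", data=df).fit()
step2 = smf.ols("altruism ~ contagion * interdependent", data=df).fit()

# A significant interaction coefficient in step 2 indicates moderation.
print(step1.rsquared, step2.rsquared)
print(step2.params["contagion:interdependent"])
```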

Experiencing and Expression of Deaf Adolescents (농인 청소년의 감정 경험 및 표현 특성)

  • Park, Ji-Eun; Kim, Eun-Ye; Jang, Un-Jung; Cheong, E-Nae; Eum, Young-Ji; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.19 no.3 / pp.51-58 / 2016
  • This study examined the difference between deaf and hearing adolescents in the emotions they experience and the intensity with which they express them. Three video clips were used to induce pleasure, anger, and sadness, and participants' facial expressions were recorded while they watched. Experienced emotions were measured by self-report, and third-party raters scored participants' expressed emotions from the recorded facial images. The two groups (deaf and hearing) were compared on whether they experienced the same emotions and whether the third-party ratings corresponded with the self-rated scores. There was no significant difference in the experienced emotions or their intensity. However, hearing adolescents showed more intense expressions of pleasure than they reported, while deaf adolescents showed less intense expressions of happiness than they reported. Hearing people might therefore fail to detect and fully comprehend how deaf people feel in everyday circumstances. This further indicates that deaf adolescents may not receive enough support from hearing people when they express their feelings and may consequently face misunderstandings, conflicts, or even a break in relationships.

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin; Ryoo, Eunchung; Jung, Min Kyu; Kim, Jae Kyeong; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • As the value of information has come to be recognized in the information society, the collection and use of information have grown in importance. A facial expression, like an artistic painting, carries information that could be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions from facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied it commercially. In the academic literature, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion. MRA, however, is generally criticized for its low prediction accuracy; this is inevitable because it can only capture a linear relationship between the dependent variable and the independent variables. To mitigate this limitation, studies such as Jung and Kim (2012) used ANN as an alternative and reported that ANN produced more accurate predictions than statistical methods like MRA. ANN, in turn, has been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed for regression problems; the model produced by SVR depends only on a subset of the training data, because the cost function ignores any training case that lies within a threshold ε of the model prediction. Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the proposed model, we collected facial-reaction data while presenting appropriate visual stimuli, extracted features from the data, and applied preprocessing steps to select statistically significant variables; in total, 297 cases were used in the experiment. As comparative models, MRA and ANN were applied to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid-search technique to find optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer; the learning rate and momentum were set to 10%, the sigmoid function was used as the transfer function of the hidden and output nodes, the experiments were repeated with the number of hidden nodes set to n/2, n, 3n/2, and 2n (where n is the number of input variables), and the stopping condition was 50,000 learning events. MAE (Mean Absolute Error) was used as the performance measure. The experiments showed that SVR achieved the highest prediction accuracy on the hold-out data set compared with MRA and ANN; regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR performed best. ANN also outperformed MRA, but it showed considerably lower prediction accuracy than SVR for both target variables. The findings are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
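A condensed sketch of the SVR setup described in the entry above: an RBF-kernel support vector regressor with the ε-insensitive loss, tuned by grid search over C, γ (related to σ² via γ = 1/(2σ²)), and ε, and scored by mean absolute error. The feature matrix, grid values, and split are placeholders rather than the study's 297-case data.

```python
# Sketch: epsilon-insensitive SVR tuned by grid search and scored with MAE.
# Data, grid values, and the train/test split are illustrative placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder facial-feature matrix and a continuous target (e.g. arousal level).
X = np.random.rand(297, 20)
y = np.random.rand(297)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.01, 0.1, 1],       # gamma = 1 / (2 * sigma^2) for the RBF kernel
    "epsilon": [0.01, 0.1, 0.5],   # width of the epsilon-insensitive tube
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X_train, y_train)

pred = search.best_estimator_.predict(X_test)
print("best params:", search.best_params_)
print("hold-out MAE:", mean_absolute_error(y_test, pred))
```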