• Title/Summary/Keyword: Multi-emotion recognition

Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho; Lim, Myoung Jin; Shin, Ju Hyun
    • Smart Media Journal / v.11 no.8 / pp.65-72 / 2022
  • Due to COVID-19, counseling has shifted from face-to-face to non-face-to-face settings, and the importance of remote counseling is growing: it can be conducted online anytime, anywhere, and is safe from infection. However, without non-verbal cues it is difficult to understand the client's state of mind, so recognizing emotions by accurately analyzing text and voice is essential in non-face-to-face counseling. In this paper, text data is vectorized with FastText after separating Korean consonants and vowels, and voice data is vectorized by extracting Log Mel Spectrogram and MFCC features. We propose a multi-emotion recognition model that recognizes five emotions from the vectorized data using an LSTM, with multi-emotion performance measured by RMSE. In experiments, the proposed model achieved an RMSE of 0.2174, the lowest error compared with models using text or voice data alone.
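
As a rough illustration of the speech branch described above, the sketch below extracts Log Mel Spectrogram and MFCC features with librosa and scores a multi-emotion prediction with RMSE. The window settings, feature sizes, and the five-emotion label order are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
import librosa

def speech_features(path, sr=16000, n_mels=64, n_mfcc=40):
    """Stack a Log Mel Spectrogram and MFCCs into one (coeffs, frames) map."""
    y, sr = librosa.load(path, sr=sr)
    log_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels))
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.vstack([log_mel, mfcc])  # same hop length -> same frame count

def rmse(pred, target):
    """RMSE over per-emotion intensity vectors, as the paper's error metric."""
    pred, target = np.asarray(pred), np.asarray(target)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Five intensities in an assumed (happy, sad, anger, surprise, neutral) order.
print(rmse([0.7, 0.1, 0.0, 0.1, 0.1], [0.8, 0.0, 0.0, 0.1, 0.1]))
```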

Implementation of Multi Channel Network Platform based Augmented Reality Facial Emotion Sticker using Deep Learning (딥러닝을 이용한 증강현실 얼굴감정스티커 기반의 다중채널네트워크 플랫폼 구현)

  • Kim, Dae-Jin
    • Journal of Digital Contents Society / v.19 no.7 / pp.1349-1355 / 2018
  • Recently, a variety of content services over the internet have become popular; among them, MCN (Multi Channel Network) platform services have spread with the generalization of smartphones. The MCN platform is based on streaming, and various features are added to improve the service; among these, augmented reality sticker services using face recognition are widely used. In this paper, we implemented an MCN platform that overlays an augmented reality sticker on the face through facial emotion recognition, to further increase user engagement. Seven facial emotions were analyzed using deep learning, and an emotion sticker was applied to the face based on the result. To implement the proposed MCN platform, emotion stickers were applied on the client side, and the servers that stream the result were designed accordingly.
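
The sticker-overlay step might look like the following OpenCV sketch: detect a face, classify its emotion with some pretrained model, and alpha-blend the matching sticker onto the face region. The Haar cascade detector, the seven-emotion list, and the `emotion_of` callback are placeholder assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# OpenCV's bundled Haar cascade; the paper's own detector is not specified.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def overlay_stickers(frame, stickers, emotion_of):
    """stickers: {emotion: RGBA image}; emotion_of: face crop -> emotion name."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        emotion = emotion_of(frame[y:y + h, x:x + w])  # e.g. a CNN classifier
        sticker = cv2.resize(stickers[emotion], (w, h))
        alpha = sticker[:, :, 3:] / 255.0              # sticker alpha channel
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = (alpha * sticker[:, :, :3]
                                   + (1 - alpha) * roi).astype(np.uint8)
    return frame
```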

Development of Driver's Emotion and Attention Recognition System using Multi-modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 운전자의 감정 및 주의력 인식 기술 개발)

  • Han, Cheol-Hun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.754-761 / 2008
  • As the automobile industry and its technologies develop, drivers tend to be more concerned with service features than with mechanical matters. For this reason, interest in recognizing human cognition and emotion to create a safe and convenient driving environment is growing. Such recognition belongs to emotion engineering, a field studied since the late 1980s to provide people with human-friendly services. Emotion engineering analyzes people's emotions through their faces, voices, and gestures; applied to automobiles, it can supply drivers with services suited to each driver's situation and help them drive safely. Furthermore, it can prevent accidents caused by careless driving or dozing off by recognizing the driver's state. The purpose of this paper is to develop a system that recognizes the driver's emotional state and attention level for safe driving. First, we detect signals of the driver's emotion, sleepiness, and attention using bio-motion signals and build several databases. By analyzing these databases, we extract features characterizing the driver's emotion, sleepiness, and attention, and fuse the results with a multi-modal method to realize the system.
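
A minimal sketch of the late (decision-level) fusion idea behind such a system: each sensor-specific recognizer emits class probabilities, which are combined by a weighted sum. The sensor names, state labels, and weights below are invented for illustration.

```python
import numpy as np

def fuse(prob_by_sensor, weights):
    """Weighted late fusion of per-sensor class-probability vectors."""
    fused = sum(weights[s] * np.asarray(p) for s, p in prob_by_sensor.items())
    return fused / sum(weights[s] for s in prob_by_sensor)

# Hypothetical per-sensor estimates over (alert, drowsy, distracted) states.
probs = {
    "bio_signal": [0.6, 0.3, 0.1],
    "gesture":    [0.4, 0.5, 0.1],
}
print(fuse(probs, {"bio_signal": 0.7, "gesture": 0.3}))
```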

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo; Lee, Dae-Jong; Park, Jang-Hwan; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.150-155 / 2004
  • In this paper, an emotion recognition method using speech signals is presented. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. The proposed recognizer has a codebook, constructed using the wavelet transform, for each emotional state. We first estimate the emotional state in each filter bank, and the final recognition is obtained by a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each uttering a sentence three times for each of the six emotional states. The proposed method showed more than a 5% improvement in recognition rate over previous works.
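
Loosely in the spirit of the wavelet-subband codebook approach, the sketch below computes per-subband energies with pywt and matches them against per-emotion codebooks. The wavelet choice, energy feature, and nearest-neighbor rule are assumptions, not the authors' exact recipe.

```python
import numpy as np
import pywt

def subband_features(signal, wavelet="db4", level=3):
    """One energy value per wavelet subband (approximation + details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.mean(c ** 2) for c in coeffs])

def classify(signal, codebooks):
    """codebooks: {emotion: (k, n_subbands) array of reference vectors}."""
    f = subband_features(signal)
    return min(codebooks, key=lambda e: np.min(
        np.linalg.norm(codebooks[e] - f, axis=1)))
```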

An Authoring Framework for Emotion-Aware User Interface of Mobile Applications (모바일 어플리케이션의 감정 적응형 사용자 인터페이스 저작 프레임워크)

  • Lee, Eunjung; Kim, Gyu-Wan; Kim, Woo-Bin
    • Journal of Korea Multimedia Society / v.18 no.3 / pp.376-386 / 2015
  • Since affective computing was introduced in the 1990s, affect recognition technology has made substantial progress. However, applying user emotion recognition to software user interfaces is still in its early stages. In this paper, we describe a new approach for developing mobile user interfaces that react differently depending on the user's emotional state. First, an emotion reaction model is presented that determines the user interface reaction for each emotional state, introducing a mapping from user states to different user interface versions; the reacting versions are implemented as a set of variations of a view. We then present an authoring framework that helps developers and designers create emotion-aware reactions based on the proposed model, alleviating the burden of creating and handling multiple versions of views during development. A prototype is implemented as an extension of the existing authoring tool DAT4UX, and a proof-of-concept application featuring an emotion-aware interface is developed with the tool.
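
The emotion reaction model, a mapping from recognized user states to view variations, might be sketched as follows. The class and state names are hypothetical and not taken from the DAT4UX tool.

```python
from dataclasses import dataclass, field

@dataclass
class View:
    name: str
    variations: dict = field(default_factory=dict)  # emotion -> layout/theme

    def render(self, emotion):
        # Fall back to the neutral variation when no reaction is authored.
        return self.variations.get(emotion, self.variations["neutral"])

home = View("home", {"neutral": "default-theme",
                     "frustrated": "simplified-layout",
                     "happy": "vivid-theme"})
print(home.render("frustrated"))  # -> simplified-layout
```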

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi; Myung Jin Lim; Ju Hyun Shin
    • Smart Media Journal / v.12 no.9 / pp.81-88 / 2023
  • Recently, online communication has increased with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are conveyed through modalities such as text, speech, and images, and research on multimodal emotion recognition that combines these modalities is actively underway. Among them, emotion recognition from speech data draws attention as a means of understanding emotion through acoustic and linguistic information, but emotions are usually recognized from a single speech feature value. Because multiple emotions coexist in a complex manner within a conversation, a method for recognizing multiple emotions is needed. Therefore, in this paper we propose a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and recognizes the complex, inherent emotions while taking the passage of time into account.
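
A minimal PyTorch sketch of a sequence model that regresses multiple emotion intensities from speech feature frames is shown below; the layer sizes, the sigmoid head, and the five-emotion output are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiEmotionRegressor(nn.Module):
    def __init__(self, n_features=104, hidden=128, n_emotions=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_emotions)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)     # last hidden state summarizes the clip
        return torch.sigmoid(self.head(h[-1]))  # per-emotion intensities in [0, 1]

model = MultiEmotionRegressor()
print(model(torch.randn(2, 200, 104)).shape)  # torch.Size([2, 5])
```

A regression head like this, trained with an MSE-style loss, is what makes multiple co-occurring emotions expressible, as opposed to a softmax classifier that forces a single label.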

Emotion Labor and Emotional Exhaustion : The Role of Emotional Intelligence (감정노동, 감성지능이 종업원의 감정고갈에 미치는 영향에 관한 연구)

  • Hong, Yong-Ki
    • Management & Information Systems Review / v.25 / pp.243-273 / 2008
  • A new research paradigm is emerging within organizational behavior, in both theory and empirical work, based on growing recognition of the importance of emotions in organizational life. This paper suggests that emotional intelligence moderates the relationship between emotional labor and emotional exhaustion. More specifically, emotional intelligence, the ability to understand and manage emotions in oneself and others, contributes to effective emotion management in organizations. Four major aspects of emotional intelligence are described: appraisal and expression of emotion in oneself, appraisal and recognition of emotion in others, regulation of emotion in oneself, and use of emotion to facilitate performance. Emotional labor, in turn, consists of four aspects: frequency of appropriate emotional display, attentiveness to required display rules, variety of emotions to be displayed, and emotional dissonance. The purpose of this research is to investigate the impact of emotional labor on employees' emotional exhaustion and to explore the moderating effect of emotional intelligence between the two. Data were collected through a questionnaire from 147 employees of a service company. Multi-hierarchical regression analysis showed that employees' emotional exhaustion is negatively affected by emotional labor, and that regulation of emotion in oneself and use of emotion to facilitate performance moderate the relation between emotional labor and emotional exhaustion. These results indicate the importance of instilling an appreciation of work activities: encouraging genuine expression of individual emotions, generating and maintaining a positive emotional climate and cooperative situations, and managing a meaningful environment for organizational life.

Video-based Facial Emotion Recognition using Active Shape Models and Statistical Pattern Recognizers (Active Shape Model과 통계적 패턴인식기를 이용한 얼굴 영상 기반 감정인식)

  • Jang, Gil-Jin; Jo, Ahra; Park, Jeong-Sik; Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.3 / pp.139-146 / 2014
  • This paper proposes an efficient method for automatically distinguishing various facial expressions. To recognize emotions from facial expressions, facial images are obtained with digital cameras and a number of feature points are extracted. The extracted feature points are then transformed into 49-dimensional feature vectors that are robust to scale and translational variations, and the facial emotions are recognized by statistical pattern classifiers such as Naive Bayes, MLP (multi-layer perceptron), and SVM (support vector machine). In experiments with 5-fold cross validation, SVM performed best among the classifiers, achieving 50.8% accuracy for 6-emotion classification and 78.0% for 3 emotions.
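
The classifier-comparison stage could be reproduced along these lines with scikit-learn: 49-dimensional feature vectors evaluated by 5-fold cross validation across the three classifiers named above. Random features stand in here for the ASM-derived vectors.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X = np.random.randn(300, 49)          # placeholder for ASM feature vectors
y = np.random.randint(0, 6, 300)      # six emotion labels

for name, clf in [("NaiveBayes", GaussianNB()),
                  ("MLP", MLPClassifier(max_iter=500)),
                  ("SVM", SVC())]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```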

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok; Baek, Jae-Ho; Kim, Eun-Tai; Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.1-5 / 2009
  • This paper applies a mirror-reflected 2D emotion recognition database to a 3D application and builds a fuzzy model of facial expression using emotion probabilities. The proposed facial expression function applies fuzzy theory to three basic movements for facial expressions. Feature vectors for emotion recognition are carried from the 2D domain, using mirror-reflected multi-images, into the 3D application, yielding a fuzzy nonlinear facial expression model of a 2D model for a real model. We use the average probabilities of six basic expressions: happy, sad, disgust, angry, surprise, and fear. Dynamic facial expressions are then generated via fuzzy modeling. The paper compares and analyzes the feature vectors of a real model with those of a 3D human-like avatar.
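
One way to read the probability-weighted expression idea is the following sketch, which blends per-emotion movement targets by emotion probability. The three movement parameters and their target values are invented for illustration, not taken from the paper.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "disgust", "angry", "surprise", "fear"]
# Rows: emotions; columns: hypothetical (brow_raise, mouth_open, lip_corner)
# targets for each basic expression.
TARGETS = np.array([[0.2, 0.4, 0.9],
                    [0.1, 0.1, 0.1],
                    [0.0, 0.2, 0.2],
                    [0.6, 0.3, 0.1],
                    [0.9, 0.8, 0.5],
                    [0.7, 0.5, 0.2]])

def blend(probabilities):
    """Probability-weighted mix of the basic-expression movement targets."""
    p = np.asarray(probabilities)
    return (p / p.sum()) @ TARGETS

print(blend([0.6, 0.1, 0.05, 0.05, 0.15, 0.05]))
```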

Emotion Recognition Method based on Feature and Decision Fusion using Speech Signal and Facial Image (음성 신호와 얼굴 영상을 이용한 특징 및 결정 융합 기반 감정 인식 방법)

  • Joo, Jong-Tae; Yang, Hyun-Chang; Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.11-14 / 2007
  • Emotion recognition is essential for interaction between humans and computers. In this paper, speech signals and facial images are classified into five emotion patterns (Normal, Happy, Sad, Anger, Surprise) by applying BL (Bayesian Learning) and PCA (Principal Component Analysis). To compensate for the weaknesses of each signal and to raise the recognition rate, emotion fusion is performed with a decision fusion method and a feature fusion method. In decision fusion, the recognition results obtained from each recognition system are combined through fuzzy membership functions. In feature fusion, salient features are selected with the SFS (Sequential Forward Selection) method and then fed into an MLP (Multi-Layer Perceptron) neural network for emotion fusion.
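
The feature-fusion branch might be sketched as below: sequential forward selection over concatenated speech and face features, followed by an MLP classifier. scikit-learn's SequentialFeatureSelector stands in for the paper's SFS step, and the data here is synthetic.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

X = np.random.randn(200, 60)          # concatenated speech + face features
y = np.random.randint(0, 5, 200)      # five emotion classes

mlp = MLPClassifier(max_iter=500)
sfs = SequentialFeatureSelector(mlp, n_features_to_select=10,
                                direction="forward")
X_sel = sfs.fit_transform(X, y)       # keep the 10 most useful features
print(mlp.fit(X_sel, y).score(X_sel, y))
```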
