• Title/Summary/Keyword: 감정 학습 (emotion learning)

Search results: 396

Estimation of Valence and Arousal from a single Image using Face Generating Autoencoder (얼굴 생성 오토인코더를 이용한 단일 영상으로부터의 Valence 및 Arousal 추정)

  • Kim, Do Yeop;Park, Min Seong;Chang, Ju Yong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.79-82 / 2020
  • Research on predicting human emotion from face images has recently attracted attention with the advances in deep learning. In this work, based on the dimensional model, which represents emotion with continuous variables, we propose a deep network that predicts valence/arousal (V/A), indicators of emotional state, from a face image. However, the existing datasets used to train V/A prediction models suffer from data imbalance. To address this, we train a face-image generation network with an autoencoder structure and then train the V/A prediction network on the uniformly distributed data obtained from it. Experiments show that the proposed face-generating autoencoder successfully generates face images corresponding to arbitrary valence and arousal values from an in-the-wild dataset, and that the V/A prediction network trained with these images achieves higher recognition performance than existing under-sampling and over-sampling methods. Finally, we quantitatively compare the proposed V/A prediction network with existing methods.

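The data-imbalance remedies the abstract compares against (under- and over-sampling) can be illustrated with a small sketch: bin the (valence, arousal) plane into a grid and draw the same number of samples from each occupied bin, over-sampling with replacement where a bin is sparse. This illustrates only the balancing idea, not the paper's autoencoder-based method; the bin count, value ranges, and sample sizes are assumptions.

```python
import random
from collections import defaultdict

def uniform_va_sample(samples, n_bins=4, per_bin=2, seed=0):
    """Group (valence, arousal) pairs into a grid of bins and draw
    the same number from each non-empty bin, over-sampling with
    replacement where a bin has too few examples."""
    rng = random.Random(seed)
    bins = defaultdict(list)
    for v, a in samples:
        # valence/arousal are assumed to lie in [-1, 1]
        bv = min(int((v + 1) / 2 * n_bins), n_bins - 1)
        ba = min(int((a + 1) / 2 * n_bins), n_bins - 1)
        bins[(bv, ba)].append((v, a))
    balanced = []
    for members in bins.values():
        balanced.extend(rng.choices(members, k=per_bin))
    return balanced

data = [(0.9, 0.9)] * 10 + [(-0.8, -0.7)]   # heavily imbalanced toy set
balanced = uniform_va_sample(data)
```

After balancing, the single rare sample is drawn as often as the frequent one, which is exactly the property the generated (rather than resampled) training data in the paper is meant to provide without duplicating examples.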

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.23 no.3 / pp.351-360 / 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions with multiple neural networks based on multi-modal signals consisting of image, landmark, and audio data captured in the wild. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model that converts one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is used for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, an audio deep learning mechanism robust to those emotions is proposed. Finally, so-called emotion-adaptive fusion is applied to enable synergy among the multiple networks. The proposed approach improves emotion classification performance by appropriately integrating existing supervised and semi-supervised networks. On the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
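The "emotion-adaptive fusion" step is described only at a high level; one plausible reading is a per-emotion weighted average of each network's class probabilities, renormalized so the result is again a distribution. The network names, weights, and probabilities below are hypothetical, used only to show the mechanism.

```python
def fuse(predictions, weights):
    """Combine per-network class-probability vectors with
    per-(network, emotion) weights, then renormalize.
    predictions: {network_name: [p_class0, p_class1, ...]}
    weights:     {network_name: [w_class0, w_class1, ...]}"""
    n_classes = len(next(iter(predictions.values())))
    fused = [0.0] * n_classes
    for name, probs in predictions.items():
        for c in range(n_classes):
            fused[c] += weights[name][c] * probs[c]
    total = sum(fused)
    return [p / total for p in fused]

preds = {
    "image": [0.6, 0.3, 0.1],
    "audio": [0.2, 0.2, 0.6],   # audio is more reliable for class 2
}
w = {
    "image": [1.0, 1.0, 0.5],
    "audio": [0.5, 0.5, 2.0],   # so it is up-weighted for that class
}
fused = fuse(preds, w)
```

Giving the audio network a large weight only on the emotions it handles well is one way to realize the paper's observation that audio is "very effective for specific emotions".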

A Sentiment Analysis Tool for Korean Twitter (한국어 트위터의 감정 분석 도구)

  • Seo, Hyung-Won;Jeon, Kil-Ho;Choi, Myung-Gil;Nam, Yoo-Rim;Kim, Jae-Hoon
    • Annual Conference on Human and Language Technology / 2011.10a / pp.94-97 / 2011
  • This paper describes a method for automatically analyzing the sentiment contained in Korean Twitter messages (tweets). Tweets collected by the proposed system are classified as positive or negative with respect to a given query. This is generally useful for customers who want to buy a product, or for companies that want to gather customer opinions about their products. Research on English tweets is already active, but no work on Korean tweets, especially on sentiment classification, has yet been published. The collected tweets were classified with machine learning (Naive Bayes, Maximum Entropy, and SVM), and, reflecting the characteristics of Korean, features were extracted in units of 2 and 3 syllables. While previous work on English reports accuracies above 80%, our experiments achieved about 60% accuracy.

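The 2- and 3-syllable feature units mentioned above can be sketched as overlapping character n-grams, since in Korean one written character is one syllable. This is an illustrative sketch; the paper's exact tokenization and whitespace handling are not specified.

```python
def syllable_ngrams(text, n):
    """Return overlapping n-grams of syllables (characters) from the
    text with spaces removed; each Hangul character is one syllable."""
    chars = [c for c in text if not c.isspace()]
    return ["".join(chars[i:i + n]) for i in range(len(chars) - n + 1)]

def features(tweet):
    # combine 2-syllable and 3-syllable units, as in the paper
    return syllable_ngrams(tweet, 2) + syllable_ngrams(tweet, 3)
```

These n-gram lists would then be fed as bag-of-features input to the Naive Bayes, Maximum Entropy, or SVM classifiers the abstract compares.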

Fuzzy Model for Speech Emotion Recognition (음성으로부터의 감정 인식을 위한 퍼지모델 제안)

  • Moon, Byung-Hyun;Jang, In-Hoon;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.115-118 / 2008
  • This paper proposes a system that recognizes emotion from speech and produces speech output with emotional prosody. Fuzzy rules are used to recognize emotion from the prosody of speech. The emotion recognition system builds training data from speech samples and, based on this data, automatically selects the most important features from a set of 20 extracted features. A fuzzy rule-based approach is used to distinguish five emotional states: anger, surprise, happiness, sadness, and neutral.

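A minimal sketch of the fuzzy-rule idea, assuming triangular membership functions over two prosodic features and AND realized as minimum when firing a rule. The feature names, value ranges, and rules are made up for illustration and cover only two of the paper's five emotions.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(pitch_mean, energy):
    """Toy rule base over two prosodic features (ranges are invented):
    high pitch AND high energy -> anger; low pitch AND low energy -> sadness."""
    high_pitch  = triangular(pitch_mean, 180, 260, 340)
    low_pitch   = triangular(pitch_mean, 60, 120, 200)
    high_energy = triangular(energy, 0.5, 0.8, 1.1)
    low_energy  = triangular(energy, 0.0, 0.2, 0.5)
    rules = {
        "anger":   min(high_pitch, high_energy),   # fuzzy AND = min
        "sadness": min(low_pitch, low_energy),
    }
    return max(rules, key=rules.get)   # highest-firing rule wins
```

A real system would defuzzify over all five emotions and many more features; the point here is only how membership degrees and rule firing combine.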

A Document Sentiment Classification System Based on the Feature Weighting Method Improved by Measuring Sentence Sentiment Intensity (문장 감정 강도를 반영한 개선된 자질 가중치 기법 기반의 문서 감정 분류 시스템)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE: Software and Applications / v.36 no.6 / pp.491-497 / 2009
  • This paper proposes a new feature weighting method for document sentiment classification that considers the differences in sentiment intensity among the sentences of a document. Sentiment features consist of sentiment vocabulary words, and their sentiment intensity scores are estimated with the chi-square statistic. The sentiment intensity of each sentence is then measured from the chi-square values of the sentiment features it contains, and the resulting per-sentence intensities are applied to the TF-IDF weights of all features in the document. We evaluate the proposed method using a support vector machine. Experimental results show that the proposed method performs about 2.0% better than a baseline that does not consider the sentiment intensity of sentences.
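The weighting scheme can be sketched as follows: each sentence's intensity is an aggregate of the chi-square scores of the words it contains, and that intensity scales the term frequencies before the corpus-level IDF factor would multiply in. The chi-square scores and the choice of averaging as the aggregate are assumptions for illustration.

```python
# hypothetical chi-square scores for sentiment vocabulary words
chi2 = {"great": 9.1, "awful": 8.4, "movie": 0.3, "plot": 0.5}

def sentence_intensity(sentence):
    """Average chi-square score of the words in a sentence
    (words outside the sentiment vocabulary score 0)."""
    words = sentence.lower().split()
    scores = [chi2.get(w, 0.0) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

def weighted_tf(document_sentences):
    """Term frequency where each occurrence is scaled by the
    intensity of the sentence it appears in; the IDF factor
    from corpus statistics would multiply in afterwards."""
    weights = {}
    for sent in document_sentences:
        s = sentence_intensity(sent)
        for w in sent.lower().split():
            weights[w] = weights.get(w, 0.0) + 1.0 * s
    return weights

doc = ["great movie", "the plot was thin"]
w = weighted_tf(doc)
```

Words from the high-intensity sentence ("great movie") end up with much larger weights than words from the neutral one, which is the effect the paper's 2.0% improvement is attributed to.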

Analysis and Recognition of Depressive Emotion through NLP and Machine Learning (자연어처리와 기계학습을 통한 우울 감정 분석과 인식)

  • Kim, Kyuri;Moon, Jihyun;Oh, Uran
    • The Journal of the Convergence on Culture Technology / v.6 no.2 / pp.449-454 / 2020
  • This paper proposes a machine learning-based emotion analysis system that detects a user's depression from their SNS posts. We first built a list of Korean keywords related to depression and used it to create training data by crawling Twitter: 1,297 positive and 1,032 negative tweets in total. To identify the best machine learning model for text-based depression detection, we compared RNN, LSTM, and GRU models in terms of performance. Our experiments verified that the GRU model achieved an accuracy of 92.2%, 2-4% higher than the other models. We expect these findings can be used to help prevent depression by analyzing users' SNS posts.
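For reference, the GRU unit the study found most accurate differs from an LSTM mainly in having just two gates and no separate cell state. A single-unit, scalar forward step (a toy sketch, not the paper's model; the weights are arbitrary) looks like:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step with scalar input and state; p holds the weights.
    The update gate z decides how much of the state to overwrite,
    the reset gate r how much of the old state feeds the candidate."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])   # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])   # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand

params = {"wz": 1.0, "uz": 0.0, "bz": 0.0,
          "wr": 1.0, "ur": 0.0, "br": 0.0,
          "wh": 1.0, "uh": 1.0, "bh": 0.0}

h = 0.0
for x in [1.0, -1.0, 0.5]:   # a toy sequence of token values
    h = gru_step(x, h, params)
```

In the actual system, x and h would be vectors (token embeddings and a hidden state) and the final h would feed a classification layer producing the depressed/not-depressed decision.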

Automated Emotional Tagging of Lifelog Data with Wearable Sensors (웨어러블 센서를 이용한 라이프로그 데이터 자동 감정 태깅)

  • Park, Kyung-Wha;Kim, Byoung-Hee;Kim, Eun-Sol;Jo, Hwi-Yeol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.6 / pp.386-391 / 2017
  • In this paper, we propose a system that automatically assigns experience-based emotion tags to wearable sensor data collected in everyday life. Four types of emotion tags are defined, covering the user's own emotions and the information the user sees and hears. From the wearable data collected by multiple sensors, we trained a machine learning-based tagging system that combines known auxiliary tools from existing affective computing research to assign emotion tags. To show the usefulness of this multi-modality-based emotion tagging system, quantitative and qualitative comparisons with an existing single-modality emotion recognition approach are performed.

Emotion Classification of User's Utterance for a Dialogue System (대화 시스템을 위한 사용자 발화 문장의 감정 분류)

  • Kang, Sang-Woo;Park, Hong-Min;Seo, Jung-Yun
    • Korean Journal of Cognitive Science / v.21 no.4 / pp.459-480 / 2010
  • A dialogue system applies various morphological analyses to recognize a user's intention from utterances. However, a user can also convey intention through emotional state, beyond morphological expression, so recognizing the user's emotion allows intention to be analyzed in more varied ways. This paper presents a new method to automatically recognize a user's emotion for a dialogue system. For general emotions, we define nine categories using a psychological approach. For an optimal feature set, we organize a combination of sentential, a priori, and context features. We then employ a support vector machine (SVM), widely used in many learning tasks, to automatically classify the user's emotions. Experimental results show that our method achieves a 62.8% F-measure, 15% higher than the reference system.

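The 62.8% F-measure quoted above is the standard F-score combining precision and recall; with beta = 1 it is their harmonic mean. A minimal helper:

```python
def f_measure(precision, recall, beta=1.0):
    """F-beta score; beta=1 gives the harmonic mean of
    precision and recall (the usual F1)."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

Because it is a harmonic mean, a classifier with perfect precision but zero recall still scores 0, which is why F-measure rather than accuracy is the usual metric for the imbalanced nine-class setting described here.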

Robust Speech Recognition using Vocal Tract Normalization for Emotional Variation (성도 정규화를 이용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo;Bang, Hyun-Jin
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.773-778 / 2009
  • This paper studies training methods that are less affected by emotional variation, for the development of robust speech recognition systems. For this purpose, the effect of emotional variation on the speech signal was studied using a speech database containing various emotions. The performance of a speech recognition system trained on emotion-free speech deteriorates when the test speech contains emotion, because of the emotional mismatch between the test and training data. We observe that the speaker's vocal tract length is affected by emotional variation and that this effect is one of the reasons recognition performance degrades. We therefore use vocal tract normalization to develop a speech recognition system robust to emotional variation. Isolated word recognition experiments using HMMs showed that vocal tract normalization reduced the error rate of the conventional recognition system by 41.9% on emotional test data.
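Vocal tract (length) normalization is commonly implemented as a piecewise-linear warping of the frequency axis before feature extraction. The sketch below uses that common formulation; the warping style, breakpoint fraction, and Nyquist frequency are assumptions, not details taken from the paper.

```python
def warp(freq, alpha, f_nyq=8000.0, f_break_frac=0.875):
    """Piecewise-linear frequency warping for vocal tract length
    normalization: frequencies below a breakpoint are scaled by
    alpha; above it, a second linear piece pins the Nyquist
    frequency in place so the warped axis stays in range."""
    fb = f_break_frac * f_nyq
    if alpha > 1.0:
        fb /= alpha                        # keep alpha*fb below Nyquist
    if freq <= fb:
        return alpha * freq
    # line from (fb, alpha*fb) to (f_nyq, f_nyq)
    slope = (f_nyq - alpha * fb) / (f_nyq - fb)
    return alpha * fb + slope * (freq - fb)
```

In recognition, the warp factor alpha is typically chosen per speaker (or, per this paper's observation, per emotional condition) by maximizing the likelihood of the warped features under the acoustic model.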

The Effects of Professor Presence and Interaction on PAD and Satisfaction in a University Class (대학 수업의 교수실재감과 상호작용이 PAD와 수업만족도에 미치는 영향)

  • Jeong, Yun-Hee;Park, Ji-Yeon
    • The Journal of the Korea Contents Association / v.17 no.7 / pp.144-157 / 2017
  • Student satisfaction is important for university development and has been studied extensively. However, although satisfaction is closely tied to emotion, most studies have approached it from a cognitive perspective. To present a model of student satisfaction from a hedonic view, the model in this study includes professor presence, interaction, PAD (pleasure, arousal, dominance), and satisfaction as the dependent variable. Based on previous studies, we expect professor presence and student interaction to affect PAD, and PAD in turn to affect satisfaction. Survey research is employed to test hypotheses involving professor presence, student interaction, PAD, and satisfaction, with constructs measured by reference to prior research in education, marketing, and games. We collected data from students at one university and analyzed 219 responses using LISREL structural equation modeling. Professor presence had positive effects on professor-student interaction, pleasure, arousal, and dominance. Professor-student interaction had positive effects on pleasure and arousal, and student-student interaction had positive effects on pleasure, arousal, and dominance. In turn, PAD affected student satisfaction. In the final section, we discuss several limitations of the study, suggest directions for future research, and conclude with managerial implications, including the potential to advance the understanding of learning in a university.