• Title/Summary/Keyword: 감정 학습 (emotion learning)

Search Results: 397

Emotion and Speech Act classification in Dialogue using Multitask Learning (대화에서 멀티태스크 학습을 이용한 감정 및 화행 분류)

  • Shin, Chang-Uk; Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2018.10a / pp.532-536 / 2018
  • Research on dialogue modeling with deep neural networks is being actively conducted. This paper proposes an end-to-end system that classifies both the emotion and the speech act of utterances in dialogue through multitask learning. We train a single model that performs the two classifications simultaneously, and adopt a cascaded classification structure to cope with imbalanced classes. Experiments on a daily-conversation dataset, measured by macro average precision, achieved 60.43% for emotion classification and 74.29% for speech act classification, improvements of 29.00% and 1.54% over the respective baseline models. The results show that emotion and speech act classification can be modeled end-to-end with the proposed structure, and we present both a method for jointly training the two tasks in one architecture and a classification scheme that addresses the class imbalance problem.

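The multitask setup the abstract describes can be pictured as a shared encoder feeding two softmax heads whose cross-entropy losses are summed. The following is a minimal NumPy sketch of that idea, not the authors' implementation; all dimensions and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: utterance vector -> shared hidden -> two task heads.
D_IN, D_HID, N_EMOTION, N_ACT = 16, 8, 7, 4
W_shared = rng.normal(scale=0.1, size=(D_IN, D_HID))
W_emotion = rng.normal(scale=0.1, size=(D_HID, N_EMOTION))
W_act = rng.normal(scale=0.1, size=(D_HID, N_ACT))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    h = np.tanh(x @ W_shared)              # shared representation for both tasks
    return softmax(h @ W_emotion), softmax(h @ W_act)

def joint_loss(x, y_emotion, y_act):
    p_emo, p_act = forward(x)
    idx = np.arange(len(x))
    # Multitask objective: the two cross-entropies are simply summed,
    # so gradients from both tasks shape the shared encoder.
    return (-np.log(p_emo[idx, y_emotion]).mean()
            - np.log(p_act[idx, y_act]).mean())

x = rng.normal(size=(5, D_IN))
loss = joint_loss(x, rng.integers(0, N_EMOTION, size=5),
                  rng.integers(0, N_ACT, size=5))
```

The paper's cascaded classification for imbalanced classes would sit on top of such heads; it is omitted here for brevity.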

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan; Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.515-522 / 2021
  • It is hard to prepare sufficient training data for speech emotion recognition because emotion labeling is difficult. In this paper, we apply transfer learning from large-scale speech recognition training data to a transformer-based model to improve speech emotion recognition performance. In addition, we propose a multi-task learning method with speech recognition that utilizes context information without explicit decoding. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, showing that the proposed method is effective in improving speech emotion recognition performance.
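Weighted accuracy (WA) and unweighted accuracy (UA), the two metrics reported above, differ only in whether classes are averaged by sample count or equally. A small sketch of both (the class labels are invented for illustration):

```python
def weighted_accuracy(y_true, y_pred):
    # Overall fraction of correct predictions: classes weighted by frequency.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def unweighted_accuracy(y_true, y_pred):
    # Mean per-class recall: every class counts equally, however rare.
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        recalls.append(hits / total)
    return sum(recalls) / len(classes)

y_true = ["neu", "neu", "neu", "ang"]   # imbalanced toy labels
y_pred = ["neu", "neu", "ang", "ang"]
wa = weighted_accuracy(y_true, y_pred)    # 3/4
ua = unweighted_accuracy(y_true, y_pred)  # (2/3 + 1) / 2
```

On imbalanced emotion datasets such as IEMOCAP the two metrics can diverge noticeably, which is why papers report both.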

Implementation of Intelligent Virtual Character Based on Reinforcement Learning and Emotion Model (강화학습과 감정모델 기반의 지능적인 가상 캐릭터의 구현)

  • Woo Jong-Ha; Park Jung-Eun; Oh Kyung-Whan
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.431-435 / 2005
  • Learning and emotion are among the most important elements in implementing intelligent systems. In this paper, we implement an intelligent virtual character that learns through interaction with the user via reinforcement learning and maintains an internal emotion model. The virtual character acts autonomously, driven by its internal state, in a 3D virtual environment composed of various objects, and the user can teach it desired behaviors through repeated commands. These commands are given by mouse gestures recognized with an artificial neural network, and we newly propose an Emotion-Mood-Personality model for expressing emotion. Experiments examine how the character's emotions change through interaction with the user and confirm that the virtual character learns correctly as it is trained.

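The teach-by-repetition loop described above can be sketched as a tiny reward-driven value update in which user praise or scolding supplies the reward. This one-step (bandit-style) simplification is hypothetical: it omits state transitions, the gesture recognizer, and the emotion model, and all constants are illustrative:

```python
import random

random.seed(0)
N_STATES, N_ACTIONS = 4, 3
ALPHA, EPSILON = 0.5, 0.1          # learning rate, exploration rate
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def teach(state, desired_action, repetitions=200):
    # The user repeats a command; praise (+1.0) or scolding (-0.1) is the reward.
    for _ in range(repetitions):
        a = choose_action(state)
        reward = 1.0 if a == desired_action else -0.1
        Q[state][a] += ALPHA * (reward - Q[state][a])   # move estimate toward reward

teach(state=2, desired_action=1)
learned = max(range(N_ACTIONS), key=lambda a: Q[2][a])  # action the character learned
```

After enough repetitions the greedy action in the taught state matches the user's intended behavior, which mirrors the repeated-command training the abstract describes.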

Analyzing facial expression of a learner in e-Learning system (e-Learning에서 나타날 수 있는 학습자의 얼굴 표정 분석)

  • Park, Jung-Hyun; Jeong, Sang-Mok; Lee, Wan-Bok; Song, Ki-Sang
    • Proceedings of the Korea Contents Association Conference / 2006.05a / pp.160-163 / 2006
  • If an instruction system could recognize a learner's interest and engagement in real time, it could offer engaging material when the learner tires of studying and act as an adaptive tutoring system that helps with content the learner finds difficult. Facial expression recognition research currently deals mainly with adult expressions of anger, hatred, fear, sadness, surprise, and gladness; these everyday expressions are not necessarily the ones a learner shows in e-Learning. To recognize a learner's feelings, the facial expressions of learners in e-Learning must first be studied: as many expression images as possible should be collected and the meaning of each expression analyzed. As a preliminary study toward such a facial expression database, this work analyzes learners' feelings during e-Learning and the facial expressions associated with those feelings.


A Training Method for Emotionally Robust Speech Recognition using Frequency Warping (주파수 와핑을 이용한 감정에 강인한 음성 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.4 / pp.528-533 / 2010
  • This paper studied training methods that are less affected by emotional variation, for the development of a robust speech recognition system. For this purpose, the effects of emotional variation on the speech signal and on the speech recognition system were studied using a speech database containing various emotions. The performance of a speech recognition system trained on speech containing no emotion deteriorates when the test speech contains emotion, because of the emotional mismatch between test and training data. In this study, it is observed that the speaker's vocal tract length is affected by emotional variation, and that this effect is one of the reasons recognition performance degrades. A training method that covers these speech variations is proposed to develop an emotionally robust speech recognition system. Experimental results on isolated word recognition using HMMs showed that the proposed method reduced the error rate of the conventional recognition system by 28.4% on emotional test data.
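The vocal-tract-length effect mentioned above is commonly modeled as a (piecewise-)linear warping of the frequency axis. Below is a minimal NumPy sketch of such a warp applied to a magnitude spectrum; the knee point, warp factor, and bandwidth are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def vtln_warp(freqs, alpha, f_knee_frac=0.85, f_max=8000.0):
    """Piecewise-linear warp: scale by alpha up to a knee frequency,
    then map linearly so that f_max stays fixed at f_max."""
    knee = f_knee_frac * f_max
    high = alpha * knee + (f_max - alpha * knee) * (freqs - knee) / (f_max - knee)
    return np.where(freqs <= knee, alpha * freqs, high)

def warp_spectrum(spectrum, alpha, f_max=8000.0):
    # Resample the spectrum along the warped frequency axis.
    freqs = np.linspace(0.0, f_max, len(spectrum))
    return np.interp(vtln_warp(freqs, alpha), freqs, spectrum)

spec = np.hanning(64)                      # stand-in for a magnitude spectrum
identity = warp_spectrum(spec, alpha=1.0)  # alpha = 1 leaves the spectrum unchanged
stretched = warp_spectrum(spec, alpha=1.1) # alpha > 1 mimics a shorter vocal tract
```

Generating training data at several warp factors around 1.0 is one plausible way to "cover" the emotion-induced vocal tract variation the abstract refers to.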

Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan; Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.17 no.2 / pp.1-10 / 2021
  • The ability to recognize human emotions by computer vision is a very important task with many potential applications, so demand for emotion recognition using not only RGB images but also thermal images is increasing. Compared to RGB images, thermal images have the advantage of being less affected by lighting conditions, but their low resolution requires a more sophisticated recognition method. In this paper, we propose a Divide and Conquer-based CNN training strategy to improve the performance of emotion recognition from facial thermal images. The proposed method first trains a model to assign emotion classes that are difficult to distinguish, identified by confusion-matrix analysis, to the same class group, and then divides the problem so that each class group is classified again into the actual emotions. In experiments, the proposed method achieved higher accuracy in all tests than recognizing all the presented emotions with a single CNN model.
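The divide-and-conquer strategy above amounts to a two-stage pipeline: a coarse model first picks a group of confusable emotions, then a per-group expert resolves the actual emotion. A structural sketch with stub classifiers follows; the grouping and labels are invented for illustration and are not taken from the paper's confusion-matrix analysis:

```python
# Stage-1 grouping (hypothetical; in the paper it comes from confusion-matrix analysis).
GROUP_OF = {"angry": "negative", "sad": "negative",
            "happy": "positive", "surprised": "positive"}

def two_stage_predict(x, group_model, expert_models):
    group = group_model(x)            # stage 1: coarse, easier decision
    return expert_models[group](x)    # stage 2: expert trained only on that group

# Stub models standing in for the trained CNNs.
group_model = lambda x: "negative" if x["valence"] < 0 else "positive"
expert_models = {
    "negative": lambda x: "angry" if x["arousal"] > 0.5 else "sad",
    "positive": lambda x: "surprised" if x["arousal"] > 0.5 else "happy",
}

pred = two_stage_predict({"valence": -0.3, "arousal": 0.9},
                         group_model, expert_models)
```

The design intuition is that each expert only sees a smaller, less confusable label set, which is why the staged models can beat a single flat classifier.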

A Weight Boosting Method of Sentiment Features for Korean Document Sentiment Classification (한국어 문서 감정분류를 위한 감정 자질 가중치 강화 기법)

  • Hwang, Jaewon; Ko, Youngjoong
    • Annual Conference on Human and Language Technology / 2008.10a / pp.201-206 / 2008
  • This paper proposes a method that improves Korean document sentiment classification by boosting the weights of the sentiment features on which the classification is based. We first build a lexical resource of sentiment features and evaluate how much the expanded feature set contributes to sentiment classification. We then compute the sentiment strength of each sentence using the chi-square statistic (${\chi}^2$) of the sentiment features obtained from the training data, and incorporate this sentence-level sentiment strength into the TF-IDF weighting scheme to boost the weights of sentiment features. Finally, during training, only positive sentiment features are boosted in positive documents and only negative sentiment features in negative documents. We evaluate the proposed method with a Support Vector Machine, which performs well in document classification; the results show an improvement of about 2.0% over content-word-based features as used in general information retrieval.

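The weighting step above pairs the standard 2x2 chi-square statistic with TF-IDF. A minimal sketch of both pieces; the boosting formula here is one plausible reading of the method, not the paper's exact formulation:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic of a 2x2 feature/class contingency table:
    n11 = class docs containing the feature, n10 = other docs containing it,
    n01 = class docs without it, n00 = other docs without it."""
    n = n11 + n10 + n01 + n00
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return n * (n11 * n00 - n10 * n01) ** 2 / den if den else 0.0

def boosted_weight(tf, idf, sentiment_strength):
    # Sentiment features get their TF-IDF weight scaled up by the
    # sentence-level sentiment strength; non-sentiment features keep tf * idf.
    return tf * idf * (1.0 + sentiment_strength)

strong = chi_square(10, 0, 0, 10)   # perfectly class-discriminating feature
weak = chi_square(5, 5, 5, 5)       # feature independent of the class
```

A feature concentrated in one sentiment class yields a high chi-square value and thus a large boost, which is the intended asymmetry between positive and negative documents.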

Implementation of Intelligent Virtual Character Based on Reinforcement Learning and Emotion Model (강화학습과 감정모델 기반의 지능적인 가상 캐릭터의 구현)

  • Woo Jong-Ha; Park Jung-Eun; Oh Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.3 / pp.259-265 / 2006
  • Learning and emotion are essential to implementing intelligent robots. In this paper, we implement an intelligent virtual character that interacts with the user through reinforcement learning and has an internal emotion model. The virtual character acts autonomously in a 3D virtual environment, driven by its internal state, and the user can teach it specific behaviors through repeated commands. Such commands are given by mouse gestures recognized with an artificial neural network. An Emotion-Mood-Personality model is proposed to express emotions, and we examine the change of emotion and the learned behaviors as the virtual character interacts with the user.

Synonym Emotional Adjectives in Coordination: Analyzing [[Emotional Adjective + '-ko(and)'] + Emotional Adjective] Structures in Korean (감정형용사 유의어 결합 연구 -[[감정형용사 + '-고'] + 감정형용사] 구성-)

  • Park, JINA; Jeong, Yong-Ho
    • The Journal of the Convergence on Culture Technology / v.10 no.3 / pp.565-577 / 2024
  • This study examined how emotional adjectives are connected in the [[emotional adjective + '-ko(and)'] + emotional adjective] construction. The analysis confirmed that Korean quite often expresses emotion with two or more emotional adjectives used together. Identifying the emotional adjectives that co-occur in this construction can help Korean learners understand and express the individual lexical meanings of emotional adjectives more clearly, and can help them express complex emotions or produce rich emotional expressions in Korean. It is hoped that the examples and frequencies of the [[emotional adjective + '-ko(and)'] + emotional adjective] construction presented here will be of some help in teaching and learning Korean emotional vocabulary.

Comparison of EEG Topography Labeling and Annotation Labeling Techniques for EEG-based Emotion Recognition (EEG 기반 감정인식을 위한 주석 레이블링과 EEG Topography 레이블링 기법의 비교 고찰)

  • Ryu, Je-Woo; Hwang, Woo-Hyun; Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.3 / pp.16-24 / 2019
  • Recently, research on EEG-based emotion recognition has attracted great interest in the human-robot interaction field. In this paper, we propose a labeling method that uses image-based EEG topography instead of the self-assessment and annotation labeling methods used in MAHNOB-HCI to evaluate emotions. The proposed method evaluates emotion with a machine learning model trained on EEG signals transformed into topographical images. In experiments on the MAHNOB-HCI database, we compared the performance of SVM and kNN models trained with EEG topography labeling; the accuracy of the proposed method was 54.2% with SVM and 57.7% with kNN.
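The kNN side of the comparison above reduces to majority voting among the nearest flattened topography images. A minimal NumPy sketch, with toy two-dimensional "topography" vectors and invented labels standing in for the real image features and emotion classes:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    # Euclidean distance from the query to every training topography vector.
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]     # majority vote among the k neighbours

# Two toy clusters standing in for two emotion classes.
train_X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
train_y = np.array(["calm", "calm", "calm",
                    "excited", "excited", "excited"])
pred = knn_predict(train_X, train_y, np.array([0.05, 0.05]))
```

In the paper's setting each training vector would be a flattened topography image and the labels the evaluated emotions; the voting mechanics are the same.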