• Title/Abstract/Keyword: Emotion Classification

Search results: 302

Discrimination of Three Emotions using Parameters of Autonomic Nervous System Response

  • Jang, Eun-Hye; Park, Byoung-Jun; Eum, Yeong-Ji; Kim, Sang-Hyeob; Sohn, Jin-Hun
    • 대한인간공학회지 / Vol.30 No.6 / pp.705-713 / 2011
  • Objective: The aim of this study was to compare the results of emotion recognition from several algorithms that classify three emotional states (happiness, neutral, and surprise) using physiological features. Background: Recent emotion recognition studies have tried to detect human emotion from physiological signals; such recognition is important for applying emotion detection to human-computer interaction systems. Method: 217 students participated in the experiment. While three kinds of emotional stimuli were presented, ANS responses (EDA, SKT, ECG, RESP, and PPG) were measured twice: for 60 seconds as the baseline and for 60 to 90 seconds during the emotional state. Signals from the baseline session and the emotional-state session were each analyzed over 30 seconds. After the stimuli were presented, participants rated their own feelings on an emotional assessment scale. Emotion classification was performed with Linear Discriminant Analysis (LDA, SPSS 15.0), Support Vector Machine (SVM), and Multilayer Perceptron (MLP) using difference values obtained by subtracting the baseline from the emotional state. Results: The emotional stimuli had 96% validity and an efficiency of 5.8 points on average. Statistical analysis showed significant differences in ANS responses among the three emotions. LDA classified the three emotions with an accuracy of 83.4%, compared with 75.5% for SVM and 55.6% for MLP. Conclusion: This study confirmed that the three emotions can be classified better by LDA using various physiological features than by SVM or MLP. Further studies comparing these results with the accuracy of other algorithms are needed to establish their stability and reliability. Application: These results could improve the recognition of various human emotions from physiological signals and be applied to human-computer interaction systems that recognize human emotions.
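As a rough illustration of the comparison in this abstract, the sketch below feeds baseline-subtracted physiological features to LDA, SVM, and MLP classifiers and compares them by cross-validation. The random data, feature count, and cross-validation setup are placeholder assumptions (the study itself used SPSS 15.0 for the LDA step); scikit-learn merely stands in for the original tools.

```python
# Compare LDA, SVM, and MLP on baseline-corrected ANS features (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 217, 10           # e.g. statistics from EDA, SKT, ECG, RESP, PPG
baseline = rng.normal(size=(n_subjects, n_features))
emotional = rng.normal(size=(n_subjects, n_features))
X = emotional - baseline                    # difference values: emotional state minus baseline
y = rng.integers(0, 3, size=n_subjects)     # 0=happiness, 1=neutral, 2=surprise

for name, clf in [
    ("LDA", LinearDiscriminantAnalysis()),
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
    ("MLP", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))),
]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```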

이종 음성 DB 환경에 강인한 감성 분류 체계에 대한 연구 (A Study on Robust Emotion Classification Structure Between Heterogeneous Speech Databases)

  • 윤원중; 박규식
    • 한국음향학회지 / Vol.28 No.5 / pp.477-482 / 2009
  • Emotion recognition systems deployed in enterprise environments such as call centers suffer considerable performance degradation and instability because the recording conditions of the speech used to train the emotion models differ from those of the query speech of unspecified customers. To overcome this problem, this paper extends the conventional neutral/anger classification structure into a two-stage structure that applies the gender-dependent variation of emotional characteristics in male and female speech. Experimental results show that the proposed method not only removes the system instability caused by differences in recording environments but also improves recognition performance by nearly 25%.
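A minimal sketch of the two-stage structure this abstract describes, assuming an SVM at both stages: a gender classifier routes each utterance to a gender-specific neutral/anger model. The feature matrix and labels below are synthetic stand-ins; the paper's acoustic features and classifier choices are not reproduced here.

```python
# Two-stage classification: predict gender first, then use that gender's emotion model.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 13))             # placeholder acoustic feature vectors
gender = rng.integers(0, 2, size=200)      # 0=male, 1=female
emotion = rng.integers(0, 2, size=200)     # 0=neutral, 1=anger

gender_clf = SVC().fit(X, gender)
emotion_clf = {g: SVC().fit(X[gender == g], emotion[gender == g]) for g in (0, 1)}

def classify(x):
    """Stage 1: predict gender; Stage 2: apply that gender's emotion classifier."""
    g = int(gender_clf.predict(x.reshape(1, -1))[0])
    return int(emotion_clf[g].predict(x.reshape(1, -1))[0])

print(classify(X[0]))
```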

다중 모달 생체신호를 이용한 딥러닝 기반 감정 분류 (Deep Learning based Emotion Classification using Multi Modal Bio-signals)

  • 이지은; 유선국
    • 한국멀티미디어학회논문지 / Vol.23 No.2 / pp.146-154 / 2020
  • Negative emotion causes stress and a lack of attentional concentration, so classifying negative emotion is important for recognizing risk factors. To assess emotional state, methods such as questionnaires and interviews are commonly used, but their results can vary with personal interpretation. To address this problem, we acquired multi-modal bio-signals, namely electrocardiogram (ECG), skin temperature (ST), and galvanic skin response (GSR), and extracted features from them. A neural network (NN), a deep neural network (DNN), and a deep belief network (DBN) were designed on the multi-modal bio-signals to analyze emotional state. As a result, the DBN based on features extracted from ECG, ST, and GSR showed the highest accuracy (93.8%), which is 5.7% higher than the NN, 1.4% higher than the DNN, and 12.2% higher than using only a single bio-signal (GSR). Multi-modal bio-signal acquisition and a deep learning classifier thus play an important role in classifying emotion.
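The fusion idea here can be sketched as feature-level concatenation of ECG, ST, and GSR features followed by a neural classifier. scikit-learn has no deep belief network, so an ordinary feed-forward MLP stands in for the paper's DBN, and all data below are synthetic.

```python
# Feature-level fusion of multi-modal bio-signal features, then a neural classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
ecg = rng.normal(size=(300, 8))    # e.g. HRV statistics
st = rng.normal(size=(300, 2))     # e.g. mean and slope of skin temperature
gsr = rng.normal(size=(300, 4))    # e.g. SCR count and amplitude
X = np.hstack([ecg, st, gsr])      # concatenate modalities at the feature level
y = rng.integers(0, 2, size=300)   # 0=neutral, 1=negative emotion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```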

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong; Lee, Ji-Hyang; Park, Hyun-Jin
    • International journal of advanced smart convergence / Vol.8 No.2 / pp.8-17 / 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I, platform analysis, and Phase II, classification of academic emotions. In Phase I, the results indicate that existing affective analysis platforms can be largely classified into four types according to their emotion-detecting methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed-methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions appeared more frequently and lasted longer than negative emotions. Based on the facial expression data, we categorized positive emotions into three groups: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on these results, we propose a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions through facial expression indicators.

다중 회귀 기반의 음악 감성 분류 기법 (Multiple Regression-Based Music Emotion Classification Technique)

  • 이동현; 박정욱; 서영석
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol.7 No.6 / pp.239-248 / 2018
  • With the advent of the Fourth Industrial Revolution, new technologies that add emotional intelligence to the existing IoT are being studied. Among them, emotion analysis research for music services has so far focused mainly on recognizing and classifying users' emotions with artificial intelligence and pattern recognition, whereas research on per-emotion music classification techniques, that is, how to automatically classify the music that corresponds to a user's particular emotion, is very scarce. This study proposes an emotion-based automatic music classification technique that can classify music by emotional range with high accuracy when developing music services related to human emotion, an area that has recently been drawing attention. For data collection, a survey based on Russell's model was conducted, and the average amplitude, peak average, number of wavelengths, average wavelength, and beats per minute (BPM) were extracted as musical features. Multiple regression equations were derived from these data through regression analysis, reference values were computed for each emotion, and the emotion of a piece of music was classified by comparing the distance between new music data and the per-emotion reference values. The regression outputs were then substituted into these results, and the final emotion was determined from the ratio of the data to each emotion. For the two emotion-agreement measures examined in this study, the proposed technique achieved agreement rates of 70.94% and 86.21%, while the survey participants achieved 66.83% and 76.85%, showing that the technique's emotion judgments were 4.11 and 9.36 percentage points higher than the participants' average judgments.
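A simplified sketch of the regression-then-nearest-reference procedure described above: fit multiple regressions from the five audio features to emotion scores, then assign a new track to the emotion whose reference point is closest. The two Russell-style axes, the regression targets, and the per-emotion reference values are invented for illustration, not taken from the paper.

```python
# Multiple regression on audio features, then nearest-reference emotion assignment.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Columns: average amplitude, peak average, number of wavelengths, average wavelength, BPM
X = rng.normal(size=(120, 5))
valence = X @ np.array([0.4, 0.1, -0.2, 0.3, 0.5]) + rng.normal(scale=0.1, size=120)
arousal = X @ np.array([0.2, 0.5, 0.1, -0.3, 0.6]) + rng.normal(scale=0.1, size=120)

reg_v = LinearRegression().fit(X, valence)
reg_a = LinearRegression().fit(X, arousal)

# Hypothetical per-emotion reference points in (valence, arousal) space
references = {"happy": (0.8, 0.8), "calm": (0.6, -0.5),
              "sad": (-0.7, -0.6), "angry": (-0.6, 0.9)}

def classify_track(features):
    """Predict (valence, arousal), then pick the emotion with the closest reference point."""
    point = np.array([reg_v.predict([features])[0], reg_a.predict([features])[0]])
    return min(references, key=lambda k: np.linalg.norm(point - np.array(references[k])))

print(classify_track(X[0]))
```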

성별 구분을 통한 음성 감성인식 성능 향상에 대한 연구 (A Study on The Improvement of Emotion Recognition by Gender Discrimination)

  • 조윤호; 박규식
    • 대한전자공학회논문지SP / Vol.45 No.4 / pp.107-114 / 2008
  • This paper builds an emotion recognition system that classifies speech into four emotional states (neutral, joy, sadness, and anger) based on the speaker's gender. The proposed system first classifies the input speech as male or female and then performs emotion recognition using the feature-vector set optimized for each gender, which improves the recognition rate. ZCPA (Zero Crossings with Peak Amplitudes) features, mainly used in speech recognition, were adopted as emotion-recognition features to improve performance, and SFS (Sequential Forward Selection) was used to optimize the feature-vector set for each gender. k-NN and SVM were compared as emotion pattern classifiers. Experimental results show that the proposed system achieves a high recognition rate of about 85.3% for the four emotional states, so the recognized emotion information is expected to be useful in various fields such as emotion-aware call centers, humanoid robots, and ubiquitous environments.
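The pipeline in this abstract can be sketched as per-gender sequential forward selection (SFS) followed by a k-NN vs. SVM comparison. ZCPA feature extraction is not shown; the random matrix below merely stands in for ZCPA-derived feature vectors, and the selected subset size is an arbitrary choice.

```python
# Per-gender sequential forward selection, then k-NN vs. SVM comparison (synthetic data).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(240, 20))              # placeholder for ZCPA-based features
y = rng.integers(0, 4, size=240)            # 0=neutral, 1=joy, 2=sadness, 3=anger
gender = rng.integers(0, 2, size=240)       # 0=male, 1=female

for g in (0, 1):
    Xg, yg = X[gender == g], y[gender == g]
    sfs = SequentialFeatureSelector(SVC(), n_features_to_select=8, direction="forward")
    Xg_sel = sfs.fit_transform(Xg, yg)      # gender-specific feature subset
    for name, clf in [("k-NN", KNeighborsClassifier(5)), ("SVM", SVC())]:
        acc = cross_val_score(clf, Xg_sel, yg, cv=5).mean()
        print(f"gender={g} {name}: mean CV accuracy = {acc:.3f}")
```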

확률변수를 이용한 음악에 따른 감정분석에의 최적 EEG 채널 선택 (A Selection of Optimal EEG Channel for Emotion Analysis According to Music Listening using Stochastic Variables)

  • 변성우; 이소민; 이석필
    • 전기학회논문지 / Vol.62 No.11 / pp.1598-1603 / 2013
  • Recently, research analyzing the relationship between emotional states and musical stimuli has been increasing. In many previous works, the data from all recorded channels are used for pattern classification, but these methods suffer from high computational complexity and inaccuracy. This paper proposes the selection of an optimal EEG channel that efficiently reflects the emotional state during music listening by analyzing stochastic feature vectors. This makes EEG pattern classification relatively simple by reducing the number of data sets to process.
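The abstract does not state which stochastic criterion is used, so the sketch below simply scores each EEG channel with a one-way ANOVA F-value over band-power features and keeps the most discriminative channel; treat the criterion, montage size, and features as assumptions rather than the authors' method.

```python
# Score each EEG channel by the mean ANOVA F-value of its features, keep the best one.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(5)
n_trials, n_channels, n_band_feats = 80, 14, 5      # hypothetical montage and feature sizes
features = rng.normal(size=(n_trials, n_channels, n_band_feats))
labels = rng.integers(0, 2, size=n_trials)          # e.g. positive vs. negative music condition

channel_scores = []
for ch in range(n_channels):
    F, _ = f_classif(features[:, ch, :], labels)    # F-value per band-power feature
    channel_scores.append(F.mean())                 # aggregate to one score per channel

best = int(np.argmax(channel_scores))
print(f"most discriminative channel index: {best}")
```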

Multimodal Parametric Fusion for Emotion Recognition

  • Kim, Jonghwa
    • International journal of advanced smart convergence / Vol.9 No.1 / pp.193-201 / 2020
  • The main objective of this study is to investigate the impact of additional modalities on the performance of emotion recognition using speech, facial expressions, and physiological measurements. To compare different approaches, we designed a feature-based recognition system as a benchmark that carries out linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, bimodal fusion improved the recognition accuracy of the unimodal approach in our experiment, while the performance of trimodal fusion varied strongly across individuals. Furthermore, we observed extremely high disparity between single-class recognition rates, and no single modality performed best across the experiment. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits the advantages of a parameterized decision process. Using the PDF scheme, we achieved a 16% improvement in accuracy for subject-dependent recognition and 10% for subject-independent recognition compared with the best unimodal results.
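In the spirit of the PDF idea, the sketch below trains one probabilistic classifier per modality and fuses their class probabilities with emotion-specific weights. The exact parametrization of the authors' decision process is not given in the abstract, so the weight matrix, classifiers, and data here are placeholders.

```python
# Decision-level fusion of per-modality classifiers with emotion-specific weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, n_classes = 200, 4
X_speech = rng.normal(size=(n, 12))
X_face = rng.normal(size=(n, 10))
X_physio = rng.normal(size=(n, 8))
y = rng.integers(0, n_classes, size=n)

# One probabilistic classifier per modality
clfs = {m: LogisticRegression(max_iter=1000).fit(X, y)
        for m, X in [("speech", X_speech), ("face", X_face), ("physio", X_physio)]}

# Emotion-specific fusion weights, shape (n_classes, n_modalities); arbitrary placeholders
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4],
              [0.4, 0.2, 0.4]])

def fuse(samples):
    """samples: dict with one feature vector per modality -> fused class prediction."""
    probs = np.vstack([clfs[m].predict_proba(samples[m].reshape(1, -1))[0]
                       for m in ("speech", "face", "physio")])   # (modalities, classes)
    fused = (W * probs.T).sum(axis=1)                            # weight each (class, modality) pair
    return int(np.argmax(fused))

print(fuse({"speech": X_speech[0], "face": X_face[0], "physio": X_physio[0]}))
```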

Investigating the Impact of Discrete Emotions Using Transfer Learning Models for Emotion Analysis: A Case Study of TripAdvisor Reviews

  • Dahee Lee; Jong Woo Kim
    • Asia pacific journal of information systems / Vol.34 No.2 / pp.372-399 / 2024
  • Online reviews play a significant role in consumer purchase decisions on e-commerce platforms. To address information overload in the context of online reviews, factors that drive review helpfulness have received considerable attention from scholars and practitioners. The purpose of this study is to explore the differential effects of discrete emotions (anger, disgust, fear, joy, sadness, and surprise) on perceived review helpfulness, drawing on cognitive appraisal theory of emotion and expectation-confirmation theory. Emotions embedded in 56,157 hotel reviews collected from TripAdvisor.com were extracted based on a transfer learning model to measure emotion variables as an alternative to dictionary-based methods adopted in previous research. We found that anger and fear have positive impacts on review helpfulness, while disgust and joy exert negative impacts. Moreover, hotel star-classification significantly moderates the relationships between several emotions (disgust, fear, and joy) and perceived review helpfulness. Our results extend the understanding of review assessment and have managerial implications for hotel managers and e-commerce vendors.
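The two steps of this study can be sketched as (1) scoring discrete emotions in review text with a pretrained transformer and (2) regressing helpfulness on those scores with a star-classification interaction. The model named below is a publicly available emotion classifier chosen only for illustration, not necessarily the one the authors used, and the reviews, star ratings, vote counts, and interaction term are invented.

```python
# Transfer-learning emotion scoring of reviews, then a simple moderated regression.
import pandas as pd
from transformers import pipeline
from sklearn.linear_model import LinearRegression

# Illustrative pretrained emotion classifier (assumption: not the authors' model).
emotion_scorer = pipeline("text-classification",
                          model="j-hartmann/emotion-english-distilroberta-base",
                          top_k=None)

reviews = ["The room was filthy and the staff ignored us.",
           "Lovely stay, friendly staff, great breakfast.",
           "I was scared the lock on our door would not hold.",
           "Average hotel, nothing special, slightly overpriced."]
outputs = emotion_scorer(reviews)                       # per review: list of {label, score}
scores = [{d["label"].lower(): d["score"] for d in out} for out in outputs]

df = pd.DataFrame(scores)
df["stars"] = [3, 5, 2, 4]                              # hotel star classification (moderator)
df["helpful_votes"] = [12, 2, 9, 4]                     # toy outcome

# Moderated regression: discrete emotions, stars, and an illustrative fear x stars interaction
X = df[["anger", "disgust", "fear", "joy", "sadness", "surprise", "stars"]].copy()
X["fear_x_stars"] = X["fear"] * X["stars"]
model = LinearRegression().fit(X, df["helpful_votes"])
print(dict(zip(X.columns, model.coef_.round(3))))
```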