• Title/Abstract/Keyword: emotion engineering


감정변화에 따른 음성정보 분석에 관한 연구 (Study of Emotion in Speech)

  • 장인창;박미경;김태수;박면웅
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 2004년도 추계학술대회 논문집
    • /
    • pp.1123-1126
    • /
    • 2004
  • Recognizing emotion in speech requires a large spoken-language corpus covering not only the different emotional states but also individual languages. In this paper, we focus on how speech signals change across emotions. We compared speech features such as formants and pitch across four emotions (normal, happiness, sadness, anger). In Korean, the pitch of monophthongs changed with each emotion. We therefore suggest analysis techniques that use these features to recognize emotions in Korean.

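To make the analysis concrete, here is a minimal sketch, not the authors' code, of the kind of comparison the entry above describes: estimating pitch (F0) and rough formants of a recorded monophthong for each emotion, assuming librosa is available. The file names and recordings are illustrative assumptions.

```python
import numpy as np
import librosa

EMOTIONS = ["normal", "happiness", "sadness", "anger"]

def mean_pitch(path, sr=16000):
    """Estimate the mean F0 (Hz) of a recorded vowel with pYIN."""
    y, sr = librosa.load(path, sr=sr)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"), sr=sr)
    return float(np.nanmean(f0[voiced]))

def first_formants(path, sr=16000, order=12, n_formants=2):
    """Rough F1/F2 estimate from the roots of an LPC polynomial."""
    y, sr = librosa.load(path, sr=sr)
    a = librosa.lpc(y, order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return freqs[:n_formants]

for emo in EMOTIONS:
    path = f"vowel_a_{emo}.wav"   # hypothetical recordings of the same vowel
    print(emo, mean_pitch(path), first_formants(path))
```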

Half-Against-Half Multi-class SVM Classify Physiological Response-based Emotion Recognition

  • ;고광은;박승민;심귀보
    • 한국지능시스템학회논문지
    • /
    • Vol. 23 No. 3
    • /
    • pp.262-267
    • /
    • 2013
  • The recognition of human emotional states is one of the most important components of efficient human-human and human-computer interaction. In this paper, we address the problem of classifying four emotions (fear, disgust, joy, and neutral); visual stimuli were used to elicit the emotions, and the experiment was designed around the physiological signals of skin conductance (SC), skin temperature (SKT), and blood volume pulse (BVP). To solve this problem, a half-against-half (HAH) multi-class support vector machine (SVM) with a Gaussian radial basis function (RBF) kernel is proposed as an effective technique for improving the accuracy of emotion classification. The experimental results show that the proposed method handles the emotion recognition problem efficiently, with accuracy rates of 90% for neutral, 86.67% for joy, 85% for disgust, and 80% for fear.
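
The half-against-half decomposition can be pictured as a small binary tree of SVMs: each node splits the remaining classes into two halves and trains one binary RBF-kernel classifier to separate them. The sketch below, assuming scikit-learn and placeholder feature/label arrays, illustrates the scheme; it is not the paper's implementation, and how classes are paired into halves is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

class HAHNode:
    """One node of a half-against-half tree of binary SVMs."""
    def __init__(self, classes):
        self.classes = list(classes)
        if len(self.classes) > 1:
            mid = len(self.classes) // 2
            self.left = HAHNode(self.classes[:mid])
            self.right = HAHNode(self.classes[mid:])
            self.svm = SVC(kernel="rbf", gamma="scale")  # Gaussian RBF kernel
        else:                                            # leaf: single class
            self.left = self.right = self.svm = None

    def fit(self, X, y):
        """X: (n, d) feature array; y: (n,) array of class labels."""
        if self.svm is None:
            return self
        side = np.isin(y, self.right.classes).astype(int)  # 0=left, 1=right
        self.svm.fit(X, side)
        for child in (self.left, self.right):
            mask = np.isin(y, child.classes)
            child.fit(X[mask], y[mask])
        return self

    def predict_one(self, x):
        """Descend the tree until a single class remains."""
        if self.svm is None:
            return self.classes[0]
        child = self.right if self.svm.predict(x[None, :])[0] else self.left
        return child.predict_one(x)

# Hypothetical usage with SC/SKT/BVP feature vectors:
# tree = HAHNode(["fear", "disgust", "joy", "neutral"]).fit(X_train, y_train)
# label = tree.predict_one(X_test[0])
```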

Behavior Decision Model Based on Emotion and Dynamic Personality

  • Yu, Chan-Woo;Choi, Jin-Young
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2005년도 ICCAS
    • /
    • pp.101-106
    • /
    • 2005
  • In this paper, we propose a behavior decision model for a robot based on artificial emotion, various motivations, and a dynamic personality. Our goal is a robot that can express its emotions in a human-like way. To achieve this, we applied several theories of emotion and personality from psychology. In particular, we introduced the concept of a dynamic personality model for a robot. Drawing on this concept, we built a behavior decision model in which the robot's emotion expression adapts to various environments through interactions between the human and the robot.

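The abstract above does not give the model's equations, so the following is only a speculative sketch of the general idea: personality parameters modulate how an emotion state reacts to stimuli, and the personality itself drifts slowly with interaction history, so the same stimulus is expressed differently over time. All names, constants, and update rules here are illustrative assumptions.

```python
import numpy as np

class EmotionalAgent:
    """Toy agent whose emotion dynamics are shaped by a mutable personality."""
    def __init__(self, emotions=("joy", "sadness", "anger")):
        self.emotions = emotions
        self.state = np.zeros(len(emotions))        # current emotion intensities
        self.sensitivity = np.ones(len(emotions))   # personality: reactivity
        self.inertia = 0.9                          # personality: mood persistence

    def perceive(self, stimulus):
        """stimulus: per-emotion impact vector from an interaction."""
        self.state = self.inertia * self.state + self.sensitivity * stimulus
        # dynamic personality: repeated stimulation slowly shifts reactivity
        self.sensitivity = np.clip(self.sensitivity + 0.01 * np.sign(stimulus),
                                   0.5, 2.0)

    def decide_behavior(self):
        """Express the currently dominant emotion."""
        return self.emotions[int(np.argmax(self.state))]
```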

정서특정적 생리의 탐색을 모색하는 감성공학의 패러다임과 실천방법 (Sensory Engineering Model in Search of Emotion-Specific Physiology -An Introduction and Proposal)

  • 우제린
    • 감성과학
    • /
    • Vol. 4 No. 2
    • /
    • pp.1-13
    • /
    • 2001
  • Emotion-Specific Physiology may still remain an elusive entity, even to many of its proponents and seekers, but an ever-growing body of experimental evidence offers much brighter prospects for future research in that direction. Once such Emotion-Physiology pairs are identified, there is high hope that some Sense-Friendly Features that are causally related, or highly correlated, to each pair may be identifiable in nature or in man-made objects. On the premise that certain emotions, if and when engendered by a consumer good, may be conducive to an urge “to own or to identify oneself with the product”, presented here is a model of Sensory Engineering oriented toward identifying the Emotion-Specific Physiology so that the corresponding Sense-Friendly Features can be reproduced in product designs. Relevant and complementary concepts and some suggested procedures for implementing the proposed model are offered.


A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 5 No. 1
    • /
    • pp.1-6
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed with a multi-resolution analysis based on the discrete wavelet transform, from which we obtain feature vectors through ICA (Independent Component Analysis). Emotion recognition from the speech signal, on the other hand, runs the recognition algorithm independently for each wavelet subband, and the final result is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
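
A minimal sketch of the facial half of this pipeline, assuming PyWavelets and scikit-learn: multi-resolution wavelet decomposition of face images followed by ICA to obtain feature vectors, plus a placeholder decision-level merge of the two modalities. The wavelet family, the choice of subband, and the fusion weight are assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_ica_features(images, level=2, n_components=20):
    """images: (n_samples, H, W) grayscale faces -> (n_samples, n_components)."""
    rows = []
    for img in images:
        coeffs = pywt.wavedec2(img, "db4", level=level)  # multi-resolution analysis
        rows.append(coeffs[0].ravel())                   # low-frequency subband
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(np.asarray(rows)), ica

def fuse_decisions(p_face, p_speech, w=0.5):
    """Decision-level merge of per-class probabilities from the two modalities."""
    return w * p_face + (1 - w) * p_speech               # argmax -> final emotion
```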

색상 기반 회화 감성 추출 방법에 관한 연구 (A Study on Method for Extracting Emotion from Painting Based on Color)

  • 심현오;박성주;윤경현
    • 한국멀티미디어학회논문지
    • /
    • Vol. 19 No. 4
    • /
    • pp.717-724
    • /
    • 2016
  • Paintings can evoke emotions in viewers. In this paper, we propose a method for extracting emotion from a painting by using the colors that compose it. To this end, we generate a color spectrum from the input painting and compare it against predefined color combinations to find the most similar one. The matched color combinations are mapped to emotional keywords, so we extract an emotional keyword as the emotion evoked by the painting. We also vary the algorithm used to match color spectra to color combinations and extract and compare the results of each variant.
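
The matching step might look like the following sketch: a coarse dominant-color "spectrum" is computed from the painting and compared with predefined color combinations, each mapped to an emotion keyword. The palette-to-emotion table below is an illustrative assumption, not the paper's data, and the nearest-color distance is one of several plausible matching algorithms.

```python
import numpy as np
from PIL import Image

EMOTION_PALETTES = {                  # RGB color combinations (hypothetical)
    "calm":     np.array([[70, 110, 180], [170, 200, 220], [230, 235, 240]]),
    "passion":  np.array([[190, 30, 45], [230, 120, 30], [120, 20, 60]]),
    "cheerful": np.array([[250, 210, 50], [240, 130, 170], [130, 200, 90]]),
}

def color_spectrum(path, k=3):
    """Top-k dominant colors from a coarsely quantized RGB histogram."""
    px = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    q = (px // 32) * 32 + 16                       # 8 bins per channel
    colors, counts = np.unique(q, axis=0, return_counts=True)
    return colors[np.argsort(counts)[::-1][:k]].astype(float)

def extract_emotion(path):
    """Return the keyword of the palette nearest the painting's spectrum."""
    spectrum = color_spectrum(path)
    def dist(palette):                             # nearest-color matching
        return sum(np.linalg.norm(palette - c, axis=1).min() for c in spectrum)
    return min(EMOTION_PALETTES, key=lambda e: dist(EMOTION_PALETTES[e]))
```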

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • Vol. 19 No. 3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional convolutional neural network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, an examination of the distribution of recognition accuracies shows that the 2D-CNN with MFCC can be expected to reach an overall accuracy of 75% or more.
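
A minimal sketch of this pipeline, assuming librosa and Keras: MFCC matrices are treated as single-channel images and fed to a small 2D-CNN with a five-way softmax. The layer sizes, input shape, and clip length are illustrative, not the tuned model from the paper.

```python
import numpy as np
import librosa
from tensorflow import keras

def mfcc_image(path, n_mfcc=40, frames=128):
    """Load a clip and return an MFCC matrix shaped like a 1-channel image."""
    y, sr = librosa.load(path, sr=22050)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)  # pad/trim time axis
    return m[..., np.newaxis]                            # (n_mfcc, frames, 1)

model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(40, 128, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),  # anger/happiness/calm/fear/sadness
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```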

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13 No. 2
    • /
    • pp.810-831
    • /
    • 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in recognition decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features. Finally, extensive experiments evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The results show that ICDKCFA is faster than the original kernel cross-modal factor analysis while achieving comparable performance. Moreover, ICDKCFA outperforms other common information fusion methods, such as fusion based on canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis.
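
The efficiency gain rests on incomplete Cholesky decomposition, which approximates the kernel (Gram) matrix K ≈ GGᵀ from a few greedily chosen pivot columns, so the full kernel never has to be formed. Below is a minimal numpy sketch of that building block only, not of the full ICDKCFA method; the RBF kernel and rank are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def incomplete_cholesky(X, rank, gamma=1.0):
    """Pivoted incomplete Cholesky: K ~= G @ G.T using `rank` columns."""
    n = X.shape[0]
    G = np.zeros((n, rank))
    d = np.ones(n)                         # RBF kernel has a unit diagonal
    for j in range(rank):
        i = int(np.argmax(d))              # greedy pivot: largest residual
        col = rbf(X, X[i:i + 1], gamma)[:, 0] - G[:, :j] @ G[i, :j]
        G[:, j] = col / np.sqrt(d[i])
        d -= G[:, j] ** 2                  # update residual diagonal
    return G

# X = np.random.randn(300, 8)
# G = incomplete_cholesky(X, rank=40, gamma=0.1)
# err = np.abs(rbf(X, X, 0.1) - G @ G.T).max()   # shrinks as rank grows
```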

초등학교 저학년 학생을 위한 감성과학 기반 융합인재교육(STEAM) 프로그램 개발 (Development of STEAM Program Based on Emotion Science for Students of Early Elementary School)

  • 권지은;곽소정;김혜진;이세정
    • 감성과학
    • /
    • Vol. 20 No. 4
    • /
    • pp.79-88
    • /
    • 2017
  • As the era that prizes emotion advances, education related to emotion science is needed to cultivate the talent the future will require. This paper develops a STEAM education program through which lower-grade elementary school students can learn emotion science, applies it in class, and proposes its possibilities and effective methods. To this end, first, we adopted the STEAM approach, well suited to teaching emotion science, and developed 'Minds Made of Shapes' (도형으로 만드는 마음) for first and second graders. We surveyed the theoretical background of emotion-science-related STEAM, benchmarked existing programs, and developed concrete lesson content, activities, teaching materials, and kits based on the textbooks for those grades. Second, the program was piloted in two classes, and the results were analyzed through satisfaction surveys and teacher interviews. The average overall satisfaction was very high (4.40/5), and satisfaction with 'class participation' was especially high. Third, based on these results, we discuss the possibilities, value, and limitations of the developed program. The outcome of this study is an education program that gives lower-grade elementary students, for whom a direct understanding of science is difficult, an easy and engaging approach to emotion science. We expect such education to help students understand emotion science effectively and to help cultivate talent that will lead an emotion-centered era.

A Survey on Image Emotion Recognition

  • Zhao, Guangzhe;Yang, Hanting;Tu, Bing;Zhang, Lei
    • Journal of Information Processing Systems
    • /
    • Vol. 17 No. 6
    • /
    • pp.1138-1156
    • /
    • 2021
  • Emotional semantics are the highest level of semantics that can be extracted from an image. Building a system that automatically recognizes the emotional semantics of images would be significant for marketing, smart healthcare, and deep human-computer interaction. To clarify the direction of image emotion recognition and its general research methods, we summarize current development trends and shed light on potential future research. The primary contributions of this paper are as follows. We investigate the color, texture, shape, and contour features used to extract emotional semantics. We establish two models that map images into emotional space and introduce in detail the various processes in the image emotional semantic recognition framework. We also discuss important datasets and useful applications in the field, such as garment images and image retrieval. We conclude with a brief discussion of future research trends.
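
As an illustration of the dimensional model this survey describes, the sketch below regresses simple hand-crafted color features onto a continuous valence-arousal space; the survey's other, categorical model would assign discrete emotion labels instead. The feature choice, regressor, and file names are assumptions for illustration only.

```python
import numpy as np
from PIL import Image
from sklearn.linear_model import Ridge

def color_features(path):
    """Mean and std of H, S, V channels: a classic 6-D color descriptor."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    return np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])

# Hypothetical training data: X is (n_images, 6) features and Y is
# (n_images, 2) valence-arousal annotations from some labeled dataset.
# model = Ridge(alpha=1.0).fit(X, Y)
# valence, arousal = model.predict(color_features("painting.jpg")[None, :])[0]
```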