• Title/Summary/Keyword: emotion engineering

Search results: 789 items (processing time: 0.029 s)

Emotional Robotics based on iT_Media

  • Yoon, Joong-Sun; Yoh, Myeung-Sook; Cho, Bong-Kug
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004 ICCAS / pp.804-809 / 2004
  • Intelligence is thought to arise from interaction rather than from deep but passive thinking. Interactive tangible media, "iT_Media," is proposed to explore these issues, and personal robotics is a major area in which to investigate them. A new design methodology for personal and emotional robotics is proposed. The sciences of the artificial and of intelligence are surveyed: a short history of artificial intelligence is presented in terms of logic, heuristics, and mobility; a science of intelligence is presented in terms of imitation and understanding; and intelligence issues for robotics and intelligence measures are described. A design methodology for personal robots based on the science of emotion is investigated across three aspects of design: visceral, behavioral, and reflective. We also discuss affect and emotion in robots, robots that sense emotion, robots that induce emotion in people, and the implications and ethical issues of emotional robots. Personal robotics for the elderly appears to be a major area in which to explore these ideas.


운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식 (Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model)

  • 곽현석; 김수현; 곽윤근
    • Korean Society for Noise and Vibration Engineering: Conference Proceedings / Proceedings of the KSNVE 2002 Autumn Conference / pp.762-766 / 2002
  • This paper describes an emotion recognition algorithm using the HMM (Hidden Markov Model) method. The relation between machines and humans has so far been unilateral, which is why people are reluctant to become familiar with today's multi-service robots. If the capability of emotion recognition is granted to a robot system, the conception of the machine will change considerably. Pitch and energy extracted from human speech are important factors for classifying emotions (neutral, happy, sad, angry, etc.); these are called prosodic features. Among several methods, HMM is a powerful and effective way to construct a statistical model from feature vectors made up of a mixture of prosodic features.
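The classification idea above (a statistical model per emotion over prosodic feature vectors, with the most likely model winning) can be sketched with a single Gaussian per class, i.e. a one-component "mixture." This is a simplified illustration, not the paper's model; the feature values and class names are hypothetical.

```python
import numpy as np

def fit_gaussian(features):
    """Fit mean and diagonal variance for one emotion class."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6  # avoid zero variance
    return mu, var

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-likelihood of one feature vector."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def classify(x, models):
    """Pick the emotion whose model gives the highest likelihood."""
    return max(models, key=lambda emo: log_likelihood(x, *models[emo]))

# Hypothetical 2-D prosodic features: [mean pitch (Hz), mean energy]
rng = np.random.default_rng(0)
train = {
    "neutral": rng.normal([120, 0.3], [10, 0.05], size=(50, 2)),
    "angry":   rng.normal([220, 0.8], [15, 0.05], size=(50, 2)),
}
models = {emo: fit_gaussian(f) for emo, f in train.items()}
print(classify(np.array([210.0, 0.75]), models))  # near the "angry" cluster
```

A full GMM would replace `fit_gaussian` with an EM fit over several components per class; the decision rule stays the same.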


음성의 특정 주파수 범위를 이용한 잡음환경에서의 감정인식 (Noise Robust Emotion Recognition Feature: Frequency Range of Meaningful Signal)

  • 김은호; 현경학; 곽윤근
    • Journal of the Korean Society for Precision Engineering / Vol. 23, No. 5 / pp.68-76 / 2006
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction; hence this paper describes the realization of emotion recognition. For emotion recognition from voice, we propose a new feature called the frequency range of meaningful signal. With this feature, we reached an average recognition rate of 76% in the speaker-dependent case. The experimental results confirm the usefulness of the proposed feature. We also define the noise environment and conduct a noise-environment test; in contrast to other features, the proposed feature remains robust under noise.
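A feature restricted to a specific frequency range is, at its core, band-limited spectral energy. As a hedged illustration (the paper's actual band limits are not reproduced here), band energy from an FFT might be computed like this:

```python
import numpy as np

def band_energy(signal, sample_rate, f_lo, f_hi):
    """Energy of the signal restricted to [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

# A pure 300 Hz tone: almost all energy falls in a band around 300 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 300 * t)
in_band = band_energy(tone, sr, 250, 350)
total = band_energy(tone, sr, 0, sr / 2)
print(in_band / total)
```

Restricting the energy computation to a band where the speech signal is meaningful is what gives such a feature its robustness: broadband noise outside the band simply never enters the sum.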

모의 지능로봇에서 음성신호에 의한 감정인식 (Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot)

  • 장광동; 권오욱
    • Korean Society of Phonetics: Conference Proceedings / Proceedings of the 2005 Autumn Conference / pp.163-166 / 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian-kernel support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy, while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to a human.
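The feature-extraction step above (statistics over frame-level phonetic and prosodic measurements, summarized into one fixed-size vector per utterance) can be sketched as follows. The particular statistics and the sample values are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def utterance_features(pitch_track, energy_track):
    """Summarize frame-level pitch/energy tracks as a fixed-size
    statistics vector suitable for an SVM-style classifier."""
    feats = []
    for track in (pitch_track, energy_track):
        track = np.asarray(track, dtype=float)
        feats += [track.mean(), track.std(), track.min(), track.max()]
    return np.array(feats)

# Hypothetical 5-frame tracks for one utterance
pitch = [180, 190, 210, 205, 195]     # Hz
energy = [0.2, 0.5, 0.9, 0.7, 0.4]    # normalized
vec = utterance_features(pitch, energy)
print(vec.shape)  # (8,): 4 statistics x 2 tracks
```

Whatever the frame-level measurements (jitter, shimmer, Teager energy, ...), collapsing them into per-utterance statistics is what lets a fixed-input classifier such as an SVM handle utterances of different lengths.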


실시간 Social Distance 추적에 의한 감성 상호작용 인식 기술 개발 (Recognition of Emotion Interaction by Measuring Social Distance in Real Time)

  • 이현우; 우진철; 조아영; 조영호; 황민철
    • Science of Emotion and Sensibility / Vol. 20, No. 3 / pp.89-96 / 2017
  • This study developed a technique for recognizing emotional interaction from social distance measured by beacon-based wearable devices. The recognized interactions were evaluated against cardiovascular synchrony estimated from photoplethysmogram (PPG) signals. An interaction was recognized when the social distance was maintained for longer than a threshold duration, and cardiovascular synchrony was estimated by correlation analysis between the beats-per-minute (BPM) values computed from the PPG. To determine the minimum duration for recognizing a valid interaction, a Mann-Whitney U test was performed on cardiovascular synchrony between interaction partners and non-partners. Fifteen groups (two participants each) took part in the experiment and were asked to wear the beacon and PPG wearable devices in daily life. The results showed that the interaction partners recognized by the proposed method exhibited higher cardiovascular synchrony, and the valid interaction duration was determined to be 11 seconds, at which the difference was statistically significant (p = .045). These results raise the possibility of real-time measurement and evaluation of social networks in physical space.
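The two measurements in the abstract above, BPM correlation as a synchrony estimate and a minimum-duration rule on social distance, can be sketched as follows. The distance threshold and sampling interval are illustrative assumptions; the 11-second duration is the paper's reported criterion.

```python
import numpy as np

def synchrony(bpm_a, bpm_b):
    """Pearson correlation between two equal-length BPM time series."""
    return float(np.corrcoef(bpm_a, bpm_b)[0, 1])

def is_interaction(distance_m, threshold_m=2.0, min_duration_s=11, dt_s=1.0):
    """True if social distance stays under threshold_m for at least
    min_duration_s (the 11 s criterion); dt_s is the sampling interval
    of the distance track in seconds."""
    run = 0.0
    for d in distance_m:
        run = run + dt_s if d < threshold_m else 0.0
        if run >= min_duration_s:
            return True
    return False

bpm_a = [72, 74, 75, 77, 76, 78]
bpm_b = [70, 73, 74, 76, 75, 77]   # moves together with bpm_a
print(round(synchrony(bpm_a, bpm_b), 2))
print(is_interaction([1.2] * 12))  # 12 s under 2 m
```

The study's evaluation then compares synchrony values between recognized partners and non-partners with a Mann-Whitney U test, which the sketch does not reproduce.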

Emotion Recognition Based on Frequency Analysis of Speech Signal

  • Sim, Kwee-Bo; Park, Chang-Hyun; Lee, Dong-Wook; Joo, Young-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 2, No. 2 / pp.122-126 / 2002
  • In this study, we identify features of three emotions (happiness, anger, surprise) as fundamental research toward emotion recognition. Emotional speech carries several elements: voice quality, pitch, formants, speech rate, and so on. Until now, most researchers have used changes in pitch, the short-time average power envelope, or mel-based speech power coefficients. Pitch is an efficient and informative feature, so we use it in this study. Because pitch is very sensitive to delicate emotions, it changes readily whenever a speaker is in a different emotional state; we can therefore observe whether the pitch changes steeply, changes with a gentle slope, or does not change at all. This paper also extracts formant features from emotional speech. For each vowel, the formants occupy similar positions without large differences. Based on this fact, in the case of pleasure we extract features of laughter, which makes separating laughing segments easier; likewise, we find features for anger and surprise.
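Pitch, the feature the abstract leans on most, is commonly estimated from the autocorrelation of a short frame. The following is a generic textbook sketch of that idea, not the authors' extractor; the frame length and search range are conventional assumptions.

```python
import numpy as np

def pitch_autocorr(frame, sample_rate, f_min=80, f_max=400):
    """Estimate pitch (Hz) as the autocorrelation peak within the
    plausible speech range [f_min, f_max]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / f_max)          # smallest lag to consider
    hi = int(sample_rate / f_min)          # largest lag to consider
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / lag

sr = 8000
t = np.arange(int(0.04 * sr)) / sr        # one 40 ms frame
frame = np.sin(2 * np.pi * 200 * t)       # synthetic 200 Hz "voiced" frame
print(round(pitch_autocorr(frame, sr)))
```

Tracking this estimate frame by frame yields the pitch contour whose slope (steep, gentle, or flat) the study uses to separate the emotions.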

Gesture-Based Emotion Recognition by 3D-CNN and LSTM with Keyframes Selection

  • Ly, Son Thai; Lee, Guee-Sang; Kim, Soo-Hyung; Yang, Hyung-Jeong
    • International Journal of Contents / Vol. 15, No. 4 / pp.59-64 / 2019
  • In recent years, emotion recognition has been an interesting and challenging topic. Compared to facial expressions and the speech modality, gesture-based emotion recognition has received little attention, with only a few efforts using traditional hand-crafted methods. These approaches incur major computational costs and offer few opportunities for improvement, as most of the research community now builds on deep learning techniques. In this paper, we propose an end-to-end deep learning approach for classifying emotions based on bodily gestures. In particular, informative keyframes are first extracted from raw videos as input for a 3D-CNN deep network. The 3D-CNN exploits the short-term spatiotemporal information of gesture features from the selected keyframes, and convolutional LSTM networks learn long-term features from the 3D-CNN outputs. The experimental results on the FABO dataset exceed those of most traditional methods and achieve state-of-the-art results among deep learning-based techniques for gesture-based emotion recognition.
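The keyframe-selection step (picking informative frames before the 3D-CNN) can be sketched by scoring each frame by how much it differs from its predecessor. This simple motion-energy criterion is an assumption for illustration, not necessarily the paper's exact selection method.

```python
import numpy as np

def select_keyframes(video, k):
    """video: (T, H, W) array of grayscale frames.
    Return indices of the k frames with the largest mean absolute
    difference from the previous frame, in temporal order."""
    diffs = np.abs(np.diff(video.astype(float), axis=0)).mean(axis=(1, 2))
    # diffs[i] scores frame i+1; frame 0 has no predecessor
    top = np.argsort(diffs)[-k:] + 1
    return np.sort(top)

# Toy clip: 8 static frames with a sudden change at frame 3
video = np.zeros((8, 4, 4))
video[3] = 1.0
print(select_keyframes(video, 2))
```

In the real pipeline the selected keyframes would then be stacked into a short clip tensor and fed to the 3D convolutions.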

뇌파 스펙트럼 분석과 베이지안 접근법을 이용한 정서 분류 (Emotion Classification Using EEG Spectrum Analysis and Bayesian Approach)

  • 정성엽; 윤현중
    • Journal of the Society of Korea Industrial and Systems Engineering / Vol. 37, No. 1 / pp.1-8 / 2014
  • This paper proposes an emotion classifier for EEG signals based on Bayes' theorem and machine learning with the perceptron convergence algorithm. Emotions are represented on the valence and arousal dimensions. Fast Fourier transform spectrum analysis is used to extract features from the EEG signals. To verify the proposed method, we use the open Database for Emotion Analysis using Physiological signals (DEAP) and compare it with C-SVC, a variant of the support vector machine. Emotion is defined as a two-level class and a three-level class in both the valence and arousal dimensions. For the two-level case, the accuracy of valence and arousal estimation is 67% and 66%, respectively; for the three-level case, 53% and 51%. Compared with the best case of C-SVC, the proposed classifier gave 4% and 8% more accurate estimations of valence and arousal for the two-level class, and showed performance similar to the best case of C-SVC for the three-level class.
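The FFT spectrum-analysis step can be sketched as extracting standard EEG band powers from one signal window. The band boundaries and sampling rate below are conventional assumptions, not necessarily the paper's exact settings.

```python
import numpy as np

EEG_BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, sample_rate):
    """Return spectral power in each EEG band for one signal window."""
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sample_rate)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in EEG_BANDS.items()}

# Synthetic 1 s window dominated by a 10 Hz (alpha) oscillation
sr = 128
t = np.arange(sr) / sr
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)
powers = band_powers(eeg, sr)
print(max(powers, key=powers.get))  # the alpha band dominates
```

Vectors of such band powers (per channel) are the kind of feature a Bayesian classifier or C-SVC would then be trained on.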

여성복 디테일 종류에 따른 감성과 상대적 영향력 (Effects of Design Detail Types of Ladies Wear on Sensibility and Emotion)

  • 정경용; 나영주
    • Journal of the Korean Society for Clothing Industry / Vol. 7, No. 2 / pp.162-168 / 2005
  • Pictures of design details, such as collar type, sleeve type, skirt type, skirt length, and color tone, were evaluated by 377 people in terms of sensibility and emotion. The data were analyzed in SPSS using ANOVA and factor analysis to find the detail types most effective on consumers' sensibility and emotion. Skirt length had the strongest effect on the sensibility and emotion of women's dress; the second-strongest detail differed by concept. Sensibility and emotion comprised three concepts: contemporary, mature, and character. Sleeve type was the second determinant for the contemporary concept, color tone for the mature concept, and collar type for the character concept. Each of 41 design details was positioned in a 3D concept space to connect detail types with the fashion concepts of women's dress.