• Title/Abstract/Keyword: Vector emotion

Search results: 106 items (processing time: 0.025 s)

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2004년도 춘계학술대회 학술발표 논문집 제14권 제1호 / pp.470-473 / 2004
  • In this paper, speech feature parameters are compared for emotion recognition from the speech signal. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion by subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters were used as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector quantization based emotion recognition system was used for speaker- and context-independent emotion recognition (a minimal sketch of such a classifier follows this entry). Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.

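As a rough illustration of the vector-quantization approach described above, the sketch below trains one k-means codebook per emotion and labels an utterance with the emotion whose codebook yields the lowest average quantization distortion. The feature extraction, codebook size, and emotion labels are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np
from sklearn.cluster import KMeans

# A minimal VQ-based emotion recognizer: one codebook per emotion,
# classification by minimum average quantization distortion.
class VQEmotionRecognizer:
    def __init__(self, codebook_size=16):
        self.codebook_size = codebook_size
        self.codebooks = {}  # emotion label -> (codebook_size, dim) centroids

    def fit(self, utterances_by_emotion):
        """utterances_by_emotion: dict mapping emotion -> list of
        (n_frames, dim) MFCC feature arrays."""
        for emotion, utterances in utterances_by_emotion.items():
            frames = np.vstack(utterances)  # pool all training frames
            km = KMeans(n_clusters=self.codebook_size, n_init=10).fit(frames)
            self.codebooks[emotion] = km.cluster_centers_

    def predict(self, utterance):
        """utterance: (n_frames, dim) MFCC array -> best-matching emotion."""
        def distortion(codebook):
            # distance of each frame to its nearest codeword, averaged
            d = np.linalg.norm(utterance[:, None, :] - codebook[None, :, :], axis=2)
            return d.min(axis=1).mean()
        return min(self.codebooks, key=lambda e: distortion(self.codebooks[e]))
```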

음성을 이용한 화자 및 문장독립 감정인식 (Speaker and Context Independent Emotion Recognition using Speech Signal)

  • 강면구;김원구
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(4) / pp.377-380 / 2002
  • In this paper, speaker- and context-independent emotion recognition using the speech signal is studied. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion by subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy (a sketch of such utterance-level statistics follows this entry), and to evaluate the performance of conventional pattern matching algorithms. A vector quantization based emotion recognition system is proposed for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters.

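The statistical feature vector mentioned here can be illustrated with a short sketch: given frame-level pitch and energy contours, it stacks the mean, standard deviation, and maximum of each contour into one fixed-length vector per utterance. The contour extraction itself is assumed to be done elsewhere (e.g., by an autocorrelation pitch tracker); only the statistics step is shown.

```python
import numpy as np

def utterance_feature_vector(pitch, energy):
    """Build a fixed-length statistical feature vector from frame-level
    pitch and energy contours, as described in the abstract: average,
    standard deviation, and maximum of each contour.

    pitch, energy: 1-D arrays of per-frame values (unvoiced frames
    assumed already removed from `pitch`).
    """
    stats = lambda x: (np.mean(x), np.std(x), np.max(x))
    return np.array([*stats(pitch), *stats(energy)])  # shape: (6,)

# Example: a 100-frame utterance with synthetic contours.
rng = np.random.default_rng(0)
f0 = 180 + 20 * rng.standard_normal(100)   # pitch in Hz
e = rng.random(100)                        # normalized frame energy
print(utterance_feature_vector(f0, e))
```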

잡음 환경에서의 음성 감정 인식을 위한 특징 벡터 처리 (Feature Vector Processing for Speech Emotion Recognition in Noisy Environments)

  • 박정식;오영환
    • 말소리와 음성과학 / Vol. 2, No. 1 / pp.77-85 / 2010
  • This paper proposes an efficient feature vector processing technique to guard the speech emotion recognition (SER) system against a variety of noises. In the proposed approach, emotional feature vectors are extracted from speech processed by comb filtering; these vectors are then used to construct robust models via feature vector classification. We modify conventional comb filtering by using the speech presence probability to minimize the drawbacks of incorrect pitch estimation under background noise. The modified comb filtering correctly enhances the harmonics, which are an important cue for SER. The feature vector classification technique categorizes feature vectors as either discriminative or non-discriminative based on a log-likelihood criterion (a sketch of this selection rule follows this entry). This method successfully selects the discriminative vectors while preserving correct emotional characteristics, so robust emotion models can be constructed from the discriminative vectors alone. In SER experiments on an emotional speech corpus contaminated by various noises, our approach exhibited superior performance to the baseline system.

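One plausible reading of the log-likelihood criterion is sketched below: a frame-level feature vector is kept as discriminative if the gap between its best and second-best log-likelihood under per-emotion GMMs exceeds a threshold. The GMM setup and the threshold are assumptions for illustration; the paper's exact criterion may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_discriminative(frames, emotion_gmms, margin=1.0):
    """Keep only frames whose best emotion model beats the runner-up
    by at least `margin` in log-likelihood (a hypothetical reading of
    the paper's log-likelihood criterion).

    frames: (n_frames, dim) feature array
    emotion_gmms: dict emotion -> fitted GaussianMixture
    """
    # per-frame log-likelihood under each emotion model
    ll = np.column_stack([g.score_samples(frames) for g in emotion_gmms.values()])
    ll_sorted = np.sort(ll, axis=1)
    gap = ll_sorted[:, -1] - ll_sorted[:, -2]  # best minus second best
    return frames[gap >= margin]
```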

운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식 (Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model)

  • 곽현석;김수현;곽윤근
    • 한국소음진동공학회:학술대회논문집 / 한국소음진동공학회 2002년도 추계학술대회논문집 / pp.762-766 / 2002
  • This paper describes an emotion recognition algorithm using the HMM (Hidden Markov Model) method. The relation between mechanical systems and humans has so far been one-sided, which is one reason people are reluctant to become familiar with today's multi-service robots. If the capability of emotion recognition is granted to a robot system, the perception of the machine will change considerably. Pitch and energy extracted from human speech are good and important factors for classifying emotions (neutral, happy, sad, angry, etc.); these are called prosodic features. Among several methods, the HMM is a powerful and effective way to construct a statistical model over characteristic vectors built from a mixture of prosodic features (see the GMM-based sketch after this entry).

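The title names a Gaussian mixture model while the abstract speaks of HMMs; the sketch below takes the simpler GMM reading and fits one mixture per emotion over prosodic feature vectors, classifying by maximum log-likelihood. The mixture size and features are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One GMM per emotion over prosodic feature vectors (e.g., the
# pitch/energy statistics from the entries above); classification
# picks the emotion whose GMM assigns the highest log-likelihood.
def train_gmms(vectors_by_emotion, n_components=4):
    """vectors_by_emotion: dict emotion -> (n_samples, dim) array."""
    return {e: GaussianMixture(n_components=n_components).fit(v)
            for e, v in vectors_by_emotion.items()}

def classify(gmms, x):
    """x: one prosodic feature vector -> most likely emotion."""
    x = np.atleast_2d(x)
    return max(gmms, key=lambda e: gmms[e].score(x))
```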

음성신호기반의 감정분석을 위한 특징벡터 선택 (Discriminative Feature Vector Selection for Emotion Classification Based on Speech)

  • 최하나;변성우;이석필
    • 전기학회논문지 / Vol. 64, No. 9 / pp.1363-1368 / 2015
  • Recently, computers have become smaller thanks to advances in computing technology, and many wearable devices have appeared. Recognition of human emotion by computers has therefore become an important concern, and research on analyzing emotional states is increasing. The human voice carries much information about the speaker's emotion. This paper proposes a discriminative feature vector selection method for emotion classification based on speech. To this end, we extract feature vectors such as pitch, MFCC, LPC, and LPCC from voice signals divided into four emotion classes (happy, normal, sad, angry) and compare the separability of the extracted feature vectors using the Bhattacharyya distance (the distance computation is sketched after this entry). The more effective feature vectors are then recommended for emotion classification.
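
For two classes modeled as multivariate Gaussians, the Bhattacharyya distance used for this kind of separability comparison has a standard closed form; the sketch below computes it from per-class feature samples. This is the textbook Gaussian formula, not code from the paper.

```python
import numpy as np

def bhattacharyya_distance(x1, x2):
    """Bhattacharyya distance between two feature-vector sets, each
    modeled as a multivariate Gaussian (standard closed form):
        D_B = 1/8 (m1-m2)^T S^-1 (m1-m2)
              + 1/2 ln( det S / sqrt(det S1 * det S2) ),  S = (S1+S2)/2
    x1, x2: (n_samples, dim) arrays for the two emotion classes.
    """
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    s1, s2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    s = (s1 + s2) / 2
    dm = m1 - m2
    term1 = dm @ np.linalg.solve(s, dm) / 8
    _, logdet_s = np.linalg.slogdet(s)
    term2 = 0.5 * (logdet_s
                   - 0.5 * (np.linalg.slogdet(s1)[1] + np.linalg.slogdet(s2)[1]))
    return term1 + term2
```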

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea / Vol. 24, No. 4E / pp.144-151 / 2005
  • We evaluate the performance of emotion recognition via speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features (a sketch follows this entry). Using a speaker-independent database, we evaluate the performance of two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant compared to the performance of foreign human listeners.
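
A minimal sketch of the utterance-level-statistics plus SVM pipeline described above, using scikit-learn; the choice of statistics (per-dimension mean and standard deviation) and the RBF kernel settings are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_vector(frame_feats):
    """frame_feats: (n_frames, dim) array of frame-based features
    (pitch, energy, MFCCs, ...). Returns one fixed-length vector of
    per-dimension mean and standard deviation."""
    return np.concatenate([frame_feats.mean(axis=0), frame_feats.std(axis=0)])

def train_svm(utterances, labels):
    """utterances: list of per-utterance frame-feature arrays;
    labels: emotion label per utterance."""
    vecs = np.stack([utterance_vector(u) for u in utterances])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return clf.fit(vecs, labels)
```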

감정 분류를 위한 한국어 감정 자질 추출 기법과 감정 자질의 유용성 평가 (A Korean Emotion Features Extraction Method and Their Availability Evaluation for Sentiment Classification)

  • 황재원;고영중
    • 인지과학 / Vol. 19, No. 4 / pp.499-517 / 2008
  • This paper proposes and evaluates an effective method for extracting the emotion features that underlie Korean sentiment classification, and demonstrates their usefulness. Korean emotion feature extraction can start from representative emotion-bearing words and be expanded from them; the extracted emotion features play an important role in classifying the sentiment of a document. To extract the emotion features central to document sentiment classification, we expanded the feature set using synonym information from an English word thesaurus and then translated the expanded features with an English-Korean dictionary to obtain the Korean emotion features. To evaluate the extracted Korean emotion features, we classified the sentiment of input documents represented by these features using a support vector machine (SVM), a binary classification technique (a toy sketch of such feature-restricted classification follows this entry). Experimental results show that using the extracted emotion features improves performance by about 14.1% over the content-word-based features commonly used in information retrieval.

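To make the evaluation setup concrete, the sketch below restricts a bag-of-words representation to a given emotion-feature vocabulary and trains a linear SVM for binary sentiment classification. The tiny vocabulary and documents are invented placeholders, not the paper's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Hypothetical emotion-feature vocabulary (a stand-in for the features
# obtained via thesaurus expansion and dictionary translation).
emotion_features = ["기쁘다", "슬프다", "화나다", "좋다", "싫다"]

docs = ["오늘 정말 기쁘다 좋다", "너무 슬프다 싫다"]  # placeholder documents
labels = [1, 0]                                       # 1: positive, 0: negative

# Represent documents only by occurrences of the emotion features.
vec = CountVectorizer(vocabulary=emotion_features)
X = vec.fit_transform(docs)
clf = LinearSVC().fit(X, labels)
print(clf.predict(vec.transform(["기쁘다 좋다"])))
```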

KOBIE: 애완형 감성로봇 (KOBIE: A Pet-type Emotion Robot)

  • 류정우;박천수;김재홍;강상승;오진환;손주찬;조현규
    • 로봇학회논문지 / Vol. 3, No. 2 / pp.154-163 / 2008
  • This paper presents the concept behind the development of a pet-type robot with an emotion engine. The pet-type robot, named KOBIE (KOala roBot with Intelligent Emotion), can interact with a person through touch. KOBIE is equipped with tactile sensors on its body and recognizes touching behaviors such as "Stroke", "Tickle", and "Hit". We have covered KOBIE with synthetic fur fabric so that people can also feel affection for it. KOBIE can likewise express an emotional status that varies according to the circumstances. The emotion engine of KOBIE's emotion expression system generates an emotional status in an emotion vector space associated with predefined needs and mood models (a toy sketch of such a state update follows this entry). To examine the feasibility of our emotion expression system, we verified that the emotional status changes in the emotion vector space in response to a touching behavior. In particular, we examined the reactions of children who interacted with three kinds of pet-type robots, KOBIE, PARO, and AIBO, for roughly 10 minutes each, to investigate the children's preference among pet-type robots.

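The abstract gives no equations for the emotion engine, so the sketch below is purely a hypothetical illustration of an "emotion vector space": a state vector nudged by touch stimuli and decaying back toward a neutral origin. The stimulus weights, decay rate, and emotion axes are all invented for illustration.

```python
import numpy as np

# Hypothetical emotion vector space: each axis is an emotion intensity.
AXES = ["joy", "sadness", "anger"]
# Invented mapping from recognized touch behavior to emotion increments.
STIMULI = {
    "Stroke": np.array([0.3, -0.1, -0.2]),
    "Tickle": np.array([0.2,  0.0,  0.0]),
    "Hit":    np.array([-0.3, 0.2,  0.4]),
}

def update(state, stimulus, decay=0.9):
    """Decay toward the neutral origin, then add the stimulus effect;
    clip to keep each axis within [0, 1]."""
    return np.clip(decay * state + STIMULI[stimulus], 0.0, 1.0)

state = np.zeros(3)
for touch in ["Stroke", "Stroke", "Hit"]:
    state = update(state, touch)
print(dict(zip(AXES, state.round(2))))
```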

형판 벡터와 신경망을 이용한 감성인식 (Emotion Recognition Using Template Vector and Neural-Network)

  • 주영훈;오재흥
    • 한국지능시스템학회논문지 / Vol. 13, No. 6 / pp.710-715 / 2003
  • In this paper, we propose a new method for identifying a person and recognizing his or her emotion. The proposed method is based on locating facial templates using chrominance information and extracting template vectors. With a single chrominance space alone, it is difficult to extract skin-color regions accurately; to compensate, we extract skin-color regions using several chrominance spaces in combination (a sketch follows this entry) and, building on this, propose a method for extracting each template. For person identification and emotion recognition, the feature vectors of the extracted templates are learned with a neural network. Finally, the feasibility of the proposed method is demonstrated through experiments.
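
The multi-chrominance-space skin extraction mentioned above can be sketched as intersecting threshold masks computed in two color spaces; the sketch below combines YCrCb and HSV masks with OpenCV. The threshold ranges are common rule-of-thumb values, not the paper's.

```python
import cv2

def skin_mask(bgr):
    """Skin-color mask from the intersection of threshold tests in two
    chrominance spaces (rule-of-thumb ranges, not the paper's values)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask_ycrcb = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask_hsv = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    return cv2.bitwise_and(mask_ycrcb, mask_hsv)

img = cv2.imread("face.jpg")  # placeholder input image
skin = cv2.bitwise_and(img, img, mask=skin_mask(img))
```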

모의 지능로봇에서의 음성 감정인식 (Speech Emotion Recognition on a Simulated Intelligent Robot)

  • 장광동;김남;권오욱
    • 대한음성학회지:말소리 / No. 56 / pp.173-183 / 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from the statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech (a sketch of the jitter/shimmer computation follows this entry). Finally, a pattern classifier based on Gaussian-kernel support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from the microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy, while human classifiers achieve 71%.

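Jitter and shimmer, named among the features above, have standard definitions as cycle-to-cycle perturbations of the pitch period and amplitude; the sketch below computes the common relative (local) variants from period and peak-amplitude sequences. This follows the usual textbook formulas, which may differ in detail from the paper's exact measures.

```python
import numpy as np

def jitter_local(periods):
    """Relative jitter: mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amplitudes):
    """Relative shimmer: the same perturbation measure applied to
    cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Example with synthetic glottal-cycle measurements.
T = [5.0e-3, 5.1e-3, 4.9e-3, 5.05e-3]  # pitch periods in seconds
A = [0.80, 0.78, 0.82, 0.79]           # cycle peak amplitudes
print(jitter_local(T), shimmer_local(A))
```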