• Title/Abstract/Keyword: Emotional Speech Recognition


Speech Emotion Recognition with SVM, KNN and DSVM

  • Hadhami Aouani; Yassine Ben Ayed
    • International Journal of Computer Science & Network Security / Vol. 23, No. 8 / pp. 40-48 / 2023
  • Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. In this work, our system is a two-stage approach comprising feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-frequency cepstral coefficients (MFCC) from emotional speech samples, while the second fuses the MFCC features with three additional features: zero crossing rate (ZCR), the Teager energy operator (TEO), and the harmonic-to-noise ratio (HNR). Second, we use two classification techniques, support vector machines (SVM) and k-nearest neighbors (k-NN), and compare their performance. In addition, we investigate recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. Our experimental results show good accuracy compared with previous studies.
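
A minimal sketch of the kind of pipeline this abstract describes, assuming Python with librosa and scikit-learn; the feature choices (13 MFCC means, ZCR, a simple Teager energy statistic), the classifier settings, and the `samples` variable are illustrative assumptions, not the authors' exact configuration (HNR, typically computed with Praat, is omitted here):

```python
# Sketch: MFCC / fused features + SVM and k-NN, in the spirit of the paper above.
# Assumes a list `samples` of (wav_path, emotion_label) pairs; all settings illustrative.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def teager_energy(x):
    # Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def extract_features(path, fused=True):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)  # 13 MFCC means
    if not fused:
        return mfcc
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    teo = teager_energy(y).mean()
    return np.concatenate([mfcc, [zcr, teo]])  # HNR would be appended here as well

X = np.array([extract_features(p) for p, _ in samples])  # `samples` assumed given
y = np.array([label for _, label in samples])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, clf.score(X_te, y_te))
```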

감정 표현 방법: 운율과 음질의 역할 (How to Express Emotion: Role of Prosody and Voice Quality Parameters)

  • 이상민; 이호준
    • 한국컴퓨터정보학회논문지 / Vol. 19, No. 11 / pp. 159-166 / 2014
  • This paper analyzes the role played by acoustic parameters, expressed through prosody and voice quality, when the meaning of a word is changed by emotion. To this end, changes in prosody and voice quality across emotions are examined using 60 utterances produced by six speakers in five emotional states. Eight acoustic parameters are analyzed to identify emotion-dependent changes in prosody and voice quality, and the principal parameters expressing each emotional state are determined statistically through discriminant analysis. The results show that anger is strongly related to speech intensity and the second-formant bandwidth; happiness is related to the second- and third-formant values and speech intensity; sadness is influenced mainly by intensity and pitch information rather than voice quality; and fear is closely related to pitch, the second-formant value, and its bandwidth. These results are expected to be actively exploited not only in emotional speech recognition systems but also in emotional speech synthesis systems.
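
A rough sketch of this kind of analysis, assuming Python with the praat-parselmouth package for pitch, intensity, and formant measurements and scikit-learn for the discriminant analysis; the paper's exact eight parameters are not reproduced here, so the feature list and the `samples` variable are assumptions:

```python
# Sketch: prosody / voice-quality parameters + linear discriminant analysis.
# Assumes praat-parselmouth (pip install praat-parselmouth); features illustrative.
import numpy as np
import parselmouth
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def acoustic_params(path):
    snd = parselmouth.Sound(path)
    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                                   # keep voiced frames only
    intensity = snd.to_intensity().values.flatten()
    formant = snd.to_formant_burg()
    t_mid = snd.duration / 2                          # sample formants mid-utterance
    f2 = formant.get_value_at_time(2, t_mid)
    f3 = formant.get_value_at_time(3, t_mid)
    b2 = formant.get_bandwidth_at_time(2, t_mid)
    return np.array([f0.mean(), f0.std(), intensity.mean(),
                     intensity.std(), f2, f3, b2])

X = np.array([acoustic_params(p) for p, _ in samples])   # `samples` assumed given
y = np.array([label for _, label in samples])
lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.scalings_)   # discriminant weights hint at each parameter's role
```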

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill; Yasunari Yoshitomi; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / Vol. 21, No. 2E / pp. 98-104 / 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained by the NN for recognition. Recognition using the combined parameters obtained from voice and facial expressions performed better than either set of parameters in isolation. The simulation results were also compared with human questionnaire results.
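
A compact sketch of this kind of score-level fusion, assuming Python with hmmlearn for per-emotion voice HMMs and scikit-learn's MLP for the facial features; the fusion weight, model sizes, and the `voice_train` / `face_X_train` / `face_y_train` variables are assumptions for illustration:

```python
# Sketch: per-emotion HMMs on prosodic voice features, an NN on facial
# features, and score-level fusion of the two modalities. Illustrative only.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]  # alphabetical order

# Voice: one HMM per emotion, trained on (T, D) prosodic feature sequences.
voice_hmms = {}
for emo in EMOTIONS:
    seqs = voice_train[emo]                # list of (T, D) arrays, assumed given
    voice_hmms[emo] = GaussianHMM(n_components=5).fit(
        np.vstack(seqs), lengths=[len(s) for s in seqs])

# Face: one NN on fixed-length thermal/visible feature vectors; labels must be
# the EMOTIONS strings so predict_proba columns line up with the list above.
face_nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
face_nn.fit(face_X_train, face_y_train)    # assumed given

def fused_prediction(voice_seq, face_vec, w=0.5):
    ll = np.array([voice_hmms[e].score(voice_seq) for e in EMOTIONS])
    voice_p = np.exp(ll - ll.max())
    voice_p /= voice_p.sum()               # HMM log-scores -> pseudo-posterior
    face_p = face_nn.predict_proba(face_vec.reshape(1, -1))[0]
    return EMOTIONS[int(np.argmax(w * voice_p + (1 - w) * face_p))]
```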

ARM 플랫폼 기반의 음성 감성인식 시스템 구현 (Implementation of the Speech Emotion Recognition System in the ARM Platform)

  • 오상헌; 박규식
    • 한국멀티미디어학회논문지 / Vol. 10, No. 11 / pp. 1530-1537 / 2007
  • This paper concerns the implementation of an ARM-platform-based speech emotion recognition system that can classify a speaker's emotional state into four categories (neutral, happy, sad, angry) from speech acquired in real time through a microphone. In general, speech received through a microphone is distorted by environmental noise around the speaker and by the characteristics of the microphone system, which degrades recognition performance. To minimize these noise effects, this paper applies a moving-average (MA) filter, which has a relatively simple structure and low computational cost, to the feature-vector sequence of the input speech. The system is further optimized with the sequential forward selection (SFS) technique, which efficiently selects an optimal subset of emotional features, and a support vector machine (SVM) is used as the emotion pattern classifier. Experimental results show recognition rates of about 65% in simulation and about 62% on the ARM platform.

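A small sketch of the smoothing-plus-selection idea, assuming Python with SciPy's uniform filter as the moving average and scikit-learn's SequentialFeatureSelector for SFS; the window length, feature set, SVM settings, and the `samples` variable are assumptions:

```python
# Sketch: moving-average smoothing of the feature sequence, SFS feature
# selection, and an SVM classifier. All settings are illustrative.
import numpy as np
import librosa
from scipy.ndimage import uniform_filter1d
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

def smoothed_features(path, win=5):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # (13, T)
    mfcc = uniform_filter1d(mfcc, size=win, axis=1)         # MA filter along time
    return mfcc.mean(axis=1)                                # utterance-level vector

X = np.array([smoothed_features(p) for p, _ in samples])    # `samples` assumed given
y = np.array([label for _, label in samples])

svm = SVC(kernel="rbf")
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=5)
sfs.fit(X, y)
svm.fit(sfs.transform(X), y)        # train the SVM on the selected subset
print("selected feature indices:", np.where(sfs.get_support())[0])
```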

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri; Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / Vol. 23, No. 8 / pp. 9-16 / 2023
  • Speech can actively convey feelings and attitudes through words. It is important for researchers to identify the emotional content contained in speech signals as well as the kind of emotion the speech conveys. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, collected from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as Mel-frequency cepstral coefficients (MFCC) and the zero-crossing rate (ZCR), and then classified emotions using several algorithms: machine learning algorithms (support vector machine (SVM) and k-nearest neighbor (KNN)) and deep learning algorithms (convolutional neural network (CNN) and long short-term memory (LSTM)). Our experiments showed that the MFCC feature extraction method with the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this classification system in recognizing spoken Arabic emotions.
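
A minimal sketch of an MFCC-plus-CNN classifier of the sort described above, assuming Python with librosa and Keras; the network depth, input size, training settings, and the `samples` / `label_to_id` variables are assumptions, not the authors' architecture:

```python
# Sketch: fixed-size MFCC maps fed to a small 2-D CNN for 4 emotion classes.
# Architecture and settings are illustrative, not the paper's exact model.
import numpy as np
import librosa
import tensorflow as tf

N_MFCC, N_FRAMES, N_CLASSES = 40, 128, 4

def mfcc_map(path):
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)        # (40, T)
    m = librosa.util.fix_length(m, size=N_FRAMES, axis=1)      # pad/crop to 128 frames
    return m[..., np.newaxis]                                  # (40, 128, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_MFCC, N_FRAMES, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.stack([mfcc_map(p) for p, _ in samples])     # `samples` assumed given
y = np.array([label_to_id[l] for _, l in samples])  # `label_to_id` assumed given
model.fit(X, y, epochs=20, validation_split=0.2)
```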

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 3 / pp. 1076-1094 / 2022
  • Technology for emotion recognition is an essential part of human personality analysis. To characterize human personality, existing work has relied on survey methods; however, there are many situations where communication cannot take place without considering emotions. Emotion recognition technology is therefore an essential element for communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, or physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. The existing binary arousal-valence scheme was subdivided into four levels per dimension to classify emotions in more detail; that is, the current high/low scheme was extended into a multi-level one. Signal characteristics were then extracted using a one-dimensional convolutional neural network (CNN) and classified into sixteen emotional states. Although CNNs are commonly used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
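
A brief sketch of a 1-D CNN over multichannel sensor windows with a 16-way output (4 arousal levels x 4 valence levels), assuming Keras; the channel count, window length, and layer sizes are assumptions:

```python
# Sketch: 1-D CNN over windows of multi-channel physiological signals,
# classifying 4 arousal levels x 4 valence levels = 16 states. Illustrative only.
import tensorflow as tf

WINDOW, CHANNELS, N_CLASSES = 512, 4, 16   # e.g. short windows of 4 sensor channels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use (num_windows, 512, 4) arrays with labels 0..15:
# model.fit(X_train, y_train, epochs=30, validation_split=0.2)
```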

음성 신호를 사용한 감정인식의 특징 파라메터 비교 (Comparison of feature parameters for emotion recognition using speech signal)

  • 김원구
    • 대한전자공학회논문지SP / Vol. 40, No. 5 / pp. 371-377 / 2003
  • This paper studies the comparison of feature parameters for recognizing human emotion from speech signals. Using a Korean speech database categorized by emotional state, two kinds of parameters were used: parameters carrying statistical information, such as the mean, standard deviation, and maximum of the pitch and energy of the speech signal, and MFCC parameters representing phonetic characteristics. To evaluate these parameters, sentence- and speaker-independent emotion recognition systems were implemented and recognition experiments performed. In the evaluation experiments, pitch, energy, and their derivatives were used as prosodic features, and MFCC and its derivative were used as phonetic features. Experimental results using a speaker- and sentence-independent recognition system based on vector quantization showed that MFCC and delta MFCC outperformed the pitch- and energy-based method.
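
A small sketch of a vector-quantization emotion classifier like the one used for these experiments, assuming Python with librosa and scikit-learn's KMeans as the codebook; the codebook size, feature set, and the `train_paths` variable are assumptions:

```python
# Sketch: per-emotion VQ codebooks over MFCC+delta frames; a test utterance is
# assigned to the emotion whose codebook gives the lowest average distortion.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def frames(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feat = np.vstack([mfcc, librosa.feature.delta(mfcc)])   # MFCC + delta MFCC
    return feat.T                                           # (T, 26)

# Train one codebook per emotion (codebook size 64 is an assumption).
codebooks = {}
for emo, paths in train_paths.items():                      # assumed given
    X = np.vstack([frames(p) for p in paths])
    codebooks[emo] = KMeans(n_clusters=64, n_init=4).fit(X)

def classify(path):
    F = frames(path)
    # Average distance from each frame to its nearest codeword = distortion.
    distortion = {emo: km.transform(F).min(axis=1).mean()
                  for emo, km in codebooks.items()}
    return min(distortion, key=distortion.get)
```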

Speech Emotion Recognition in People at High Risk of Dementia

  • Dongseon Kim; Bongwon Yi; Yugwon Won
    • 대한치매학회지 / Vol. 23, No. 3 / pp. 146-160 / 2024
  • Background and Purpose: The emotions of people at various stages of dementia need to be effectively utilized for prevention, early intervention, and care planning. With technology now available for understanding and addressing people's emotional needs, this study aims to develop speech emotion recognition (SER) technology to classify emotions for people at high risk of dementia. Methods: Speech samples from people at high risk of dementia were categorized into distinct emotions via human auditory assessment, and the outcomes were annotated to guide a deep-learning method. The architecture incorporated a convolutional neural network, long short-term memory, attention layers, and Wav2Vec2, a novel feature extractor, to develop automated speech emotion recognition. Results: Twenty-seven kinds of emotions were found in the participants' speech. These were grouped into 6 detailed emotions (happiness, interest, sadness, frustration, anger, and neutrality) and further into 3 basic emotions (positive, negative, and neutral). To improve algorithmic performance, multiple learning approaches were applied using different data sources (voice and text) and varying numbers of emotions. Ultimately, a two-stage algorithm, with initial text-based classification followed by voice-based analysis, achieved the highest accuracy, reaching 70%. Conclusions: The diverse emotions identified in this study were attributed to the characteristics of the participants and the method of data collection. That the speech was directed by people at high risk of dementia to companion robots also explains the relatively low performance of the SER algorithm. Accordingly, this study suggests the systematic and comprehensive construction of a dataset from people with dementia.
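
A schematic sketch of a two-stage text-then-voice decision of the kind reported above, assuming Python; the stage-1 text classifier (TF-IDF plus logistic regression), the stage-2 per-group audio classifiers (`audio_clfs`), and the `train_texts` / `train_groups` variables are all assumptions standing in for the paper's CNN/LSTM/attention/Wav2Vec2 models:

```python
# Sketch: stage 1 classifies the transcript into a basic emotion group
# (positive / negative / neutral); stage 2 refines the group into a detailed
# emotion from audio features. Simple stand-ins for the paper's deep models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

GROUPS = {"positive": ["happiness", "interest"],
          "negative": ["sadness", "frustration", "anger"],
          "neutral": ["neutrality"]}

# Stage 1: text -> basic emotion. `train_texts` / `train_groups` assumed given.
text_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
text_clf.fit(train_texts, train_groups)

# Stage 2: one audio classifier per group, each trained elsewhere on acoustic
# features of utterances in that group (`audio_clfs` dict assumed given).
def two_stage_predict(text, audio_features):
    group = text_clf.predict([text])[0]
    if len(GROUPS[group]) == 1:            # "neutral" needs no refinement
        return GROUPS[group][0]
    return audio_clfs[group].predict(audio_features.reshape(1, -1))[0]
```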

음성 감정인식에서의 톤 정보의 중요성 연구 (On the Importance of Tonal Features for Speech Emotion Recognition)

  • 이정인; 강홍구
    • 방송공학회논문지 / Vol. 18, No. 5 / pp. 713-721 / 2013
  • This study describes tonal characteristics of speech, based on chroma features, for emotion recognition. Just as tonal information such as major and minor modes affects the mood of music, tonal information also influences the perception of emotion in speech. To analyze the relationship between emotion and tonal information, we conducted listening experiments using signals resynthesized from chroma features, and the perception results confirmed that positive and negative emotions can be distinguished. Based on the perception experiments, emotion recognition experiments were conducted with tonal features adapted to speech, and recognition performance improved when the tonal features were used.
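
A short sketch of appending chroma-based tonal statistics to a standard feature set, assuming Python with librosa; fusing chroma means and deviations with MFCC means, the classifier choice, and the `samples` variable are assumptions about how such features might be combined:

```python
# Sketch: chroma (tonal) statistics appended to MFCC features for emotion
# classification. The fusion and classifier choices are illustrative.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(path, use_chroma=True):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    if not use_chroma:
        return mfcc
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)         # (12, T) pitch classes
    tonal = np.concatenate([chroma.mean(axis=1), chroma.std(axis=1)])
    return np.concatenate([mfcc, tonal])

y_labels = np.array([label for _, label in samples])         # `samples` assumed given
for use_chroma in (False, True):
    X = np.array([features(p, use_chroma) for p, _ in samples])
    acc = cross_val_score(SVC(), X, y_labels, cv=5).mean()
    print("chroma" if use_chroma else "baseline", f"accuracy: {acc:.3f}")
```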

Emotion Recognition Based on Frequency Analysis of Speech Signal

  • Sim, Kwee-Bo; Park, Chang-Hyun; Lee, Dong-Wook; Joo, Young-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 2, No. 2 / pp. 122-126 / 2002
  • In this study, as fundamental research on emotion recognition, we find features of three emotions (happiness, anger, surprise). Speech signals carrying emotion have several elements: voice quality, pitch, formants, speech rate, and so on. Until now, most researchers have used changes in pitch, the short-time average power envelope, or Mel-based speech power coefficients. Pitch is a particularly efficient and informative feature, so we use it in this study. Because pitch is very sensitive to subtle emotional changes, it shifts easily whenever a speaker's emotional state differs; we can therefore observe whether the pitch changes steeply, changes with a gentle slope, or does not change. This paper also extracts formant features from emotional speech. For each vowel, the formants occupy similar positions without large differences. Based on this fact, for happiness we extract features of laughter and use them to detect laughing more easily, and we find corresponding features for anger and surprise.
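
A minimal sketch of the pitch-contour analysis described above, assuming Python with librosa's pYIN pitch tracker; the slope thresholds separating "steep", "gentle", and "flat" changes and the example file path are assumptions:

```python
# Sketch: track the pitch contour and label its overall trend as steep,
# gentle, or flat. Thresholds are illustrative assumptions.
import numpy as np
import librosa

def pitch_trend(path, steep=100.0, gentle=20.0):   # thresholds in Hz per second
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    t = librosa.times_like(f0, sr=sr)
    f0, t = f0[voiced], t[voiced]                  # keep voiced frames only
    if len(f0) < 2:
        return "unvoiced"
    slope = np.polyfit(t, f0, 1)[0]                # linear fit: Hz change per second
    if abs(slope) >= steep:
        return "steep"
    return "gentle" if abs(slope) >= gentle else "flat"

print(pitch_trend("utterance.wav"))                # path is a placeholder
```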