• Title/Summary/Keyword: Mel spectrogram features

Search Results: 14

Emotion Recognition using Various Combinations of Audio Features and Textual Information (음성특징의 다양한 조합과 문장 정보를 이용한 감정인식)

  • Seo, Seunghyun; Lee, Bowon
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2019.11a, pp.137-139, 2019
  • This paper presents emotion recognition results obtained with a multimodal recurrent neural network that combines various audio features and textual information, using both categorical classification and classification in the Arousal-Valence (AV) domain. As audio features, we used combinations of MFCC, Energy, Velocity, Acceleration, Prosody, and Mel Spectrogram; the corresponding textual information was fused through a recurrent-neural-network-based model, and emotions were classified discretely by the categorical method and by the AV-domain method. Experimental results show that the combination of 13-dimensional MFCC, Energy, Velocity, and Acceleration features with 35-dimensional Prosody achieved 75% accuracy in categorical classification, higher than the other feature combinations, and the same combination also achieved the highest AV-domain results, 55.3% for Arousal and 53.1% for Valence.
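The feature families the abstract names (mel spectrogram, plus "velocity" and "acceleration" as first- and second-order time differences) can be sketched in plain numpy; this is an illustrative reimplementation under common default parameters (Hann window, HTK-style mel scale), not the authors' pipeline.

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrogram(y, sr, n_fft=512, hop=256, n_mels=40):
    # Frame, window, take the power spectrum, then apply the mel filterbank
    frames = [y[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(y) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    return np.log(spec @ mel_filterbank(sr, n_fft, n_mels).T + 1e-10)

def delta(feat):
    # First-order time difference: "velocity"; applied twice gives "acceleration"
    return np.diff(feat, axis=0, prepend=feat[:1])

sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)   # 1 s, 440 Hz test tone
logmel = log_mel_spectrogram(y, sr)
vel = delta(logmel)               # velocity features
acc = delta(vel)                  # acceleration features
print(logmel.shape, vel.shape, acc.shape)
```

With these defaults, one second of 16 kHz audio yields 61 frames of 40 mel bands, and the delta features share that shape.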


Principal component analysis based frequency-time feature extraction for seismic wave classification (지진파 분류를 위한 주성분 기반 주파수-시간 특징 추출)

  • Min, Jeongki; Kim, Gwantea; Ku, Bonhwa; Lee, Jimin; Ahn, Jaekwang; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea, v.38 no.6, pp.687-696, 2019
  • Conventional features for seismic classification focus on strong seismic events and are not suitable for classifying micro-seismic waves. We propose a feature extraction method based on histograms and Principal Component Analysis (PCA) in the frequency-time space, suitable for classifying strong, micro, and artificial seismic waves as well as noise. The proposed method builds histogram- and PCA-based features by concatenating frequency and time information for binary classification tasks consisting of strong-micro-artificial/noise, micro/noise, and micro/artificial seismic waves. Using recent earthquake data from 2017 to 2018, the effectiveness of the proposed feature extraction method is demonstrated by comparison with existing methods.
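The PCA step the abstract relies on reduces to an eigendecomposition of the feature covariance; the sketch below uses a random stand-in matrix (the paper's actual frequency-time histogram construction is not reproduced here) purely to show the projection mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frequency-time histogram features:
# 100 samples, each a 64-dimensional feature vector.
X = rng.normal(size=(100, 64))

def pca_fit(X, n_components):
    # Center the data, eigendecompose the covariance,
    # and keep the top principal directions
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]

def pca_transform(X, mean, components):
    return (X - mean) @ components

mean, comps = pca_fit(X, n_components=8)
Z = pca_transform(X, mean, comps)
print(Z.shape)  # (100, 8)
```

The projected features `Z` are centered by construction, which is what makes them suitable inputs for the downstream binary classifiers.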

Sound event detection based on multi-channel multi-scale neural networks for home monitoring system used by the hard-of-hearing (청각 장애인용 홈 모니터링 시스템을 위한 다채널 다중 스케일 신경망 기반의 사운드 이벤트 검출)

  • Lee, Gi Yong; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea, v.39 no.6, pp.600-605, 2020
  • In this paper, we propose a sound event detection method using multi-channel multi-scale neural networks for sound-sensing home monitoring for the hearing impaired. In the proposed system, two channels with high signal quality are selected from several wireless microphone sensors in the home. Three features extracted from the sensor signals (time difference of arrival, pitch range, and the outputs of a multi-scale convolutional neural network applied to the log mel spectrogram) are fed to a classifier based on a bidirectional gated recurrent neural network to further improve sound event detection performance. The detected sound event is converted into text, together with the sensor position of the selected channel, and provided to the hearing impaired. Experimental results show that the proposed sound event detection method is superior to the existing method and can effectively deliver sound information to the hearing impaired.
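Of the three features listed, time difference of arrival is the simplest to illustrate: a common estimate is the lag at the peak of the cross-correlation between two channels. This is a generic sketch on synthetic data, not the paper's exact estimator.

```python
import numpy as np

def estimate_tdoa(x1, x2, sr):
    # Lag (in seconds) at the peak of the full cross-correlation:
    # positive when x1 lags behind x2
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)
    return lag / sr

sr = 16000
rng = np.random.default_rng(1)
src = rng.normal(size=2048)                   # broadband source signal
delay = 40                                    # samples (2.5 ms)
ch1 = np.concatenate([src, np.zeros(delay)])  # near microphone
ch2 = np.concatenate([np.zeros(delay), src])  # far microphone hears it later
tdoa = estimate_tdoa(ch2, ch1, sr)
print(tdoa)  # → 0.0025
```

With a broadband source the correlation peak is sharp, so the 40-sample delay is recovered exactly; real rooms add reverberation, which is why refinements such as GCC-PHAT are often used instead.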

Performance comparison of wake-up-word detection on mobile devices using various convolutional neural networks (다양한 합성곱 신경망 방식을 이용한 모바일 기기를 위한 시작 단어 검출의 성능 비교)

  • Kim, Sanghong; Lee, Bowon
    • The Journal of the Acoustical Society of Korea, v.39 no.5, pp.454-460, 2020
  • Artificial intelligence assistants that provide speech recognition operate through cloud-based voice recognition with high accuracy. In cloud-based speech recognition, Wake-Up-Word (WUW) detection plays an important role in activating devices on standby. In this paper, we compare the performance of Convolutional Neural Network (CNN)-based WUW detection models for mobile devices on Google's speech commands dataset, using spectrogram and mel-frequency cepstral coefficient features as inputs. The CNN models compared are a multi-layer perceptron, a general convolutional neural network, VGG16, VGG19, ResNet50, ResNet101, ResNet152, and MobileNet. We also propose a network that reduces the model size to 1/25 of MobileNet while maintaining comparable performance.
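MobileNet's compactness comes largely from replacing standard convolutions with depthwise-separable ones; the parameter arithmetic below illustrates that principle only (the paper's 1/25 reduction comes from its own architecture, which is not reproduced here).

```python
# Parameter counts for one layer: standard convolution vs. the
# depthwise-separable convolution used as MobileNet's building block.
# k: square kernel size, cin/cout: input/output channel counts.
def standard_conv_params(k, cin, cout):
    return k * k * cin * cout

def depthwise_separable_params(k, cin, cout):
    # one k x k depthwise filter per input channel,
    # plus a 1x1 pointwise convolution to mix channels
    return k * k * cin + cin * cout

k, cin, cout = 3, 128, 128
std = standard_conv_params(k, cin, cout)        # 147456
sep = depthwise_separable_params(k, cin, cout)  # 17536
print(f"reduction: {std / sep:.1f}x")           # ≈ 8.4x fewer parameters
```

For 3x3 kernels the per-layer saving approaches a factor of ~9 as the channel count grows, which is why stacking such layers shrinks a model substantially.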