• Title/Summary/Keyword: Cepstral parameters


Improvements on MFCC by Elaboration of the Filter Banks and Windows

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.131-144
    • /
    • 2007
  • In an effort to improve the performance of mel-frequency cepstral coefficients (MFCC), we investigate how varying the filter-bank parameters and their associated windows affects speech recognition rates. Specifically, the mel and Bark scales are combined with various types of filter-bank windows. The suggested methods are compared and evaluated in two independent ways: speech recognition experiments and the Fisher discriminant objective function. It is shown that the Hanning window based on the Bark scale yields a 28.1% relative improvement in speech recognition error rate over the triangular window with the mel scale. Further work incorporating PCA and/or LDA as a postprocessor to MFCC extraction would be desirable. (A filter-bank sketch follows below.)

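A minimal sketch of the filter-bank variations the abstract compares: centre frequencies spaced on the mel or Bark scale, with triangular or Hanning-shaped filters. The Traunmüller approximation of the Bark scale, the 16 kHz sampling rate, and the filter counts are assumptions, not the paper's exact settings.

```python
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
def hz_to_bark(f): return 26.81 * f / (1960.0 + f) - 0.53      # Traunmüller approximation
def bark_to_hz(z): return 1960.0 * (z + 0.53) / (26.28 - z)

def filter_bank(n_filters=24, n_fft=512, sr=16000, scale="mel", window="triangular"):
    to_s, to_hz = (hz_to_mel, mel_to_hz) if scale == "mel" else (hz_to_bark, bark_to_hz)
    # Filter edge frequencies equally spaced on the warped scale.
    edges_hz = to_hz(np.linspace(to_s(0.0), to_s(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges_hz / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        if window == "triangular":
            fb[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
            fb[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
        else:  # Hanning-shaped filter over the same band (zero endpoints dropped)
            width = hi - lo
            if width > 0:
                fb[i, lo:hi] = np.hanning(width + 2)[1:-1]
    return fb

mel_tri  = filter_bank(scale="mel",  window="triangular")  # the MFCC baseline
bark_han = filter_bank(scale="bark", window="hanning")     # the paper's best variant
```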

Correlation between Physical Fatigue and Speech Signals (육체피로와 음성신호와의 상관관계)

  • Kim, Taehun;Kwon, Chulhong
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.11-17
    • /
    • 2015
  • This paper examines the correlation between physical fatigue and speech signals. A treadmill task to induce fatigue and a subjective questionnaire for rating tiredness were designed; the questionnaire results and the collected bio-signals confirmed that the task does impose physical fatigue. A two-related-samples t-test between the speech parameters and fatigue showed that the parameters statistically significant for fatigue are fundamental frequency, first and second formant frequencies, long-term average spectral slope, smoothed pitch perturbation quotient, relative average perturbation, pitch perturbation quotient, cepstral peak prominence, and harmonics-to-noise ratio. The experimental results indicate that the mouth opens less and the voice becomes breathy as physical fatigue accumulates.
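
A sketch of the two-related-samples (paired) t-test the abstract applies to pre- vs. post-fatigue speech parameters. The per-speaker F0 arrays here are hypothetical stand-ins, not the paper's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
f0_rested   = rng.normal(120.0, 10.0, size=20)             # mean F0 (Hz) before the task
f0_fatigued = f0_rested + rng.normal(-4.0, 3.0, size=20)   # same speakers, after the task

t, p = ttest_rel(f0_rested, f0_fatigued)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> F0 differs significantly with fatigue
```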

Phonation types of Korean fricatives and affricates

  • Lee, Goun
    • Phonetics and Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.51-57
    • /
    • 2017
  • The current study compared the acoustic features of the two phonation types of Korean fricatives (plain /s/, fortis /s'/) and the three types of affricates (aspirated /tsʰ/, lenis /ts/, and fortis /ts'/) in order to determine the phonetic status of the plain fricative /s/. Considering the different manners of articulation of fricatives and affricates, we examined four acoustic parameters (rise time, intensity, fundamental frequency, and cepstral peak prominence (CPP)) in the productions of 20 native Korean speakers. The results showed that, unlike for Korean affricates, F0 cannot distinguish the two fricatives, and voice quality (CPP) distinguishes the phonation types of Korean fricatives and affricates only by grouping the non-fortis sibilants together. Therefore, based on the similarity found between /tsʰ/ and /ts/ and the idiosyncratic pattern found in /s/, this research concludes that the non-fortis fricative /s/ cannot be categorized as belonging to either phonation type.
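
A minimal single-frame sketch of cepstral peak prominence (CPP), the voice-quality measure used in this and two other abstracts on this page. The window choice and pitch range are assumptions; published CPP analyses typically also average over time.

```python
import numpy as np

def cpp(frame, sr, f0_min=60.0, f0_max=400.0):
    # Real cepstrum: inverse FFT of the log magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    log_spec = 20.0 * np.log10(spectrum + 1e-12)
    ceps = np.fft.irfft(log_spec)
    quef = np.arange(len(ceps)) / sr                  # quefrency axis (seconds)
    # Search for the cepstral peak in the plausible pitch-period range.
    lo, hi = int(sr / f0_max), int(sr / f0_min)
    peak_idx = lo + np.argmax(ceps[lo:hi])
    # Linear regression over the searched region serves as the baseline.
    a, b = np.polyfit(quef[lo:hi], ceps[lo:hi], 1)
    return ceps[peak_idx] - (a * quef[peak_idx] + b)  # prominence above the trend line

# Usage: cpp(voiced_frame, sr=16000) -> higher = more periodic, less breathy voice
```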

A Speech Synthesis System based on Cepstral Parameters and Multiband Excitation Signal (켑스트럼 파라미터와 다중대역 여기신호를 사용한 음성 합성 시스팀)

  • 김기순
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1995.06a
    • /
    • pp.211-215
    • /
    • 1995
  • To generate clear and natural Korean speech, we propose a speech synthesis system that uses a multiband excitation signal. On the analysis side, we propose an automatic voiced/unvoiced segment discrimination method that uses a voiced/unvoiced decision spectrum based on cepstral parameters. On the synthesis side, to overcome the quality limitations of synthetic speech driven by an excitation source composed only of simple impulses and white noise with a simple voiced/unvoiced decision, we introduce a multiband excitation signal for driving voiced sounds. Listening tests on the proposed method confirmed that speech synthesized with the multiband excitation signal is far more intelligible than speech synthesized with the commonly used simple voiced/unvoiced parameters, particularly in voiced regions such as noisy voiced fricatives and vowel transitions.

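A sketch of multiband excitation in the spirit of the abstract: each frequency band is driven by a periodic pulse train or white noise according to a per-band voiced/unvoiced decision. The band edges, filter order, and decision values are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.signal import butter, lfilter

def multiband_excitation(n, sr, f0, band_edges, voiced_bands):
    pulses = np.zeros(n)
    pulses[::max(int(sr / f0), 1)] = 1.0           # impulse train at the pitch period
    noise = np.random.randn(n)
    out = np.zeros(n)
    for (lo, hi), voiced in zip(band_edges, voiced_bands):
        b, a = butter(4, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        src = pulses if voiced else noise          # per-band voiced/unvoiced decision
        out += lfilter(b, a, src)
    return out

# Example: low bands voiced, top band noisy, as in a voiced fricative
exc = multiband_excitation(n=400, sr=8000, f0=100.0,
                           band_edges=[(50, 1000), (1000, 2500), (2500, 3900)],
                           voiced_bands=[True, True, False])
```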

Personal Information Extraction Using A Microphone Array (마이크로폰어레이를 이용한 사용자 정보추출)

  • Kim, Hye-Jin;Yoon, Ho-Sub
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.2
    • /
    • pp.131-136
    • /
    • 2008
  • This paper proposes a method for extracting personal information using a microphone array. Useful personal information about users, particularly customers, includes age and gender. On the basis of this information, robot service applications can satisfy users by adapting services to the needs of specific user groups such as adults and children, or females and males. We applied a Gaussian mixture model (GMM) as the classifier and mel-frequency cepstral coefficients (MFCCs) as the voice feature. The main aim of this paper is to identify the voice source parameters for age and gender and to classify these two characteristics simultaneously. In a ubiquitous environment, voices obtained from selected channels of a microphone array are useful for reducing background noise.

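A sketch of the GMM-over-MFCC classification the abstract names: one GMM per class, with the highest-likelihood model winning. The class labels, MFCC settings, and mixture size are illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

def train(class_to_paths, n_components=16):
    # One GMM per class, e.g. {"adult_male": [...], "adult_female": [...], "child": [...]}
    models = {}
    for label, paths in class_to_paths.items():
        feats = np.vstack([mfcc_frames(p) for p in paths])
        models[label] = GaussianMixture(n_components, covariance_type="diag").fit(feats)
    return models

def classify(models, path):
    feats = mfcc_frames(path)
    # Pick the class whose GMM gives the highest average log-likelihood.
    return max(models, key=lambda lbl: models[lbl].score(feats))
```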

Locating the damaged storey of a building using distance measures of low-order AR models

  • Xing, Zhenhua;Mita, Akira
    • Smart Structures and Systems
    • /
    • v.6 no.9
    • /
    • pp.991-1005
    • /
    • 2010
  • The key to detecting damage in civil engineering structures is an effective damage indicator, one that promptly reveals the location of the damage and accurately identifies the state of the structure. We propose distance measures between low-order AR models as a novel damage indicator. AR models are used to parameterize dynamical responses, typically the acceleration response. The premise of this approach is that the distance between models fitted to the dynamical responses of damaged and undamaged structures may carry information about the damage, including its location and severity. Distance measures have been widely used in speech recognition but have rarely been applied to civil engineering structures. This research attempts to improve on the distance measures studied so far; the effects of varying the data length, the number of parameters, and other factors are carefully examined.
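
A sketch of the general approach: fit low-order AR models to acceleration records and compare them with a model distance. The paper evaluates several measures it does not list in the abstract; the cepstral distance shown here is one representative from the speech-recognition literature, and the AR order is an assumption.

```python
import numpy as np

def fit_ar(x, order=4):
    # Least-squares AR fit: x[t] ~ sum_k a[k] * x[t-k]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def ar_cepstrum(a, n_ceps=12):
    # Cepstral coefficients of 1/A(z) via the standard recursion
    # (prediction-coefficient sign convention).
    a_full = np.zeros(n_ceps); a_full[:len(a)] = a
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        c[n - 1] = a_full[n - 1] + sum((k / n) * c[k - 1] * a_full[n - k - 1]
                                       for k in range(1, n))
    return c

def cepstral_distance(x_undamaged, x_test, order=4):
    c1 = ar_cepstrum(fit_ar(x_undamaged, order))
    c2 = ar_cepstrum(fit_ar(x_test, order))
    return np.sum((c1 - c2) ** 2)  # grows as damage changes the fitted model
```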

Measuring Correlation between Mental Fatigues and Speech Features (정신피로와 음성특징과의 상관관계 측정)

  • Kim, Jungin;Kwon, Chulhong
    • Phonetics and Speech Sciences
    • /
    • v.6 no.2
    • /
    • pp.3-8
    • /
    • 2014
  • This paper examines how mental fatigue affects the human voice. A monotonous task to increase the feeling of fatigue and a subjective questionnaire for rating the fatigue were designed, and the questionnaire responses confirmed that the task is indeed monotonous. To investigate the statistical relationship between fatigue and the speech features extracted from the collected speech data, a two-related-samples t-test was used. The statistical analysis shows that the speech parameters most strongly related to fatigue are the first formant bandwidth, jitter, H1-H2, cepstral peak prominence, and harmonics-to-noise ratio. The experimental results indicate that the voice becomes breathy as mental fatigue proceeds.
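
A sketch of H1-H2, the difference in dB between the amplitudes of the first two harmonics, listed above as a fatigue-sensitive breathiness measure. The F0 value is assumed given, and real analyses usually add formant correction; this is the uncorrected form.

```python
import numpy as np

def h1_h2(frame, sr, f0):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

    def harmonic_db(h):
        # Peak magnitude within +/- 10% of the expected harmonic frequency.
        band = (freqs > h * f0 * 0.9) & (freqs < h * f0 * 1.1)
        return 20.0 * np.log10(spec[band].max() + 1e-12)

    return harmonic_db(1) - harmonic_db(2)  # larger H1-H2 -> breathier voice

# Usage: h1_h2(voiced_frame, sr=16000, f0=110.0)
```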

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, speech emotion recognition (SER) recognizes the speaker's emotions from speech information. Successful SER depends on selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (a fully connected network (FCN) and a convolutional neural network (CNN)) with mel-frequency cepstral coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning the model parameters, a two-dimensional convolutional neural network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. Moreover, examining the distribution of recognition accuracies across the neural network models suggests that the 2D-CNN with MFCC can be expected to reach an overall accuracy of 75% or more.
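
A sketch of the model family the abstract evaluates: a small 2D-CNN over an MFCC "image" (coefficients x frames). The layer sizes and input shape are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Mfcc2DCnn(nn.Module):
    def __init__(self, n_classes=5):              # five emotions, as in the abstract
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):                          # x: (batch, 1, n_mfcc, n_frames)
        return self.classifier(self.features(x))

logits = Mfcc2DCnn()(torch.randn(8, 1, 40, 128))   # e.g. 40 MFCCs x 128 frames
```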

RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim;Soo Hyung Kim;Guee Sang Lee;Hyung Jeong Yang
    • Smart Media Journal
    • /
    • v.12 no.5
    • /
    • pp.28-35
    • /
    • 2023
  • In this study, we propose RoutingConvNet, a new light-weight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model concatenates bidirectional MFCCs channel by channel to learn long-term emotion dependencies and extract contextual features. A light-weight deep CNN is constructed for low-level feature extraction, and self-attention is used to capture channel and spatial information in the speech signal. In addition, dynamic routing is applied to improve accuracy and make the model robust to feature variations. Across experiments on the speech emotion datasets EMO-DB, RAVDESS, and IEMOCAP, the proposed model reduces the parameter count while improving accuracy, achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that quantifies the trade-off between parameter count and accuracy for evaluating light-weight models.
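
A sketch of counting learnable parameters, the quantity the abstract trades off against accuracy (~156,000 for RoutingConvNet). The model below is a stand-in; the paper's trade-off metric itself is not given in the abstract, so only the count is shown.

```python
import torch.nn as nn

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

model = nn.Sequential(nn.Conv2d(1, 16, 3), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))
print(count_parameters(model))  # compare against a ~156,000-parameter budget
```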

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • Journal of IKEEE
    • /
    • v.17 no.3
    • /
    • pp.241-247
    • /
    • 2013
  • This paper presents the design of a system on programmable chip (SoPC), based on a field-programmable gate array (FPGA), for speech recognition, in which mel-frequency cepstral coefficients (MFCC) are used for feature extraction and vector quantization (VQ) for recognition. The speech recognition system proceeds in the following steps: feature extraction, codebook training, and recognition. In the feature extraction step, the input voice data are transformed into spectral components and the main features are extracted with the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components using VQ. In our experiment, Altera's DE2 board with a Cyclone II FPGA is used to implement the recognition system, which can recognize 64 words. The execution speed of the blocks in the speech recognition system is surveyed by counting the clock cycles spent executing each block, and the recognition accuracy is measured for different system parameters. These execution-speed and accuracy results can help a designer choose the best configuration for speech recognition on an SoPC.
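
A sketch of the MFCC-plus-VQ recognition scheme the abstract describes: one k-means codebook per word, with the recognized word being the one whose codebook quantizes the utterance with the lowest average distortion. The codebook size and feature source are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def train_codebook(mfcc_frames, codebook_size=32):
    # mfcc_frames: (n_frames, n_mfcc) training features for one word
    codebook, _ = kmeans(mfcc_frames.astype(float), codebook_size)
    return codebook

def quantization_distortion(codebook, mfcc_frames):
    _, dists = vq(mfcc_frames.astype(float), codebook)  # nearest-codeword distances
    return dists.mean()

def recognize(codebooks, mfcc_frames):
    # codebooks: {word: codebook}; lowest-distortion codebook wins.
    return min(codebooks,
               key=lambda w: quantization_distortion(codebooks[w], mfcc_frames))
```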