• Title/Summary/Keyword: cepstrum coefficients mean

9 search results

Comparison of Characteristic Vector of Speech for Gender Recognition of Male and Female (남녀 성별인식을 위한 음성 특징벡터의 비교)

  • Jeong, Byeong-Goo;Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering, v.16 no.7, pp.1370-1376, 2012
  • This paper proposes a gender recognition algorithm that classifies a speaker as male or female. Characteristic vectors for male and female speakers are analyzed, and recognition experiments with a neural network are performed using these vectors. Four input configurations are compared: 10 LPC (linear predictive coding) cepstrum coefficients; 12 LPC cepstrum coefficients; 12 FFT (fast Fourier transform) cepstrum coefficients plus 1 RMS (root mean square) value; and 12 LPC cepstrum coefficients plus 8 FFT spectrum values. A 20-20-2 network, whose 20 inputs are the 12 LPC cepstrum coefficients and 8 FFT spectrum values, is used in the experiments. The average recognition rates obtained by the proposed algorithm are 99.8% for male speakers and 96.5% for female speakers.
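
The LPC cepstrum features this entry relies on are computed from LPC coefficients with a standard recursion. The sketch below is a minimal illustration of that step only; the model order, coefficient values, and cepstrum length are assumptions, not data from the paper.

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Convert LPC coefficients a[1..p] (predictor convention,
    A(z) = 1 - sum_k a_k z^-k) to cepstrum coefficients via the
    recursion c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k}."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

# Hypothetical 10th-order LPC coefficients for one analysis frame.
a = np.array([1.2, -0.5, 0.3, -0.1, 0.05, 0.02, -0.01, 0.0, 0.01, -0.02])
print(lpc_to_cepstrum(a, 12))   # e.g. 12 LPC cepstrum coefficients
```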

Isolated-Word Speech Recognition in Telephone Environment Using Perceptual Auditory Characteristic (인지적 청각 특성을 이용한 고립 단어 전화 음성 인식)

  • Choi, Hyung-Ki;Park, Ki-Young;Kim, Chong-Kyo
    • Journal of the Institute of Electronics Engineers of Korea TE, v.39 no.2, pp.60-65, 2002
  • In this paper, we propose the GFCC (gammatone filter frequency cepstrum coefficient) parameter, based on perceptual auditory characteristics, to achieve a better speech recognition rate, and we carry out recognition experiments on isolated words acquired over the telephone network. To compare GFCC with other parameters, the same recognition experiments are performed with MFCC and LPCC parameters. For each parameter, we also evaluate recognition with and without CMS (cepstral mean subtraction), which compensates for channel distortion in the telephone network. The experimental results show that the recognition rate with the GFCC parameter is higher than with the other parameters.
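
Cepstral mean subtraction itself is simple to state: a (near-)stationary channel appears as an additive bias in the cepstral domain, so subtracting the per-utterance mean of each coefficient removes it. Below is a minimal sketch on a hypothetical matrix of cepstral frames, not the paper's GFCC front end.

```python
import numpy as np

def cepstral_mean_subtraction(ceps):
    """ceps: (n_frames, n_coeffs) cepstral features for one utterance.
    A stationary channel adds the same offset to every frame, so
    removing the per-utterance mean cancels it."""
    return ceps - ceps.mean(axis=0, keepdims=True)

# Hypothetical utterance: 200 frames of 12 cepstral coefficients.
rng = np.random.default_rng(0)
ceps = rng.normal(size=(200, 12)) + 0.7        # +0.7 mimics a channel bias
normalized = cepstral_mean_subtraction(ceps)
print(normalized.mean(axis=0).round(6))        # ~0 in every dimension
```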

A Study on Robust Feature Vector Extraction for Fault Detection and Classification of Induction Motor in Noise Circumstance (잡음 환경에서의 유도 전동기 고장 검출 및 분류를 위한 강인한 특징 벡터 추출에 관한 연구)

  • Hwang, Chul-Hee;Kang, Myeong-Su;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information, v.16 no.12, pp.187-196, 2011
  • Induction motors play a vital role in the aeronautical and automotive industries, so many researchers have studied fault detection and classification systems for induction motors to minimize the economic damage caused by their faults. For this reason, this paper extracts robust feature vectors from normal and abnormal vibration signals of an induction motor in noisy environments: partial autocorrelation (PARCOR) coefficients, log spectrum powers (LSP), cepstrum coefficients mean (CCM), and mel-frequency cepstrum coefficients (MFCC). We then classify different fault types of the induction motor by using the extracted feature vectors as inputs to a neural network. To find optimal feature vectors, this paper evaluates classification performance with 2 to 20 features. Experimental results show that five to six features are enough to give almost 100% classification accuracy, except for the CCM features. Furthermore, since vibration signals can include noise components caused by the surroundings, we add white Gaussian noise to the original vibration signals and then evaluate classification performance again. The evaluation results show that LSP is the most robust in noisy environments, followed by PARCOR and MFCC.
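
The "cepstrum coefficients mean" (CCM) feature named here can be read as the per-signal average of framewise real-cepstrum coefficients. The sketch below computes such an average and adds white Gaussian noise at a chosen SNR, mirroring the robustness test; the frame length, coefficient count, and SNR value are assumptions.

```python
import numpy as np

def ccm(signal, frame_len=256, n_coeffs=12):
    """Average of framewise real-cepstrum coefficients (one reading of CCM)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    spectrum = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    cepstra = np.fft.irfft(np.log(spectrum + 1e-12), axis=1)[:, 1:n_coeffs + 1]
    return cepstra.mean(axis=0)

def add_awgn(signal, snr_db):
    """Add white Gaussian noise at the requested signal-to-noise ratio."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    noise = np.random.default_rng(1).normal(scale=np.sqrt(p_noise), size=len(signal))
    return signal + noise

vib = np.sin(2 * np.pi * 60 * np.arange(16000) / 8000)   # toy vibration signal
print(ccm(vib))
print(ccm(add_awgn(vib, snr_db=10)))                     # same feature under noise
```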

Formant-broadened CMS Using the Log-spectrum Transformed from the Cepstrum (켑스트럼으로부터 변환된 로그 스펙트럼을 이용한 포먼트 평활화 켑스트럴 평균 차감법)

  • Kim, Yu-Jin;Jeong, Hye-Kyung;Jeong, Jae-Ho
    • The Journal of the Acoustical Society of Korea, v.21 no.4, pp.361-373, 2002
  • In this paper, we propose a channel normalization method to improve the performance of CMS (cepstral mean subtraction), which is widely adopted to normalize channel variation for speech and speaker recognition. CMS, which estimates the channel effect by averaging the long-term cepstrum, has the weakness that the estimated channel is biased by the formants of voiced speech, which carry useful speech information. The proposed formant-broadened cepstral mean subtraction (FBCMS) is based on two facts: formants can be found easily in the log spectrum, which is obtained from the cepstrum by a Fourier transform, and the formants correspond to the dominant poles of the all-pole model commonly used to model the vocal tract. FBCMS selects the poles to be broadened from the log spectrum alone, without polynomial factorization, and produces a formant-broadened cepstrum by widening the bandwidths of the formant poles. The channel cepstrum can then be estimated effectively by averaging the formant-broadened cepstral coefficients. We performed experiments comparing FBCMS with conventional CMS and pole-filtered CMS using four simulated telephone channels. In the channel estimation experiment, we evaluated the cepstral distance between the real channel and the estimated channel and found that the proposed method yields a mean cepstrum closer to the channel cepstrum by reducing the bias of the mean cepstrum toward the speech. In a text-independent speaker identification experiment, the proposed method was superior to conventional CMS and comparable to pole-filtered CMS. Consequently, the proposed method normalizes channel variation efficiently within the framework of conventional CMS.
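
The cepstrum/log-spectrum relationship the paper exploits is just a Fourier pair. The sketch below shows that round trip, and uses uniform exponential cepstral weighting (c_n -> c_n * r^n, which moves a pole of radius rho to radius r*rho and so widens its bandwidth) as a crude stand-in for the paper's selective broadening of the formant poles only; the liftering factor and toy cepstrum are assumptions.

```python
import numpy as np

N = 512                                   # FFT size (assumption)
n = np.arange(1, 40)
c = np.zeros(N)
c[1:40] = 0.5 ** n / n                    # cepstrum of 1/(1 - 0.5 z^-1)

# Cepstrum -> log spectrum is a plain Fourier transform.
log_spec = np.real(np.fft.fft(c))

# Uniform exponential liftering broadens every pole's bandwidth;
# FBCMS instead broadens only the poles that appear as formant
# peaks in log_spec.
r = 0.9
c_broad = c * r ** np.arange(N)
broad_spec = np.real(np.fft.fft(c_broad))
print(log_spec.max().round(3), broad_spec.max().round(3))  # peak is flattened
```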

Voice conversion using low dimensional vector mapping (낮은 차원의 벡터 변환을 통한 음성 변환)

  • Lee, Kee-Seung;Doh, Won;Youn, Dae-Hee
    • Journal of the Korean Institute of Telematics and Electronics S, v.35S no.4, pp.118-127, 1998
  • In this paper, we propose a voice personality transformation method that makes one person's voice sound like another person's. The vocal tract transfer function is used as the transformation parameter. Compared with previous methods, the proposed method obtains high-quality transformed speech with low computational complexity. Conversion between vocal tract transfer functions is implemented by a linear mapping based on soft clustering, in which the mean LPC cepstrum coefficients and the mean-removed LPC cepstrum, modeled as a low-dimensional vector, are used as transformation parameters. To evaluate the performance of the proposed method, mapping rules are generated from 61 Korean words uttered by two male speakers and one female speaker. These rules are then applied to 9 sentences uttered by the same speakers, and objective evaluations and subjective listening tests are performed on the transformed speech.
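
A minimal sketch of a soft-clustering linear mapping of the kind the abstract describes: each source cepstral vector is converted as a responsibility-weighted sum of per-cluster linear transforms. The cluster count, the softmax responsibilities, and the randomly chosen matrices are illustrative assumptions; the paper's actual mapping rules are trained from parallel speech data.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 12, 4                                # cepstrum order, cluster count
centroids = rng.normal(size=(K, D))         # source-space cluster centers
W = rng.normal(scale=0.3, size=(K, D, D))   # per-cluster linear maps
b = rng.normal(scale=0.1, size=(K, D))      # per-cluster offsets

def convert(x, beta=1.0):
    """Soft-clustered linear mapping: weight each cluster's transform
    by a softmax responsibility based on distance to its centroid."""
    d2 = ((centroids - x) ** 2).sum(axis=1)
    w = np.exp(-beta * d2)
    w /= w.sum()
    return sum(w[k] * (W[k] @ x + b[k]) for k in range(K))

x_src = rng.normal(size=D)                  # one source cepstral vector
print(convert(x_src).round(3))
```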

Voice personality transformation using an orthogonal vector space conversion (직교 벡터 공간 변환을 이용한 음성 개성 변환)

  • Lee, Ki-Seung;Park, Kun-Jong;Youn, Dae-Hee
    • Journal of the Korean Institute of Telematics and Electronics B, v.33B no.1, pp.96-107, 1996
  • A voice personality transformation algorithm using an orthogonal vector space conversion is proposed in this paper. Voice personality transformation is the process of changing one person's acoustic features (source) to those of another person (target). In this paper, personality transformation is achieved by changing the LPC cepstrum coefficients, the excitation spectrum, and the pitch contour. An orthogonal vector space conversion technique is proposed to transform the LPC cepstrum coefficients: the transformation is implemented by principal component decomposition using the Karhunen-Loeve transformation, followed by a minimum mean-square error coordinate transformation (MSECT). Additionally, we propose a pitch contour modification method to transform the prosodic characteristics of a speaker, in which reference pitch patterns are first built for the source and target speakers and the source pitch contour is then modified toward the target speaker's pattern. The experimental results show the effectiveness of the proposed algorithm in both subjective and objective evaluations.
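
A hedged sketch of the two stages the abstract names: a Karhunen-Loeve (principal component) transform of the cepstral vectors, followed by a minimum mean-square error linear map fitted between the two orthogonal spaces. The synthetic source/target data and dimensions are assumptions, not the paper's corpus.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                   # source LPC cepstra (toy)
Y = X @ rng.normal(scale=0.5, size=(12, 12)) \
    + 0.1 * rng.normal(size=(500, 12))           # paired target cepstra (toy)

def klt(X):
    """Karhunen-Loeve transform: project onto the eigenvectors of the
    covariance (computed here via SVD of the centered data)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt.T, mu, Vt

Zx, mu_x, Vx = klt(X)
Zy, mu_y, Vy = klt(Y)

# MMSE coordinate transformation: least-squares linear map between the
# two orthogonal spaces (a stand-in for the paper's MSECT step).
A, *_ = np.linalg.lstsq(Zx, Zy, rcond=None)

def transform(x):
    z = (x - mu_x) @ Vx.T @ A      # source -> target KLT coordinates
    return z @ Vy + mu_y           # back to the cepstral domain

print(transform(X[0]).round(3))
```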

Speaker Verification Using Hidden LMS Adaptive Filtering Algorithm and Competitive Learning Neural Network (Hidden LMS 적응 필터링 알고리즘을 이용한 경쟁학습 화자검증)

  • Cho, Seong-Won;Kim, Jae-Min
    • The Transactions of the Korean Institute of Electrical Engineers D, v.51 no.2, pp.69-77, 2002
  • Speaker verification can be classified into two categories: text-dependent and text-independent speaker verification. In this paper, we discuss text-dependent speaker verification, in which the system determines whether the vocal characteristics of a speaker match those of a specific person. We record speaker data with a sound card under various noisy conditions, apply a new hidden LMS (least mean square) adaptive filtering algorithm to the recordings, and extract LPC (linear predictive coding) cepstrum coefficients as feature vectors. Finally, we use a competitive learning neural network for speaker verification. The proposed hidden LMS adaptive filter, realized with a neural network, reduces noise and enhances features under various noisy conditions. We construct a separate neural network for each speaker, which makes it unnecessary to retrain the whole network when a new speaker is added and so makes the system easy to expand. Experiments show that the proposed method improves speaker verification performance.
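
For reference, since the entry builds on it, the classic LMS adaptive filter nudges its taps along the instantaneous error gradient at each sample. The tap count, step size, and toy signals below are assumptions; the paper's "hidden" variant embeds this adaptation inside a neural network.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.005):
    """Classic LMS: y[n] = w.x_n, e[n] = d[n] - y[n], w += 2*mu*e[n]*x_n."""
    w = np.zeros(n_taps)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for i in range(n_taps, len(x)):
        x_n = x[i - n_taps:i][::-1]        # most recent samples first
        y[i] = w @ x_n
        e[i] = d[i] - y[i]
        w += 2 * mu * e[i] * x_n
    return y, e, w

rng = np.random.default_rng(0)
t = np.arange(4000)
noisy = np.sin(2 * np.pi * t / 50) + rng.normal(scale=0.5, size=len(t))
# Line-enhancer setup: predict the noisy signal from its own past; the
# periodic part is predictable and appears in y, the noise lands in e.
y, e, w = lms_filter(noisy[:-1], noisy[1:])
print(np.mean(e[500:] ** 2).round(3))      # error power approaches noise power
```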

Engine Fault Diagnosis Using Sound Source Analysis Based on Hidden Markov Model (HMM기반 소음분석에 의한 엔진고장 진단기법)

  • Le, Tran Su;Lee, Jong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences, v.39A no.5, pp.244-250, 2014
  • The most serious engine faults are those that occur within the engine. Traditional engine fault diagnosis is highly dependent on the engineer's technical skill and has a high failure rate. Neural networks and support vector machines have previously been proposed as diagnosis models. In this paper, noisy sound from faulty engines is represented by mel-frequency cepstrum coefficients, zero-crossing rate, mean square, and fundamental frequency features, which are used in a hidden Markov model for diagnosis. Our experimental results indicate that the proposed method performs the diagnosis with a high accuracy of about 98% across all eight fault types.
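
A skeletal version of the pipeline as the abstract describes it: framewise features (just zero-crossing rate and RMS here, for brevity) feed one Gaussian HMM per fault class, and a recording is assigned to the class whose model scores it highest. The hmmlearn usage, the reduced feature set, and the random training data are assumptions, not the paper's setup.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed third-party dependency

def frame_features(signal, frame_len=256):
    """Per-frame zero-crossing rate and RMS, two of the paper's features."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return np.column_stack([zcr, rms])

rng = np.random.default_rng(0)
# Toy stand-ins for two engine conditions (real data: recorded engine sound).
classes = {"normal": rng.normal(size=32000),
           "faulty": rng.normal(size=32000) + np.sin(np.arange(32000) / 5)}

models = {}
for label, sig in classes.items():
    models[label] = GaussianHMM(n_components=3, covariance_type="diag",
                                n_iter=20).fit(frame_features(sig))

# Diagnose an unseen recording by maximum log-likelihood over the models.
test = frame_features(rng.normal(size=8000))
print(max(models, key=lambda lbl: models[lbl].score(test)))
```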

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services, v.24 no.2, pp.11-18, 2023
  • Recently, in the field of speech emotion recognition (SER), many studies have been conducted to improve accuracy using voice features and modeling. In addition to modeling studies that improve the accuracy of existing voice emotion recognition, various studies using voice features are also being conducted. In this paper, focusing on the fact that vocal emotion is related to the flow of time, voice files are separated into chunks by time interval in a time-series manner. After the voice files are separated, we propose a model that classifies the emotions of the speech data by extracting the speech features mel spectrogram, chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstrum coefficients (MFCC), and applying them to recurrent neural network models used for sequential data processing. In the proposed method, voice features are extracted from all files using the 'librosa' library and applied to the neural network models. The experiments compare and analyze the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) English dataset.
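
Since the abstract names its exact feature set and the librosa library, a short sketch of that extraction step follows: split the waveform into fixed-length chunks and stack the per-chunk mean of each feature. The chunk length, file path, and mean aggregation are assumptions about details the abstract leaves open.

```python
import numpy as np
import librosa

def chunk_features(path, chunk_sec=2.0):
    """Mel, chroma, ZCR, RMS and MFCC means for each time chunk of a file."""
    y, sr = librosa.load(path, sr=None)           # path is hypothetical
    hop = int(chunk_sec * sr)
    rows = []
    for start in range(0, max(len(y) - hop + 1, 1), hop):
        c = y[start:start + hop]
        feats = np.concatenate([
            librosa.feature.melspectrogram(y=c, sr=sr).mean(axis=1),
            librosa.feature.chroma_stft(y=c, sr=sr).mean(axis=1),
            librosa.feature.zero_crossing_rate(c).mean(axis=1),
            librosa.feature.rms(y=c).mean(axis=1),
            librosa.feature.mfcc(y=c, sr=sr, n_mfcc=13).mean(axis=1),
        ])
        rows.append(feats)
    return np.stack(rows)    # (n_chunks, n_features): input to RNN/LSTM/GRU

# seq = chunk_features("Ses01F_impro01.wav")      # hypothetical IEMOCAP clip
```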