• Title/Summary/Keyword: cepstral

Search Result 298

A study on Auto-Segmentation Improvement for a Large Speech DB (대용량 음성 D/B 구축을 위한 AUTO-SEGMENTATION에 관한 연구)

  • Lee Byong-soon;Chang Sungwook;Yang Sung-il;Kwon Y.
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.209-212 / 2000
  • This paper concerns improving auto-segmentation for building the large speech database needed for speech recognition. Fifty Korean phonemes (including noise and silence) were defined, and 39-dimensional features consisting of MFCC (Mel Frequency Cepstral Coefficients), $\Delta$MFCC, and $\Delta\Delta$MFCC were extracted; auto-segmentation was then performed using HMM training and the CCS (Constrained Clustering Segmentation) algorithm (1). Most phonemes were segmented within the error bound $(\pm25ms)$, but segmentation beyond this bound occurred frequently for short silences and for vowel + voiced-consonant sequences ('ㅁ', 'ㄴ', 'ㄹ', 'ㅇ'). By compensating for these boundary errors, which depend on the phonemic environment, using the MLR (Maximum Likelihood Ratio) values of the Wavelet-transformed signal in each interval, the error range was reduced and the auto-segmentation performance was improved.

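
The 39-dimensional MFCC + $\Delta$MFCC + $\Delta\Delta$MFCC stack described in this abstract can be sketched with the standard regression-based delta formula. This is a generic NumPy illustration, not the authors' implementation; the window width `N=2` and edge padding are assumptions:

```python
import numpy as np

def delta(feat, N=2):
    """Regression-based delta features over a (frames, coeffs) matrix."""
    padded = np.pad(feat, ((N, N), (0, 0)), mode="edge")
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    num = sum(n * (padded[N + n:N + n + len(feat)] - padded[N - n:N - n + len(feat)])
              for n in range(1, N + 1))
    return num / denom

# Stack static, delta, and delta-delta coefficients into 39-dim vectors.
mfcc = np.random.randn(120, 13)          # 13 static MFCCs per frame
feats39 = np.hstack([mfcc, delta(mfcc), delta(delta(mfcc))])
```

On a linearly increasing coefficient track the delta is the slope, which is a quick sanity check on the regression weights.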

Estimation scatterer spacing with spectral response (주파수 응답특성을 이용한 산란체 간격 추정)

  • Kim Eunhye;Yoon Kwan-seob;Na Jungyul
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.447-450 / 2002
  • We studied a method for estimating scatterer spacing by analyzing backscattered signals acquired from simply shaped scatterers arranged at regular intervals in an acoustic water tank. The scattering characteristics of the received signal were interpreted as scatterer spacing by means of cepstral peaks. After the spacing-estimation method was verified by numerical calculations using impulse-response signals, it was applied to the backscattered signals acquired in the tank experiment and the results were compared.

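
The cepstral-peak idea used above — an echo at lag $\tau$ puts ripple of period $1/\tau$ in the magnitude spectrum, which the real cepstrum turns into a peak at quefrency $\tau$ — can be sketched on a synthetic signal. This is a NumPy illustration, not the paper's tank data; the sound speed of 1500 m/s, the two-way-path spacing formula, and the low-quefrency cutoff are assumptions:

```python
import numpy as np

def cepstral_peak_delay(signal, fs, min_delay_s=1e-4):
    """Estimate an echo delay from the real cepstrum of a signal."""
    spectrum = np.fft.rfft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    # Skip the low-quefrency region dominated by the spectral envelope.
    start = int(min_delay_s * fs)
    peak = start + np.argmax(cepstrum[start:len(cepstrum) // 2])
    return peak / fs

# Synthetic check: a direct pulse plus an echo 2 ms later.
fs = 48000
x = np.zeros(4096)
x[100] = 1.0
x[100 + int(0.002 * fs)] += 0.5
tau = cepstral_peak_delay(x, fs)
# Backscatter geometry: path difference 2d, so d = c * tau / 2.
spacing = 1500.0 * tau / 2
```

The echo ripple appears as a cepstral peak at exactly the echo lag, independent of where the direct arrival sits in the record.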

Combination of Classifiers Decisions for Multilingual Speaker Identification

  • Nagaraja, B.G.;Jayanna, H.S.
    • Journal of Information Processing Systems / v.13 no.4 / pp.928-940 / 2017
  • State-of-the-art speaker recognition systems may work well for the English language. However, if the same system is used to recognize speakers of other languages, it may yield poor performance. In this work, the decisions of a Gaussian mixture model-universal background model (GMM-UBM) and a learning vector quantization (LVQ) classifier are combined to improve the recognition performance of a multilingual speaker identification system. The difference between these classifiers lies in their modeling techniques: the former is based on a probabilistic approach, while the latter is based on the fine-tuning of neurons. Since the approaches differ, each modeling technique identifies a different set of speakers for the same database. Therefore, the decisions of the classifiers can be combined to improve performance. In this study, multitaper mel-frequency cepstral coefficients (MFCCs) are used as features, and monolingual and cross-lingual speaker identification studies are conducted using NIST-2003 and our own database. The experimental results show that the combined system improves performance by nearly 10% compared with the individual classifiers.
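
The paper's specific GMM-UBM/LVQ decision-combination rule is not reproduced here, but a minimal weighted-sum score fusion conveys the general idea. The per-classifier min-max normalization and the equal weights are assumptions of this sketch:

```python
import numpy as np

def fuse_scores(scores_a, scores_b, w=0.5):
    """Weighted-sum fusion of two classifiers' per-speaker scores.

    Each score list is min-max normalized first, so differently scaled
    outputs (e.g., log-likelihoods vs. distances) become comparable.
    """
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w * norm(scores_a) + (1 - w) * norm(scores_b)

# Identified speaker = argmax of the fused per-speaker score.
fused = fuse_scores([0.2, 0.9, 0.1], [10.0, 30.0, 20.0])
speaker = int(np.argmax(fused))
```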

Korean Digit Recognition Using Cepstrum coefficients and Frequency Sensitive Competitive Learning (Cepstrum 계수와 Frequency Sensitive Competitive Learning 신경회로망을 이용한 한국어 인식.)

  • Lee, Su-Hyuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference / 1994.11a / pp.329-331 / 1994
  • In this paper, we present a speaker-dependent Korean isolated-digit recognition system. At the preprocessing step, LPC cepstral coefficients are extracted from the speech signal and used as the input of a Frequency Sensitive Competitive Learning (FSCL) neural network. Postprocessing is carried out based on the winning-neuron histogram. Experimental results indicate the feasibility of commercial auto-dial telephones.

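
The core FSCL update referenced above can be sketched in a few lines: the winner is chosen by distance scaled by its win count, so rarely winning neurons stay competitive, and only the winner moves toward the input. This is a generic sketch of the technique, not the paper's network; the learning rate and the linear count scaling are assumptions:

```python
import numpy as np

def fscl_step(weights, counts, x, lr=0.1):
    """One frequency-sensitive competitive learning update (in place)."""
    d = np.linalg.norm(weights - x, axis=1) * counts  # count-scaled distances
    w = int(np.argmin(d))                             # frequency-sensitive winner
    weights[w] += lr * (x - weights[w])               # move winner toward input
    counts[w] += 1
    return w

weights = np.array([[0.0], [10.0]])   # two codebook vectors
counts = np.array([1.0, 1.0])         # per-neuron win counts
winner = fscl_step(weights, counts, np.array([1.0]))
```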

Robust Speech Recognition by Utilizing Class Histogram Equalization (클래스 히스토그램 등화 기법에 의한 강인한 음성 인식)

  • Suh, Yung-Joo;Kim, Hor-Rin;Lee, Yun-Keun
    • MALSORI / no.60 / pp.145-164 / 2006
  • This paper proposes class histogram equalization (CHEQ) to compensate noisy acoustic features for robust speech recognition. CHEQ aims to compensate for the acoustic mismatch between training and test speech recognition environments and to reduce the limitations of conventional histogram equalization (HEQ). In contrast to HEQ, CHEQ adopts multiple class-specific distribution functions for the training and test environments and equalizes the features by using their class-specific training and test distributions. According to the class-information extraction method, CHEQ takes two forms: hard-CHEQ, based on vector quantization, and soft-CHEQ, using a Gaussian mixture model. Experiments on the Aurora 2 database confirmed the effectiveness of CHEQ, producing a relative word error reduction of 61.17% over the baseline mel-cepstral features and of 19.62% over conventional HEQ.

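
Conventional HEQ, which CHEQ extends with class-specific distributions, maps each test feature through its empirical CDF onto the training distribution. A minimal per-dimension sketch in NumPy (not the paper's class-based variant; the quantile midpoint convention is an assumption):

```python
import numpy as np

def histogram_equalize(test_feat, train_feat):
    """Map 1-D test feature values so that their empirical CDF
    matches the empirical distribution of the training features."""
    ranks = np.argsort(np.argsort(test_feat))       # rank of each test value
    quantiles = (ranks + 0.5) / len(test_feat)      # empirical CDF positions
    return np.quantile(np.sort(train_feat), quantiles)

# A shifted test distribution (channel/noise mismatch) is pulled
# back onto the training range.
train = np.linspace(0.0, 1.0, 1001)
test = np.linspace(5.0, 7.0, 101)
equalized = histogram_equalize(test, train)
```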

A STUDY ON THE SPEECH SYNTHESIS-BY-RULE SYSTEM APPLIED MULTIBAND EXCITATION SIGNAL

  • Kyung, Younjeong;Kim, Geesoon;Lee, Hwangsoo;Lee, Yanghee
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06a / pp.1098-1103 / 1994
  • In this paper, we design and implement a Korean speech synthesis-by-rule system. The system applies a multiband excitation signal to voiced sounds, obtained by mixing an impulse spectrum and a white noise spectrum. We find that the quality of the synthesized speech is improved by this approach. We also classify the voiced sounds by a cepstral Euclidean distance measure to reduce memory overhead: the representative excitation signal of each group of voiced sounds is used as the excitation signal in synthesis. This method does not degrade the quality of the synthesized speech. Experimental results show that it eliminates the "buzziness" of the synthesized speech and reduces its spectral distortion.

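
The impulse/noise spectral mixing described above can be sketched in the frequency domain: impulse-train bins are kept in voiced bands and replaced by white-noise bins elsewhere. This is a generic sketch of the multiband-excitation idea, not the paper's analysis-driven band decisions; the equal-width bands and the seed are assumptions:

```python
import numpy as np

def multiband_excitation(n_samples, f0, fs, voiced_mask, rng=None):
    """Build an excitation whose spectrum takes impulse-train bins in
    voiced bands and white-noise bins in unvoiced bands."""
    if rng is None:
        rng = np.random.default_rng(0)
    period = int(fs / f0)
    impulses = (np.arange(n_samples) % period == 0).astype(float)
    noise = rng.standard_normal(n_samples)
    imp_spec = np.fft.rfft(impulses)
    noise_spec = np.fft.rfft(noise)
    n_bands = len(voiced_mask)
    # Assign each FFT bin to one of the equal-width frequency bands.
    band_of_bin = np.minimum(
        (np.arange(len(imp_spec)) * n_bands) // len(imp_spec), n_bands - 1)
    mixed = np.where(voiced_mask[band_of_bin], imp_spec, noise_spec)
    return np.fft.irfft(mixed, n_samples)

# A fully voiced mask reduces to a plain impulse train.
exc = multiband_excitation(1024, 100.0, 8000, np.ones(8, dtype=bool))
```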

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) is a method of recognizing a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, judging by the distribution of emotion recognition accuracies across neural network models, the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
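
The building block a 2D-CNN applies to an MFCC matrix — treating the (coefficients x frames) array as a single-channel image — can be sketched as one "valid" convolution layer with ReLU. This is an illustrative NumPy sketch, not the paper's tuned architecture; the kernel size and averaging kernel are assumptions:

```python
import numpy as np

def conv2d_relu(img, kernel):
    """One 'valid' 2-D convolution (cross-correlation) layer with ReLU."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU

# Treat a (coeffs x frames) MFCC matrix as a single-channel image.
mfcc_img = np.ones((13, 40))
feature_map = conv2d_relu(mfcc_img, np.ones((3, 3)) / 9.0)
```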

Speech Recognition Using Noise Processing in Spectral Dimension (스펙트럴 차원의 잡음처리를 이용한 음성인식)

  • Lee, Gwang-seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.738-741 / 2009
  • This research is concerned with improving speech recognition results for noisy speech. We found that spectral subtraction and recovery of the valleys in the spectral envelope obtained from noisy speech are effective for improving recognition. In this research, the averaged spectral envelope obtained from vowel spectra is used to emphasize the valleys. The vocalic spectral information in the lower frequency range is emphasized, while the spectrum obtained from consonants is left unchanged. In simulation, the emphasis coefficients are varied in the cepstral domain. This method is applied to the recognition of noisy digits and improves the results.

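
The spectral subtraction step mentioned above has a classic per-frame form: subtract an estimated noise magnitude, keep a small spectral floor, and resynthesize with the noisy phase. This is a textbook sketch, not the paper's valley-recovery method; the `alpha` and `floor` parameters are assumptions:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=1.0, floor=0.01):
    """Magnitude-domain spectral subtraction for one analysis frame."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    # Subtract the noise estimate, but never go below a small floor.
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    # Resynthesize using the (noisy) phase of the original frame.
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)), len(frame))

# With a zero noise estimate the frame passes through unchanged.
t = np.arange(256)
frame = np.cos(2 * np.pi * 8 * t / 256)
out = spectral_subtract(frame, np.zeros(129))
```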

Enhancement of Mobile Authentication System Performance based on Multimodal Biometrics (다중 생체인식 기반의 모바일 인증 시스템 성능 개선)

  • Jeong, Kanghun;Kim, Sanghoon;Moon, Hyeonjoon
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.342-345 / 2013
  • This paper proposes a personal authentication system based on multimodal biometrics in a mobile environment. Face recognition and speaker recognition were chosen for the multimodal biometrics, and the recognition scenario of the system is as follows. For face recognition, face-region preprocessing is performed through Modified Census Transform (MCT)-based face detection and k-means cluster-analysis-based eye detection, and a principal component analysis (PCA)-based face verification system is implemented. For speaker recognition, speech endpoints are detected and Mel frequency cepstral coefficient (MFCC) features are extracted, and a dynamic time warping (DTW)-based speaker verification system is implemented. The individual biometrics are then fused by the method proposed in this paper to improve the recognition rate.
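
The DTW core of the speaker-verification step above is the standard dynamic-programming alignment between two feature sequences. A minimal sketch (the Euclidean local cost is an assumption; the paper's endpointing and fusion are not reproduced):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two (frames, dims) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a sequence aligns with zero cost.
a = np.arange(5.0).reshape(-1, 1)
b = np.array([[0.0], [0.0], [1.0], [2.0], [3.0], [4.0]])
```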

Music Genre Classification Based on Timbral Texture and Rhythmic Content Features

  • Baniya, Babu Kaji;Ghimire, Deepak;Lee, Joonwhon
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.204-207 / 2013
  • Music genre classification is an essential component of a music information retrieval system. Two components are important for better genre classification: audio feature extraction and the classifier. This paper incorporates two kinds of features for genre classification: timbral texture and rhythmic content features. Timbral texture contains several spectral and Mel-frequency Cepstral Coefficient (MFCC) features. Before choosing a timbral feature, we explore which features contribute less to genre discrimination; this facilitates reduction of the feature dimension. For the timbral features, central moments up to the 4th order and the covariance components of mutual features are considered to improve the overall classification result. For the rhythmic content, features extracted from the beat histogram are selected. In this paper, an Extreme Learning Machine (ELM) with bagging is used as the classifier. Based on the proposed feature sets and classifier, experiments are performed with the well-known GTZAN dataset, which contains ten different music genres. The proposed method achieves better classification accuracy than the existing approaches.
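
The "central moments up to the 4th order" summarization of frame-level timbral features can be sketched as follows. This NumPy illustration covers only the per-coefficient moments; the paper's covariance components and beat-histogram features are omitted:

```python
import numpy as np

def timbral_summary(mfcc):
    """Summarize a (frames, coeffs) MFCC matrix with per-coefficient
    central moments up to 4th order, as one fixed-length vector."""
    mu = mfcc.mean(axis=0)            # 1st moment (mean)
    centered = mfcc - mu
    var = centered.var(axis=0)        # 2nd central moment
    m3 = (centered ** 3).mean(axis=0) # 3rd central moment
    m4 = (centered ** 4).mean(axis=0) # 4th central moment
    return np.concatenate([mu, var, m3, m4])

# 13 coefficients -> a 52-dimensional clip-level feature vector.
vec = timbral_summary(np.random.randn(200, 13))
```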