• Title/Summary/Keyword: 분산 음성 인식

56 search results

State-Dependent Weighting of Multiple Feature Parameters in HMM Recognizer (HMM 인식기에서 상태별 다중 특징 파라미터 가중)

  • 손종목;배건성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.4
    • /
    • pp.47-52
    • /
    • 1999
  • In this paper, we propose a new approach that weights each feature parameter according to its dispersion and its degree of contribution to the recognition rate. We determine a total distribution factor, proportional to the recognition rate of each feature parameter, and a dispersion factor according to the dispersion of each feature parameter. Then, we determine state-dependent weights using the total distribution factor and the dispersion factor. To verify the validity of the proposed approach, recognition experiments were performed using a PLU (Phoneme-Like Unit)-based HMM. Experimental results showed an improvement of 7.7% in recognition rate with the proposed method.

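The weighting scheme described in the abstract above can be sketched as follows. The combination rule and the normalization are illustrative assumptions of this sketch, not the paper's exact formulation: each feature stream's log-likelihood is scaled by a weight built from a contribution factor (proportional to that feature's standalone recognition rate) and a dispersion factor (features with smaller spread receive larger weight).

```python
# Sketch: weighting multiple feature streams in an HMM recognizer.
# The combination rule below is an illustrative assumption.

def feature_weights(recog_rates, dispersions):
    """Build one weight per feature from a contribution factor
    (proportional to each feature's standalone recognition rate)
    and a dispersion factor (inverse of each feature's spread).
    Weights are normalized to sum to the number of features, so
    unweighted scoring is the special case of all-ones."""
    contrib = [r / sum(recog_rates) for r in recog_rates]
    inv_disp = [1.0 / d for d in dispersions]
    disp = [v / sum(inv_disp) for v in inv_disp]
    raw = [c * s for c, s in zip(contrib, disp)]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]

def weighted_state_loglik(stream_logliks, weights):
    """Combine per-feature log-likelihoods for one HMM state."""
    return sum(w * ll for w, ll in zip(weights, stream_logliks))

# feature 0: higher standalone recognition rate, smaller dispersion
w = feature_weights([0.90, 0.80], [0.5, 2.0])
score = weighted_state_loglik([-3.0, -5.0], w)
```

In a state-dependent variant, a separate weight vector would be kept per HMM state rather than one global vector as here.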

Speech extraction based on AuxIVA with weighted source variance and noise dependence for robust speech recognition (강인 음성 인식을 위한 가중화된 음원 분산 및 잡음 의존성을 활용한 보조함수 독립 벡터 분석 기반 음성 추출)

  • Shin, Ui-Hyeop;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.326-334
    • /
    • 2022
  • In this paper, we propose a speech enhancement algorithm as a pre-processing step for robust speech recognition in noisy environments. Auxiliary-function-based Independent Vector Analysis (AuxIVA) is performed with a weighted covariance matrix using time-varying variances, scaled by target masks that represent the time-frequency contributions of the target speech. The mask estimates can be obtained from a Neural Network (NN) pre-trained for speech extraction, or from diffuseness based on the Coherence-to-Diffuse power Ratio (CDR), to find the direct-sound component of the target speech. In addition, the outputs for omni-directional noise are closely chained by sharing the time-varying variances, similarly to independent subspace analysis or IVA. The AuxIVA-based speech extraction method is also performed in the Independent Low-Rank Matrix Analysis (ILRMA) framework by extending the Non-negative Matrix Factorization (NMF) of the noise outputs to Non-negative Tensor Factorization (NTF), to maintain the inter-channel dependency in the noise output channels. Experimental results on the CHiME-4 datasets demonstrate the effectiveness of the presented algorithms.
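One core ingredient named above, a spatial covariance weighted by time-varying variances scaled with a target mask, can be sketched roughly as below. The variance estimate and the mask scaling are simplifying assumptions of this illustration; the paper's actual AuxIVA update rules are more involved.

```python
import numpy as np

def weighted_covariance(X, mask, eps=1e-6):
    """Mask-weighted spatial covariance for one frequency bin.
    X:    (channels, frames) complex STFT observations
    mask: (frames,) target mask in [0, 1] scaling the time-varying
          source variance (illustrative scaling, not the paper's
          exact rule).
    Returns a (channels, channels) Hermitian matrix of the kind used
    as the quadratic form in AuxIVA-style auxiliary updates."""
    # crude time-varying variance estimate of the target per frame
    var = np.mean(np.abs(X) ** 2, axis=0)          # (frames,)
    # scale by the mask, then weight each frame by 1 / variance
    w = mask / np.maximum(var, eps)                 # (frames,)
    V = (X * w) @ X.conj().T / X.shape[1]
    return 0.5 * (V + V.conj().T)                   # enforce Hermitian

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 100)) + 1j * rng.standard_normal((4, 100))
mask = rng.random(100)
V = weighted_covariance(X, mask)
```

Because the per-frame weights are non-negative, the returned matrix is positive semidefinite, which the auxiliary-function updates rely on.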

Cepstrum PDF Normalization Method for Speech Recognition in Noise Environment (잡음환경에서의 음성인식을 위한 켑스트럼의 확률분포 정규화 기법)

  • Suk Yong Ho;Lee Hwang-Soo;Choi Seung Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4
    • /
    • pp.224-229
    • /
    • 2005
  • In this paper, we propose a novel cepstrum normalization method which normalizes the probability density function (pdf) of the cepstrum for robust speech recognition in additive noise environments. While conventional methods normalize only the first- and/or second-order statistics, such as the mean and/or variance of the cepstrum, the proposed method fully normalizes the statistics of the cepstrum by making the pdfs of the clean and noisy cepstra identical to each other. For the target pdf, the generalized Gaussian distribution is selected to cover various densities. In the recognition phase, we devise a table lookup method to save computational costs. Speaker-independent isolated-word recognition experiments show that the proposed method gives improved performance compared with the conventional methods, especially in heavy noise environments.
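The pdf-normalization idea can be sketched as a rank-based mapping of noisy cepstral coefficients onto a target distribution. For brevity this sketch uses a plain Gaussian target via the Python stdlib's NormalDist rather than the generalized Gaussian of the paper, and the empirical-CDF estimate is an assumption of this illustration:

```python
from statistics import NormalDist

def pdf_normalize(values, target=NormalDist(0.0, 1.0)):
    """Map each coefficient through its empirical CDF and then
    through the inverse CDF of the target distribution, so the
    output's distribution matches the target regardless of how
    noise deformed the input's pdf. (Gaussian target used here
    for simplicity; the paper uses a generalized Gaussian.)"""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n          # empirical CDF, avoiding 0 and 1
        out[i] = target.inv_cdf(p)
    return out

z = pdf_normalize([3.1, -0.4, 12.0, 5.5, 4.2])
```

The mapping is monotone, so the ordering of coefficients is preserved; the table lookup mentioned in the abstract would replace the per-value `inv_cdf` evaluation at recognition time.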

Generating Speech feature vectors for Effective Emotional Recognition (효과적인 감정인식을 위한 음성 특징 벡터 생성)

  • Sim, In-woo;Han, Eui Hwan;Cha, Hyung Tai
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.687-690
    • /
    • 2019
  • In this paper, we generate effective feature vectors for emotion recognition. For this purpose, the RAVDESS speech dataset was used, and speech signals representing four emotions (neutral, calm, happy, and sad) were selected. To extract effective features from among the MFCC coefficients 1–13, pitch, Zero-Crossing Rate (ZCR), and peak energy conventionally used for emotion recognition, we used the ratio of between-class to within-class variance. Experimental results confirmed that, among the feature vectors used for emotion recognition, peak energy, pitch, MFCC2, MFCC3, MFCC4, MFCC12, and MFCC13 are effective.
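The between-class to within-class variance ratio used above for ranking features can be sketched as a generic Fisher-ratio computation; the function and the toy data below are illustrative, not from the paper:

```python
def fisher_ratio(feature_by_class):
    """Ratio of between-class variance to within-class variance for
    one feature, given {class_label: [feature values]}. A larger
    ratio marks a more discriminative feature."""
    all_vals = [v for vals in feature_by_class.values() for v in vals]
    grand = sum(all_vals) / len(all_vals)
    between = within = 0.0
    for vals in feature_by_class.values():
        mean = sum(vals) / len(vals)
        between += len(vals) * (mean - grand) ** 2
        within += sum((v - mean) ** 2 for v in vals)
    return between / within

# a discriminative feature (class means far apart, tight classes)
good = {"happy": [5.0, 5.1, 4.9], "sad": [1.0, 1.1, 0.9]}
# a weak feature (classes overlap almost completely)
weak = {"happy": [3.0, 1.0, 5.0], "sad": [2.9, 1.1, 5.1]}
assert fisher_ratio(good) > fisher_ratio(weak)
```

Ranking each candidate feature by this ratio and keeping the top scorers yields a reduced feature vector of the kind the abstract reports.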

Applying feature normalization based on pole filtering to short-utterance speech recognition using deep neural network (심층신경망을 이용한 짧은 발화 음성인식에서 극점 필터링 기반의 특징 정규화 적용)

  • Han, Jaemin;Kim, Min Sik;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.1
    • /
    • pp.64-68
    • /
    • 2020
  • In a conventional speech recognition system using a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), cepstral feature normalization based on pole filtering was effective in improving the recognition of short utterances in noisy environments. In this paper, the usefulness of this method for a state-of-the-art speech recognition system using a Deep Neural Network (DNN) is examined. Experimental results on the AURORA 2 DB show that cepstral mean and variance normalization based on pole filtering improves the recognition performance of very short utterances compared to that without pole filtering, especially when there is a large mismatch between the training and test conditions.
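The baseline normalization named in the abstract, cepstral mean and variance normalization (CMVN), can be sketched as below. The pole-filtering step itself (modifying the LPC poles before estimating the statistics) is omitted here, so this shows only the plain utterance-level CMVN that pole filtering refines:

```python
import math

def cmvn(frames):
    """Utterance-level cepstral mean and variance normalization:
    per coefficient, subtract the utterance mean and divide by the
    utterance standard deviation.
    frames: list of equal-length cepstral vectors."""
    dims = len(frames[0])
    n = len(frames)
    means = [sum(f[d] for f in frames) / n for d in range(dims)]
    stds = []
    for d in range(dims):
        var = sum((f[d] - means[d]) ** 2 for f in frames) / n
        stds.append(math.sqrt(var) or 1.0)   # guard a zero std
    return [[(f[d] - means[d]) / stds[d] for d in range(dims)]
            for f in frames]

norm = cmvn([[1.0, 10.0], [3.0, 30.0], [2.0, 20.0]])
```

For very short utterances the statistics are estimated from few frames, which is exactly the regime where the abstract reports pole filtering helping.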

A study on the Voiced, Unvoiced and Silence Classification (유, 무성음 및 묵음 식별에 관한 연구)

  • 김명환;김순협
    • The Journal of the Acoustical Society of Korea
    • /
    • v.3 no.2
    • /
    • pp.46-58
    • /
    • 1984
  • This paper studies voiced, unvoiced, and silence classification for Korean speech recognition. A pattern recognition method is used to classify a given speech interval into the three classes of speech signal. The analysis parameters used are the zero-crossing rate of the speech signal, the log energy, the normalized first autocorrelation coefficient, the first predictor coefficient obtained from linear prediction analysis, and the energy of the prediction error. Speech intervals are classified by a minimum-distance rule derived under the assumption that the measured parameters are distributed according to a multidimensional Gaussian probability density function. Classifying with various combinations of the measured parameters, a classification error rate of less than 1% was obtained when the zero-crossing rate, the first predictor coefficient, and the energy of the prediction error were used as measurement parameters.

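Under a Gaussian assumption, the minimum-distance rule described above amounts to a Mahalanobis-distance classifier. A sketch with hypothetical class statistics follows; the per-class means and the diagonal covariances are made-up illustrations (the paper's statistics would be estimated from training data, possibly with full covariances):

```python
import math

# Hypothetical per-class statistics over the three winning parameters
# from the abstract: zero-crossing rate, first predictor coefficient,
# prediction-error energy. Diagonal covariances assumed for simplicity.
CLASSES = {
    "voiced":   {"mean": [0.1, 0.9, 0.05], "var": [0.01, 0.04, 0.010]},
    "unvoiced": {"mean": [0.6, 0.2, 0.30], "var": [0.04, 0.04, 0.020]},
    "silence":  {"mean": [0.3, 0.1, 0.01], "var": [0.02, 0.02, 0.001]},
}

def classify(x):
    """Assign x to the class at minimum Mahalanobis distance
    (diagonal-covariance case)."""
    def dist(stats):
        return math.sqrt(sum((xi - m) ** 2 / v
                             for xi, m, v in zip(x, stats["mean"],
                                                 stats["var"])))
    return min(CLASSES, key=lambda c: dist(CLASSES[c]))
```

Dividing each squared deviation by the class variance lets a parameter with small spread (such as the prediction-error energy in silence) dominate the decision where it is most reliable.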

Bayesian Fusion of Confidence Measures for Confidence Scoring (베이시안 신뢰도 융합을 이용한 신뢰도 측정)

  • 김태윤;고한석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.410-419
    • /
    • 2004
  • In this paper, we propose a method of confidence measure fusion under a Bayesian framework for speech recognition. Centralized and distributed schemes are considered for confidence measure fusion. Centralized fusion is feature-level fusion, which combines the values of the individual confidence scores and makes a final decision. In contrast, distributed fusion is decision-level fusion, which combines the individual decisions made by each confidence measuring method. Optimal Bayesian fusion rules for the centralized and distributed cases are presented. In isolated-word Out-of-Vocabulary (OOV) rejection experiments, centralized Bayesian fusion shows over 13% relative equal error rate (EER) reduction compared with the individual confidence measure methods, whereas distributed Bayesian fusion shows no significant performance increase.
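Centralized (feature-level) Bayesian fusion can be sketched as a log-likelihood-ratio combination of the individual confidence scores. Treating the scores as conditionally independent Gaussians is a naive-Bayes simplification made for this sketch, and all model parameters below are hypothetical:

```python
import math

def fused_llr(scores, accept_models, reject_models):
    """Sum of per-measure Gaussian log-likelihood ratios.
    scores: one confidence score per measure.
    accept_models / reject_models: per-measure (mean, std) under the
    'correct word' and 'OOV / misrecognition' hypotheses."""
    def log_gauss(x, mean, std):
        return -0.5 * ((x - mean) / std) ** 2 - math.log(std)
    return sum(log_gauss(s, *a) - log_gauss(s, *r)
               for s, a, r in zip(scores, accept_models, reject_models))

def accept(scores, accept_models, reject_models, threshold=0.0):
    """Final decision: accept when the fused LLR clears the threshold."""
    return fused_llr(scores, accept_models, reject_models) > threshold

acc = [(0.8, 0.10), (0.7, 0.15)]   # hypothetical per-measure models
rej = [(0.3, 0.10), (0.2, 0.15)]
ok = accept([0.85, 0.75], acc, rej)
```

A distributed (decision-level) scheme would instead fuse the binary accept/reject outputs of each measure, which discards the score magnitudes and, per the abstract, gained no significant improvement.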

Performance Assessment of Speech Recognizer using Lombard Speech (롬바드 음성을 이용한 음성인식기의 성능 평가)

  • Jung, Sung-Yun;Chung, Hyun-Yeol;Kim, Kyung-Tae
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.5
    • /
    • pp.59-68
    • /
    • 1994
  • This paper describes a performance assessment test, and an analysis of the test results, for a Korean speech recognizer that recognizes Lombard-affected speech in noisy environments, as basic performance assessment research. In the assessment test, standard speech data were first manipulated to approximate speech uttered in a noisy environment, and performance assessment tests were then carried out for each assessment item (type of noise, SNR) in two ways: one with Lombard-affected speech (LES), the other without (NLES). As a result, when a 90% recognition rate is set as the recognition limit, it was achieved at 10 dB SNR with LES, but only at 30 dB with NLES. This 20 dB difference in SNR indicates that the Lombard effect should be considered in real-world assessment tests. The type of noise did not affect recognizer performance in our tests. In evaluating several kinds of recognizers, ANOVA analysis showed that every assessment item affecting recognition performance could be quantified.


Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition (음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정)

  • Song, Taeyup;Lee, Kyungsun;Kim, Sung Soo;Lee, Jae-Won;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.4
    • /
    • pp.321-327
    • /
    • 2015
  • In this paper, we propose an algorithm for robust Visual Voice Activity Detection (VVAD) for enhanced speech recognition. In conventional VVAD algorithms, the motion of the lip region is found by applying optical flow or Chaos-inspired measures to detect visual speech frames. Optical-flow-based VVAD is difficult to adopt in driving scenarios due to its computational complexity, while Chaos-theory-based VVAD, although invariant to illumination changes, is sensitive to motion translations caused by the driver's head movements. The proposed Local Variance Histogram (LVH) is robust to pixel intensity changes from both illumination change and translation. Hence, for improved performance under environmental changes, we adopt a novel threshold estimation using total variance change. Experimental results show that the proposed VVAD algorithm achieves robustness in various driving situations.
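A local variance histogram of the kind named above can be sketched as follows: compute the variance of pixel intensities in small blocks of the lip region of interest and histogram those variances. The block size and bin range below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def local_variance_histogram(roi, block=8, bins=16, vmax=16384.0):
    """Histogram of per-block pixel-intensity variances over a lip
    region of interest. Variance within a block is unchanged by
    adding a constant to every pixel, so the histogram is robust
    to global illumination shifts."""
    h, w = roi.shape
    variances = [
        float(np.var(roi[i:i + block, j:j + block]))
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ]
    hist, _ = np.histogram(variances, bins=bins, range=(0.0, vmax))
    return hist / max(hist.sum(), 1)   # normalize to a distribution

rng = np.random.default_rng(1)
roi = rng.integers(0, 256, size=(64, 64)).astype(float)
h1 = local_variance_histogram(roi)
h2 = local_variance_histogram(roi + 40.0)   # global brightness shift
```

Frame-to-frame change in such histograms can then be thresholded to flag visual speech activity, with the threshold adapted over time as the abstract describes.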

Classification of Diphthongs using Acoustic Phonetic Parameters (음향음성학 파라메터를 이용한 이중모음의 분류)

  • Lee, Suk-Myung;Choi, Jeung-Yoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.2
    • /
    • pp.167-173
    • /
    • 2013
  • This work examines classification of diphthongs, as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are examined, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, for 4-way classification, respectively. Adding the acoustic features to widely used Mel-frequency cepstral coefficients also improves classification.