• Title/Abstract/Keyword: Speech recognition model

Search results: 623 (processing time: 0.039 sec)

잡음 환경에서의 음성 감정 인식을 위한 특징 벡터 처리 (Feature Vector Processing for Speech Emotion Recognition in Noisy Environments)

  • 박정식;오영환
    • 말소리와 음성과학 / Vol. 2, No. 1 / pp.77-85 / 2010
  • This paper proposes an efficient feature vector processing technique to guard the Speech Emotion Recognition (SER) system against a variety of noises. In the proposed approach, emotional feature vectors are extracted from speech processed by comb filtering, and these vectors are then used to construct a robust model based on feature vector classification. We modify conventional comb filtering by using the speech presence probability, to minimize the drawbacks of incorrect pitch estimation under background noise. The modified comb filtering correctly enhances the harmonics, which are an important factor in SER. The feature vector classification technique categorizes feature vectors into discriminative and non-discriminative vectors based on a log-likelihood criterion; it successfully selects the discriminative vectors while preserving the correct emotional characteristics, so robust emotion models can be constructed from the discriminative vectors alone. In SER experiments on an emotional speech corpus contaminated by various noises, our approach exhibited performance superior to the baseline system.
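The log-likelihood-based vector selection described above can be sketched roughly as follows. This is an illustrative sketch only: the diagonal-covariance GMMs, the background model, and the threshold are assumptions, not details taken from the paper.

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at each row of x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def gmm_loglik(x, weights, means, vars_):
    """Per-frame log-likelihood under a diagonal GMM (log-sum-exp over mixtures)."""
    comp = np.stack([np.log(w) + gauss_logpdf(x, m, v)
                     for w, m, v in zip(weights, means, vars_)])
    m = comp.max(axis=0)
    return m + np.log(np.exp(comp - m).sum(axis=0))

def select_discriminative(frames, emo_gmm, bg_gmm, threshold=0.0):
    """Keep frames whose log-likelihood ratio (emotion model vs. background
    model) exceeds the threshold; these are the 'discriminative' vectors."""
    llr = gmm_loglik(frames, *emo_gmm) - gmm_loglik(frames, *bg_gmm)
    return frames[llr > threshold]
```

Only the selected frames would then be used to train the emotion models.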


병렬 결합된 혼합 모델 기반의 특징 보상 기술 (Feature Compensation Method Based on Parallel Combined Mixture Model)

  • 김우일;이흥규;권오일;고한석
    • 한국음향학회지 / Vol. 22, No. 7 / pp.603-611 / 2003
  • This paper proposes an effective speech-model-based feature compensation technique for more robust performance in noisy environments. Conventional model-based feature compensation requires a training stage with a noisy speech database and is therefore unsuitable for online adaptation. The proposed method eliminates this training stage by introducing parallel model combination (PMC) into the estimation of the correction factors. Since model combination is applied only to the Gaussian mixture model rather than to the whole HMM, the computation is relatively simple, making online model combination feasible. Parallel model combination also allows the noise model to be used independently; here, the noise model is updated via MAP (Maximum A Posteriori) adaptation. In addition, a continuous-form channel normalization technique is derived and applied by approximating the noise corruption process. For more efficient implementation, a selective model combination scheme is presented to reduce the computational load. Experiments confirm that the proposed feature compensation technique effectively improves speech recognition performance in noisy environments with additive background noise and channel distortion.
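The core of parallel model combination, combining clean-speech Gaussians with a noise model without retraining, is commonly realized with a log-add approximation in the log-spectral domain. The sketch below shows only that mean-combination step under assumed log-spectral features; the cepstral-domain details (DCT, variance combination) used in practice are omitted.

```python
import numpy as np

def pmc_log_add(clean_means, noise_mean, gain=1.0):
    """Log-add approximation of parallel model combination: each clean
    Gaussian mean (log-spectral domain) is combined with the noise mean
    by adding the two in the linear spectral domain.

    clean_means: (M, D) means of the clean-speech mixtures
    noise_mean:  (D,) mean of the noise model
    gain:        assumed noise scaling factor
    """
    return np.log(np.exp(clean_means) + gain * np.exp(noise_mean))
```

Because only the mixture means are touched, the combination can be redone online whenever the noise model is re-adapted.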

음성인식을 위한 청각신경 정보처리 모델링 (Auditory Neural Information Processing Modeling for Speech Recognition)

  • 이희규;이광형
    • 한국음향학회지 / Vol. 9, No. 3 / pp.42-47 / 1990
  • To improve the performance of speech processing and recognition devices, it is important to study the human auditory neural information processing system using bioengineering approaches. This paper therefore studies IIR digital filter modeling of the basilar membrane based on an analysis of the cochlear mechanism. In particular, a multilayer neural model for consonant recognition is constructed using phoneme detection filters and a discrimination function for feature extraction. The model achieves a high detection rate of over 90% in consonant recognition.


Speech Recognition using MSHMM based on Fuzzy Concept

  • Ann, Tae-Ock
    • The Journal of the Acoustical Society of Korea / Vol. 16, No. 2E / pp.55-61 / 1997
  • This paper proposes an MSHMM (Multi-Section Hidden Markov Model) recognition method based on the fuzzy concept for speaker-independent speech recognition. In this method, the training data are divided into several sections, and for each section multi-observation sequences are obtained, with probabilities assigned by a fuzzy rule according to their rank order of distance from the MSVQ codebook. An HMM is then generated for each section from these multi-observation sequences, and at recognition time the word with the highest probability is selected as the recognized word. For comparison, experiments using various conventional recognition methods (DP, MSVQ, DMS, and standard HMM) were also carried out on the same data. The overall results show that the proposed fuzzy-concept MSHMM is superior to the DP, MSVQ, DMS, and standard HMM methods in recognition rate and computation time, and that its recognition rate does not degrade, remaining at 92.91% despite an increased number of speakers.
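The fuzzy multi-observation idea, assigning each frame several codewords weighted by their distance rank rather than the single nearest codeword, might be sketched as below. The particular membership function is a hypothetical choice for illustration, not the paper's rule.

```python
import numpy as np

def fuzzy_observations(frame, codebook, top_n=3):
    """Return the top_n nearest codeword indices for one frame, together
    with fuzzy membership weights (assumed rule: closer -> higher weight,
    weights normalized to sum to 1)."""
    d = np.linalg.norm(codebook - frame, axis=1)   # distance to each codeword
    order = np.argsort(d)[:top_n]                  # rank order by distance
    w = 1.0 / (1.0 + d[order])                     # assumed membership function
    return order, w / w.sum()
```

The weighted sequences then feed HMM training in place of hard VQ labels.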


HMM Based Endpoint Detection for Speech Signals

  • 이용형;오창혁
    • 한국통계학회:학술대회논문집 / 한국통계학회 2001년도 추계학술발표회 논문집 / pp.75-76 / 2001
  • An endpoint detection method for speech signals utilizing a hidden Markov model (HMM) is proposed. Experiments show that the proposed algorithm is quite satisfactory for isolated-word speech recognition.


FFT와 MFB Spectral Entropy를 이용한 GMM 기반의 감정인식 (Speech Emotion Recognition Based on GMM Using FFT and MFB Spectral Entropy)

  • 이우석;노용완;홍광석
    • 대한전기학회:학술대회논문집 / 대한전기학회 2008년도 심포지엄 논문집 정보 및 제어부문 / pp.99-100 / 2008
  • This paper proposes a Gaussian Mixture Model (GMM)-based speech emotion recognition method using four feature parameters: 1) Fast Fourier Transform (FFT) spectral entropy, 2) delta FFT spectral entropy, 3) Mel-frequency Filter Bank (MFB) spectral entropy, and 4) delta MFB spectral entropy. We use a speech database covering four emotions: anger, sadness, happiness, and neutrality. Speech emotion recognition experiments are performed for each predefined emotion and gender. The experimental results show that the proposed recognition using FFT spectral entropy and MFB spectral entropy outperforms existing GMM-based emotion recognition using energy, Zero Crossing Rate (ZCR), Linear Prediction Coefficient (LPC), and pitch parameters. We attained a maximum recognition rate of 75.1% when using MFB spectral entropy together with delta MFB spectral entropy.
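FFT spectral entropy, the core feature here, treats the normalized power spectrum of a frame as a probability distribution and takes its Shannon entropy; the delta feature is the frame-to-frame difference. A minimal sketch follows, in which the frame length, FFT size, and log base are illustrative assumptions:

```python
import numpy as np

def spectral_entropy(frame, n_fft=512, eps=1e-12):
    """Shannon entropy (bits) of the normalized FFT power spectrum of one frame."""
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    p = power / (power.sum() + eps)        # treat the spectrum as a pmf
    return -np.sum(p * np.log2(p + eps))

def delta_entropy(frames):
    """Frame-to-frame difference of spectral entropy (first delta is 0)."""
    e = np.array([spectral_entropy(f) for f in frames])
    return np.diff(e, prepend=e[0])
```

A pure tone concentrates its power in one bin and yields near-zero entropy, while noise-like frames spread power across bins and yield high entropy, which is what makes the feature discriminative.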


DSP를 이용한 자동차 소음에 강인한 음성인식기 구현 (Implementation of a Robust Speech Recognizer in Noisy Car Environment Using a DSP)

  • 정익주
    • 음성과학 / Vol. 15, No. 2 / pp.67-77 / 2008
  • In this paper, we implemented a robust speech recognizer using the TMS320VC33 DSP. For this implementation, we built speech and noise databases suited to a recognizer that uses spectral subtraction for noise removal. The recognizer has an explicit structure in that the speech signal is enhanced through spectral subtraction before endpoint detection and feature extraction; this clarifies the recognizer's operation and yields HMM models with minimal model mismatch. Since the recognizer was developed to control car facilities and perform voice dialing, it has two recognition engines: a speaker-independent engine for controlling car facilities and a speaker-dependent engine for voice dialing. We adopted a continuous HMM for the former and a conventional DTW algorithm for the latter. Through various off-line recognition tests, we selected optimal values of several recognition parameters for the resource-limited embedded recognizer, which led to HMM models with three mixtures per state. The car-noise-added speech database is enhanced by spectral subtraction before HMM parameter estimation, to reduce the model mismatch caused by the nonlinear distortion that spectral subtraction introduces. The hardware module includes a microcontroller for the host interface, which processes the protocol between the DSP and a host.
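The spectral subtraction front end can be sketched per frame as below. The over-subtraction factor, spectral floor, and FFT size are assumed values for illustration, not those of the implemented recognizer:

```python
import numpy as np

def spectral_subtract_frame(frame, noise_mag, alpha=1.0, beta=0.02, n_fft=512):
    """One frame of magnitude spectral subtraction with a spectral floor.

    noise_mag: estimated noise magnitude spectrum (e.g. averaged over
    non-speech frames); alpha is the subtraction factor, beta the floor."""
    spec = np.fft.rfft(frame, n_fft)
    mag, phase = np.abs(spec), np.angle(spec)
    # subtract the noise estimate; floor prevents negative magnitudes
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n_fft)
```

The noisy phase is reused unchanged, which is the standard simplification in magnitude-domain subtraction.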


음성인식 기반 응급상황관제 (Emergency dispatching based on automatic speech recognition)

  • 이규환;정지오;신대진;정민화;강경희;장윤희;장경호
    • 말소리와 음성과학 / Vol. 8, No. 2 / pp.31-39 / 2016
  • In emergency dispatching at the 119 Command & Dispatch Center, inconsistencies between the 'standard emergency aid system' and the 'dispatch protocol,' both of which are mandatory to follow, cause inefficiency in the dispatcher's performance. If an emergency dispatch system uses automatic speech recognition (ASR) to process the dispatcher's protocol speech during case registration, it can instantly extract and provide the required information specified in the 'standard emergency aid system,' making the rescue command more efficient. For this purpose, we developed a Korean large-vocabulary continuous speech recognition system with a 400,000-word vocabulary for the emergency dispatch system, covering the news, SNS, blog, and emergency rescue domains. The acoustic model is trained on 1,300 hours of telephone (8 kHz) speech, and the language model on a 13 GB text corpus. From the transcribed corpus of 6,600 real telephone calls, call logs with emergency rescue command classes and identified major symptoms are extracted in connection with the rescue activity log and the National Emergency Department Information System (NEDIS). ASR is applied to the dispatcher's repetition utterances about the patient information, and the emergency patient information is extracted based on the Levenshtein distance between the ASR result and the template information. Experimental results show a speech recognition word error rate of 9.15% and an emergency response detection rate of 95.8% for the emergency dispatch system.
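The Levenshtein-distance matching between the ASR hypothesis and the template information can be sketched as follows. The `best_template` helper and the word-level tokenization are illustrative assumptions; the actual system's templates and units are not specified here.

```python
def levenshtein(a, b):
    """Edit distance between two token sequences, single-row DP."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # dp[j]: deletion, dp[j-1]: insertion, prev: (mis)match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def best_template(asr_tokens, templates):
    """Pick the template closest in edit distance to the ASR output."""
    return min(templates, key=lambda t: levenshtein(asr_tokens, t.split()))
```

Matching against the closest template makes the extraction tolerant to recognition errors, which matters at a 9.15% word error rate.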

한국어 방송 음성 인식에 관한 연구 (A Study on the Korean Broadcasting Speech Recognition)

  • 김석동;송도선;이행세
    • 한국음향학회지 / Vol. 18, No. 1 / pp.53-60 / 1999
  • This paper is a study on Korean broadcast speech recognition. We present a method for large-vocabulary continuous speech recognition, focusing on the language model and the search method. The acoustic model is a phone-based semi-continuous HMM, and the language model uses N-grams. To make maximal use of acoustic and linguistic information, the search proceeds in three stages: first, a forward Viterbi beam search that produces word ends and the associated information; second, a backward Viterbi beam search that produces word starts and the associated information; and finally, an A* search that combines these two results with the stochastic language model to obtain the final recognition result. Using this method, we obtained a speaker-independent word recognition rate of up to 96.0% and a syllable recognition rate of 99.2% on a 12,000-word task.
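The first of the three stages, a forward Viterbi search with beam pruning, might look like this in outline. The beam width and the single-start-state convention are assumptions for illustration; the backward pass and the A* combination are not shown.

```python
import numpy as np

def viterbi_beam(log_trans, log_emit, beam=5.0):
    """Forward Viterbi search with beam pruning over an HMM.

    log_trans: (S, S) log transition matrix
    log_emit:  (T, S) per-frame log emission scores
    States falling more than `beam` below the frame-best score are pruned."""
    T, S = log_emit.shape
    score = np.full(S, -np.inf)
    score[0] = log_emit[0, 0]                   # assume the model starts in state 0
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans       # cand[i, j]: from state i into j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
        score[score < score.max() - beam] = -np.inf   # beam pruning
    path = [int(score.argmax())]                # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the paper's setup, this forward pass records word-end hypotheses rather than a single path, and the symmetric backward pass records word starts.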


${\nabla}^2G$ 연산자의 신호 분석 특성을 이용한 음성 인식 신경 회로망에 관한 연구 (Neural Network for Speech Recognition Using Signal Analysis Characteristics by ${\nabla}^2G$ Operator)

  • 이종혁;정용근;남기곤;윤태훈;김재창;박의열;이양성
    • 전자공학회논문지B / Vol. 29B, No. 10 / pp.90-99 / 1992
  • In this paper, we propose a neural network model for speech recognition. The model consists of a feature extraction part and a recognition part. An interconnection model based on the ${\nabla}^2G$ operator was used for frequency analysis, and two kinds of features, global and local, were extracted from it. The recognition part consists of a global grouping stage and a local grouping stage. When the input pattern was coded by the slope method, the recognition rate for speakers A and B was 100%; when tested on data from 9 speakers, a recognition rate of 91.4% was obtained.
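The ${\nabla}^2G$ (Laplacian of Gaussian) operator is the second derivative of a Gaussian; in 1-D signal analysis its zero crossings mark transitions in the signal. A minimal sketch follows, where the scale σ and the kernel radius are illustrative choices:

```python
import numpy as np

def log_kernel(sigma=1.0, radius=4):
    """1-D Laplacian-of-Gaussian (nabla^2 G) kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = (x**2 / sigma**2 - 1) / sigma**2 * np.exp(-x**2 / (2 * sigma**2))
    return g - g.mean()        # zero-sum, so flat regions give zero response

def log_filter(signal, sigma=1.0, radius=4):
    """Convolve a 1-D signal with the LoG kernel; zero crossings of the
    response mark edges/transitions in the signal."""
    return np.convolve(signal, log_kernel(sigma, radius), mode="same")
```

Varying σ trades localization for noise robustness, which is why LoG-based frequency analysis can yield both local and global features.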
