• Title/Abstract/Keyword: Speech recognition model


Speech Recognition Optimization Learning Model Using HMM Feature Extraction in the Bhattacharyya Algorithm

  • 오상엽
    • 디지털융복합연구 / Vol. 11, No. 6 / pp. 199-204 / 2013
  • Speech recognition systems suffer a degraded recognition rate because the training model is built from inaccurately entered speech and recognition then falls back on similar phoneme models. This paper therefore proposes a method for constructing an optimal training model for speech recognition using the Bhattacharyya algorithm. Based on the characteristics of each phoneme, an HMM feature extraction method was applied to the phonemes of the training data, and similar training models were resolved into accurate training models using the Bhattacharyya algorithm. Recognition performance was evaluated with the optimal training model constructed by the Bhattacharyya algorithm. The system proposed in this paper achieved a speech recognition rate of 98.7%.
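
The abstract does not spell out the distance computation, but for two Gaussian phone models the Bhattacharyya comparison referred to above is commonly computed as follows. This is a minimal illustrative sketch (the 13-dimensional diagonal-covariance models and all values are made up), not the authors' implementation.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian phone models."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Toy example: two 13-dimensional MFCC phone models (hypothetical values).
rng = np.random.default_rng(0)
mu_a, mu_b = rng.normal(size=13), rng.normal(size=13)
cov_a = np.diag(rng.uniform(0.5, 1.5, size=13))
cov_b = np.diag(rng.uniform(0.5, 1.5, size=13))
print(bhattacharyya_distance(mu_a, cov_a, mu_b, cov_b))
```

A small distance indicates two phone models that are easily confused, which is the situation the paper's optimal-training-model construction is meant to resolve.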

Improved Bimodal Speech Recognition Study Based on Product Hidden Markov Model

  • Xi, Su Mei;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 3 / pp. 164-170 / 2013
  • In recent years, there have been higher demands for automatic speech recognition (ASR) systems able to operate robustly in acoustically noisy environments. This paper proposes an improved product hidden Markov model (HMM) for bimodal speech recognition. A two-dimensional training model is built based on dependently trained audio-HMM and visual-HMM, reflecting the asynchronous characteristics of the audio and video streams. A weight coefficient is introduced to adjust the weights of the video and audio streams automatically according to differences in the noise environment. Experimental results show that, compared with other bimodal speech recognition approaches, this approach obtains better speech recognition performance.
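
The paper's exact weighting rule is not given in the abstract. A common way to realize a noise-dependent audio/visual stream weight is an exponent (log-likelihood) weight that shifts trust toward the visual stream as the SNR drops; the sketch below is a hedged illustration with hypothetical SNR limits, not the authors' formula.

```python
import numpy as np

def fused_log_likelihood(ll_audio, ll_visual, snr_db,
                         snr_low=0.0, snr_high=20.0):
    """Combine per-state audio and visual log-likelihoods with an
    SNR-dependent stream weight (higher SNR -> trust audio more)."""
    lam = np.clip((snr_db - snr_low) / (snr_high - snr_low), 0.0, 1.0)
    return lam * ll_audio + (1.0 - lam) * ll_visual

# Example: at 5 dB the visual stream receives most of the weight.
print(fused_log_likelihood(ll_audio=-42.0, ll_visual=-37.5, snr_db=5.0))
```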

Implementation of a Hidden Markov Model-Based Speech Recognition System for Teaching an Autonomous Mobile Robot

  • 조현수;박민규;이민철
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp. 281-281 / 2000
  • This paper presents the implementation of a speech recognition system for teaching an autonomous mobile robot. Using human speech as the teaching method provides a more convenient user interface for the mobile robot. In this study, an autonomous mobile robot with a speech recognition function is developed so that the robot can be taught easily. In the speech recognition system, an HMM (Hidden Markov Model)-based algorithm is used to recognize Korean words. Filter-bank analysis is used as the spectral analysis method for feature extraction. A recognized word is converted into a command for controlling the robot's navigation.

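As background for the filter-bank analysis mentioned above, the sketch below computes log filter-bank energies for a single frame. A mel-spaced bank is assumed here purely for illustration, since the abstract does not specify the filter spacing, and the frame values are synthetic.

```python
import numpy as np

def mel_filterbank_energies(frame, sample_rate=16000, n_filters=20, n_fft=512):
    """Log filter-bank energies for one speech frame (filter-bank analysis)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    # Mel-spaced triangular filters between 0 Hz and the Nyquist frequency.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = inv_mel(np.linspace(0, mel(sample_rate / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[i - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    return np.log(fbank @ spectrum + 1e-10)

# 25 ms frame of synthetic audio (hypothetical input).
frame = np.random.default_rng(0).normal(size=400)
print(mel_filterbank_energies(frame).shape)  # (20,)
```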

Noise Robust Speech Recognition Based on Noisy Speech Acoustic Model Adaptation

  • 정용주
    • 말소리와 음성과학 / Vol. 6, No. 2 / pp. 29-34 / 2014
  • In Vector Taylor Series (VTS)-based noisy speech recognition methods, Hidden Markov Models (HMM) are usually trained on clean speech. However, better performance can be expected by training the HMM on noisy speech. In a previous study, we found that Minimum Mean Square Error (MMSE) estimation of the training noisy speech in the log-spectrum domain produced improved recognition results, but because that algorithm operated in the log-spectrum domain, it could not be used for HMM adaptation. In this paper, we modify the previous algorithm to derive a novel mathematical relation between the test and training noisy speech in the cepstrum domain, and the means and covariances of the noisy-speech HMM trained by Multi-condition TRaining (MTR) are adapted. In noisy speech recognition experiments on the Aurora 2 database, the proposed method produced a 10.6% relative improvement in Word Error Rate (WER) over the MTR method, while the previous MMSE estimation of the training noisy speech produced a 4.3% relative improvement, which shows the superiority of the proposed method.
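
The cepstrum-domain relation derived in the paper is not reproduced in the abstract. As background only, the sketch below shows the standard VTS/log-add mismatch function and its Jacobian in the log-spectral domain, with hypothetical 20-channel filter-bank means; it is not the paper's adaptation formula.

```python
import numpy as np

def vts_noisy_mean_log_spectrum(mu_clean, mu_noise):
    """Standard VTS/log-add mismatch function in the log-spectral domain:
    the expected noisy-speech mean given clean-speech and noise means."""
    return mu_clean + np.log1p(np.exp(mu_noise - mu_clean))

def vts_jacobian(mu_clean, mu_noise):
    """Diagonal Jacobian d(mu_noisy)/d(mu_clean), used to scale the
    clean-speech contribution to the noisy-speech variance."""
    return 1.0 / (1.0 + np.exp(mu_noise - mu_clean))

# Hypothetical 20-channel log filter-bank means.
mu_x = np.full(20, 2.0)   # clean speech
mu_n = np.full(20, 1.0)   # additive noise
mu_y = vts_noisy_mean_log_spectrum(mu_x, mu_n)
var_scale = vts_jacobian(mu_x, mu_n) ** 2   # scale on the clean-speech variance term
print(mu_y[:3], var_scale[:3])
```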

Performance Comparison between the PMC and VTS Methods for Isolated-Word Speech Recognition in Car Noise Environments

  • 정용주;이승욱
    • 음성과학 / Vol. 10, No. 3 / pp. 251-261 / 2003
  • There have been many research efforts to overcome the problems of speech recognition in noisy conditions. Among noise-robust speech recognition methods, model-based adaptation approaches have proven quite effective. In particular, the PMC (parallel model combination) method is very popular and has been shown to give considerably better recognition results than conventional methods. In this paper, we experimented with the VTS (vector Taylor series) algorithm, which is also based on model parameter transformation but has attracted less interest from researchers in this area. To verify its effectiveness, we applied the algorithm to continuous-density HMMs (Hidden Markov Models). We compared the performance of the VTS algorithm with the PMC method and found that it gave better results than the PMC method.

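For reference, a single PMC combination step under the usual log-normal approximation can be sketched as below. This is a textbook-style illustration in the log-spectral domain (the cepstral-to-log-spectral transform is omitted and the parameter values are invented), not the configuration used in the paper's experiments.

```python
import numpy as np

def pmc_lognormal_combine(mu_s, var_s, mu_n, var_n, gain=1.0):
    """Parallel Model Combination (log-normal approximation): combine a
    clean-speech Gaussian and a noise Gaussian in the log-spectral domain."""
    # Log-normal parameters -> linear-domain mean and variance.
    lin = lambda mu, var: (np.exp(mu + var / 2.0),
                           np.exp(2 * mu + var) * (np.exp(var) - 1.0))
    ms, vs = lin(mu_s, var_s)
    mn, vn = lin(mu_n, var_n)
    # Additive combination in the linear spectral domain.
    mc = ms + gain * mn
    vc = vs + (gain ** 2) * vn
    # Linear domain -> log-normal parameters.
    var_c = np.log(vc / mc ** 2 + 1.0)
    mu_c = np.log(mc) - var_c / 2.0
    return mu_c, var_c

# Hypothetical one-channel log filter-bank parameters.
print(pmc_lognormal_combine(mu_s=2.0, var_s=0.3, mu_n=1.0, var_n=0.2))
```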

Multimodal audiovisual speech recognition architecture using a three-feature multi-fusion method for noise-robust systems

  • Sanghun Jeon;Jieun Lee;Dohyeon Yeo;Yong-Ju Lee;SeungJun Kim
    • ETRI Journal / Vol. 46, No. 1 / pp. 22-34 / 2024
  • Exposure to varied noisy environments impairs the recognition performance of artificial intelligence-based speech recognition technologies. Services with degraded performance can be deployed as limited systems that assure good performance only in certain environments, but this impairs the overall quality of speech recognition services. This study introduces an audiovisual speech recognition (AVSR) model that is robust to various noise settings, mimicking elements of human dialogue recognition. The model converts word embeddings and log-Mel spectrograms into feature vectors for audio recognition. A dense spatial-temporal convolutional neural network model extracts features from log-Mel spectrograms transformed for visual-based recognition. This approach exhibits improved aural and visual recognition capabilities. We assess the signal-to-noise ratio in nine synthesized noise environments, with the proposed model exhibiting lower average error rates. The error rate of the AVSR model using the three-feature multi-fusion method is 1.711%, compared with the general rate of 3.939%. This model is applicable in noise-affected environments owing to its enhanced stability and recognition rate.
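
The dense spatiotemporal CNN and the exact three-feature multi-fusion layout are not detailed in the abstract. The sketch below only illustrates the basic idea of fusing three per-utterance feature vectors by concatenation before a downstream classifier; the embedding sizes are hypothetical and this is not the authors' architecture.

```python
import numpy as np

def three_feature_fusion(audio_feat, visual_feat, text_feat):
    """Concatenate three per-utterance feature vectors (audio, visual, word
    embedding) into one fused representation for a downstream classifier."""
    return np.concatenate([audio_feat, visual_feat, text_feat])

# Hypothetical embedding sizes for the three streams.
fused = three_feature_fusion(np.zeros(256), np.zeros(512), np.zeros(128))
print(fused.shape)  # (896,)
```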

Noise Robust Speech Recognition Based on Parallel Model Combination Adaptation Using Frequency-Variant

  • 최숙남;정현열
    • 한국음향학회지 / Vol. 32, No. 3 / pp. 252-261 / 2013
  • A typical speech recognition system shows high recognition performance in a quiet environment, but its performance drops sharply in real environments where noise is present. To build a recognizer that is robust in various noise environments, this paper proposes FV-PMC (Parallel Model Combination adaptation using frequency-variant based on environment awareness), which obtains information about the recognition environment from the frequency variant of the input and applies it to improve the recognition models. The method precomputes the average frequency variant of each pre-classified noise group and sets it as a threshold; when speech containing an unknown noise is input, the frequency variant with respect to each noise group is computed again, and if it is higher than that group's threshold the input is regarded as speech containing that group's noise and is recognized with the models trained on speech containing that noise group. When noise was classified with the proposed FV-PMC method, the average classification accuracy was 56%; in the subsequent speech recognition experiments, the average recognition rate was 79.05% for Set A, 79.43% for Set B, and 83.37% for Set C. The overall average recognition rate of 80.62% is 5.69% higher than the 74.93% obtained with conventional PMC using clean models, confirming the effectiveness of the proposed method.
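
A rough sketch of the threshold-based noise-group selection described above is given below. The group names, variant values, and thresholds are hypothetical, the tie-breaking rule (largest margin above threshold) is an assumption, and the paper's precise frequency-variant measure is not reproduced.

```python
def select_noise_group(group_variants, group_thresholds):
    """FV-PMC-style selection sketch: for each pre-classified noise group,
    compare the frequency variant computed between the input speech and that
    group against the group's precomputed threshold; keep the groups that
    exceed their threshold and pick the one with the largest margin, falling
    back to the clean-speech models when no group matches."""
    margins = {g: group_variants[g] - thr
               for g, thr in group_thresholds.items()
               if group_variants[g] >= thr}
    return max(margins, key=margins.get) if margins else "clean"

# Hypothetical per-group variants and thresholds (the paper gives no values).
variants = {"subway": 0.9, "babble": 1.5, "car": 0.7}
thresholds = {"subway": 0.8, "babble": 1.2, "car": 1.0}
print(select_noise_group(variants, thresholds))  # "babble" (largest margin)
```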

Korean Broadcast News Transcription Using Morpheme-based Recognition Units

  • Kwon, Oh-Wook;Alex Waibel
    • The Journal of the Acoustical Society of Korea / Vol. 21, No. 1E / pp. 3-11 / 2002
  • Broadcast news transcription is one of the hardest tasks in speech recognition because broadcast speech signals have much variability in speech quality, channel, and background conditions. We developed a Korean broadcast news speech recognizer. We used a morpheme-based dictionary and language model to reduce the out-of-vocabulary (OOV) rate. We concatenated the original morpheme pairs of short length or high frequency in order to reduce insertion and deletion errors due to short morphemes. We used a lexicon with multiple pronunciations to reflect inter-morpheme pronunciation variations without severe modification of the search tree. By using the merged morphemes as recognition units, we achieved an OOV rate of 1.7%, comparable to European languages with a 64k vocabulary. We implemented a hidden Markov model-based recognizer with vocal tract length normalization and online speaker adaptation by maximum likelihood linear regression. Experimental results showed that the recognizer yielded a 21.8% morpheme error rate for anchor speech and 31.6% for mostly noisy reporter speech.
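
A toy sketch of the morpheme-pair merging idea (concatenating short or frequent adjacent morphemes into merged recognition units) follows. The romanized toy corpus, the length limit, and the count threshold are made up for illustration and do not reflect the paper's actual merging criteria.

```python
from collections import Counter

def merge_morpheme_pairs(sentences, min_pair_count=100, max_len=2):
    """Concatenate adjacent morpheme pairs that are short or very frequent,
    producing merged recognition units (sketch of the merging idea)."""
    pair_counts = Counter()
    for morphs in sentences:
        pair_counts.update(zip(morphs, morphs[1:]))
    merge = {p for p, c in pair_counts.items()
             if c >= min_pair_count or (len(p[0]) <= max_len and len(p[1]) <= max_len)}
    merged_sentences = []
    for morphs in sentences:
        out, i = [], 0
        while i < len(morphs):
            if i + 1 < len(morphs) and (morphs[i], morphs[i + 1]) in merge:
                out.append(morphs[i] + "+" + morphs[i + 1])   # merged unit
                i += 2
            else:
                out.append(morphs[i])
                i += 1
        merged_sentences.append(out)
    return merged_sentences

# Tiny toy corpus of romanized morpheme sequences (hypothetical).
corpus = [["o", "neul", "nal", "ssi", "ga"], ["o", "neul", "ju", "sik"]]
print(merge_morpheme_pairs(corpus, min_pair_count=2))
```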

DNN-Based Acoustic Modeling for Speech Recognition of Native and Foreign Speakers

  • 강병옥;권오욱
    • 말소리와 음성과학 / Vol. 9, No. 2 / pp. 95-101 / 2017
  • This paper proposes a new method to train Deep Neural Network (DNN)-based acoustic models for speech recognition of native and foreign speakers. The proposed method consists of determining multi-set state clusters with various acoustic properties, training a DNN-based acoustic model, and recognizing speech based on the model. In the proposed method, the hidden nodes of the DNN are shared, but the output nodes are separated to accommodate the different acoustic properties of native and foreign speech. In an English speech recognition task for Korean and English speakers, the proposed method is shown to slightly improve recognition accuracy compared with the conventional multi-condition training method.
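
A minimal sketch of the shared-hidden-layer, separate-output-layer idea is given below in PyTorch. The layer sizes, state counts, and feature dimension are hypothetical; this is not the authors' network configuration.

```python
import torch
import torch.nn as nn

class SharedHiddenAcousticModel(nn.Module):
    """Hidden layers shared across native and foreign speech, with a
    separate state (senone) output layer per speaker set."""
    def __init__(self, feat_dim=440, hidden_dim=1024,
                 n_states_native=4000, n_states_foreign=4000):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.head = nn.ModuleDict({
            "native": nn.Linear(hidden_dim, n_states_native),
            "foreign": nn.Linear(hidden_dim, n_states_foreign),
        })

    def forward(self, feats, speaker_set):
        # Route the shared representation through the chosen output head.
        return self.head[speaker_set](self.shared(feats))

# One batch of 8 spliced feature frames routed through the foreign output head.
model = SharedHiddenAcousticModel()
logits = model(torch.randn(8, 440), "foreign")
print(logits.shape)  # torch.Size([8, 4000])
```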

A Parallel Speech Recognition Model on Distributed Memory Multiprocessors

  • 정상화;김형순;박민욱;황병한
    • 한국음향학회지 / Vol. 18, No. 5 / pp. 44-51 / 1999
  • This paper proposes an effective parallel computation model for the integrated processing of speech and natural language. The phone models are context-dependent phones based on continuous Hidden Markov Models (HMM), and the language model is based on a knowledge base. To build the knowledge base, a hierarchical semantic network and memory-based parsing with parallel marker-passing as the inference mechanism are used. The parallel speech recognition algorithm was implemented on a multi-Transputer system with a distributed-memory MIMD (Multiple Instruction Multiple Data) architecture. Experimental results show that the knowledge-base-based speech recognition system achieved a higher recognition rate than a word-network-based system, and further gains were obtained by exploiting code-phoneme statistics. Speedup experiments also confirmed the feasibility of a real-time implementation of the parallel speech recognition system.

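The Transputer-based implementation cannot be reproduced here, but the model-level parallelism described above can be illustrated with a process-pool sketch. The toy scoring function, word labels, and data below are hypothetical stand-ins for the per-node HMM computation on a distributed-memory machine.

```python
import numpy as np
from multiprocessing import Pool

def score_model(args):
    """Score one (label, frames, template) task; a stand-in for the per-node
    HMM computation that would run on one processor of the MIMD machine."""
    label, frames, template = args
    # Toy emission score: negative squared distance to a per-model template.
    return label, -float(np.sum((frames - template) ** 2))

def parallel_recognize(frames, models, n_workers=4):
    """Distribute model scoring across worker processes and return the
    best-scoring label (model-level parallelism sketch)."""
    tasks = [(label, frames, tmpl) for label, tmpl in models.items()]
    with Pool(n_workers) as pool:
        scores = dict(pool.map(score_model, tasks))
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(50, 13))   # hypothetical feature frames
    models = {f"word_{i}": rng.normal(size=13) for i in range(8)}
    print(parallel_recognize(frames, models))
```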