• Title/Summary/Keyword: Continuous speech recognition

Search results: 224

Rapid Speaker Adaptation for Continuous Speech Recognition Using Merging Eigenvoices (Eigenvoice 병합을 이용한 연속 음성 인식 시스템의 고속 화자 적응)

  • Choi, Dong-Jin; Oh, Yung-Hwan
    • MALSORI, no.53, pp.143-156, 2005
  • Speaker adaptation in eigenvoice space is a popular method for rapid speaker adaptation. To improve the performance of the method, the number of speaker-dependent models should be increased and the eigenvoices re-estimated. However, principal component analysis takes much time to find eigenvoices, especially in a continuous speech recognition system. This paper describes a method that reduces the computation time by estimating eigenvoices only for the supplementary speaker-dependent models and merging them with the existing eigenvoices. Experimental results show that the computation time is reduced by 73.7% while performance remains almost the same as when the same number of speaker-dependent models is used to re-estimate the eigenvoices from scratch.

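The merging idea in the entry above can be illustrated with a short sketch. This is not the authors' implementation: eigenvoices are taken here to be PCA directions of speaker-dependent supervectors, and the merge is approximated by re-orthogonalizing the concatenated old and new bases; the array sizes and the QR-based merge are assumptions.

```python
# Minimal sketch of the eigenvoice-merging idea (assumed details: supervectors are
# rows of a matrix; "merging" is read as re-orthogonalizing the combined bases).
import numpy as np

def eigenvoices(supervectors, k):
    """PCA basis (top-k eigenvoices) of a set of speaker-dependent supervectors."""
    mean = supervectors.mean(axis=0)
    _, _, vt = np.linalg.svd(supervectors - mean, full_matrices=False)
    return mean, vt[:k]                      # mean supervector, k eigenvoices

def merge_eigenvoices(basis_old, basis_new, k):
    """Merge two eigenvoice bases without redoing PCA on all supervectors."""
    stacked = np.vstack([basis_old, basis_new])
    q, _ = np.linalg.qr(stacked.T)           # re-orthogonalize the combined directions
    return q.T[:k]

# Usage: run PCA only over the supplementary speakers, then merge with the old basis.
old_sv = np.random.randn(50, 200)            # existing speaker-dependent supervectors
new_sv = np.random.randn(10, 200)            # supplementary speakers
_, E_old = eigenvoices(old_sv, k=10)
_, E_new = eigenvoices(new_sv, k=10)
E_merged = merge_eigenvoices(E_old, E_new, k=10)
```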

A Study on the Speaker Adaptation of a Continuous Speech Recognition using HMM (HMM을 이용한 연속 음성 인식의 화자적응화에 관한 연구)

  • Kim, Sang-Bum; Lee, Young-Jae; Koh, Si-Young; Hur, Kang-In
    • The Journal of the Acoustical Society of Korea, v.15 no.4, pp.5-11, 1996
  • In this study, a speaker adaptation method for uttered sentences using syllable-unit HMMs is proposed. Syllable-unit segmentation of a sentence is performed automatically by concatenating syllable-unit HMMs and applying Viterbi segmentation. Speaker adaptation is performed using MAPE (Maximum A Posteriori Probability Estimation), which can adapt with even a small amount of adaptation speech data and incorporate additional data sequentially. For newspaper-editorial continuous speech, the recognition rate of the adapted HMM was 71.8%, approximately a 37% improvement over the unadapted HMM.

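MAPE in the entry above refers to the standard MAP estimation framework; a minimal sketch of the widely used MAP mean update follows. The relevance factor tau, the use of frame posteriors, and the sequential reuse of the adapted mean as the next prior are standard choices rather than details taken from this paper.

```python
# Minimal sketch of the MAP mean update commonly used for HMM speaker adaptation.
import numpy as np

def map_adapt_mean(mu_prior, frames, gamma, tau=10.0):
    """Interpolate the speaker-independent mean toward the adaptation data.

    mu_prior : (D,)   speaker-independent Gaussian mean
    frames   : (T, D) adaptation feature vectors
    gamma    : (T,)   posterior occupancy of this Gaussian for each frame
    tau      : prior weight; larger tau trusts the prior more
    """
    occ = gamma.sum()                                    # soft frame count
    mu_ml = (gamma[:, None] * frames).sum(axis=0) / max(occ, 1e-8)
    return (tau * mu_prior + occ * mu_ml) / (tau + occ)

# Sequential use: the adapted mean can serve as the prior for the next utterance.
mu = np.zeros(13)
for utt_frames, utt_gamma in [(np.random.randn(80, 13), np.random.rand(80))]:
    mu = map_adapt_mean(mu, utt_frames, utt_gamma)
```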

Computational Complexity Reduction of Speech Recognizers Based on the Modified Bucket Box Intersection Algorithm (변형된 BBI 알고리즘에 기반한 음성 인식기의 계산량 감축)

  • Kim, Keun-Yong; Kim, Dong-Hwa
    • MALSORI, no.60, pp.109-123, 2006
  • Since computing the log-likelihoods of Gaussian mixture densities is a major computational burden for a speech recognizer based on continuous HMMs, several techniques have been proposed to reduce the number of mixtures evaluated during recognition. In this paper, we propose a modified Bucket Box Intersection (BBI) algorithm in which two relative thresholds are employed: one is the relative threshold of the conventional BBI algorithm, and the other is used to reduce the number of Gaussian boxes that are intersected by the hyperplanes at the boxes' edges. The experimental results show that the proposed algorithm reduces the number of Gaussian mixtures evaluated during recognition by 12.92%, with negligible performance degradation compared to the conventional BBI algorithm.

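The entry above describes Gaussian selection with boxes and relative thresholds. The sketch below shows the box-selection idea in a much simplified form: each Gaussian gets an axis-aligned box at mean ± k standard deviations, only Gaussians whose box contains the feature vector are evaluated exactly, and the rest receive a floor score. The real BBI algorithm organizes the boxes in a k-d tree; the flat search, equal mixture weights, and the threshold k here are simplifications.

```python
# Simplified box-based Gaussian selection in the spirit of BBI (not the paper's code).
import numpy as np

class BoxSelector:
    def __init__(self, means, variances, k=2.0):
        std = np.sqrt(variances)
        self.lo, self.hi = means - k * std, means + k * std
        self.means, self.variances = means, variances

    def log_likelihood(self, x, floor=-1e3):
        # Candidate Gaussians: those whose box contains the feature vector.
        inside = np.all((x >= self.lo) & (x <= self.hi), axis=1)
        ll = np.full(len(self.means), floor)
        diff = x - self.means[inside]
        ll[inside] = -0.5 * np.sum(diff**2 / self.variances[inside]
                                   + np.log(2 * np.pi * self.variances[inside]), axis=1)
        # Mixture log-likelihood, assuming equal mixture weights for brevity.
        return np.logaddexp.reduce(ll)

selector = BoxSelector(np.random.randn(32, 13), np.ones((32, 13)))
score = selector.log_likelihood(np.random.randn(13))
```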

A Model for Post-processing of Speech Recognition Using Syntactic Unit of Morphemes (구문형태소 단위를 이용한 음성 인식의 후처리 모델)

  • 양승원; 황이규
    • Journal of Korea Society of Industrial Information Systems, v.7 no.3, pp.74-80, 2002
  • There have been many studies on post-processing methods that use natural language processing techniques to improve Korean continuous speech recognition. It is difficult to use a conventional morphological analyzer for this purpose because natural language analysis techniques are designed mainly for formal written language. In this paper, we propose a speech recognition enhancement model based on syntactic units of morphemes. The approach uses longest matching at the functional-word level, which does not depend on word spacing. We describe a post-processing mechanism that improves speech recognition by applying the proposed model, exploiting the phonological-structure relationships between predicates and the auxiliary predicates or bound nouns that occur frequently in Korean sentences.

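A minimal sketch of functional-word-level longest matching over spacing-free text, as described in the entry above, might look as follows; the lexicon entries are placeholders, not the paper's dictionary.

```python
# Greedy longest-match segmentation that ignores word spacing.
def longest_match_segments(text, lexicon):
    """Segment `text` by the longest lexicon entry at each position."""
    s = text.replace(" ", "")                  # ignore word spacing
    out, i = [], 0
    while i < len(s):
        match = next((s[i:i + n] for n in range(len(s) - i, 0, -1)
                      if s[i:i + n] in lexicon), s[i])   # fall back to one character
        out.append(match)
        i += len(match)
    return out

lexicon = {"하였다", "것이다", "수", "있다"}    # hypothetical functional-word entries
print(longest_match_segments("할수 있다", lexicon))
```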

Modified Phonetic Decision Tree For Continuous Speech Recognition

  • Kim, Sung-Ill; Kitazoe, Tetsuro; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea, v.17 no.4E, pp.11-16, 1998
  • For large-vocabulary speech recognition using HMMs, context-dependent subword units have often been employed. However, when context-dependent phone models are used, the resulting system has too many parameters to train. The problem of too many parameters and too little training data is crucial in the design of a statistical speech recognizer. Furthermore, when building large-vocabulary speech recognition systems, the unseen-triphone problem is unavoidable. In this paper, we propose a modified phonetic decision-tree algorithm for the automatic prediction of unseen triphones, and demonstrate its advantages in solving these problems through two experiments on Japanese data. The baseline experiments show that the modified tree-based clustering algorithm is effective for clustering and for reducing the number of states without any degradation in performance. The task experiments show that the proposed algorithm also provides automatic prediction of unseen triphones.

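The entry above relies on phonetic decision trees to handle unseen triphones. The sketch below shows the basic mechanism under simple assumptions: each node asks a yes/no question about the left or right phonetic context, and the leaf reached supplies the tied state, so any triphone, seen or unseen, can be assigned a model. The question set and the tree are illustrative only, not the paper's.

```python
# Minimal phonetic-decision-tree traversal for tying (possibly unseen) triphone states.
PHONE_CLASSES = {"Nasal": {"m", "n", "N"}, "Vowel": {"a", "i", "u", "e", "o"}}

class Node:
    def __init__(self, question=None, yes=None, no=None, leaf=None):
        self.question, self.yes, self.no, self.leaf = question, yes, no, leaf

def tie_state(tree, left, right):
    """Walk the tree with the triphone's contexts until a leaf (tied state) is hit."""
    node = tree
    while node.leaf is None:
        side, cls = node.question                      # e.g. ("left", "Nasal")
        ctx = left if side == "left" else right
        node = node.yes if ctx in PHONE_CLASSES[cls] else node.no
    return node.leaf

tree = Node(("left", "Nasal"),
            yes=Node(leaf="state_12"),
            no=Node(("right", "Vowel"), yes=Node(leaf="state_7"), no=Node(leaf="state_3")))
print(tie_state(tree, left="n", right="a"))            # unseen triphone n-x+a -> state_12
```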

Effective Acoustic Model Clustering via Decision Tree with Supervised Decision Tree Learning

  • Park, Jun-Ho; Ko, Han-Seok
    • Speech Sciences, v.10 no.1, pp.71-84, 2003
  • In acoustic modeling for large-vocabulary speech recognition, the sparse-data problem caused by the huge number of context-dependent (CD) models usually makes the estimated models unreliable. In this paper, we develop a new clustering method based on the C4.5 decision-tree learning algorithm that effectively encapsulates CD modeling. The proposed scheme constructs a supervised decision rule and applies it to pre-clustered triphones using the C4.5 algorithm, which is known to search the attributes of the training instances effectively and to extract the attribute that best separates the given examples. In particular, a data-driven method is used as the clustering algorithm, and its result is used as the learning target of the C4.5 algorithm. The scheme is shown to be effective, particularly on databases with a low unknown-context ratio, in terms of recognition performance. For a speaker-independent, task-independent continuous speech recognition task, the proposed method reduced the word error rate (WER) by 3.93% compared to existing rule-based methods.

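The core operation the entry above borrows from C4.5 is selecting the attribute with the highest information gain. A minimal sketch follows, where the examples stand in for context attributes of triphones and the labels stand in for the data-driven cluster assignments used as the learning target; all data are placeholders.

```python
# Information-gain attribute selection, the splitting criterion behind C4.5-style trees.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_attribute(examples, labels, attributes):
    """Return the attribute whose split yields the largest information gain."""
    base = entropy(labels)
    def gain(attr):
        split = {}
        for ex, y in zip(examples, labels):
            split.setdefault(ex[attr], []).append(y)
        return base - sum(len(ys) / len(labels) * entropy(ys) for ys in split.values())
    return max(attributes, key=gain)

examples = [{"left_nasal": 1, "right_vowel": 0}, {"left_nasal": 1, "right_vowel": 1},
            {"left_nasal": 0, "right_vowel": 1}, {"left_nasal": 0, "right_vowel": 0}]
labels = ["clusterA", "clusterA", "clusterB", "clusterB"]   # from data-driven clustering
print(best_attribute(examples, labels, ["left_nasal", "right_vowel"]))  # -> left_nasal
```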

Implementation of a Robust Speech Recognizer in Noisy Car Environment Using a DSP (DSP를 이용한 자동차 소음에 강인한 음성인식기 구현)

  • Chung, Ik-Joo
    • Speech Sciences, v.15 no.2, pp.67-77, 2008
  • In this paper, we implemented a robust speech recognizer using the TMS320VC33 DSP. For this implementation, we built a speech and noise database suited to a recognizer that uses spectral subtraction for noise removal. The recognizer has an explicit structure in that the speech signal is enhanced by spectral subtraction before endpoint detection and feature extraction. This makes the operation of the recognizer clear and yields HMM models with minimal model mismatch. Since the recognizer was developed for controlling car facilities and for voice dialing, it has two recognition engines: a speaker-independent one for controlling car facilities and a speaker-dependent one for voice dialing. We adopted a conventional DTW algorithm for the latter and a continuous HMM for the former. Through various off-line recognition tests, we selected optimal values of several recognition parameters for a resource-limited embedded recognizer, which led to HMM models with three mixtures per state. The car-noise-added speech database is also enhanced by spectral subtraction before HMM parameter estimation, to reduce the model mismatch caused by the nonlinear distortion that spectral subtraction introduces. The hardware module developed includes a microcontroller for the host interface, which handles the protocol between the DSP and a host.

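Spectral subtraction, used as the front end in the entry above, can be sketched in a few lines; the frame layout, the noise estimate from leading frames, and the over-subtraction factor alpha and floor beta are typical choices rather than the paper's settings.

```python
# Magnitude-domain spectral subtraction: estimate noise from leading frames,
# over-subtract it, and floor the result.
import numpy as np

def spectral_subtraction(frames, n_noise_frames=10, alpha=2.0, beta=0.01):
    """frames: (T, N) windowed time-domain frames; returns enhanced magnitude spectra."""
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_mag = mag[:n_noise_frames].mean(axis=0)            # noise estimate from leading frames
    clean = np.maximum(mag - alpha * noise_mag, beta * mag)   # over-subtract, then floor
    return clean, phase                                       # features are computed from `clean`

frames = np.random.randn(100, 256)                            # placeholder noisy frames
enhanced_mag, phase = spectral_subtraction(frames)
```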

A Study on Neural Networks for Korean Phoneme Recognition (한국어 음소 인식을 위한 신경회로망에 관한 연구)

  • 최영배
    • Proceedings of the Acoustical Society of Korea Conference, 1992.06a, pp.61-65, 1992
  • This paper presents a study on neural networks for phoneme recognition and performs phoneme recognition using a TDNN (Time-Delay Neural Network). It also proposes a new training algorithm for speech recognition with neural networks that is suited to large-scale TDNNs. Because phoneme recognition is indispensable for continuous speech recognition, this paper uses a TDNN to obtain accurate phoneme recognition results, and the proposed training algorithm can converge the TDNN to an optimal state regardless of the number of phonemes to be recognized. The recognition result on three phoneme classes shows a recognition rate of 9.1%, and the paper shows that the proposed algorithm is an efficient method for achieving high performance and reducing convergence time.

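A time-delay layer of the kind used in the entry above computes each output frame from a short window of input frames, which is equivalent to a 1-D convolution over time. The minimal sketch below uses illustrative layer sizes and delays, not the paper's architecture.

```python
# Minimal time-delay (1-D convolution over time) layer and a tiny two-layer TDNN.
import numpy as np

def tdnn_layer(x, w, b):
    """x: (T, d_in); w: (delay, d_in, d_out); returns (T - delay + 1, d_out)."""
    delay = w.shape[0]
    windows = np.stack([x[t:t + delay] for t in range(len(x) - delay + 1)])  # (T', delay, d_in)
    return np.tanh(np.einsum("tdi,dio->to", windows, w) + b)

rng = np.random.default_rng(0)
frames = rng.standard_normal((30, 16))                                   # 30 frames, 16 features
h = tdnn_layer(frames, rng.standard_normal((3, 16, 8)), np.zeros(8))     # 3-frame delays
scores = tdnn_layer(h, rng.standard_normal((5, 8, 3)), np.zeros(3))      # 3 phoneme classes
```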

A Study on Korean Allophone Recognition Using Hierarchical Time-Delay Neural Network (계층구조 시간지연 신경망을 이용한 한국어 변이음 인식에 관한 연구)

  • 김수일; 임해창
    • Journal of the Korean Institute of Telematics and Electronics B, v.32B no.1, pp.171-179, 1995
  • In many continuous speech recognition systems, the phoneme is used as the basic recognition unit. However, the coarticulation that occurs between neighboring phonemes makes it difficult to recognize phonemes consistently. This paper proposes the allophone as an alternative recognition unit. We classify each phoneme into three allophone groups according to the location of the phoneme within a syllable. As the recognition algorithm, a time-delay neural network (TDNN) has been designed. To recognize all Korean allophones, TDNNs are constructed in a modular fashion according to acoustic-phonetic features (e.g., voiced/unvoiced, the location of the phoneme within a word). Each TDNN is trained independently and then integrated hierarchically into a complete speech recognition system. In this study, we experimented on Korean plosives with both a phoneme-based and an allophone-based recognition system. Experimental results show that allophone-based recognition is much less affected by coarticulation.

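The hierarchical, modular integration described in the entry above can be caricatured as a router that picks a broad acoustic class followed by a class-specific scorer. The sketch below uses stand-in functions in place of trained TDNN modules; the class names, allophone labels, and scores are placeholders.

```python
# Hierarchical integration of modular recognizers: route a frame to a class-specific module.
def recognize_frame(frame, router, modules):
    """router: frame -> class name; modules: class name -> (frame -> {allophone: score})."""
    broad_class = router(frame)
    scores = modules[broad_class](frame)
    return max(scores, key=scores.get)

router = lambda f: "plosive" if sum(f) > 0 else "vowel"          # stand-in for a small TDNN
modules = {
    "plosive": lambda f: {"k_initial": 0.7, "t_initial": 0.3},   # stand-ins for allophone TDNNs
    "vowel":   lambda f: {"a": 0.6, "i": 0.4},
}
print(recognize_frame([0.2, 0.1], router, modules))              # -> 'k_initial'
```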

A study on the Recognition of Continuous Digits using Syntactic Analysis and One-Stage DP (구문 분석과 One-Stage DP를 이용한 연속 숫자음 인식에 관한 연구)

  • Ann, Tae-Ock
    • The Journal of the Acoustical Society of Korea, v.14 no.3, pp.97-104, 1995
  • This paper is a study on the recognition of continuous digits for implementing a voice-dialing system, and proposes a speech recognition method that uses syntactic analysis and One-Stage DP. To perform the recognition, we first build DMS models with a section-division algorithm and then recognize continuous digit strings through the proposed One-Stage DP method with syntactic analysis. In this study, 21 kinds of 7-digit strings, each pronounced two or three times by 8 male speakers, are used. Speaker-dependent and speaker-independent recognition experiments are performed on these data with both the conventional One-Stage DP and the proposed One-Stage DP with syntactic analysis, under laboratory conditions. The experiments show that the proposed method outperforms the conventional method, with speaker-dependent and speaker-independent recognition accuracies of about 91.7% and 89.7%, respectively.

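One-Stage DP with a syntactic constraint, as in the entry above, can be sketched as a frame-synchronous DP over word templates in which a word may start only after a word that the grammar allows to precede it. The recursion, the Euclidean local distance, and the toy no-repeated-digit grammar below are assumptions, not the paper's exact formulation.

```python
# Minimal One-Stage DP for connected-word recognition with an allowed-successor grammar.
import numpy as np

def one_stage_dp(obs, templates, allowed_next, start_words):
    """obs: (T, d); templates: {word: (L, d)}; allowed_next: word -> set of words that may follow it."""
    INF = float("inf")
    T = len(obs)
    D = {w: np.full((T, len(tpl)), INF) for w, tpl in templates.items()}
    end_score = {w: np.full(T, INF) for w in templates}       # best score with word w ending at frame t

    for t in range(T):
        for w, tpl in templates.items():
            dist = np.linalg.norm(obs[t] - tpl, axis=1)       # local distances to all template frames
            # Word entry (template frame 0): start of utterance, or follow a grammatically allowed word.
            entry = 0.0 if t == 0 and w in start_words else INF
            if t > 0:
                preds = [end_score[v][t - 1] for v in templates if w in allowed_next[v]]
                if preds:
                    entry = min([entry] + preds)
            D[w][t, 0] = dist[0] + min(entry, D[w][t - 1, 0] if t > 0 else INF)
            for j in range(1, len(tpl)):                      # within-word recursion
                cands = [D[w][t, j - 1]]
                if t > 0:
                    cands += [D[w][t - 1, j], D[w][t - 1, j - 1]]
                D[w][t, j] = dist[j] + min(cands)
            end_score[w][t] = D[w][t, -1]
    return min(end_score[w][-1] for w in templates)           # best total distance over the utterance

# Hypothetical digit templates and a toy grammar that forbids repeating the same digit.
rng = np.random.default_rng(1)
templates = {d: rng.standard_normal((5, 4)) for d in "012"}
allowed_next = {d: set("012") - {d} for d in "012"}
obs = rng.standard_normal((20, 4))
print(one_stage_dp(obs, templates, allowed_next, start_words=set("012")))
```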