• Title/Summary/Keyword: Speech recognition model


Implementation of a Speech Recognition System for a Car Navigation System (차량 항법용 음성인식 시스템의 구현)

  • Lee, Tae-Han;Yang, Tae-Young;Park, Sang-Taick;Lee, Chung-Yong;Youn, Dae-Hee;Cha, Il-Hwan
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.9 / pp.103-112 / 1999
  • In this paper, a speaker-independent isolated word recognition system for a car navigation system is implemented using a general-purpose digital signal processor. This paper presents a noise processing method combining SNR normalization with RAS. The semi-continuous hidden Markov model is adopted, and a TMS320C31 is used to implement the real-time system. The recognition vocabulary consists of 69 command words for the car navigation system. Experimental results show a maximum recognition rate of 93.62% when SNR normalization is combined with spectral subtraction, a performance improvement of 3.69%. The presented noise processing method showed good speech recognition performance at 5 dB SNR in a car environment.

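The abstract above does not detail the RAS or SNR-normalization stages, but the spectral-subtraction half of the noise processing can be sketched minimally. The following assumes magnitude-spectrum frames and a noise estimate taken from a speech pause; the function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def spectral_subtraction(noisy_mag, noise_mag, alpha=1.0, floor=0.02):
    """Subtract an estimated noise magnitude spectrum from each frame,
    clamping the result to a small spectral floor.

    noisy_mag : (frames, bins) magnitude spectra of the noisy speech
    noise_mag : (bins,) noise magnitude estimate (e.g. from a speech pause)
    alpha     : over-subtraction factor
    floor     : spectral floor as a fraction of the noise estimate
    """
    cleaned = noisy_mag - alpha * noise_mag
    return np.maximum(cleaned, floor * noise_mag)

# toy example: one frame, 4 bins, constant noise level of 1.0
noise = np.ones(4)
frames = np.array([[3.0, 1.2, 0.5, 2.0]])
enhanced = spectral_subtraction(frames, noise)  # → [[2.0, 0.2, 0.02, 1.0]]
```

The spectral floor prevents negative magnitudes after subtraction, a standard guard in this family of methods.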

Language Model based on VCCV and Test of Smoothing Techniques for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 언어모델과 Smoothing 기법 평가)

  • Park, Seon-Hee;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB / v.11B no.2 / pp.241-246 / 2004
  • In this paper, we propose VCCV units as the processing unit of a language model and compare them with clauses and morphemes, the existing processing units. Clauses and morphemes have large vocabularies and high perplexity, whereas VCCV units have low perplexity because of their small lexicon and limited vocabulary. Constructing a language model raises the issue of smoothing: smoothing techniques are used to better estimate probabilities when the data are insufficient to estimate them accurately. We built language models of morphemes, clauses, and VCCV units and calculated their perplexity; the perplexity of VCCV units is lower than that of morpheme and clause units. We then constructed N-grams of the low-perplexity VCCV units and tested the language model using Katz, absolute, and modified Kneser-Ney smoothing, among others. The experimental results show that modified Kneser-Ney smoothing is the appropriate smoothing technique for VCCV units.
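The paper evaluates Katz, absolute, and modified Kneser-Ney smoothing. As a minimal sketch of the shared discounting idea (not the paper's modified Kneser-Ney), here is a bigram perplexity computation with absolute discounting and add-one unigram backoff; the toy token lists stand in for VCCV unit sequences.

```python
import math
from collections import Counter

def bigram_perplexity(train, test, d=0.75):
    """Perplexity of a bigram model with absolute discounting and
    add-one unigram backoff (a simplified relative of Kneser-Ney)."""
    bigrams = Counter(zip(train, train[1:]))
    unigrams = Counter(train)
    V, N = len(unigrams), len(train)

    def prob(w, prev):
        uni = (unigrams[w] + 1) / (N + V)              # smoothed unigram
        if unigrams[prev] == 0:
            return uni                                  # back off entirely
        disc = max(bigrams[(prev, w)] - d, 0) / unigrams[prev]
        n_types = sum(1 for b in bigrams if b[0] == prev)
        lam = d * n_types / unigrams[prev]              # mass freed by discounting
        return disc + lam * uni

    logp = sum(math.log(prob(w, p)) for p, w in zip(test, test[1:]))
    return math.exp(-logp / (len(test) - 1))

# toy "VCCV unit" sequences; real units would come from syllable splitting
train_units = ["a", "b", "a", "b", "a", "b"]
test_units = ["a", "b", "a", "b"]
ppl = bigram_perplexity(train_units, test_units)
```

A smaller unit inventory, as the paper argues for VCCV units, tends to lower perplexity simply because fewer alternatives compete at each prediction step.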

Self-Adaptation Algorithm Based on Maximum A Posteriori Eigenvoice for Korean Connected Digit Recognition (한국어 연결 숫자음 인식을 위한 최대 사후 Eigenvoice에 근거한 자기적응 기법)

  • Kim Dong Kook;Jeon Hyung Bae
    • The Journal of the Acoustical Society of Korea / v.23 no.8 / pp.590-596 / 2004
  • This paper presents a new self-adaptation algorithm based on maximum a posteriori (MAP) eigenvoice for Korean connected digit recognition. The proposed MAP eigenvoice is developed by introducing a probability density model for the eigenvoice coefficients, and it provides a unified framework that incorporates the prior model into conventional eigenvoice estimation. Since the self-adaptation system uses only the single utterance to be recognized as adaptation data, the robust MAP eigenvoice is well suited to it. In a series of self-adaptation experiments on the Korean connected digit recognition task, we demonstrate that the proposed approach outperforms the conventional eigenvoice algorithm for small amounts of adaptation data.
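The paper's EM-based formulation over HMM statistics is not reproduced here, but the core idea of a Gaussian prior on eigenvoice coefficients can be sketched in a simplified linear-Gaussian view, where the MAP solution reduces to a ridge-regularized projection. All names and the single-Gaussian simplification are assumptions for illustration.

```python
import numpy as np

def map_eigenvoice(x, mu0, V, tau=1.0):
    """MAP estimate of eigenvoice coefficients w for an adapted mean
    mu = mu0 + V @ w, under a zero-mean Gaussian prior on w.

    In this simplified linear-Gaussian view the MAP solution is a
    ridge-regularized projection; tau -> 0 recovers the ML eigenvoice.

    x   : (D,) mean statistics from the single adaptation utterance
    mu0 : (D,) speaker-independent mean supervector
    V   : (D, K) eigenvoice basis (columns = eigenvoices)
    tau : prior precision
    """
    K = V.shape[1]
    w = np.linalg.solve(V.T @ V + tau * np.eye(K), V.T @ (x - mu0))
    return mu0 + V @ w, w

# with a weak prior, a point in the span of V is recovered almost exactly;
# a strong prior shrinks the coefficients toward zero (toward mu0)
mu0 = np.zeros(3)
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
x = np.array([2.0, 3.0, 0.0])
mu_ml, w_ml = map_eigenvoice(x, mu0, V, tau=1e-8)
mu_map, w_map = map_eigenvoice(x, mu0, V, tau=100.0)
```

The shrinkage behavior is what makes the MAP variant robust when, as in the paper's setting, only one adaptation utterance is available.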

Recognition of Continuous Spoken Korean Language using HMM and Level Building (은닉 마르코프 모델과 레벨 빌딩을 이용한 한국어 연속 음성 인식)

  • 김경현;김상균;김항준
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.11 / pp.63-75 / 1998
  • Since many co-articulation problems occur in continuous spoken Korean, several studies use the word as the basic recognition unit. Although the word unit avoids this problem, it requires much memory and has difficulty matching an input utterance against a word list. In this paper, we propose a hidden Markov model (HMM) based recognition model that interconnects word HMMs in a network representing the syntax of sentences. To match the input sentence to a continuous word sequence in the network, we use the level building search algorithm. This system represents a large sentence set with relatively little memory and also has good extensibility. Experimental results on an airplane reservation system show that it is a suitable method for a practical recognition system.

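The level-building idea can be sketched as a dynamic program over levels (word positions in the sentence) and frame boundaries. Real level building computes word spans with per-state Viterbi passes inside each word HMM and restricts words by the syntax network; in this simplified sketch the per-word span scores are precomputed, and all names are illustrative.

```python
import numpy as np

def level_building(word_scores, max_words):
    """Simplified level-building search over a word network.

    word_scores maps each word to a (T+1, T+1) matrix where entry [s, e]
    is the log-score of that word's model spanning frames [s, e).
    best[l][t] = best score covering frames [0, t) using exactly l words.
    """
    words = list(word_scores)
    T = next(iter(word_scores.values())).shape[0] - 1
    NEG = -1e9
    best = np.full((max_words + 1, T + 1), NEG)
    back = {}
    best[0][0] = 0.0
    for l in range(1, max_words + 1):          # one level per word slot
        for e in range(1, T + 1):
            for s in range(e):
                if best[l - 1][s] < NEG / 2:
                    continue
                for w in words:
                    sc = best[l - 1][s] + word_scores[w][s, e]
                    if sc > best[l][e]:
                        best[l][e] = sc
                        back[(l, e)] = (w, s)
    l = int(np.argmax(best[:, T]))             # best word count ending at T
    seq, e = [], T
    while l > 0:                               # backtrace through the levels
        w, s = back[(l, e)]
        seq.append(w)
        l, e = l - 1, s
    return seq[::-1], float(best[:, T].max())

# two "word models": 'a' matches frames [0, 2), 'b' matches frames [2, 4)
a = np.full((5, 5), -10.0); a[0, 2] = -1.0
b = np.full((5, 5), -10.0); b[2, 4] = -1.0
seq, score = level_building({"a": a, "b": b}, max_words=3)  # → ['a', 'b'], -2.0
```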

Decision Tree for Likely phoneme model schema support (유사 음소 모델 스키마 지원을 위한 결정 트리)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence / v.11 no.10 / pp.367-372 / 2013
  • In speech recognition systems, problems with phonemes during model training force a regeneration of the stored models, which incurs extra time and cost. In this paper, we propose a likely-phoneme model schema method using decision tree clustering. The proposed system applies decision tree clustering to the generated models, yielding a robust and accurate acoustic model, and thereby reduces the regeneration process while supporting retrieval of phoneme units in the probability model. The proposed system also provides additional likely phoneme models, from which a robust and accurate acoustic model is configured. As a result, the system shows a vocabulary-dependent recognition rate of 98.3% and a vocabulary-independent recognition rate of 98.4%.

Dialect classification based on the speed and the pause of speech utterances (발화 속도와 휴지 구간 길이를 사용한 방언 분류)

  • Jonghwan Na;Bowon Lee
    • Phonetics and Speech Sciences / v.15 no.2 / pp.43-51 / 2023
  • In this paper, we propose an approach to dialect classification based on the speed and pauses of speech utterances as well as the age and gender of the speakers. Dialect classification is an important technique for speech analysis; for example, an accurate dialect classification model can potentially improve the performance of speaker or speech recognition. According to previous studies, deep learning with Mel-Frequency Cepstral Coefficient (MFCC) features has been the dominant approach. We instead focus on the acoustic differences between regions and conduct dialect classification based on features derived from those differences, extracting underexplored additional features, namely the speed and pauses of speech utterances, along with metadata including the age and gender of the speakers. Experimental results show that the proposed approach achieves higher accuracy than the MFCC-only method, especially with the speech rate feature: incorporating all the proposed features improves accuracy from 91.02% to 97.02%.

A Phase-related Feature Extraction Method for Robust Speaker Verification (열악한 환경에 강인한 화자인증을 위한 위상 기반 특징 추출 기법)

  • Kwon, Chul-Hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.3 / pp.613-620 / 2010
  • Additive noise and channel distortion strongly degrade the performance of speaker verification systems, as they distort the features of speech. This distortion causes a mismatch between the training and recognition conditions, such that acoustic models trained with clean speech do not model noisy and channel-distorted speech accurately. This paper presents a phase-related feature extraction method to improve the robustness of speaker verification systems. The instantaneous frequency is computed from the phase of the speech signal, and features are obtained from the histogram of the instantaneous frequency. Experimental results show that the proposed technique offers significant improvements over the standard techniques in both clean and adverse testing environments.
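The instantaneous-frequency computation described above can be sketched as the phase derivative of the analytic signal. This minimal version builds the analytic signal with an FFT-based Hilbert transform; the histogram binning at the end stands in for the paper's histogram features, whose exact configuration is not given in the abstract.

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the phase of the analytic
    signal, built with an FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)             # negative frequencies removed
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)

# a pure 100 Hz tone has a flat instantaneous frequency of 100 Hz
fs = 8000
t = np.arange(0, 0.1, 1.0 / fs)
inst_f = instantaneous_frequency(np.sin(2 * np.pi * 100 * t), fs)
hist_features, _ = np.histogram(inst_f, bins=8)  # histogram-of-IF features
```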

Support Vector Machine Based Phoneme Segmentation for Lip Synch Application

  • Lee, Kun-Young;Ko, Han-Seok
    • Speech Sciences / v.11 no.2 / pp.193-210 / 2004
  • In this paper, we develop a real-time lip-synch system that animates a 2-D avatar's lip motion in synch with an incoming speech utterance. To realize real-time operation, we contain the processing time by invoking merge and split procedures that perform coarse-to-fine phoneme classification. At each stage of phoneme classification, we apply the support vector machine (SVM) to reduce the computational load while retaining the desired accuracy. The coarse-to-fine phoneme classification is accomplished via two stages of feature extraction: first, each speech frame is acoustically analyzed into 3 classes of lip opening using Mel-Frequency Cepstral Coefficients (MFCC) as features; second, each frame is further classified into detailed lip shapes using formant information. We implemented the system with 2-D lip animation, which demonstrates the effectiveness of the proposed two-stage procedure in accomplishing a real-time lip-synch task. The method using phoneme merging and SVM achieved about twice the recognition speed of a method employing the hidden Markov model (HMM): a typical latency per frame was on the order of 18.22 milliseconds for our method, while an HMM method applied under identical conditions resulted in about 30.67 milliseconds.


Performance Improvement of Korean Connected Digit Recognition Using Various Discriminant Analyses (다양한 변별분석을 통한 한국어 연결숫자 인식 성능향상에 관한 연구)

  • Song Hwa Jeon;Kim Hyung Soon
    • MALSORI / no.44 / pp.105-113 / 2002
  • In Korean, each digit is a monosyllable and some pairs are known to be highly confusable, degrading the performance of connected digit recognition systems. To improve performance, in this paper we employ various discriminant analyses (DA), including Linear DA (LDA), Weighted Pairwise Scatter LDA (WPS-LDA), Heteroscedastic Discriminant Analysis (HDA), and Maximum Likelihood Linear Transformation (MLLT). We also examine several combinations of these DA methods for additional performance improvement. Experimental results show that applying any of the DA methods mentioned above improves the string accuracy, but the amount of improvement varies with the model complexity, i.e., the number of mixtures per state. In particular, more than 20% string error reduction over the baseline system is achieved by applying MLLT after WPS-LDA, when the class level of DA is defined as a tied state and 1 mixture per state is used.

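Of the discriminant analyses compared above, plain LDA is the common baseline, and it can be sketched compactly: maximize between-class scatter relative to within-class scatter and project onto the leading eigenvectors. This is a generic Fisher LDA sketch, not the paper's WPS-LDA, HDA, or MLLT variants.

```python
import numpy as np

def lda_transform(X, y, n_dims):
    """Fisher LDA: project X onto the top eigenvectors of Sw^{-1} Sb."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))        # within-class scatter
    Sb = np.zeros((d, d))        # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-vals.real)
    W = vecs[:, order[:n_dims]].real
    return X @ W

# two classes separated along the first axis; LDA should keep that axis
X = np.array([[0.0, 0.0], [0.1, 1.0], [0.2, 0.0],
              [5.0, 0.0], [5.1, 1.0], [5.2, 0.0]])
y = np.array([0, 0, 0, 1, 1, 1])
z = lda_transform(X, y, 1)
```

In the paper's pipeline such a transform is applied to spliced acoustic feature vectors before (or jointly with, in the MLLT case) Gaussian mixture modeling.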

The Development of IDMLP Neural Network for the Chip Implementation and it's Application to Speech Recognition (Chip 구현을 위한 IDMLP 신경 회로망의 개발과 음성인식에 대한 응용)

  • 김신진;박정운;정호선
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.5 / pp.394-403 / 1991
  • This paper describes the development of the input driven multilayer perceptron (IDMLP) neural network and its application to Korean spoken digit recognition. A new learning algorithm is proposed for the IDMLP network used here. In this model, the weights are integers and the neuron transfer function is a hard-limit function. Depending on how difficult the input data are to classify, the trained network has one or more layers. We tested recognition of binarized data for the spoken digits 0 to 9 using the proposed network. The experimental results are 100% and 96% for the learning data and the test data, respectively.

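The forward pass of a network with integer weights and hard-limit neurons, as the abstract describes, is easy to sketch. The weights below are hand-picked to solve XOR (a classic case needing more than one layer, matching the paper's observation that harder inputs need more layers); they are illustrative, not the paper's trained values.

```python
import numpy as np

def hard_limit(v):
    """Hard-limit activation: 1 where the net input is positive, else 0."""
    return (v > 0).astype(int)

def idmlp_forward(x, layers):
    """Forward pass through integer-weight, hard-limit layers."""
    for W, b in layers:                      # W and b contain integers only
        x = hard_limit(W @ x + b)
    return x

# XOR with two layers: hidden units act as OR and NAND, the output as AND
layers = [
    (np.array([[1, 1], [-1, -1]]), np.array([0, 2])),
    (np.array([[1, 1]]), np.array([-1])),
]
outputs = [int(idmlp_forward(np.array([a, b]), layers)[0])
           for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]  # → [0, 1, 1, 0]
```

Integer weights and threshold activations are what made such networks attractive for chip implementation, since no floating-point hardware is required.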