• Title/Summary/Keyword: Speech recognition model

Search Result 624, Processing Time 0.023 seconds

The Neighborhood Effects in Korean Word Recognition Using Computation Model (계산주의적 모델을 이용한 한국어 시각단어 재인에서 나타나는 이웃효과)

  • Park, Ki-Nam;Kwon, You-An;Lim, Heui-Seok;Nam, Ki-Chun
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.295-297
    • /
    • 2007
  • This study proposes a computational model to investigate the roles of phonological and orthographic information in visual word recognition, as part of language information processing, and the representational form of the mental lexicon. The computational model reproduced the phonological and orthographic neighborhood effects observed in Korean word recognition, providing evidence that the mental lexicon is represented as phonological information during Korean word recognition.

  • PDF

A Study on the Continuous Speech Recognition for the Automatic Creation of International Phonetics (국제 음소의 자동 생성을 활용한 연속음성인식에 관한 연구)

  • Kim, Suk-Dong;Hong, Seong-Soo;Shin, Chwa-Cheul;Woo, In-Sung;Kang, Heung-Soon
    • Journal of Korea Game Society
    • /
    • v.7 no.2
    • /
    • pp.83-90
    • /
    • 2007
  • One result of the trend toward globalization is an increased number of projects that focus on natural language processing. Automatic speech recognition (ASR) technologies, for example, hold great promise in facilitating global communications and collaborations. Unfortunately, to date, most research projects focus on single widely spoken languages, so the cost of adapting a particular ASR tool for use with other languages is often prohibitive. This work takes a more general approach. We propose an International Phoneticizing Engine (IPE) that interprets input files supplied in our Phonetic Language Identity (PLI) format to build a dictionary. IPE is language-independent and rule-based. It operates by decomposing the dictionary creation process into a set of well-defined steps. These steps reduce rule conflicts, allow rule creation by people without linguistics training, and optimize run-time efficiency. Dictionaries created by IPE can be used with speech recognition systems. IPE defines an easy-to-use, systematic approach; dictionaries created with it achieved recognition rates of 92.55% for Korean speech and 89.93% for English.

  • PDF

Comparison of the Dynamic Time Warping Algorithm for Spoken Korean Isolated Digits Recognition (한국어 단독 숫자음 인식을 위한 DTW 알고리즘의 비교)

  • 홍진우;김순협
    • The Journal of the Acoustical Society of Korea
    • /
    • v.3 no.1
    • /
    • pp.25-35
    • /
    • 1984
  • This paper analyzes dynamic time warping algorithms for time normalization of speech patterns and discusses the dynamic programming (DP) algorithm for spoken Korean isolated digit recognition. In the DP matching, the feature vectors of the reference and test patterns consist of the first three formant frequencies, extracted by a power spectral density estimation algorithm based on an ARMA model. The major differences among the various DTW algorithms lie in the global path constraints, the local continuity constraints on the path, and the distance weighting/normalization used to obtain the overall minimum distance. The performance criteria used to evaluate these DP algorithms are memory requirement, speed of implementation, and recognition accuracy.

  • PDF
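The DP matching described in the abstract above can be sketched in a few lines. The following is an illustrative Python implementation of DTW with an optional global path constraint, not the paper's code; the formant-based feature extraction via ARMA spectral estimation is assumed to happen elsewhere.

```python
import numpy as np

def dtw_distance(ref, test, band=None):
    """Dynamic time warping distance between two feature sequences.

    ref, test: 2-D arrays of shape (frames, features), e.g. formant
    vectors per frame. `band` is an optional Sakoe-Chiba global path
    constraint (max |i - j|); None disables it.
    """
    n, m = len(ref), len(test)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if band is not None and abs(i - j) > band:
                continue  # outside the global path constraint
            cost = np.linalg.norm(ref[i - 1] - test[j - 1])
            # local continuity constraint: horizontal, vertical, diagonal
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # normalize by a path-independent length so utterances of
    # different durations remain comparable
    return D[n, m] / (n + m)
```

The choices the abstract compares (global constraints, local continuity, distance weighting/normalization) correspond to the `band` parameter, the three-term `min`, and the final division here.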

Vocal Effort Detection Based on Spectral Information Entropy Feature and Model Fusion

  • Chao, Hao;Lu, Bao-Yun;Liu, Yong-Li;Zhi, Hui-Lai
    • Journal of Information Processing Systems
    • /
    • v.14 no.1
    • /
    • pp.218-227
    • /
    • 2018
  • Vocal effort detection is important for both robust speech recognition and speaker recognition. In this paper, a spectral information entropy feature, which carries more salient information about the vocal effort level, is first proposed. Then, a model fusion method based on complementary models is presented to recognize the vocal effort level. Experiments conducted on an isolated-word test set show that spectral information entropy performs best among the three kinds of features, and the recognition accuracy across all vocal effort levels reaches 81.6%, demonstrating the potential of the proposed method.
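The spectral information entropy feature lends itself to a short sketch. The following Python function is an illustrative version, a plain power-spectrum entropy per frame; the paper's exact frame processing and normalization are not specified in the abstract.

```python
import numpy as np

def spectral_entropy(frame, n_fft=512):
    """Spectral information entropy of one speech frame.

    The power spectrum is normalized into a probability distribution
    over frequency bins; its Shannon entropy is low for peaky spectra
    and high for flat, noise-like ones, which is what makes it
    sensitive to vocal effort level.
    """
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft)) ** 2
    p = spectrum / (spectrum.sum() + 1e-12)   # normalize to a distribution
    return -np.sum(p * np.log2(p + 1e-12))    # Shannon entropy in bits
```

A tonal frame (energy concentrated in a few bins) yields a lower entropy than a noise frame of the same length.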

A Study on Improved MDL Technique for Optimization of Acoustic Model (향상된 MDL 기법에 의한 음향모델의 최적화 연구)

  • Cho, Hoon-Young;Kim, Sang-Hun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.1
    • /
    • pp.56-61
    • /
    • 2010
  • This paper describes optimization methods for acoustic models in HMM-based continuous speech recognition. Most conventional speech recognition systems use the same number of Gaussian mixture components for each HMM state. However, since the number of data samples available differs from state to state, it is possible to reduce the overall number of model parameters and the computational cost of decoding by optimizing the number of Gaussian mixture components per state. In this study, we introduce a Gaussian mixture weight term at the merging stage of Gaussian components in minimum description length (MDL) based acoustic model optimization. Experimental results show that the proposed method obtains better ASR accuracy than the previous optimization method, which does not consider the mixture weight term.
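An MDL criterion of the kind the abstract refers to trades model likelihood against parameter count. The sketch below scores a 1-D Gaussian mixture with a generic MDL formula; the paper's weighted merging rule for HMM state mixtures is more involved and is not reproduced here.

```python
import numpy as np

def mdl_score(data, means, variances, weights):
    """Minimum description length of a 1-D Gaussian mixture.

    MDL = -log-likelihood + (k/2) * log N, where k counts the free
    parameters (mean, variance, and weight per component, with the
    weights constrained to sum to 1). A lower score means a better
    accuracy/complexity trade-off, so a mixture with fewer components
    can win even with a slightly worse likelihood.
    """
    data = np.asarray(data, dtype=float)
    n = len(data)
    # per-sample mixture density
    dens = sum(
        w * np.exp(-0.5 * (data - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
        for m, v, w in zip(means, variances, weights)
    )
    log_lik = np.sum(np.log(dens + 1e-300))
    k = 3 * len(means) - 1  # weights sum to 1: one fewer free weight
    return -log_lik + 0.5 * k * np.log(n)
```

Comparing candidate mixtures by this score is how the component count per state would be chosen; two components that fit no better than one are penalized for their extra parameters.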

A Study on Background Speaker Selection Method in Speaker Verification System (화자인증 시스템에서 선정 방법에 관한 연구)

  • Choi, Hong-Sub
    • Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.135-146
    • /
    • 2002
  • Generally, a speaker verification system improves its recognition ratio by normalizing the log likelihood ratio using the claimed speaker's model and a background speaker model. The speaker-based cohort method is one of the methods widely used for selecting the background speaker model. Recently, a Gaussian-based cohort model has been suggested as a virtually synthesized cohort model; unlike a speaker-based model, it keeps only the probability distributions close to the claimed speaker's distribution from among several neighboring speakers' distributions and thereby synthesizes a new virtual speaker model, yielding better results than the existing speaker-based method. This study compared existing speaker-based background speaker models with virtual speaker models and then constructed new virtual background speaker model groups that combine them in certain ratios. For this, we built a speaker verification system using GMMs (Gaussian Mixture Models) and found that the suggested method of selecting the virtual background speaker model shows improved performance.

  • PDF
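The log-likelihood-ratio normalization mentioned in the abstract above reduces to a simple decision rule once the speaker and background scores are available. A minimal sketch, assuming per-frame log-likelihoods have already been computed by the respective GMMs:

```python
import numpy as np

def verify(scores_speaker, scores_cohort, threshold=0.0):
    """Normalized log-likelihood-ratio decision for speaker verification.

    scores_speaker: per-frame log-likelihoods under the claimed
    speaker's model; scores_cohort: per-frame log-likelihoods under
    the background (cohort) model. Accept the claim when the average
    log-likelihood ratio exceeds the threshold.
    """
    llr = np.mean(np.asarray(scores_speaker) - np.asarray(scores_cohort))
    return llr, llr > threshold
```

The cohort selection methods the paper compares only change how the background model producing `scores_cohort` is built; the normalization itself stays the same.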

A Study on the Speech Recognition Performance of the Multilayered Recurrent Prediction Neural Network (다층회귀예측신경망의 음성인식성능에 관한 연구)

  • 안점영
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.3 no.2
    • /
    • pp.313-319
    • /
    • 1999
  • We devise three models of the Multilayered Recurrent Prediction Neural Network (MLRPNN), obtained by modifying a four-layer Multilayered Perceptron (MLP). We experimentally compare the speech recognition performance of the three models while varying the prediction order, the number of neurons in the two hidden layers, the initial connection weights, and the transfer function. In the experiments, each MLRPNN outperforms the MLP. The model that feeds the output of the upper hidden layer back to the lower hidden layer shows the best recognition performance. All MLRPNNs with 10 or 15 neurons in the upper and lower hidden layers and a prediction order of 3 or 4 show improved speech recognition rates. In training, these MLRPNNs achieve better recognition when the initial weights are set between -0.5 and 0.5 and a unipolar sigmoid transfer function is used in the lower hidden layer.

  • PDF

Speech Recognition Based on VQ/NN using Fuzzy (Fuzzy를 이용한 VQ/NN에 기초를 둔 음성 인식)

  • Ann, Tae-Ock
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.6
    • /
    • pp.5-11
    • /
    • 1996
  • This paper studies speaker-independent recognition of single vowels and proposes a speech recognition method using VQ (Vector Quantization) and a neural network (NN). The method builds a VQ codebook, which is used to obtain the observation sequence, then calculates probability values by comparing each codeword with the data, and finally uses these probability values as inputs to the neural network. Korean single vowels were selected for the recognition experiment: ten male speakers pronounced eight single vowels ten times each. We compare the performance of our method with those of fuzzy VQ/HMM and conventional VQ/NN. According to the experimental results, the recognition rate is 92.3% for VQ/NN, 93.8% for fuzzy VQ/HMM, and 95.7% for fuzzy VQ/NN. Thus, the recognition rate of fuzzy VQ/NN is better than those of fuzzy VQ/HMM and conventional VQ/NN because of its excellent learning ability.

  • PDF
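The fuzzy VQ step that distinguishes fuzzy VQ/NN from conventional VQ/NN can be illustrated with soft codeword memberships. The following is a hypothetical fuzzy-c-means-style sketch; the paper's exact membership function is not given in the abstract.

```python
import numpy as np

def fuzzy_vq_memberships(frame, codebook, m=2.0):
    """Fuzzy membership of one feature frame in each VQ codeword.

    Instead of a hard nearest-codeword index, each frame gets a soft
    membership in every codeword (fuzzifier m > 1, fuzzy-c-means
    style); the resulting membership vector can feed the input layer
    of a neural network.
    """
    d = np.linalg.norm(codebook - frame, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()   # memberships sum to 1
```

A frame sitting exactly on a codeword gets membership close to 1 for that codeword; frames between codewords spread their membership, which is the extra information a hard VQ index discards.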

On the Use of a Parallel-Branch Subunit Model in Continuous HMM for Improved Word Recognition (연속분포 HMM에서 평행분기 음성단위를 사용한 단어인식율 향상연구)

  • Park, Yong-Kyuo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.2E
    • /
    • pp.25-32
    • /
    • 1995
  • In this paper, we propose a parallel-branch subunit model for improved word recognition. The model is obtained by splitting each subunit into branches based on the mixture components of a continuous hidden Markov model (continuous HMM). According to simulation results, the proposed model yields a higher recognition rate than the single-branch subunit model or the parallel-branch subunit model proposed by Rabiner et al. [1]. We show that a proper combination of the number of mixture components and the number of branches for each subunit increases the recognition rate. To study the recognition performance of the proposed algorithms, the speech material used in this work was a vocabulary of 1,036 Korean words.

  • PDF

Robust Feature Parameter for Implementation of Speech Recognizer Using Support Vector Machines (SVM음성인식기 구현을 위한 강인한 특징 파라메터)

  • 김창근;박정원;허강인
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.195-200
    • /
    • 2004
  • In this paper, we evaluate an effective speech recognizer through two recognition experiments. In general, SVM is a classification method that separates two classes by finding an arbitrary nonlinear boundary in vector space, and it achieves high classification performance with little training data. We compare the recognition performance of HMM and SVM as the amount of training data varies, and investigate the recognition performance of each feature parameter while transforming the MFCC feature space with Independent Component Analysis (ICA) and Principal Component Analysis (PCA). The experiments show that the recognition performance of SVM is better than that of HMM when training data are scarce, and that the feature parameters derived by ICA achieve the highest recognition performance because of their superior linear separability.
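The PCA remapping of the MFCC feature space that the abstract compares against ICA can be sketched generically. This is a standard PCA projection, not the authors' pipeline; the MFCC frames themselves are assumed as input.

```python
import numpy as np

def pca_transform(features, n_components):
    """Project feature vectors (e.g. MFCC frames) onto the top
    principal components: a linear remapping of the feature space
    of the kind compared (PCA vs. ICA) before SVM classification.
    """
    X = features - features.mean(axis=0)       # center the data
    cov = np.cov(X, rowvar=False)              # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :n_components]   # top components first
    return X @ top
```

The first output dimension carries the largest variance, the second the next largest, and so on; ICA would instead seek statistically independent (not merely decorrelated) directions.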