• Title/Summary/Keyword: Syllable Number Prediction


An Algorithm on Predicting Syllable Numbers of English Monosyllabic Loanwords in Korean (영어 단음절 차용어의 음절수 예측을 위한 알고리즘)

  • Cho Mi-Hui
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.2
    • /
    • pp.251-256
    • /
    • 2005
  • When English monosyllabic words are adapted into Korean, the resulting loanwords tend to carry extra syllables. The purpose of this paper is to identify the syllable augmentation conditions in loanword adaptation and, further, to provide an algorithm for predicting the syllable numbers of English monosyllabic loanwords. Three syllable augmentation conditions are found: 1) the existence of a diphthong, 2) the existence of consonant clusters, and 3) the quality of the final consonant (and the preceding vowel). Based on these three conditions, an algorithm to predict the syllable number of English monosyllabic loanwords is proposed as three rules applied iteratively in order. In addition, applications of the algorithm to data are given.

  • PDF
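The abstract above names three augmentation conditions applied as ordered rules. A minimal sketch of that counting idea follows; the paper's exact rules are not reproduced here, so the phoneme inventory and the precise triggering conditions below are illustrative assumptions, not the author's algorithm.

```python
from typing import List, Optional

# Simplified, assumed inventories for illustration only.
DIPHTHONGS = {"ai", "au", "ou", "ei", "oi"}
VOWELS = {"a", "e", "i", "o", "u"} | DIPHTHONGS
STOPS = {"p", "t", "k", "b", "d", "g"}
VOICED_STOPS = {"b", "d", "g"}

def predict_syllables(phonemes: List[str]) -> int:
    """Estimate the Korean syllable count of an English monosyllabic word.

    `phonemes` is a simplified segment list, e.g. ["s", "t", "r", "ai", "k"]
    for "strike". Start from one syllable and add one per condition met.
    """
    count = 1  # the source word is monosyllabic

    # Condition 1: a diphthong spreads over two Korean syllables
    # ("strike" -> seu-teu-ra-i-keu).
    if any(p in DIPHTHONGS for p in phonemes):
        count += 1

    # Condition 2: each extra consonant in a cluster takes an
    # epenthetic vowel, hence its own syllable.
    run = 0
    for p in phonemes:
        if p in VOWELS:
            run = 0
        else:
            run += 1
            if run > 1:
                count += 1

    # Condition 3 (final consonant and preceding vowel): assume a final
    # voiced stop, or a stop after a diphthong or another consonant,
    # takes an epenthetic vowel ("bed" -> be-deu, but "cat" -> kaet).
    last = phonemes[-1]
    prev: Optional[str] = phonemes[-2] if len(phonemes) > 1 else None
    if last in STOPS and (
        last in VOICED_STOPS or prev in DIPHTHONGS or prev not in VOWELS
    ):
        count += 1
    return count
```

Under these assumptions, "strike" yields 5 (seu-teu-ra-i-keu) and "best" yields 3 (be-seu-teu), matching the attested loanword forms.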

A Study on the Syllable Recognition Using Neural Network Predictive HMM

  • Kim, Soo-Hoon;Kim, Sang-Berm;Koh, Si-Young;Hur, Kang-In
    • The Journal of the Acoustical Society of Korea
    • /
    • v.17 no.2E
    • /
    • pp.26-30
    • /
    • 1998
  • In this paper, we compose a neural network predictive HMM (NNPHMM) to provide the dynamic features of the speech pattern to the HMM. The NNPHMM is a hybrid of a neural network and the HMM. The NNPHMM, trained to predict the future vector, varies at each time step and is used in place of the mean vector in the HMM. In the experiment, we compared recognition rates for one hundred Korean syllables while varying the hidden-layer size, the number of states, and the prediction order of the NNPHMM: the hidden layer increased from 10 to 30 dimensions, the number of states from 4 to 6, and the prediction order from second to fourth. The NNPHMM in the experiment is composed of a multi-layer perceptron with one hidden layer and the CMHMM. As a result, with second-order prediction the average recognition rate increased by 3.5% when the number of states changed from 4 to 5; with third-order prediction it increased by 4.0%, and with fourth-order prediction by 3.2%. However, the recognition rate decreased when the number of states changed from 5 to 6.

  • PDF
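The core idea in this abstract is that a neural network's prediction of the next frame replaces the HMM state's fixed mean vector, so recognition can score a syllable by accumulated prediction error. A minimal NumPy sketch of that scoring step follows; the layer sizes, the absence of a training loop, and the flat (non-state-segmented) error accumulation are simplifying assumptions, not the paper's full NNPHMM.

```python
import numpy as np

class FramePredictor:
    """One-hidden-layer MLP predicting frame t from frames t-order .. t-1."""

    def __init__(self, dim: int, order: int, hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.order = order
        # Untrained illustrative weights; the paper trains these to
        # minimize prediction error.
        self.W1 = rng.normal(0.0, 0.1, (hidden, dim * order))
        self.W2 = rng.normal(0.0, 0.1, (dim, hidden))

    def predict(self, context: np.ndarray) -> np.ndarray:
        # context has shape (order, dim); flatten it into one input vector.
        h = np.tanh(self.W1 @ context.ravel())
        return self.W2 @ h

def prediction_error(model: FramePredictor, frames: np.ndarray) -> float:
    """Accumulated squared error, used in place of distance to a state mean."""
    err = 0.0
    for t in range(model.order, len(frames)):
        pred = model.predict(frames[t - model.order:t])
        err += float(np.sum((frames[t] - pred) ** 2))
    return err

def recognize(models: dict, frames: np.ndarray) -> str:
    """Pick the syllable whose predictor gives the minimum prediction error."""
    return min(models, key=lambda label: prediction_error(models[label], frames))
```

One predictor per syllable class, with the minimum-error model taken as the recognition result, mirrors the decision rule described in the abstracts above.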

A Study on Speech Recognition using Recurrent Neural Networks (회귀신경망을 이용한 음성인식에 관한 연구)

  • 한학용;김주성;허강인
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.3
    • /
    • pp.62-67
    • /
    • 1999
  • In this paper, we investigate a reliable model of the predictive recurrent neural network for speech recognition. The predictive neural networks are modeled in syllable units; for a given input syllable, the model that gives the minimum prediction error is taken as the recognition result. The predictive neural network was given a recurrent structure so that the dynamic features of the speech pattern enter the network. We compared the recognition abilities of the recurrent networks proposed by Elman and Jordan. ETRI's SAMDORI was used as the speech DB. To find a reliable model, recognition rates were compared under two conditions: (1) changing the prediction order and the number of hidden units; and (2) accumulating previous values with a self-loop coefficient in the context units. The results show that the optimum prediction order, number of hidden units, and self-loop coefficient respond differently according to the network structure used. In general, however, Jordan's recurrent network shows a relatively higher recognition rate than Elman's. The effect of the self-loop coefficient on the recognition rate varied with the network structure and coefficient values.

  • PDF
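The comparison in this abstract rests on a structural difference between the two recurrences: an Elman network feeds the hidden state back into the context units, while a Jordan network feeds the output back, and in both cases the paper accumulates previous values with a self-loop coefficient. A single-step sketch of the two update rules follows; the weight shapes are standard for these architectures, but the use of one shared coefficient `alpha` on the context units is an assumption for illustration.

```python
import numpy as np

def step_elman(x, context, W_in, W_ctx, W_out, alpha):
    """One Elman step: context accumulates the HIDDEN state."""
    h = np.tanh(W_in @ x + W_ctx @ context)   # context has hidden-layer size
    new_context = alpha * context + h         # self-loop accumulation
    return W_out @ h, new_context

def step_jordan(x, context, W_in, W_ctx, W_out, alpha):
    """One Jordan step: context accumulates the OUTPUT."""
    h = np.tanh(W_in @ x + W_ctx @ context)   # context has output-layer size
    y = W_out @ h
    new_context = alpha * context + y         # self-loop accumulation
    return y, new_context
```

With `alpha = 0` each network sees only the previous step's hidden state or output; as `alpha` grows, the context becomes a decaying sum over the whole past, which is the accumulation effect whose influence the paper reports as varying with network structure.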