• Title/Summary/Keyword: Phonemes

227 search results, processing time 0.027 seconds

Alveolar Fricative Sound Errors by the Type of Morpheme in the Spontaneous Speech of 3- and 4-Year-Old Children (자발화에 나타난 형태소 유형에 따른 3-4세 아동의 치경마찰음 오류)

  • Kim, Soo-Jin;Kim, Jung-Mee;Yoon, Mi-Sun;Chang, Moon-Soo;Cha, Jae-Eun
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.129-136
    • /
    • 2012
  • Korean alveolar fricatives are late-developing speech sounds. Most previous research on phonemes used individual words or pseudo-words to elicit sounds, but word-level phonological analysis does not always reflect a child's practical articulation ability. There has also been limited research on articulation development that examines speech production by grammatical morpheme, despite its importance in Korean. Therefore, this research examines the articulation development and phonological patterns of the /s/ phoneme across morphological types produced in children's spontaneous conversational speech. The subjects were twenty-two typically developing 3- and 4-year-old Korean children. All children showed normal levels in three screening tests: hearing, vocabulary, and articulation. Spontaneous conversational samples were recorded at the children's homes. The results are as follows. Error rates decreased with increasing age in all morphological contexts. Error percentages within an age group were also significantly lower in lexical morphemes than in grammatical morphemes. Stopping of fricatives was the main error pattern in all morphological contexts, and its frequency decreased with age. This research shows that articulation performance can differ significantly across morphological contexts. The present study provides data that can be used to identify difficult contexts for the articulatory evaluation and therapy of alveolar fricatives.

Reliability measure improvement of Phoneme character extract In Out-of-Vocabulary Rejection Algorithm (미등록어 거절 알고리즘에서 음소 특성 추출의 신뢰도 측정 개선)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.10 no.6
    • /
    • pp.219-224
    • /
    • 2012
  • In mobile communication terminals, vocabulary recognition systems suffer low recognition rates because phoneme features are extracted from inaccurate vocabulary, so phonemes go unrecognized and similar phonemes are confused with one another. To solve this problem, this paper proposes a system model based on a two-step process. First, the input phoneme is represented by a number that measures inter-phoneme distance through a phoneme-likelihood process; second, the result is recognized through a reliability measure. By this process, we minimize the phoneme confusion errors caused by inaccurate vocabulary and improve the error correction rate for erroneous vocabulary using phoneme likelihood and reliability. A system performance comparison shows a 2.7% improvement in recognition over a method using error-pattern learning and semantic patterns.
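The two-step accept/reject idea resembles a standard likelihood-ratio confidence test for out-of-vocabulary rejection. The sketch below illustrates that general technique, not the paper's implementation; all function names, scores, and the threshold are invented for the example.

```python
def reliability(candidate_loglik: float, antimodel_loglik: float) -> float:
    """Confidence score: log-likelihood ratio between the best vocabulary
    match and a background (anti-)model, a common OOV-rejection measure."""
    return candidate_loglik - antimodel_loglik

def accept(candidate_loglik: float, antimodel_loglik: float,
           threshold: float = 2.0) -> bool:
    """Accept the recognition result only if the reliability measure
    clears the threshold; otherwise reject it as out-of-vocabulary."""
    return reliability(candidate_loglik, antimodel_loglik) >= threshold

# An in-vocabulary word scores much better under its own model...
print(accept(-40.0, -55.0))   # True (ratio 15.0)
# ...while an out-of-vocabulary word scores similarly under both.
print(accept(-52.0, -51.0))   # False (ratio -1.0)
```

The threshold trades false rejections of in-vocabulary words against false acceptances of unknown ones, which is why a separate reliability step after the likelihood computation is useful.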

Automatic Vowel Sequence Reproduction for a Talking Robot Based on PARCOR Coefficient Template Matching

  • Vo, Nhu Thanh;Sawada, Hideyuki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.3
    • /
    • pp.215-221
    • /
    • 2016
  • This paper describes an automatic vowel sequence reproduction system for a talking robot built to reproduce the human voice based on the working behavior of the human articulatory system. A sound analysis system is developed to record a sentence spoken by a human (mainly vowel sequences in Japanese) and then analyze that sentence to produce the correct command packet so the talking robot can repeat it. An algorithm based on a short-time energy method is developed to separate and count sound phonemes. Template matching using partial correlation (PARCOR) coefficients is applied to detect the voice in the talking robot's database most similar to the spoken voice. Combining the sound separation and counting results with the detection of vowels in human speech, the talking robot can reproduce a vowel sequence similar to the one spoken by the human. Two tests verifying the robot's working behavior were performed; their results indicate that the robot can repeat a sequence of vowels spoken by a human with an average success rate of more than 60%.
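PARCOR template matching can be sketched in outline: reflection (PARCOR) coefficients fall out of the Levinson-Durbin recursion on a frame's autocorrelation, and a vowel is picked by nearest stored template. This is a generic sketch, not the authors' code; the synthetic AR "vowels", the model order, and all parameter values are illustrative.

```python
import numpy as np

def parcor_coefficients(frame, order=8):
    """Levinson-Durbin recursion on the frame's autocorrelation; the
    intermediate k_i values are the reflection (PARCOR) coefficients."""
    x = np.asarray(frame, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a, e, ks = [1.0], r[0], []          # predictor polynomial, error energy
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e
        a = [a[j] + k * (a[i - j] if i - j < len(a) else 0.0)
             for j in range(i)] + [k]
        ks.append(k)
        e *= 1.0 - k * k
    return np.array(ks)

def best_vowel(frame, templates):
    """Template matching: return the vowel whose stored PARCOR vector is
    nearest (Euclidean distance) to the frame's coefficients."""
    c = parcor_coefficients(frame)
    return min(templates, key=lambda v: np.linalg.norm(c - templates[v]))

# Synthetic stand-ins for two vowel spectra: white noise shaped by
# different second-order all-pole (AR) filters.
rng = np.random.default_rng(0)
def ar_frame(b1, b2, n=400):
    x, noise = np.zeros(n), rng.standard_normal(n)
    for t in range(2, n):
        x[t] = b1 * x[t - 1] + b2 * x[t - 2] + noise[t]
    return x

templates = {"a": parcor_coefficients(ar_frame(1.3, -0.6)),
             "i": parcor_coefficients(ar_frame(0.2, -0.8))}
print(best_vowel(ar_frame(1.3, -0.6), templates))  # a fresh "a"-like frame
```

PARCOR coefficients are bounded by 1 in magnitude for a valid autocorrelation sequence, which makes Euclidean distance between coefficient vectors a reasonable spectral similarity measure for this kind of lookup.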

An acoustic study of word-timing with references to Korean (한국어 분류에 관한 음향음성학적 연구)

  • Kim, Dae-Won
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.323-327
    • /
    • 1994
  • There have been three contrasting claims about the classification of Korean. To answer the classification question, timing variables that would determine the durations of the syllable, word, and foot were investigated with various words, either in isolation or in sentence contexts, using Soundcoup/16 on a Macintosh PC; a total of 284 utterances obtained from six Korean speakers were used. It was found 1) that the durational pattern of words tended to be maintained in utterances, regardless of position, subject, and dialect; 2) that syllable duration was determined both by the types of phonemes and by the number of phonemes, word duration both by syllable complexity and by the number of syllables, and foot duration by word complexity; 3) that there was a contrastive relationship between foot length in syllables and foot duration; and 4) that foot duration generally varied with word complexity if the same word did not occur in both the first and the second foot. On this basis, it was concluded that Korean is a word-timed language in which, all else being equal (including tempo, emphasis, etc.), the inherent durational pattern of words tends to be maintained in utterances. The main differences between stress timing, syllable timing, and word timing were also discussed.

  • PDF

A Study on Measuring the Speaking Rate of Speaking Signal by Using Line Spectrum Pair Coefficients

  • Jang, Kyung-A;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3E
    • /
    • pp.18-24
    • /
    • 2001
  • Speaking rate represents how many phonemes a speech signal contains in a limited time; it varies with the speaker and with the characteristics of each phoneme. In present speech recognition systems, preprocessing to remove the effect of speaking-rate variation is necessary before recognition, so if the speaking rate can be estimated in advance, recognition performance can be improved. Conventional speech vocoders decide the transmission rate by analyzing fixed periods regardless of phoneme variation; if the speaking rate can be estimated in advance, however, it is important information for speech coding as well, improving vocoder sound quality and enabling a variable transmission rate. In this paper, we propose a method for representing the speaking rate as a parameter in a speech vocoder. To estimate the speaking rate, phoneme variation is estimated using Line Spectrum Pairs. Comparing the proposed algorithm with a manual method performed by eye, the error between the two methods is 5.38% for fast utterances and 1.78% for slow utterances, and the accuracy between the two methods is 98% for slow utterances and 94% for fast utterances at 30 dB and 10 dB SNR, respectively.

  • PDF
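As a rough illustration of the idea (not the authors' algorithm): Line Spectrum Pairs are obtained from LPC coefficients via the sum and difference polynomials, and counting large frame-to-frame LSP jumps gives a crude phoneme-change rate. The LPC frames, jump threshold, and frame length below are invented for the example.

```python
import numpy as np

def lsp_frequencies(a):
    """Line Spectrum Pairs of an LPC polynomial A(z) = 1 + a1 z^-1 + ...:
    the angles in (0, pi) of the unit-circle roots of the palindromic sum
    polynomial P(z) and the antipalindromic difference polynomial Q(z)."""
    a = np.asarray(a, dtype=float)
    p = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    angles = np.angle(np.concatenate([np.roots(p), np.roots(q)]))
    eps = 1e-9   # drop the trivial roots at z = 1 and z = -1
    return np.sort(angles[(angles > eps) & (angles < np.pi - eps)])

def speaking_rate(lpc_frames, frame_sec=0.02, jump=0.3):
    """Count large frame-to-frame LSP movements as phoneme transitions
    and divide by elapsed time -- a crude speaking-rate estimate."""
    lsps = [lsp_frequencies(a) for a in lpc_frames]
    changes = sum(np.linalg.norm(lsps[i] - lsps[i - 1]) > jump
                  for i in range(1, len(lsps)))
    return changes / (len(lpc_frames) * frame_sec)

# Two synthetic "phonemes": order-2 LPC polynomials whose resonances sit
# at different angles; five 20 ms frames of each gives one transition.
vowel1 = np.poly([0.9 * np.exp(0.5j), 0.9 * np.exp(-0.5j)]).real
vowel2 = np.poly([0.9 * np.exp(1.5j), 0.9 * np.exp(-1.5j)]).real
frames = [vowel1] * 5 + [vowel2] * 5
print(speaking_rate(frames))  # 1 transition over 0.2 s -> 5.0 changes/sec
```

LSPs suit this use because they change smoothly within a steady sound and jump at spectral transitions, so the jump count tracks phoneme boundaries without requiring full recognition.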

Korean Speech Recognition Based on Syllable (음절을 기반으로한 한국어 음성인식)

  • Lee, Young-Ho;Jeong, Hong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.1
    • /
    • pp.11-22
    • /
    • 1994
  • For conventional word-based systems, it is very difficult to enlarge the vocabulary. To cope with this problem, we must use more fundamental units of speech, such as syllables and phonemes. Korean speech consists of initial consonants, middle vowels, and final consonants, and has the characteristic that syllables can be obtained from speech easily. In this paper, we present a speech recognition system that takes advantage of the syllable characteristics peculiar to Korean speech. The recognition algorithm is a Time Delay Neural Network. To recognize many recognition units, the system consists of initial-consonant, middle-vowel, and final-consonant recognition networks. The system first recognizes initial consonants, middle vowels, and final consonants, then uses these results to recognize isolated words. Through experiments, we obtained recognition rates of 85.12% for 2,735 initial-consonant samples, 86.95% for 3,110 middle-vowel samples, and 90.58% for 1,615 final-consonant samples, and a 71.2% recognition rate for 250 isolated words.

  • PDF
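The three-network design (initial, middle, and final recognizers whose outputs are combined into word decisions) can be illustrated generically. The probabilities and candidate words below are made up, and the scoring is a plain log-probability sum rather than the authors' TDNN.

```python
import math

# Illustrative output distributions from the three sub-networks.
initial_p = {"ㄱ": 0.7, "ㄴ": 0.2, "ㅁ": 0.1}
middle_p  = {"ㅏ": 0.6, "ㅗ": 0.3, "ㅜ": 0.1}
final_p   = {"": 0.5, "ㄴ": 0.4, "ㅁ": 0.1}

def syllable_score(ini, mid, fin):
    """Combine the three sub-network outputs for one syllable;
    log-probabilities keep long products from underflowing."""
    return (math.log(initial_p[ini]) + math.log(middle_p[mid])
            + math.log(final_p[fin]))

# Isolated-word step: each candidate word is a sequence of
# (initial, middle, final) triples; the best total score wins.
words = {"간": [("ㄱ", "ㅏ", "ㄴ")], "몬": [("ㅁ", "ㅗ", "ㄴ")]}
best = max(words, key=lambda w: sum(syllable_score(*s) for s in words[w]))
print(best)  # "간": 0.7*0.6*0.4 beats 0.1*0.3*0.4
```

Factoring recognition into three small unit classifiers keeps each output layer tiny (19 initials, 21 vowels, 28 finals including none), which is the scalability advantage over whole-word models that the abstract points to.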

Perception of the English Epenthetic Stops by Korean Listeners

  • Han, Jeong-Im
    • Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.87-103
    • /
    • 2004
  • This study investigates Korean listeners' perception of English stop epenthesis between sonorant and fricative segments. Specifically, it investigates 1) how often English epenthetic stops are perceived by native Korean listeners, given that Korean does not allow consonant clusters in codas; and 2) whether perception of the epenthetic stops, which are optional phonetic variations rather than phonemes, could be improved without any explicit training. A set of 120 English non-words with the monosyllabic structure CVC1C2, where C1 = /m, n, ŋ, l/ and C2 = /s, θ, ʃ/, was given to two groups of native Korean listeners, who were asked to detect target stops such as [p], [t], and [k]. The number of their responses was computed to determine how often listeners succeed in recovering the string of segments produced by the native English speaker. The results show that English epenthetic stops are poorly identified by native Korean listeners with low English proficiency, even when stimuli with strong acoustic cues are provided, but that perception of epenthetic stops is closely related to listeners' English proficiency, showing the possibility of improvement. The results further show an asymmetry in the perception of epenthetic stops between coronal and non-coronal consonants.

  • PDF

Sensitive Period of Auditory Perception and Linguistic Discrimination

  • Cha, Kyung-Whan;Jo, Hannah
    • Phonetics and Speech Sciences
    • /
    • v.6 no.1
    • /
    • pp.59-67
    • /
    • 2014
  • The purpose of this study is to scientifically examine Kuhl's (2011) critical period graph, originally from Johnson and Newport (1989), from the perspective of auditory perception and linguistic discrimination. The study uses two types of experiments (auditory perception and linguistic phoneme discrimination) with five age groups (5 years, 6-8 years, 9-13 years, 15-17 years, and 20-26 years) of Korean English learners. Auditory perception is examined via ultrasonic sounds of the kind commonly used in the medical field. In addition, each group's ability to discriminate minimal pairs in Chinese is measured. Since almost all Korean students already have some exposure to English, the researchers selected phonemes from Chinese, a foreign language to which none of the subject groups had been exposed. The results accord almost completely with Kuhl's critical period graph for auditory perception and linguistic discrimination; a sensitive age is found at 8 years. The results show that the auditory capability of kindergarten children is significantly better than that of older students, as measured by their ability to perceive ultrasonic sounds and to distinguish ten minimal pairs in Chinese. This finding strongly implies that human auditory ability is a key factor in the sensitive period of language acquisition.

Automatic pronunciation assessment of English produced by Korean learners using articulatory features (조음자질을 이용한 한국인 학습자의 영어 발화 자동 발음 평가)

  • Ryu, Hyuksu;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.103-113
    • /
    • 2016
  • This paper proposes articulatory features as novel predictors for automatic pronunciation assessment of English produced by Korean learners. Based on distinctive feature theory, in which phonemes are represented as sets of articulatory/phonetic properties, we propose articulatory Goodness-Of-Pronunciation (aGOP) features in terms of the corresponding articulatory attributes, such as nasal, sonorant, and anterior. An English speech corpus spoken by Korean learners is used in the assessment modeling. In our system, learners' speech is force-aligned and recognized using acoustic and pronunciation models derived from the WSJ corpus (native North American speech) and the CMU pronouncing dictionary, respectively. To compute aGOP features, articulatory models are trained for the corresponding articulatory attributes. In addition to the proposed features, various features divided into four categories (RATE, SEGMENT, SILENCE, and GOP) are applied as a baseline. To enhance assessment performance and investigate the weights of the salient features, relevant features are extracted using Best Subset Selection (BSS). The results show that the proposed model using aGOP features outperforms the baseline. In addition, analysis of the features selected by BSS reveals that the selected aGOP features represent the salient variations of Korean learners of English. The results are expected to be effective for automatic pronunciation error detection as well.
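For context, the classical GOP score that aGOP features generalize is roughly the per-frame log-likelihood gap between the canonical phone and the best competing phone. The sketch and numbers below are illustrative, not taken from the paper.

```python
def gop(target_frame_logliks, best_frame_logliks):
    """Classical Goodness-Of-Pronunciation: log-likelihood of the
    canonical phone minus that of the best-scoring competitor,
    normalized by the number of frames. Values near 0 suggest good
    pronunciation; large negative values suggest an error."""
    n = len(target_frame_logliks)
    return (sum(target_frame_logliks) - sum(best_frame_logliks)) / n

# Canonical phone over three frames vs. the best competitor per frame.
target = [-2.0, -2.5, -1.8]
best   = [-1.9, -2.0, -1.8]
print(round(gop(target, best), 3))  # -0.2
```

The paper's aGOP variant applies the same ratio to models of articulatory attributes (nasal, sonorant, anterior, and so on) rather than whole-phone models, so each phone yields a vector of attribute-level scores instead of a single number.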

Sentiment Analysis Using Deep Learning Model based on Phoneme-level Korean (한글 음소 단위 딥러닝 모형을 이용한 감성분석)

  • Lee, Jae Jun;Kwon, Suhn Beom;Ahn, Sung Mahn
    • Journal of Information Technology Services
    • /
    • v.17 no.1
    • /
    • pp.79-89
    • /
    • 2018
  • Sentiment analysis is a text mining technique that extracts the sentiment of the person who wrote a piece of text, such as a movie review. Earlier sentiment analysis research identified sentiments using dictionaries of negative and positive words collected in advance. As deep learning research has advanced, sentiment analysis using deep learning models with morpheme- or word-level units has been carried out. However, these models have the disadvantages that the word dictionary varies by domain and that the number of morphemes or words is much larger than the number of phonemes; the dictionary therefore becomes large, and model complexity increases accordingly. We construct a sentiment analysis model using a recurrent neural network by dividing the input into phoneme-level units, which are smaller than morphemes. To verify performance, we use 30,000 movie reviews from Naver, Korea's biggest portal. A morpheme-level sentiment analysis model is also implemented and compared. As a result, the phoneme-level model is superior to the morpheme-level one; in particular, the phoneme-level model using LSTM performs better than the one using GRU. Korean text processing based on phoneme-level models is expected to be applicable to various text mining tasks and language models.
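A minimal sketch of a phoneme-level input representation (my illustration, not the authors' preprocessing): precomposed Hangul syllables decompose arithmetically into initial/medial/final jamo via the Unicode composition formula, and the resulting token vocabulary stays tiny compared with a morpheme vocabulary.

```python
def to_jamo(text):
    """Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into
    conjoining jamo; other characters pass through unchanged."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:
            out.append(chr(0x1100 + code // 588))       # initial consonant
            out.append(chr(0x1161 + code % 588 // 28))  # medial vowel
            if code % 28:                               # optional final
                out.append(chr(0x11A7 + code % 28))
        else:
            out.append(ch)
    return out

def encode(reviews):
    """Map each jamo to an integer index for an RNN input layer; the
    vocabulary is bounded by 19 + 21 + 27 jamo plus punctuation, unlike
    a morpheme vocabulary that grows with the corpus."""
    vocab, seqs = {}, []
    for review in reviews:
        seqs.append([vocab.setdefault(j, len(vocab)) for j in to_jamo(review)])
    return vocab, seqs

vocab, seqs = encode(["재밌어요", "지루해요"])  # "fun" / "boring"
print(len(vocab), [len(s) for s in seqs])  # 11 jamo types; 9 and 8 tokens
```

These index sequences are what an embedding layer followed by an LSTM or GRU would consume; the small, closed phoneme inventory is exactly the size advantage the abstract describes.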