• Title/Summary/Keyword: Korean phoneme


The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.213-221
    • /
    • 2006
  • Phone segmentation of the speech waveform is especially important for concatenative text-to-speech synthesis, which uses segmented corpora for the construction of synthesis units, because the quality of synthesized speech depends critically on the accuracy of the segmentation. In the beginning, phone segmentation was performed manually, but this required enormous effort and caused long delays. HMM-based approaches adopted from automatic speech recognition are the most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Even though the HMM-based approach has been successful, it may locate a phone boundary at a position different from the expected one. In this paper, we categorize adjacent phoneme pairs and analyze the mismatches between hand-labeled transcriptions and HMM-based labels, and then describe the dominant error patterns that must be improved for speech synthesis. For the experiment, the hand-labeled standard Korean speech DB from ETRI was used as the reference DB. A time difference larger than 20 ms between a hand-labeled phoneme boundary and the auto-aligned boundary is treated as an automatic segmentation error. Our results for a female speaker revealed that plosive-vowel, affricate-vowel, and vowel-liquid pairs showed high accuracies of 99%, 99.5%, and 99%, respectively, whereas stop-nasal, stop-liquid, and nasal-liquid pairs showed very low accuracies of 45%, 50%, and 55%. The results for a male speaker revealed a similar tendency.
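
The 20 ms tolerance criterion described in this abstract is straightforward to express in code. The sketch below is only an illustration of that criterion; the function and data layout are my own assumptions, not the authors' evaluation scripts. It scores auto-aligned boundaries against hand labels per adjacent-phoneme category, assuming both labelings share the same phoneme sequence.

```python
from collections import defaultdict

def boundary_accuracy(hand_boundaries, auto_boundaries, pair_labels, tol_ms=20.0):
    """Per phoneme-pair accuracy under a fixed boundary-time tolerance.

    hand_boundaries, auto_boundaries : boundary times in ms, aligned one-to-one
        (the same phoneme sequence is assumed for both labelings).
    pair_labels : list of (left_phone_class, right_phone_class), one per boundary.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for hand, auto, pair in zip(hand_boundaries, auto_boundaries, pair_labels):
        totals[pair] += 1
        if abs(hand - auto) <= tol_ms:  # boundaries farther apart than 20 ms count as errors
            hits[pair] += 1
    return {pair: hits[pair] / totals[pair] for pair in totals}

# toy usage with three boundaries
acc = boundary_accuracy(
    hand_boundaries=[120.0, 250.0, 380.0],
    auto_boundaries=[118.0, 281.0, 383.0],
    pair_labels=[("plosive", "vowel"), ("stop", "nasal"), ("vowel", "liquid")],
)
print(acc)  # {('plosive', 'vowel'): 1.0, ('stop', 'nasal'): 0.0, ('vowel', 'liquid'): 1.0}
```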

Speech Recognition of the Korean Vowel 'ㅜ' Based on Time Domain Bulk Indicators (시간 영역 벌크 지표에 기반한 한국어 모음 'ㅜ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.11
    • /
    • pp.591-600
    • /
    • 2016
  • As computing technologies develop further, they are increasingly applied throughout everyday human environments and networks. In addition, the rapidly growing interest in IoT has led to the wide acceptance of speech recognition as a means of HCI. In this study, we present a novel method for recognizing the Korean vowel 'ㅜ' as part of a phoneme-based Korean speech recognition system. The proposed method analyzes bulk indicators calculated in the time domain instead of performing analysis in the frequency domain, with a consequent reduction in computational cost. Four elementary algorithms for detecting typical waveform patterns of 'ㅜ' using bulk indicators are presented and combined to make the final decision. The experimental results show that the proposed method achieves 90.1% recognition accuracy and a recognition speed of 0.68 ms per syllable.
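
The abstract refers to "bulk indicators" computed in the time domain but does not define them here. The snippet below is a hedged placeholder only: it uses block-wise mean absolute amplitude as a stand-in indicator to show the general shape of a front end that avoids any frequency-domain transform, not the paper's actual formulas.

```python
import numpy as np

def bulk_indicators(samples, block_size=256):
    """One time-domain 'bulk' value per fixed-length block of samples.

    The paper's indicators are not spelled out in the abstract; mean absolute
    amplitude per block is used here purely as a stand-in.
    """
    n_blocks = len(samples) // block_size
    blocks = np.reshape(samples[:n_blocks * block_size], (n_blocks, block_size))
    return np.mean(np.abs(blocks), axis=1)

# toy usage: a decaying 300 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr // 4) / sr
signal = np.exp(-4 * t) * np.sin(2 * np.pi * 300 * t)
print(bulk_indicators(signal)[:5])  # indicator sequence a detector would scan for 'ㅜ'-like patterns
```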

A Preliminary Report on Perceptual Resolutions of Korean Consonant Cluster Simplification and Their Possible Change over Time

  • Cho, Tae-Hong
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.83-92
    • /
    • 2010
  • The present study examined how listeners of Seoul Korean would recover deleted phonemes in consonant cluster simplification. In a phoneme monitoring experiment, listeners had to monitor for C2 (/k/ or /p/) in C1C2C3 when C2 was deleted (C1 was preserved) or preserved (C1 was deleted). The target consonant (C2) was either /k/ or /p/ (e.g., ilk-təlato vs. palp-təlato), and there were two listener groups, one tested in 2002 and the other in 2009. Several points emerged from the results. First, listeners were able to detect deleted phonemes as accurately and rapidly as preserved phonemes, showing that the physical presence of the acoustic information did not improve the listeners' performance. This suggests that listeners must have relied on language-specific phonological knowledge about consonant cluster simplification rather than on low-level acoustic-phonetic information. Second, the listener groups (participants in 2002 vs. 2009) differed in processing /p/ versus /k/: listeners in 2009 failed to detect /p/ more frequently than those in 2002, suggesting that the way the consonant cluster sequence is produced and perceived has changed over time. This result was interpreted as stemming from statistical patterns of speech production in contemporary Seoul Korean, as reported in a recent study by Cho & Kim (2009): /p/ is deleted far more often than it is preserved, which is likely reflected in the way listeners process simplified variants. Finally, listeners processed /k/ more efficiently than /p/, especially when the target was physically present (in the C-preserved condition), indicating that listeners benefited more from the presence of /k/ than of /p/. This was interpreted as supporting the view that velars are perceptually more robust than labials, which constrains the shaping of phonological patterns in the language. These results were then discussed in terms of their implications for theories of spoken word recognition.


Implementation of Korean Vowel 'ㅏ' Recognition based on Common Feature Extraction of Waveform Sequence (파형 시퀀스의 공통 특징 추출 기반 모음 'ㅏ' 인식 구현)

  • Roh, Wonbin;Lee, Jongwoo
    • KIISE Transactions on Computing Practices
    • /
    • v.20 no.11
    • /
    • pp.567-572
    • /
    • 2014
  • In recent years, computing and networking technologies have advanced, communication equipment has become smaller, and mobility has increased. In addition, the demand for easily operated speech recognition has grown. This paper proposes a method for recognizing the Korean phoneme 'ㅏ'. A phoneme is the smallest unit of sound, and it plays a significant role in speech recognition. However, precise recognition of phonemes faces many obstacles, since a phoneme varies widely in its pronunciation. This paper proposes a simple and efficient method that can be used to recognize the Korean vowel 'ㅏ'. The proposed method is based on common features extracted from 'ㅏ' waveform sequences, and it is simpler than previous, more complex methods. The experimental results indicate that this method achieves more than 90 percent accuracy in recognizing 'ㅏ'.
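
As a rough, hedged picture of what matching "common features of the waveform sequence" could look like in code (the correlation-based matcher below is my own stand-in, not the authors' feature set), a recognizer can compare an input window against stored reference patterns of 'ㅏ':

```python
import numpy as np

def normalized(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x

def matches_common_pattern(window, templates, threshold=0.9):
    """True if the input window correlates strongly with any stored reference pattern.

    Normalized cross-correlation against reference 'ㅏ' snippets is only a
    placeholder for the features actually extracted in the paper.
    """
    w = normalized(window)
    return any(float(np.dot(w, normalized(t))) >= threshold for t in templates)

# toy usage: one reference pattern and a slightly scaled, noisy copy of it
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * np.arange(200) / 100)
window = 0.8 * template + 0.02 * rng.standard_normal(200)
print(matches_common_pattern(window, [template]))  # True
```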

A minimal pair searching tool based on dictionary (사전 기반 최소대립쌍 검색 도구)

  • Kim, Tae-Hoon;Lee, Jae-Ho;Chang, Moon-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.2
    • /
    • pp.117-122
    • /
    • 2014
  • Minimal pairs are pairs of words whose sound sequences are identical except for a single sound, with that one difference yielding distinct lexical items. This paper proposes a minimal pair search tool intended to make phonological research involving minimal pairs more efficient, and offers guidance for developing Korean minimal pair search programs by comparing the tool with existing programs. The proposed tool has a user-friendly interface that minimizes key input, targeting linguists who are not fluent with computer programs, and it can classify the words in a dictionary for more detailed studies. For efficiency, it speeds up dictionary loading by decomposing syllables through Unicode analysis and optimizes the dictionary structure for search. The search algorithm gains speed from a hashing scheme based on syllable counts. Compared with the earlier version, the tool is about 5 times faster at converting dictionaries and about 3 times faster at searching.
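
The Unicode-based syllable decomposition and the syllable-count hashing mentioned in the abstract can be pictured roughly as follows. The decomposition arithmetic is the standard formula for precomposed Hangul syllables; the pairing criterion and bucketing are simplified stand-ins for the tool's actual logic.

```python
from collections import defaultdict

# Standard Unicode arithmetic for precomposed Hangul syllables (U+AC00..U+D7A3):
# each syllable index = lead * 588 + vowel * 28 + tail, offset from U+AC00.
def decompose(word):
    jamo = []
    for ch in word:
        idx = ord(ch) - 0xAC00
        if 0 <= idx <= 11171:
            jamo.append((idx // 588, (idx % 588) // 28, idx % 28))  # (lead, vowel, tail)
        else:
            jamo.append(ch)  # non-syllable characters kept as-is
    return jamo

def minimal_pairs(words):
    """Pairs of equal-length words differing in exactly one jamo slot.

    Bucketing by syllable count mirrors the abstract's hashing idea; the
    one-slot difference criterion is a simplified stand-in.
    """
    buckets = defaultdict(list)
    for w in words:
        buckets[len(w)].append(w)
    pairs = []
    for bucket in buckets.values():
        for i, a in enumerate(bucket):
            for b in bucket[i + 1:]:
                fa = [x for s in decompose(a) for x in (s if isinstance(s, tuple) else (s,))]
                fb = [x for s in decompose(b) for x in (s if isinstance(s, tuple) else (s,))]
                if len(fa) == len(fb) and sum(x != y for x, y in zip(fa, fb)) == 1:
                    pairs.append((a, b))
    return pairs

print(minimal_pairs(["밤", "밥", "산", "손"]))  # [('밤', '밥'), ('산', '손')]
```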

Construction of Linearly Aligned Corpus Using Unsupervised Learning (자율 학습을 이용한 선형 정렬 말뭉치 구축)

  • Lee, Kong-Joo;Kim, Jae-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.11B no.3
    • /
    • pp.387-394
    • /
    • 2004
  • In this paper, we propose a modified unsupervised linear alignment algorithm for building an aligned corpus. The original algorithm inserts null characters into both of the two aligned strings (the source string and the target string) because the two strings differ in length. This can cause difficulties such as search-space explosion for applications that use the aligned corpus with null characters, and it prevents several machine learning algorithms from being applied. To alleviate these difficulties, we modify the algorithm so that the aligned source strings contain no null characters. We have shown the usability of our approach by applying it to areas such as Korean-English back-transliteration, English grapheme-to-phoneme conversion, and Korean morphological analysis.
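
A minimal sketch of the key idea, aligning so that null symbols never appear on the source side, is given below. The dynamic program and its unit cost are my own simplifications for illustration; they are not the paper's algorithm as published.

```python
def align_no_source_nulls(source, target, max_chunk=3):
    """Align each source symbol to a chunk of 0..max_chunk target symbols.

    Unlike classic string alignment, no null symbols are inserted into the
    source side; a target symbol with no counterpart is absorbed into a
    neighboring chunk instead. The unit cost below is a simplification.
    """
    INF = float("inf")
    n, m = len(source), len(target)
    # best[i][j] = minimal cost of aligning source[:i] to target[:j]
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for i in range(1, n + 1):
        for j in range(m + 1):
            for k in range(0, min(max_chunk, j) + 1):
                chunk = target[j - k:j]
                cost = 0 if chunk == source[i - 1] else 1
                if best[i - 1][j - k] + cost < best[i][j]:
                    best[i][j] = best[i - 1][j - k] + cost
                    back[i][j] = k
    # recover the chunk assigned to each source symbol
    chunks, j = [], m
    for i in range(n, 0, -1):
        k = back[i][j]
        chunks.append(target[j - k:j] if k else "_")
        j -= k
    return list(zip(source, reversed(chunks)))

# toy usage: aligning a spelling to a shorter 'pronunciation' without padding the source
print(align_no_source_nulls("phone", "fon"))
```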

A Study on the Diphone Recognition of Korean Connected Words and Eojeol Reconstruction (한국어 연결단어의 이음소 인식과 어절 형성에 관한 연구)

  • ;Jeong, Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.46-63
    • /
    • 1995
  • This thesis describes an unlimited-vocabulary connected speech recognition system using Time Delay Neural Networks (TDNNs). The recognition unit is the diphone, which includes the transition section between two phonemes; 329 diphone units are used. Recognition of Korean connected speech consists of three parts: feature extraction from the input speech signal, diphone recognition, and post-processing. In the feature extraction section, the diphone intervals in the input speech signal are extracted, and then feature vectors of 16th-order filter-bank coefficients are calculated for each frame in the diphone interval. Diphone recognition is organized as a three-stage hierarchical structure and is carried out using 30 Time Delay Neural Networks; in particular, the structure of the TDNN is modified so as to increase the recognition rate. In the post-processing section, misrecognized diphone strings are corrected using phoneme transition probabilities and phoneme confusion probabilities, and then eojeols (Korean words or phrases) are formed by combining the recognized diphones.
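
The post-processing idea of correcting misrecognized strings with transition and confusion probabilities can be illustrated by a small Viterbi-style search. The phoneme inventory and probability tables below are toy values of my own, not the thesis's statistics, and the thesis operates on diphone strings rather than this simplified phoneme example.

```python
import math

def correct_phoneme_string(observed, phonemes, trans_p, conf_p):
    """Most likely underlying phoneme string given a recognized one (Viterbi search).

    trans_p[a][b] ~ P(next true phoneme is b | current true phoneme is a)
    conf_p[a][b]  ~ P(recognizer outputs b | true phoneme is a)
    """
    best = {p: math.log(1.0 / len(phonemes)) + math.log(conf_p[p][observed[0]]) for p in phonemes}
    paths = {p: [p] for p in phonemes}
    for obs in observed[1:]:
        new_best, new_paths = {}, {}
        for cur in phonemes:
            prev = max(phonemes, key=lambda q: best[q] + math.log(trans_p[q][cur]))
            new_best[cur] = best[prev] + math.log(trans_p[prev][cur]) + math.log(conf_p[cur][obs])
            new_paths[cur] = paths[prev] + [cur]
        best, paths = new_best, new_paths
    return paths[max(phonemes, key=lambda p: best[p])]

# Toy tables: 'n' and 'm' are acoustically confusable, and 'a' is usually followed by 'n'.
phonemes = ["a", "n", "m"]
trans_p = {"a": {"a": 0.2, "n": 0.6, "m": 0.2},
           "n": {"a": 0.7, "n": 0.2, "m": 0.1},
           "m": {"a": 0.7, "n": 0.1, "m": 0.2}}
conf_p = {"a": {"a": 0.9, "n": 0.05, "m": 0.05},
          "n": {"a": 0.1, "n": 0.6, "m": 0.3},
          "m": {"a": 0.1, "n": 0.3, "m": 0.6}}
print(correct_phoneme_string(list("ama"), phonemes, trans_p, conf_p))  # ['a', 'n', 'a']
```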


SOUND SIMILARITY JUDGMENTS AND PHONOLOGICAL UNITS

  • Yoon, Yeo-Bom
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.142-143
    • /
    • 1997
  • The purpose of this paper is to assess the psychological status of the phoneme, the syllable, and various postulated subsyllabic units in Korean by applying the Sound Similarity Judgment (SSJ) task, to compare the results with those in English, and to discuss the advantages and disadvantages of the SSJ task as a tool for linguistic research. In Experiment 1, 30 subjects listened to pairs of 56 CVC words which were systematically varied from 'totally different' (e.g., pan-met) to 'identical' (e.g., pan-pan). Subjects were then asked to rate the sound similarity of each pair on a 10-point scale. Not very surprisingly, there was a strong correlation between the number of phonemic segments matched and the similarity score provided by the subjects. This result was in accord with previous results from English (e.g., Vitz & Winkler, 1973; Derwing & Nearey, 1986) and supported the assumption that the phoneme is the basic phonological unit in Korean and English. However, there were sharply contrasting results between the two languages. When the pairs shared two phonemes (e.g., pan-pat; pan-pen; pan-man), the pairs sharing the first two phonemes were judged significantly more similar than the other two types of pairs. Quite to the contrary, in the comparable English experiments, the pairs sharing the last two phonemes were judged significantly more similar than the other two types of pairs. Experiment 2 was designed to confirm the results of Experiment 1 by controlling the 'degree' of similarity between phonemes. For example, the pair pan-pam can be judged more similar than the pair pan-nan, although both pairs share the same number of phonemes. This could be interpreted either as confirming the result of Experiment 1 or as reflecting the fact that /n/ is more similar to /m/ than /p/ is to /n/ in terms of the number of shared distinctive features. The results of Experiment 2 supported the former interpretation. Thus, the results of both experiments clearly showed that, although the 'number' of matched phonemes is an important predictor in judging the sound similarity of monosyllabic pairs in both languages, the 'position' of the matched phonemes exerts a different influence on similarity judgments in the two languages. This contrasting set of results may have interesting implications for the internal structure of the syllable in the two languages.


Development and Evaluation of an Address Input System Employing Speech Recognition (음성인식 기능을 가진 주소입력 시스템의 개발과 평가)

  • 김득수;황철준;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2
    • /
    • pp.3-10
    • /
    • 1999
  • This paper describes the development and evaluation of a Korean address input system that employs automatic speech recognition as a user interface for entering Korean addresses. Addresses consist of cities, provinces, and counties. The system runs in a Windows 95 environment on a personal computer with a built-in sound card. In the speech recognition part, continuous-density Hidden Markov Models (CHMMs) are used to build phoneme-like units (PLUs), and the One-Pass Dynamic Programming (OPDP) algorithm is used for recognition. For address recognition, a Finite State Automaton (FSA) suited to the structure of Korean addresses is constructed. To achieve acceptable performance against variation in speakers, microphones, and environmental noise, Maximum A Posteriori (MAP) estimation is implemented for adaptation, and to improve recognition speed, a fast search method using a variable pruning threshold is newly proposed. In evaluation tests on 100 connected words uttered by 3 male speakers, the system showed an average recognition accuracy of over 96.0% for connected words after adaptation and a recognition speed within 2 seconds, demonstrating its effectiveness.
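
A finite state automaton over the address structure can be sketched as below. The states, word classes, and example place names are illustrative assumptions of mine, not the grammar network actually used in the paper.

```python
# A toy finite-state grammar in the spirit of the address FSA described above.
TRANSITIONS = {
    "START":    {"province": "PROVINCE"},
    "PROVINCE": {"city": "CITY"},
    "CITY":     {"county": "ACCEPT"},
}
WORD_CLASS = {
    "경상북도": "province", "전라남도": "province",
    "경주시": "city", "목포시": "city",
    "안강읍": "county",
}

def accepts(words):
    """True if the word sequence follows the province -> city -> (optional county) pattern."""
    state = "START"
    for w in words:
        cls = WORD_CLASS.get(w)
        if cls is None or cls not in TRANSITIONS.get(state, {}):
            return False
        state = TRANSITIONS[state][cls]
    return state in ("CITY", "ACCEPT")

print(accepts(["경상북도", "경주시", "안강읍"]))  # True
print(accepts(["경주시", "경상북도"]))           # False: violates the automaton's ordering
```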


Utilization of Syllabic Nuclei Location in Korean Speech Segmentation into Phonemic Units (음절핵의 위치정보를 이용한 우리말의 음소경계 추출)

  • 신옥근
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.5
    • /
    • pp.13-19
    • /
    • 2000
  • The blind segmentation method, which segments input speech data into recognition units without any prior knowledge, plays an important role in continuous speech recognition systems and corpus generation. As no prior knowledge is required, this method is rather simple to implement, but in general it performs worse than knowledge-based segmentation methods. In this paper, we introduce a method to improve the performance of blind segmentation of Korean continuous speech by postprocessing the segment boundaries obtained from the blind segmentation. In the preprocessing stage, candidate boundaries are extracted by a clustering technique based on the GLR (generalized likelihood ratio) distance measure. In the postprocessing stage, the final phoneme boundaries are selected from the candidates by utilizing simple a priori knowledge of the syllabic structure of Korean, namely that the number of phonemes between any two consecutive syllabic nuclei is limited. The experimental results were promising: the proposed method yields a 25% reduction in insertion error rate compared with that of blind segmentation alone.
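
The constraint that only a limited number of phonemes can occur between consecutive syllabic nuclei lends itself to a simple pruning step. In the sketch below, only that constraint comes from the abstract; the candidate scores and the keep-the-strongest rule are my own assumptions.

```python
def prune_boundaries(candidates, nuclei, max_phones=3):
    """Keep at most (max_phones - 1) candidate boundaries between consecutive nuclei.

    candidates : list of (time, score) boundary candidates from blind segmentation
    nuclei     : sorted list of syllabic-nucleus times
    """
    kept = []
    for left, right in zip(nuclei, nuclei[1:]):
        inside = [c for c in candidates if left < c[0] < right]
        inside.sort(key=lambda c: c[1], reverse=True)          # strongest candidates first
        kept.extend(sorted(t for t, _ in inside[:max_phones - 1]))
    return kept

# toy usage: five candidates between two nuclei, only the two strongest survive
print(prune_boundaries(
    candidates=[(110, 0.2), (130, 0.9), (150, 0.1), (170, 0.8), (190, 0.3)],
    nuclei=[100, 200],
    max_phones=3,
))  # [130, 170]
```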
