• Title/Summary/Keyword: Semi-syllable

6 search results

A Study on Korean Connected Digit Recognizer Based on Semi-syllable and Post-processing (반음절기반의 한국어 연속숫자음인식과 그 후처리에 대한 연구)

  • Jeong, Jae-Boo;Chung, Hoon;Chung, Ik-Joo
    • Speech Sciences / v.8 no.4 / pp.1-15 / 2001
  • This paper describes the effect of a new recognition unit based on the semi-syllable, together with its post-processing method. A semi-syllable-based recognition unit captures the coarticulation effects of Korean connected digits. An existing semi-syllable method constrains which models may follow the currently recognized model in order to form a complete connected-digit sequence. This paper instead proposes a new post-processing method: from the digit combinations that can arise from the currently recognized semi-syllable sequence, it recognizes isolated digit words containing the digit sequence. This method achieves a higher accuracy rate than the existing method. The new post-processing offers two advantages: 1) it corrects currently mis-recognized semi-syllable units, and 2) speakers can utter each digit without regard to its duration.

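The rescoring idea in the abstract above, enumerating the digit combinations that the recognized semi-syllable sequence allows and keeping the best-scoring one, can be sketched generically. The alternative lists and the scoring function below are hypothetical illustrations, not the paper's actual implementation:

```python
from itertools import product

def rescore_digit_hypotheses(unit_alternatives, score_fn):
    """Enumerate every digit sequence permitted by the alternative
    recognitions of each unit, and keep the best one under score_fn
    (which stands in for an isolated-digit word recognizer)."""
    best_score, best_combo = None, None
    for combo in product(*unit_alternatives):
        score = score_fn(combo)
        if best_score is None or score > best_score:
            best_score, best_combo = score, combo
    return list(best_combo)

# Hypothetical example: position 1 could be '1' or '7', position 3 '5' or '9'
alts = [['1', '7'], ['0'], ['5', '9']]
weights = {'7': 0.9, '1': 0.3, '0': 0.8, '5': 0.7, '9': 0.2}
print(rescore_digit_hypotheses(alts, lambda c: sum(weights[d] for d in c)))
# → ['7', '0', '5']
```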

Fractal Dimension Method for Connected-digit Recognition (연속음 처리를 위한 프랙탈 차원 방법 고찰)

  • Kim, Tae-Sik
    • Speech Sciences / v.10 no.2 / pp.45-55 / 2003
  • A strange attractor can be used as a representation for signal processing, and the fractal dimension is a well-known method for extracting features from such an attractor. Although the method offers powerful capabilities for speech processing, it has a drawback that must be addressed first: the raw signal normally has to be long enough for fractal-dimension estimation. In connected-digit recognition, however, processing is usually syllable- or semi-syllable-based, so there is no guarantee that enough data is available to characterize the attractor. This paper examines the relationship between the size of the signal data and the computed fractal dimension, and discusses how the method can be applied efficiently to connected-digit recognition.

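For intuition, one standard way to estimate a fractal dimension directly from a 1-D time series is Higuchi's method; the abstract does not say which estimator the paper uses, so this is only an illustrative stand-in. The data-length issue discussed above shows up as instability of the fitted log-log slope when the segment is short:

```python
import math

def higuchi_fd(signal, k_max=8):
    """Estimate the fractal dimension of a 1-D series (Higuchi's method):
    fit the slope of log L(k) against log(1/k), where L(k) is the mean
    normalized curve length at time scale k."""
    n = len(signal)
    log_inv_k, log_len = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            pts = signal[m::k]          # subsampled curve starting at m
            if len(pts) < 2:
                continue
            raw = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            # Higuchi normalization: rescale to full length, divide by k^2
            lengths.append(raw * (n - 1) / ((len(pts) - 1) * k * k))
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope = fractal dimension estimate
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    num = sum((x - mx) * (y - my) for x, y in zip(log_inv_k, log_len))
    return num / sum((x - mx) ** 2 for x in log_inv_k)

print(round(higuchi_fd([float(i) for i in range(500)]), 2))  # straight line → 1.0
```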

An Implementation of the Vocabulary Independent Speech Recognition System Using VCCV Unit (VCCV단위를 이용한 어휘독립 음성인식 시스템의 구현)

  • 윤재선;홍광석
    • The Journal of the Acoustical Society of Korea / v.21 no.2 / pp.160-166 / 2002
  • In this paper, we implement a new vocabulary-independent speech recognition system that uses CV, VCCV, and VC recognition units. Since these units are extracted in the vowel region of the syllable, segmentation is easy and robust. When a VCCV unit does not exist, it is replaced by a combination of VC and CV semi-syllable models. Clustering the vowel groups and applying a combination rule to the substitution models for missing VCCV units improve first-candidate recognition performance by 5.2%, from 90.4% (Model A) to 95.6% (Model C). The 98.8% recognition rate for the second candidate confirms the effectiveness of the proposed method.
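The back-off described above, falling from a missing VCCV model to a concatenation of its VC and CV halves, amounts to a dictionary lookup with a split-and-combine default. The model table and the unit labels below are hypothetical, not the paper's actual data:

```python
def lookup_unit(models, unit):
    """Return the model sequence for a recognition unit.
    If the VCCV model is missing, back off to its VC + CV halves."""
    if unit in models:
        return [models[unit]]
    vc, cv = unit[:2], unit[2:]  # e.g. 'amma' -> 'am' + 'ma'
    return [models[vc], models[cv]]

# Hypothetical model table keyed by unit label
models = {'inso': 'M_inso', 'am': 'M_am', 'ma': 'M_ma'}
print(lookup_unit(models, 'inso'))  # → ['M_inso']
print(lookup_unit(models, 'amma'))  # → ['M_am', 'M_ma']
```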

A Study on the Korean Broadcasting Speech Recognition (한국어 방송 음성 인식에 관한 연구)

  • 김석동;송도선;이행세
    • The Journal of the Acoustical Society of Korea / v.18 no.1 / pp.53-60 / 1999
  • This paper is a study on Korean broadcast speech recognition. We present methods for large-vocabulary continuous speech recognition, focusing on language modeling and the search algorithm. The acoustic model is a uni-phone semi-continuous hidden Markov model, and the language model is an N-gram model. The search algorithm consists of three phases in order to use all available acoustic and linguistic information. First, a forward Viterbi beam search finds word-end frames and estimates the related scores. Second, a backward Viterbi beam search finds word-begin frames and estimates the related scores. Finally, an A* search combines the two results with the N-gram language model to produce the recognition output. With these methods, a maximum word recognition rate of 96.0% and a syllable recognition rate of 99.2% are achieved on a speaker-independent continuous speech recognition task with a vocabulary of about 12,000 words.

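The first search phase above, a forward Viterbi beam search, can be sketched on a toy model. The two-state example, its probabilities, and the dictionary-based model format are invented for illustration; the paper's recognizer operates on word hypotheses with N-gram rescoring, not on this toy:

```python
import math

def viterbi_beam(frame_logprobs, trans_logprobs, beam_width=3):
    """Forward Viterbi search with beam pruning.
    frame_logprobs: per-frame dict state -> log P(obs | state).
    trans_logprobs: dict (prev, state) -> log transition prob;
                    prev=None marks initial probabilities."""
    beam = {}
    for s, lp in frame_logprobs[0].items():
        if (None, s) in trans_logprobs:
            beam[s] = (trans_logprobs[(None, s)] + lp, [s])
    for frame in frame_logprobs[1:]:
        nxt = {}
        for s, lp in frame.items():
            for p, (score, path) in beam.items():
                if (p, s) not in trans_logprobs:
                    continue
                cand = score + trans_logprobs[(p, s)] + lp
                if s not in nxt or cand > nxt[s][0]:
                    nxt[s] = (cand, path + [s])
        # prune: keep only the beam_width best partial hypotheses
        beam = dict(sorted(nxt.items(), key=lambda kv: kv[1][0],
                           reverse=True)[:beam_width])
    return max(beam.values(), key=lambda v: v[0])

lg = math.log
frames = [{'A': lg(0.9), 'B': lg(0.1)}, {'A': lg(0.2), 'B': lg(0.8)}]
trans = {(None, 'A'): lg(0.6), (None, 'B'): lg(0.4),
         ('A', 'A'): lg(0.7), ('A', 'B'): lg(0.3),
         ('B', 'A'): lg(0.4), ('B', 'B'): lg(0.6)}
score, path = viterbi_beam(frames, trans)
print(path)  # → ['A', 'B']
```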

Tonal Characteristics Based on Intonation Pattern of the Korean Emotion Words (감정단어 발화 시 억양 패턴을 반영한 멜로디 특성)

  • Yi, Soo Yon;Oh, Jeahyuk;Chong, Hyun Ju
    • Journal of Music and Human Behavior / v.13 no.2 / pp.67-83 / 2016
  • This study investigated the tonal characteristics of Korean emotion words by analyzing the pitch patterns extracted from word utterances. Participants were 30 women, ages 19-23. Each participant was instructed to talk about her emotional experiences using 4-syllable target words. A total of 180 utterances were analyzed in terms of the frequency of each syllable using Praat, and the data were converted to mean tones on the semitone scale. When the emotion words were used in the middle of a sentence, the pitch pattern was A3-A3-G3-G3 for '즐거워서(joyful)', C4-D4-B3-A3 for '행복해서(happy)', G3-A3-G3-G3 for '억울해서(resentful)', A3-A3-G3-A3 for '불안해서(anxious)', and C4-C4-A3-G3 for '침울해서(frustrated)'. When the emotion words were used at the end of a sentence, the pitch pattern was G4-G4-F4-F4 for '즐거워요(joyful)', D4-D4-A3-G3 for '행복해요(happy)', G3-G3-G3-A3 and F3-G3-E3-D3 for '억울해요(resentful)', A3-G3-F3-F3 for '불안해요(anxious)', and A3-A3-F3-F3 for '침울해요(frustrated)'. These results indicate that pitch patterns differ depending on the conveyed emotion and the position of the word in the sentence. This study presents baseline data on the tonal characteristics of emotion words, suggesting how pitch patterns could be utilized when creating a melody for emotional expression in songwriting.
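The note labels above (A3, C4, etc.) come from mapping each syllable's fundamental frequency onto the nearest equal-tempered semitone. A minimal sketch of that conversion, assuming the standard A4 = 440 Hz reference (the study may have used a different reference or rounding scheme):

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def hz_to_note(freq_hz, a4_hz=440.0):
    """Map a fundamental frequency (Hz) to the nearest
    equal-tempered note name, e.g. 196 Hz -> 'G3'."""
    semitones_from_a4 = round(12 * math.log2(freq_hz / a4_hz))
    midi = 69 + semitones_from_a4          # A4 is MIDI note 69
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

print(hz_to_note(440.0))   # → A4
print(hz_to_note(196.0))   # → G3
```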

Development of the video-based smart utterance deep analyser (SUDA) application (동영상 기반 자동 발화 심층 분석(SUDA) 어플리케이션 개발)

  • Lee, Soo-Bok;Kwak, Hyo-Jung;Yun, Jae-Min;Shin, Dong-Chun;Sim, Hyun-Sub
    • Phonetics and Speech Sciences / v.12 no.2 / pp.63-72 / 2020
  • This study aims to develop a video-based smart utterance deep analyser (SUDA) application that semi-automatically analyzes the utterances a child and mother produce during interactions over time. SUDA runs on Android, iPhone, and tablet PC platforms and supports video recording and uploading to a server. Users are divided into three modes: expert, general, and manager. In the expert mode, which is useful for speech and language evaluation, the subject's utterances are analyzed semi-automatically by measuring speech and language factors such as disfluency, morphemes, syllables, words, articulation rate, and response time. In the general mode, the outcome of the utterance analysis is provided in graph form, and the manager mode is accessible only to the administrator who controls the entire system, including utterance analysis and video deletion. SUDA reduces the workload of clinicians and researchers by saving time on utterance analysis, and it helps parents easily receive detailed information about their child's speech and language development. Further, this device will contribute to building a longitudinal dataset large enough to explore predictors of stuttering recovery and persistence.