• Title/Summary/Keyword: spoken word

A New Rejection Algorithm Using Word-Dependent Garbage Models

  • Lee, Gang-Sung
    • The Journal of the Acoustical Society of Korea / v.16 no.2E / pp.27-31 / 1997
  • This paper proposes a new rejection algorithm that distinguishes unregistered spoken words (non-keywords) from the registered vocabulary. Two kinds of garbage models are employed in this design: the original garbage model and a new word-dependent garbage model. The original garbage model collects all non-keyword patterns, whereas the word-dependent garbage model collects the patterns obtained by recognizing each non-keyword pattern against the registered vocabulary. The two models work together to make a robust rejection decision. The first stage of processing classifies an input pattern with the original garbage model. When that decision is ambiguous, the word-dependent garbage model is used to classify the input pattern as either a registered or an unregistered word. The paper demonstrates the efficiency of the word-dependent garbage model. A Dynamic Multisection method is used to test the performance of the algorithm, and the results show that the proposed algorithm outperforms the original garbage model.
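
The two-stage decision described above can be pictured with a minimal sketch, assuming log-likelihood-style scores and a hypothetical ambiguity margin; it is an illustrative outline only, not the paper's implementation or its Dynamic Multisection scoring.

```python
# Illustrative sketch of the two-stage rejection logic described in the abstract.
# Score inputs, the margin value, and the names are hypothetical placeholders.

def reject_decision(keyword_score, garbage_score, word_garbage_score, margin=0.1):
    """Return True if the input pattern should be rejected as a non-keyword."""
    # Stage 1: compare the best keyword score against the original (global)
    # garbage model score.
    diff = keyword_score - garbage_score
    if diff > margin:
        return False      # clearly a registered word: accept
    if diff < -margin:
        return True       # clearly a non-keyword: reject
    # Stage 2: the first stage is ambiguous, so consult the word-dependent
    # garbage model trained on non-keywords that were recognized as this word.
    return word_garbage_score >= keyword_score
```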

Lexico-semantic interactions during the visual and spoken recognition of homonymous Korean Eojeols (한국어 시·청각 동음동철이의 어절 재인에 나타나는 어휘-의미 상호작용)

  • Kim, Joonwoo; Kang, Kathleen Gwi-Young; Yoo, Doyoung; Jeon, Inseo; Kim, Hyun Kyung; Nam, Hyeomin; Shin, Jiyoung; Nam, Kichun
    • Phonetics and Speech Sciences / v.13 no.1 / pp.1-15 / 2021
  • The present study investigated the mental representation and processing of ambiguous words in the bimodal processing system by manipulating the lexical ambiguity of visually or auditorily presented words. Homonyms (e.g., '물었다') with more than two meanings and control words (e.g., '고통을') with a single meaning were used in the experiments. The lemma frequency of the words was manipulated while the relative frequency of the multiple meanings of each homonym was balanced. In both experiments, which used the lexical decision task, a robust frequency effect and a critical interaction of word type by frequency were found. In Experiment 1, spoken homonyms yielded faster latencies than control words (i.e., an ambiguity advantage) in the low-frequency condition, while an ambiguity disadvantage was found in the high-frequency condition. A similar interactive pattern was found for visually presented homonyms in the subsequent Experiment 2. Taken together, the first key finding is that interdependent lexico-semantic processing can be found in both the visual and auditory processing systems, which in turn suggests that semantic processing is not modality dependent but rather takes place on the basis of general lexical knowledge. The second is that multiple semantic candidates provide facilitative feedback only when the lemma frequency of the word is relatively low.

Automatic Error Correction System for Erroneous SMS Strings (SMS 변형된 문자열의 자동 오류 교정 시스템)

  • Kang, Seung-Shik; Chang, Du-Seong
    • Journal of KIISE: Software and Applications / v.35 no.6 / pp.386-391 / 2008
  • Spoken-style word errors that violate grammatical or writing rules occur frequently in communication environments such as mobile phones and messengers. These unexpected errors cause problems for language processing systems in many applications, such as speech recognition, text-to-speech conversion, and so on. In this paper, we propose and implement an automatic correction system for the ill-formed words and word-spacing errors in SMS sentences that have been major sources of poor accuracy. We experimented with three methods of constructing the word-correction dictionary and evaluated their results: (1) manual construction of error words from a vocabulary list of ill-formed communication language, (2) automatic construction of the error dictionary from a manually constructed corpus, and (3) a context-dependent method of automatically constructing the error dictionary.
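
The dictionary lookup step described above can be sketched roughly as follows; the example entries and the token-by-token replacement are hypothetical illustrations, not the authors' dictionaries, which are built with the three methods listed in the abstract.

```python
# A minimal sketch of dictionary-based correction of ill-formed SMS tokens.
# The dictionary entries below are illustrative placeholders.

ERROR_DICT = {
    "ㅇㅋ": "오케이",   # ill-formed SMS form -> normalized word
    "방가": "반가워",
}

def correct_tokens(tokens, error_dict=ERROR_DICT):
    """Replace known ill-formed tokens with their normalized forms."""
    return [error_dict.get(tok, tok) for tok in tokens]

# Example: correct_tokens("ㅇㅋ 내일 방가".split()) -> ['오케이', '내일', '반가워']
```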

Phonological Process and Word Recognition in Continuous Speech: Evidence from Coda-neutralization (음운 현상과 연속 발화에서의 단어 인지 - 종성중화 작용을 중심으로)

  • Kim, Sun-Mi; Nam, Ki-Chun
    • Phonetics and Speech Sciences / v.2 no.2 / pp.17-25 / 2010
  • This study explores whether Koreans exploit their native coda-neutralization process when recognizing words in Korean continuous speech. According to Korean phonological rules, the coda-neutralization process must apply before the liaison process whenever liaison occurs between words, so liaison consonants are coda-neutralized ones such as /b/, /d/, or /g/, rather than non-neutralized ones like /p/, /t/, /k/, /ʧ/, /ʤ/, or /s/. Consequently, if Korean listeners use their native coda-neutralization rules when processing speech input, word recognition will be hampered when non-neutralized consonants precede vowel-initial targets. Word-spotting and word-monitoring tasks were conducted in Experiments 1 and 2, respectively. In both experiments, listeners recognized words faster and more accurately when vowel-initial target words were preceded by coda-neutralized consonants than when preceded by non-neutralized ones. The results show that Korean listeners exploit the coda-neutralization process when processing their native spoken language.
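
The rule ordering the study builds on (coda neutralization before liaison) can be illustrated with a toy sketch; the symbol set follows the abstract's notation, and the mapping is a simplified assumption, not a complete phonology of Korean.

```python
# Toy sketch: a word-final consonant is neutralized first, and the neutralized
# form is what surfaces as the onset of a following vowel-initial word.
# The mapping below is illustrative only.

CODA_NEUTRALIZATION = {
    "p": "b",
    "t": "d", "s": "d", "ʧ": "d", "ʤ": "d",
    "k": "g",
}

def liaison_onset(final_consonant):
    """Consonant that surfaces as the next word's onset after liaison."""
    return CODA_NEUTRALIZATION.get(final_consonant, final_consonant)

# Example: liaison_onset("s") -> "d"
# A surface onset like [s] before a vowel-initial target would violate this
# ordering, which is the mismatch the two experiments exploit.
```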

Using Utterance and Semantic Level Confidence for Interactive Spoken Dialog Clarification

  • Jung, Sang-Keun; Lee, Cheong-Jae; Lee, Gary Geunbae
    • Journal of Computing Science and Engineering / v.2 no.1 / pp.1-25 / 2008
  • Spoken dialog tasks incur many errors, including speech recognition errors, understanding errors, and even dialog management errors. These errors create a large gap between the user's intention and the system's understanding, which eventually results in misinterpretation. To fill this gap, people in human-to-human dialogs try to clarify the major causes of misunderstanding and selectively correct them. This paper presents a method for applying such clarification techniques to human-to-machine spoken dialog systems. We view the clarification dialog as a two-step problem: belief confirmation and clarification strategy establishment. To confirm the belief, we organize the clarification process into three systematic phases. In the belief confirmation phase, we consider the overall dialog system's processes, including speech recognition, language understanding, and the semantic slot-value pairs used for clarification dialog management. A clarification expert is developed for establishing the clarification dialog strategy. In addition, we propose a new design for plugging a clarification dialog module into a given expert-based dialog system. The experimental results demonstrate that the error verifiers effectively catch word- and utterance-level semantic errors and that the clarification experts actually increase the dialog success rate and dialog efficiency.
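
The two-step idea (belief confirmation followed by strategy selection) can be sketched roughly as below; the thresholds, slot names, and strategy labels are hypothetical placeholders, not the paper's clarification experts or error verifiers.

```python
# Rough sketch: verify beliefs with utterance- and slot-level confidence
# scores, then pick a clarification strategy for anything doubtful.
# Threshold values and returned labels are hypothetical.

def choose_clarification(utterance_conf, slot_confs,
                         utt_threshold=0.5, slot_threshold=0.6):
    """slot_confs maps semantic slot names to confidence scores in [0, 1]."""
    if utterance_conf < utt_threshold:
        # Utterance-level error suspected: ask the user to repeat or rephrase.
        return ("repeat_request", None)
    doubtful = [slot for slot, conf in slot_confs.items() if conf < slot_threshold]
    if doubtful:
        # Semantic-level error suspected: confirm only the doubtful slot-value pairs.
        return ("confirm_slots", doubtful)
    return ("accept", None)

# Example: choose_clarification(0.8, {"city": 0.9, "date": 0.4})
# -> ("confirm_slots", ["date"])
```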

The Effects of Syllable Boundary Ambiguity on Spoken Word Recognition in Korean Continuous Speech

  • Kang, Jinwon; Kim, Sunmi; Nam, Kichun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.11 / pp.2800-2812 / 2012
  • The purpose of this study was to examine the cost of syllable-word boundary misalignment on word segmentation in Korean continuous speech. Previous studies have demonstrated the important role of syllabification in speech segmentation. The current study investigated whether the resyllabification process affects word recognition in Korean continuous speech. In Experiment I, under the misalignment condition, participants were presented with stimuli in which a word-final consonant became the onset of the next syllable (e.g., /k/ in belsak ingan becomes the onset of the first syllable of ingan 'human'). In the alignment condition, they heard stimuli in which a word-final vowel was also the final segment of the syllable (e.g., /eo/ in heulmeo ingan is the end of both the syllable and the word). The results showed that word recognition was faster and more accurate in the alignment condition. Experiment II aimed to confirm that the results of Experiment I were attributable to the resyllabification process, by comparing only the target words from each condition. The results of Experiment II supported the findings of Experiment I. Therefore, based on the current study, we confirmed that Korean, a syllable-timed language, incurs a misalignment cost from resyllabification.
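
The alignment/misalignment contrast can be made concrete with a toy sketch; the romanized strings follow the abstract's examples, while the vowel inventory and segmentation logic are simplified assumptions for illustration.

```python
# Toy sketch of resyllabification across a word boundary: a word-final
# consonant becomes the onset of a following vowel-initial word.
# The romanized vowel set here is a simplification.

VOWELS = set("aeiou")

def resyllabify(prev_word, next_word):
    """Return the syllabified forms of two adjacent words."""
    if prev_word[-1] not in VOWELS and next_word[0] in VOWELS:
        # Misalignment: the syllable boundary no longer matches the word boundary.
        return prev_word[:-1], prev_word[-1] + next_word
    # Alignment: syllable and word boundaries coincide.
    return prev_word, next_word

# Example: resyllabify("belsak", "ingan")  -> ("belsa", "kingan")   misalignment
#          resyllabify("heulmeo", "ingan") -> ("heulmeo", "ingan")  alignment
```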

Analysis on Sentence Error Types of Mathematical Problem Posing of Pre-Service Elementary Teachers (초등학교 예비교사들의 수학적 '문제 만들기'에 나타나는 문장의 오류 유형 분석)

  • Huh, Nan; Shin, Hocheol
    • Journal of the Korean School Mathematics Society / v.16 no.4 / pp.797-820 / 2013
  • This study analyzed the error patterns in the mathematical problem-posing sentences written by 100 elementary pre-service teachers and discussed possible solutions. The results showed that the problem-posing sentences exhibit five error patterns: phonological error patterns, word error patterns, sentence error patterns, meaning error patterns, and notation error patterns. Divided into fourteen specific error patterns, they are as follows. 1) Phonological error patterns consist of the 'ㄹ' addition error pattern and the abbreviated word error pattern. 2) Word error patterns are divided into the inappropriate word usage pattern and the inadequate abbreviation pattern, which in detail form four subgroups: case marker, word ending, inappropriate word usage, and inadequate abbreviation of a particle or word. 3) Sentence error patterns take four forms: reference, ellipsis of a sentence component, word order, and incomplete sentence error patterns. 4) Meaning error patterns are composed of logical contradiction and ambiguous meaning. 5) Notation error patterns comprise four patterns: spacing, punctuation, Hangul orthography, and the spelling rules for foreign words in Korean. Furthermore, solutions for these error patterns were discussed: First, the differences between spoken and written language must be recognized. Second, spoken expressions should be rejected in written contexts. Third, class time should focus on learning basic sentence patterns. Fourth, word meanings should be understood through their logical development. Finally, the Korean spelling system has to be learned. In addition to these suggestions, a new understanding of writing education for college students is necessary.

Speech Recognition in the Car Noise Environment (자동차 소음 환경에서 음성 인식)

  • 김완구; 차일환; 윤대희
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.2 / pp.51-58 / 1993
  • This paper describes the development of a speaker-dependent isolated word recognizer applied to voice dialing in a car noise environment. For this purpose, several methods for improving performance under such conditions are evaluated using a database collected in a small car moving at 100 km/h. The main features of the recognizer are as follows: The endpoint detection error can be reduced by using the magnitude of the signal inverse-filtered by an AR model of the background noise, and it can be compensated for by using variants of the DTW algorithm. To remove the noise, an autocorrelation subtraction method is used with the constraint that the residual energy obtained by linear predictive analysis must be positive. By using a noise-robust distance measure, distortion of the feature vector is minimized. The speech recognizer is implemented on the Motorola DSP56001 (a 24-bit general-purpose digital signal processor). The recognition database is composed of 50 Korean names spoken by 3 male speakers. The recognition error rate of the system is reduced to 4.3% using a single reference pattern for each word and to 1.5% using two reference patterns for each word.
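
The autocorrelation-subtraction idea with its positive-residual-energy constraint can be outlined as a rough NumPy/SciPy sketch; the frame handling, back-off scheme, and parameter values are assumptions for illustration, not the paper's DSP56001 implementation.

```python
# Rough sketch: subtract the noise autocorrelation from the noisy-speech
# autocorrelation, backing off the subtraction factor until the
# linear-prediction residual energy of the result stays positive.

import numpy as np
from scipy.linalg import solve_toeplitz

def autocorr(x, order):
    """Biased autocorrelation estimate up to the given lag order."""
    x = np.asarray(x, dtype=float)
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)

def residual_energy(r):
    """Linear-prediction residual energy for an autocorrelation sequence r."""
    try:
        a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # LP coefficients
    except np.linalg.LinAlgError:
        return -1.0                                    # singular system: treat as invalid
    return r[0] - float(np.dot(a, r[1:]))

def denoised_autocorr(r_noisy, r_noise, alpha=1.0, step=0.1):
    """Subtract alpha * r_noise, reducing alpha until the residual energy is positive."""
    while alpha > 0:
        r = r_noisy - alpha * r_noise
        if r[0] > 0 and residual_energy(r) > 0:
            return r
        alpha -= step
    return r_noisy  # fall back to the unmodified noisy autocorrelation
```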

A Study on Korean Spoken Language Understanding Model (한국어 구어 음성 언어 이해 모델에 관한 연구)

  • 노용완; 홍광석
    • Proceedings of the IEEK Conference / 2003.07e / pp.2435-2438 / 2003
  • In this paper, we propose a Korean speech understanding model that uses a dictionary and a thesaurus. The proposed model searches the dictionary for each word of the input text. If a word is not in the dictionary, the model searches for its higher-level words in the high-level word dictionary based on the thesaurus. The probability from the sentence understanding model is compared with a threshold probability to obtain the speech understanding rate. We evaluated the performance of the sentence speech understanding system with a twenty-questions game. In the experiments, we obtained a sentence speech understanding accuracy of 79.8%, with a high-level word probability of 0.9 and a threshold probability of 0.38.
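
A minimal sketch of the dictionary-plus-thesaurus lookup and thresholding follows; the 0.9 high-level word probability and the 0.38 threshold come from the abstract, while the example entries, data structures, and averaging scheme are hypothetical placeholders.

```python
# Minimal sketch: look each word up in the base dictionary, back off to its
# higher-level (thesaurus) word, and compare the sentence score to a threshold.
# Dictionary contents below are illustrative placeholders.

WORD_DICT = {"개"}                     # words in the base dictionary
THESAURUS = {"강아지": "동물"}          # word -> higher-level (hypernym) word
HIGH_LEVEL_DICT = {"동물"}             # high-level word dictionary

P_HIGH_LEVEL = 0.9    # probability of a match through a high-level word (abstract)
THRESHOLD = 0.38      # sentence-understanding threshold (abstract)

def word_score(word):
    if word in WORD_DICT:
        return 1.0                     # exact match in the base dictionary
    if THESAURUS.get(word) in HIGH_LEVEL_DICT:
        return P_HIGH_LEVEL            # matched through its higher-level word
    return 0.0

def understands(sentence_words):
    """Average the word scores and compare against the threshold."""
    score = sum(word_score(w) for w in sentence_words) / len(sentence_words)
    return score >= THRESHOLD

# Example: understands(["강아지", "미등록어"]) -> True (score 0.45 >= 0.38)
```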
