• Title/Summary/Keyword: continuous speech


Modular Fuzzy Neural Controller Driven by Voice Commands

  • Izumi, Kiyotaka;Lim, Young-Cheol
• Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.32.3-32
    • /
    • 2001
  • This paper proposes a layered protocol for interpreting voice commands in the user's own language so that a machine can be controlled in real time. The layers consist of a speech-signal capturing layer, a lexical analysis layer, an interpretation layer, and finally an activation layer, where each layer tries to mimic its human counterpart in command following. The contents of a continuous voice command are captured using a Hidden Markov Model based speech recognizer. Then artificial neural network concepts are used to classify the contents of the recognized voice command ...


An Utterance Verification using Vowel String (모음 열을 이용한 발화 검증)

  • 유일수;노용완;홍광석
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2003.06a
    • /
    • pp.46-49
    • /
    • 2003
  • The use of confidence measures for word/utterance verification has become an essential component of any speech input application. Confidence measures have applications to a number of problems, such as rejection of incorrect hypotheses, speaker adaptation, or adaptive modification of the hypothesis score during search in continuous speech recognition. In this paper, we present a new utterance verification method using vowel strings. Using subword HMMs of the VCCV unit, we create anti-models that include the vowel string of hypothesis words. The experimental results show that the utterance verification rate of the proposed method is about 79.5%.

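The anti-model confidence test described above can be sketched as a frame-normalized log-likelihood ratio; the scores and the 0.5 threshold below are illustrative assumptions, not values from the paper.

```python
def confidence(target_loglik, anti_loglik, n_frames):
    """Frame-normalized log-likelihood ratio between the hypothesis
    model and its anti-model; higher means more confident."""
    return (target_loglik - anti_loglik) / n_frames

def verify(target_loglik, anti_loglik, n_frames, threshold=0.5):
    """Accept the hypothesis when the confidence exceeds a tuned threshold."""
    return confidence(target_loglik, anti_loglik, n_frames) >= threshold

# Toy scores: the target model fits the utterance much better than the anti-model.
print(verify(-1200.0, -1300.0, 100))   # confidence = 1.0 per frame, accepted
```

In practice both log-likelihoods come from Viterbi alignment of the same speech segment against the hypothesis model and its anti-model, and the threshold is tuned on held-out data.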

The Prosodic Characteristics of Korean Read Sentences in Discourse Context (한국어 낭독체 담화문의 운율적 특징 - 단독발화문과 연속발화문의 비교를 통하여 -)

  • Seong Cheol-Jae
    • MALSORI
    • /
    • no.35_36
    • /
    • pp.1-12
    • /
    • 1998
  • This study investigates the prosodic characteristics of Korean discourse sentences, focusing especially on the initial and final parts of a sentence. Fifty discourse sentences were read in two styles: one sentence at a time, and continuously through all fifty. We first computed two ratios from the acoustic measurements: the ratio of the final syllable to the initial syllable in the first word of a sentence, and the same ratio in the last word of a sentence. We then calculated statistics for these ratios, including mean, standard deviation, minimum, maximum, and p-values from t-tests. With respect to duration, there was little difference between the two styles; at most, a slightly unharmonious durational pattern, i.e., some deviation from the standard, could be observed at the beginning of continuous reading. For F0, there was a prominent statistical difference between the last-word ratios of the two styles; this difference might function as a prosodic feature. Energy showed a pattern similar to that of F0. The results showed that the final syllable of the last word was pronounced at about 85% of the initial syllable in the same context, and that the last words in continuous speech were more strongly articulated than in sentence-by-sentence reading.


The Design of Keyword Spotting System based on Auditory Phonetical Knowledge-Based Phonetic Value Classification (청음 음성학적 지식에 기반한 음가분류에 의한 핵심어 검출 시스템 구현)

  • Kim, Hack-Jin;Kim, Soon-Hyub
• The KIPS Transactions: Part B
    • /
    • v.10B no.2
    • /
    • pp.169-178
    • /
    • 2003
  • This study addresses two issues: the classification of phone-like units (PLUs), which is the foundation of Korean large-vocabulary speech recognition, and the effectiveness of Chiljongseong (the 7-final-consonant system) and Paljongseong (the 8-final-consonant system) of the Korean language. Phone-like units classify phonemes phonetically according to the place and manner of articulation, and about 50 phone-like units are used in Korean speech recognition. In this study, auditory phonetic knowledge was applied to the classification, yielding 45 phone-like units. The vowels 'ㅔ, ㅐ' were classified as the phone-like unit [ee]; 'ㅒ, ㅖ' as [ye]; and 'ㅚ, ㅙ, ㅞ' as [we]. Secondly, the Chiljongseong system of the draft for the unified spelling system, which is currently in use, and the Paljongseonggajokyong of the Korean script Haerye were examined. Whether the phonetic values of 'ㄷ' and 'ㅅ' among the final consonants of the Korean language are the same has long been debated in the academic world. In this study, the transition stages of Korean consonants were investigated, Chiljongseong and Paljongseonggajokyong were applied to speech recognition, and their effectiveness was verified. The experiment was divided into isolated-word recognition and continuous speech recognition. For isolated-word recognition, the PBW452 database was used: about 50 men and women, divided into 5 groups, vocalized 50 words each. For the continuous speech recognition experiment, intended for a stock exchange system, a sentence corpus of 71 stock exchange sentences and a speech corpus of those sentences were collected; 5 men and 5 women each vocalized every sentence twice. As a result, when Paljongseonggajokyong was used for the final consonants, recognition performance improved by an average of about 1.45%; when phone-like units with both Paljongseonggajokyong and auditory phonetics applied were used, the recognition rate increased by an average of 1.5% to 2.02%. In the continuous speech recognition experiment, recognition performance improved by an average of about 1% to 2% over the existing 49 or 56 phone-like units.

Development of a Korean Speech Recognition Platform (ECHOS) (한국어 음성인식 플랫폼 (ECHOS) 개발)

  • Kwon Oh-Wook;Kwon Sukbong;Jang Gyucheol;Yun Sungrack;Kim Yong-Rae;Jang Kwang-Dong;Kim Hoi-Rin;Yoo Changdong;Kim Bong-Wan;Lee Yong-Ju
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.8
    • /
    • pp.498-504
    • /
    • 2005
  • We introduce a Korean speech recognition platform (ECHOS) developed for education and research purposes. ECHOS lowers the entry barrier to speech recognition research and can be used as a reference engine by providing elementary speech recognition modules. It has a simple object-oriented architecture, implemented in C++ with the standard template library. The input of ECHOS is digital speech data sampled at 8 or 16 kHz; its output is the 1-best recognition result, N-best recognition results, and a word graph. The recognition engine is composed of MFCC/PLP feature extraction, HMM-based acoustic modeling, n-gram language modeling, and finite-state-network (FSN)- and lexical-tree-based search algorithms. It can handle various tasks, from isolated word recognition to large-vocabulary continuous speech recognition. We compare the performance of ECHOS and the hidden Markov model toolkit (HTK) for validation. In an FSN-based task, ECHOS shows similar word accuracy, while the recognition time is doubled because of the object-oriented implementation. For an 8000-word continuous speech recognition task, using a lexical-tree search algorithm different from the one used in HTK, it increases the word error rate by 40% relative but halves the recognition time.
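The lexical-tree search structure mentioned above can be illustrated with a minimal pronunciation prefix tree; the `LexNode` class and the toy lexicon below are invented for illustration and are not ECHOS's actual data structures.

```python
class LexNode:
    def __init__(self):
        self.children = {}   # phone -> LexNode
        self.words = []      # words whose pronunciation ends at this node

def build_lexical_tree(lexicon):
    """Share common pronunciation prefixes across words, so the decoder
    expands each shared prefix only once instead of once per word."""
    root = LexNode()
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.children.setdefault(ph, LexNode())
        node.words.append(word)
    return root

# Hypothetical lexicon: 'sit' and 'sip' share the prefix s-ih.
lex = {"sit": ["s", "ih", "t"], "sip": ["s", "ih", "p"]}
root = build_lexical_tree(lex)
```

Prefix sharing is what makes lexical-tree decoding faster on large vocabularies: the two words above share the states for `s` and `ih`, and only branch at the final phone.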

Emergency dispatching based on automatic speech recognition (음성인식 기반 응급상황관제)

  • Lee, Kyuwhan;Chung, Jio;Shin, Daejin;Chung, Minhwa;Kang, Kyunghee;Jang, Yunhee;Jang, Kyungho
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.31-39
    • /
    • 2016
  • In emergency dispatching at the 119 Command & Dispatch Center, inconsistencies between the 'standard emergency aid system' and the 'dispatch protocol', both of which are mandatory to follow, cause inefficiency in the dispatcher's performance. If an emergency dispatch system uses automatic speech recognition (ASR) to process the dispatcher's protocol speech during case registration, it can instantly extract and provide the information required by the 'standard emergency aid system', making the rescue command more efficient. For this purpose, we have developed a Korean large-vocabulary continuous speech recognition system with a 400,000-word vocabulary for the emergency dispatch system. The 400,000 words cover news, SNS, blog, and emergency rescue domains. The acoustic model is trained on 1,300 hours of telephone (8 kHz) speech, and the language model on a 13 GB text corpus. From the transcribed corpus of 6,600 real telephone calls, call logs with the emergency rescue command class and the identified major symptom are extracted in connection with the rescue activity log and the National Emergency Department Information System (NEDIS). ASR is applied to the dispatcher's repetition utterances about the patient information, and the emergency patient information is extracted based on the Levenshtein distance between the ASR result and the template information. Experimental results show a word error rate of 9.15% for speech recognition and an emergency response detection rate of 95.8% for the emergency dispatch system.
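The Levenshtein-distance matching step described above can be sketched as follows; the template strings in the usage example are hypothetical, not taken from the actual dispatch protocol.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance
    (insertion/deletion/substitution, cost 1 each)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def best_template(asr_result, templates):
    """Pick the protocol template closest to the (possibly misrecognized)
    ASR hypothesis; the templates here are invented examples."""
    return min(templates, key=lambda t: levenshtein(asr_result, t))

# A noisy ASR output still maps to the nearest template.
print(best_template("cardiac arest", ["cardiac arrest", "breathing difficulty"]))
```

Matching against a closed set of templates makes the extraction robust to moderate recognition errors, since a 9% word error rate rarely moves the hypothesis closer to a wrong template than to the right one.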

Phonetic Question Set Generation Algorithm (음소 질의어 집합 생성 알고리즘)

  • 김성아;육동석;권오일
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.2
    • /
    • pp.173-179
    • /
    • 2004
  • Due to the insufficiency of training data in large-vocabulary continuous speech recognition, similar context-dependent phones can be clustered by decision trees to share the data. When the decision trees are built and used to predict unseen triphones, a phonetic question set is required. The phonetic question set, which contains categories of phones with similar co-articulation effects, is usually generated by phonetic or linguistic experts. This knowledge-based approach, however, may reduce the homogeneity of the clusters. Moreover, the experts must adjust the question set whenever the language or the PLU (phone-like unit) of a recognition system changes. Therefore, we propose a data-driven method that generates the phonetic question set automatically. Since the proposed method derives the phone categories from the speech data distribution, it is independent of the language and the PLU, and may enhance the homogeneity of the clusters. In large-vocabulary speech recognition experiments, the proposed algorithm reduced the error rate by 14.3%.
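A data-driven grouping of phones like the one proposed can be sketched with simple bottom-up agglomerative clustering; the "acoustic mean" values below are invented toy data, and the squared-Euclidean distance is an assumption for illustration (the paper's actual similarity criterion is not reproduced here).

```python
def dist(c1, c2, means):
    """Average squared Euclidean distance between the phones of two clusters."""
    total = 0.0
    for p in c1:
        for q in c2:
            total += sum((x - y) ** 2 for x, y in zip(means[p], means[q]))
    return total / (len(c1) * len(c2))

def cluster_phones(means, n_questions):
    """Merge the closest pair of phone clusters until n_questions remain;
    each resulting cluster becomes one data-driven question category."""
    clusters = [[p] for p in means]
    while len(clusters) > n_questions:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]], means))
        clusters[i] += clusters.pop(j)
    return clusters

# Toy 1-D "acoustic means": vowel-like phones near 1.0, stop-like near 5.0.
means = {"a": (1.0,), "e": (1.2,), "o": (0.9,), "p": (5.0,), "t": (5.1,)}
print(cluster_phones(means, 2))
```

With real data the vectors would be distribution statistics of the trained phone models, but the principle is the same: phones that are acoustically close end up in the same question category without any expert-written rules.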

Postprocessing of Speech Recognition Using the Morphological Analysis Technique (형태소 분석 기법을 이용한 음성 인식 후처리)

  • 박미성;김미진;김계성;김성규;이문희;최재혁;이상조
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.36C no.4
    • /
    • pp.65-77
    • /
    • 1999
  • Two problems must be solved before continuous speech recognition results can be grafted onto natural language processing techniques. First, the spoken unit is not consistent with the text's word-spacing unit. Second, phonological alternation phenomena occur within and across morphemes when text is pronounced. In this paper, we implement a post-processing system for continuous speech recognition that solves both problems with an eo-jeol generator and a syllable recoverer, morphologically analyzes the generated results, and then corrects failed analyses through a corrector. We tested the system on two kinds of speech corpora: a primary-school textbook and an editorial corpus. The success rate was 93.72% for the former and 92.26% for the latter. These results confirm that the system is stable regardless of the type of corpus.


A Model for Post-processing of Speech Recognition Using Syntactic Unit of Morphemes (구문형태소 단위를 이용한 음성 인식의 후처리 모델)

  • 양승원;황이규
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.3
    • /
    • pp.74-80
    • /
    • 2002
  • There has been much research on post-processing methods that enhance Korean continuous speech recognition using natural language processing techniques. It is difficult to use a formal morphological analyzer to improve speech recognition, because natural language analysis techniques are designed mainly for formal written language. In this paper, we propose a speech recognition enhancement model based on syntactic units of morphemes. The approach uses functional-word-level longest match, which does not depend on word spacing. We describe the post-processing mechanism for improving speech recognition with the proposed model, which exploits phonological structure information between predicates and the auxiliary predicates or bound nouns that frequently occur in Korean sentences.

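The functional-word-level longest match that ignores word spacing can be sketched as a greedy dictionary matcher; the unit inventory below is a hypothetical ASCII stand-in for the Korean syntactic-morpheme units the paper actually uses.

```python
def longest_match(text, units):
    """Greedy left-to-right longest match against a unit dictionary,
    ignoring word spacing (spacing in ASR output is unreliable).
    Unknown characters are passed through as single-character units."""
    text = text.replace(" ", "")
    max_len = max(map(len, units))
    out, i = [], 0
    while i < len(text):
        for ln in range(min(len(text) - i, max_len), 0, -1):
            if text[i:i + ln] in units:      # longest unit wins
                out.append(text[i:i + ln])
                i += ln
                break
        else:
            out.append(text[i])
            i += 1
    return out

# Hypothetical unit inventory; note "recognition" beats its prefix "recog".
units = {"speech", "recog", "recognition"}
print(longest_match("speech recognition", units))
```

Because spaces are stripped before matching, the segmentation is the same whether the recognizer's output was spaced correctly or not, which is the point of the spacing-independent longest match described above.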

Acoustic Analyses of Vocal Vibrato of Korean Singers

  • Yoo, Jae-Yeon;Jeong, Ok-Ran;Kwon, Do-Ha
    • Speech Sciences
    • /
    • v.12 no.1
    • /
    • pp.37-43
    • /
    • 2005
  • The phenomenon of vocal vibrato may be regarded as an acoustic representation of one of the most rapid and continuous changes in pitch and intensity that the human vocal mechanism can produce. Singers are likely to use vibrato effectively to enrich their voice. The purpose of this study was to obtain acoustic measurements (vF0 and vAm) from 45 subjects (15 trot singers, 15 ballad singers, and 15 non-singers) and to compare measurements of the vowel /a/ produced by the 3 groups under 2 voice sampling conditions (prolongation and singing of /a/). The thirty trot and ballad singers were selected by a producer and a concert director working for KBS (Korean Broadcasting System). The MDVP was used to measure the acoustic parameters, and a two-way MANOVA was used for statistical analysis. The results were as follows. First, there was no significant difference among the 3 groups in vF0 and vAm for the prolonged /a/, but for the singing voice there was a significant difference among the 3 groups in vF0 and vAm. Second, there was an interaction between music genre and voice sampling condition in vF0 and vAm. Finally, trot singers sang with more vibrato than ballad singers. It was concluded that it is very important to analyze singers' voices under various voice conditions (prolongation, reading, conversation, and singing) and to identify differences in singing voice characteristics among music genres.
