• Title/Summary/Keyword: Automatic speech recognition


Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command (음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현)

  • Shim, Byoung-Kyun;Han, Sung-Hyun
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.20 no.2
    • /
    • pp.207-213
    • /
    • 2011
  • In this paper, we present a study on the real-time implementation of a mobile robot to which an interactive voice recognition technique is applied. Speech commands are uttered as sentential connected words and issued through a wireless remote control system. We implement an automatic distant-speech command recognition system for interactive voice-enabled services. We first construct a baseline automatic speech command recognition system in which acoustic models are trained on speech utterances recorded through a microphone. To improve the performance of this baseline system, the acoustic models are adapted to the spectral characteristics of different microphones and to the environmental mismatch between close-talking and distant speech. We illustrate the performance of the developed speech recognition system through experiments, which show that the average recognition rate of the proposed system is above approximately 95%.
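The abstract does not spell out the adaptation procedure; as a minimal sketch of the general idea of compensating microphone and channel mismatch, the snippet below applies per-utterance cepstral mean and variance normalization (CMVN) to feature vectors. CMVN, the 13-coefficient feature shape, and the random stand-in data are assumptions for illustration, not the authors' actual adaptation method.

```python
import numpy as np

def cmvn(features):
    """Cepstral mean and variance normalization over one utterance.

    features: (num_frames, num_coeffs) array of cepstral features.
    Subtracting the per-utterance mean removes a stationary channel
    (microphone) offset in the cepstral domain; dividing by the standard
    deviation reduces gain differences between close-talking and distant
    recordings.
    """
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True) + 1e-8
    return (features - mean) / std

# Hypothetical usage: features from two mismatched channels (random stand-ins for MFCCs)
close_talk = np.random.randn(300, 13) * 1.5 + 0.7
distant    = np.random.randn(300, 13) * 0.6 - 0.3
print(cmvn(close_talk).mean(axis=0).round(3))  # ~0 after normalization
print(cmvn(distant).mean(axis=0).round(3))
```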

AI-based language tutoring systems with end-to-end automatic speech recognition and proficiency evaluation

  • Byung Ok Kang;Hyung-Bae Jeon;Yun Kyung Lee
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.48-58
    • /
    • 2024
  • This paper presents the development of language tutoring systems for non-native speakers by leveraging advanced end-to-end automatic speech recognition (ASR) and proficiency evaluation. Given the frequent errors in non-native speech, high-performance spontaneous speech recognition must be applied. Our systems accurately evaluate pronunciation and speaking fluency and provide feedback on errors by relying on precise transcriptions. End-to-end ASR is implemented and enhanced by using diverse non-native speaker speech data for model training. For performance enhancement, we combine semisupervised and transfer learning techniques using labeled and unlabeled speech data. Automatic proficiency evaluation is performed by a model trained to maximize the statistical correlation between the fluency score manually determined by a human expert and a calculated fluency score. We developed an English tutoring system for Korean elementary students called EBS AI Peng-Talk and a Korean tutoring system for foreigners called KSI Korean AI Tutor. Both systems were deployed by South Korean government agencies.
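The abstract states that the proficiency model is trained to maximize the statistical correlation between human and machine fluency scores but gives no details; the toy sketch below fits a regressor on a few invented utterance-level features and reports the Pearson correlation against invented expert scores. The feature set, the 1-5 score scale, and the linear model are assumptions, not the paper's system.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

# Hypothetical per-utterance features: [speech rate, pause ratio, ASR confidence]
X = np.array([[3.1, 0.20, 0.85],
              [1.8, 0.45, 0.60],
              [2.6, 0.30, 0.75],
              [3.4, 0.15, 0.90],
              [2.0, 0.40, 0.65]])
# Fluency scores assigned by a human expert (1-5 scale, invented here)
y_human = np.array([4.0, 2.0, 3.0, 4.5, 2.5])

model = LinearRegression().fit(X, y_human)   # learn a mapping to the human score
y_pred = model.predict(X)

r, _ = pearsonr(y_human, y_pred)             # statistical correlation used as the yardstick
print(f"Pearson correlation with human scores: {r:.3f}")
```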

HMM-Based Automatic Speech Recognition using EMG Signal

  • Lee Ki-Seung
    • Journal of Biomedical Engineering Research
    • /
    • v.27 no.3
    • /
    • pp.101-109
    • /
    • 2006
  • It is known that there is a strong relationship between human voices and the movements of the articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme that uses solely surface electromyogram (EMG) signals. The EMG signals were acquired from three articulatory facial muscles. As a preliminary study, 10 Korean digits were used as recognition targets. Various feature parameters, including filter bank outputs, linear predictive coefficients, and cepstral coefficients, were evaluated to find appropriate parameters for EMG-based speech recognition. The sequence of EMG signals for each word is modelled within a hidden Markov model (HMM) framework. A continuous word recognition approach was investigated in this work; hence, the model for each word is obtained by concatenating subword models, and embedded re-estimation techniques were employed in the training stage. The findings indicate that such a system can recognize speech signals with an accuracy of up to 90% when mel-filter bank outputs are used as the recognition features.
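As a rough illustration of the HMM-based recognition loop described above, the sketch below trains one whole-word Gaussian HMM per digit on frame-level features and classifies a test sequence by maximum log-likelihood. Whole-word models (rather than the paper's concatenated subword models with embedded re-estimation), the 12-dimensional features, the state count, and the use of the hmmlearn library are all assumptions.

```python
import numpy as np
from hmmlearn import hmm

def train_word_model(feature_sequences, n_states=5):
    """Train one Gaussian HMM per word.

    feature_sequences: list of (num_frames, num_features) arrays, e.g.
    mel filter bank outputs computed from windowed EMG frames.
    """
    X = np.vstack(feature_sequences)
    lengths = [len(seq) for seq in feature_sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def recognize(models, features):
    """Pick the word whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda word: models[word].score(features))

# Hypothetical usage with random stand-in features for two digits
rng = np.random.default_rng(0)
train = {"zero": [rng.normal(0.0, 1.0, (40, 12)) for _ in range(5)],
         "one":  [rng.normal(1.5, 1.0, (40, 12)) for _ in range(5)]}
models = {word: train_word_model(seqs) for word, seqs in train.items()}
print(recognize(models, rng.normal(1.5, 1.0, (40, 12))))  # expected: "one"
```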

A Study on Exceptional Pronunciations For Automatic Korean Pronunciation Generator (한국어 자동 발음열 생성 시스템을 위한 예외 발음 연구)

  • Kim Sunhee
    • MALSORI
    • /
    • no.48
    • /
    • pp.57-67
    • /
    • 2003
  • This paper presents a systematic description of exceptional pronunciations for automatic Korean pronunciation generation. An automatic pronunciation generator for Korean is an essential part of Korean speech recognition and TTS (Text-To-Speech) systems. It is composed of a set of regular rules and an exceptional pronunciation dictionary. The exceptional pronunciation dictionary is created by extracting words that have exceptional pronunciations, based on the characteristics of such words identified through phonological research and a systematic analysis of the entries of Korean dictionaries. The method thus contributes to improving the performance of an automatic Korean pronunciation generator as well as that of Korean speech recognition and TTS systems.
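A minimal sketch of the two-stage design described above: the exception dictionary is consulted first, and regular grapheme-to-phoneme rules are applied otherwise. The dictionary entries and the placeholder rule component below are invented for illustration and are not from the paper.

```python
EXCEPTION_DICT = {
    "맛있다": "마딛따",   # hypothetical exceptional entries
    "닭": "닥",
}

def apply_regular_rules(word: str) -> str:
    # Placeholder for the full set of regular Korean phonological rules;
    # a real generator would implement liaison, tensification, nasalization, etc.
    return word

def generate_pronunciation(word: str) -> str:
    if word in EXCEPTION_DICT:          # exceptional pronunciation wins
        return EXCEPTION_DICT[word]
    return apply_regular_rules(word)    # otherwise apply regular rules

print(generate_pronunciation("맛있다"))  # -> 마딛따
print(generate_pronunciation("학교"))    # falls through to the rule component
```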


Distorted Speech Rejection For Automatic Speech Recognition under CDMA Wireless Communication (CDMA이동통신환경에서의 음성인식을 위한 왜곡음성신호 거부방법)

  • Kim Nam Soo;Chang Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.8
    • /
    • pp.597-601
    • /
    • 2004
  • This paper introduces a pre-rejection technique for wireless-channel-distorted speech with application to automatic speech recognition (ASR). Based on an analysis of speech signals distorted over a wireless communication channel, we propose a method to reject channel-distorted speech with a small computational load. A number of simulation results show that the pre-rejection algorithm enhances the robustness of speech recognition.
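The abstract does not give the decision statistic used for pre-rejection; purely as a stand-in, the sketch below flags an utterance when too many of its frames are dominated by near-zero runs, a cheap check that could run before the ASR engine. The zero-run criterion and the threshold values are assumptions, not the authors' method.

```python
import numpy as np

def reject_distorted(signal, frame_len=160, zero_run_thresh=0.2):
    """Cheap pre-rejection check run before the ASR engine.

    An utterance is rejected when too large a fraction of its frames is
    dominated by zero (or near-zero) samples, a typical symptom of frame
    erasures on a wireless channel.
    """
    frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
    near_zero = np.mean(np.abs(frames) < 1e-4, axis=1)   # fraction of dead samples per frame
    distorted_frames = np.mean(near_zero > 0.5)          # frames that are mostly dead
    return distorted_frames > zero_run_thresh            # True -> reject before ASR

rng = np.random.default_rng(1)
clean = rng.normal(0, 0.1, 16000)
erased = clean.copy()
erased[4000:9000] = 0.0                                  # simulated channel erasure
print(reject_distorted(clean), reject_distorted(erased)) # False True
```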

Readability Enhancement of English Speech Recognition Output Using Automatic Capitalisation Classification (자동 대소문자 식별을 이용한 영어 음성인식 결과의 가독성 향상)

  • Kim, Ji-Hwan
    • MALSORI
    • /
    • no.61
    • /
    • pp.101-111
    • /
    • 2007
  • A modified speech recogniser has been proposed for automatic capitalisation generation to improve the readability of English speech recognition output. In this modified speech recogniser, every word in the vocabulary is duplicated: once in a de-capitalised form and again in its capitalised forms. In addition, its language model is re-trained on mixed-case texts. To evaluate the performance of the proposed system, automatic capitalisation generation experiments were performed on 3 hours of Broadcast News (BN) test data using the modified HTK BN transcription system. The proposed system produced an F-measure of 0.7317 for automatic capitalisation generation, with an SER of 48.55, a precision of 0.7736, and a recall of 0.6942.
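A small sketch of the vocabulary-duplication idea: each word keeps its pronunciation but is listed once per surface form observed in mixed-case text, so the recogniser can emit case directly and the language model can be re-trained on mixed-case tokens. The toy lexicon, the pronunciations, and the corpus below are invented for illustration.

```python
from collections import defaultdict

base_lexicon = {"new": "n uw", "york": "y ao r k", "the": "dh ah"}
mixed_case_corpus = "the New York Times reported a new bridge in New York".split()

mixed_lexicon = defaultdict(set)
for token in mixed_case_corpus:
    pron = base_lexicon.get(token.lower())
    if pron is not None:
        mixed_lexicon[token].add(pron)      # one entry per observed surface form

for word, prons in sorted(mixed_lexicon.items()):
    for pron in prons:
        print(f"{word}\t{pron}")
# Produces duplicated entries such as "new"/"New" and "york"/"York" sharing a pronunciation.
```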


A Study on Voice Web Browsing in JAVA Beans Component Architecture Automatic Speech Recognition Application System. (JAVABeans Component 구조를 갖는 음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 장준식;윤재석
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.05a
    • /
    • pp.273-276
    • /
    • 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transition from present GUI-centered web services to VUI-centered web services. Owing to the restrictions of ASP on the web in terms of speed and implementation, an Automatic Speech Recognition Application System with a JavaBeans Component Architecture is devised and studied. A voice web browser that can transfer voice and graphic information simultaneously is also studied using Remote AWT (Abstract Window Toolkit).


Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser;Almasganj, Farshad
    • ETRI Journal
    • /
    • v.35 no.1
    • /
    • pp.100-108
    • /
    • 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike the conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for a new input speech frame, a posterior-probability-based feature vector is evaluated, which represents the similarity between the embedded frame and the learned speech attractors. We conduct experiments for a speech recognition task utilizing a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. Through the proposed FE method, we gain 3.11% absolute phoneme error rate improvement in comparison to the baseline system, which exploits the mel-frequency cepstral coefficient FE method.
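As an illustrative sketch of posterior-style features computed from attractor models in the reconstructed phase space, the snippet below time-delay-embeds a signal, fits one Gaussian mixture model per class, and turns the per-model likelihoods of a new frame into a normalized feature vector. The embedding parameters, the two synthetic classes, and the softmax normalization are assumptions; the paper's actual attractor models and posterior computation are more elaborate.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D signal into the reconstructed phase space."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

# Train one GMM per broad attractor class on embedded training signals
# (two synthetic classes as stand-ins for phone-level attractors).
rng = np.random.default_rng(0)
train_signals = {"voiced": np.sin(np.linspace(0, 60, 2000)) + 0.05 * rng.normal(size=2000),
                 "noise-like": rng.normal(size=2000)}
gmms = {name: GaussianMixture(n_components=4, random_state=0).fit(delay_embed(sig))
        for name, sig in train_signals.items()}

def posterior_features(frame):
    """Posterior-style feature vector for one speech frame: the normalized
    average log-likelihood of its RPS points under each attractor model."""
    points = delay_embed(frame)
    scores = np.array([gmm.score(points) for gmm in gmms.values()])
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()   # softmax over attractor models

print(posterior_features(np.sin(np.linspace(0, 6, 200))).round(3))
```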

A Study on Voice Web Browsing in Automatic Speech Recognition Application System (음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 윤재석
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.5
    • /
    • pp.949-954
    • /
    • 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transition from present GUI-centered web services to VUI-centered web services. Owing to the restrictions of ASP on the web in terms of reusability and portability, an Automatic Speech Recognition Application System with a JavaBeans Component Architecture is devised and studied. A voice web browser that can transfer voice and graphic information simultaneously is also studied using Remote AWT (Abstract Window Toolkit).

Noise Robust Automatic Speech Recognition Scheme with Histogram of Oriented Gradient Features

  • Park, Taejin;Beack, SeungKwan;Lee, Taejin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.5
    • /
    • pp.259-266
    • /
    • 2014
  • In this paper, we propose a novel technique for noise-robust automatic speech recognition (ASR). The development of ASR techniques has made it possible to recognize isolated words with a near-perfect word recognition rate. However, in a highly noisy environment, a distinct mismatch between the training speech and the test data results in a significantly degraded word recognition accuracy (WRA). Unlike conventional ASR systems employing mel-frequency cepstral coefficients (MFCCs) and a hidden Markov model (HMM), this study applies histogram of oriented gradient (HOG) features and a support vector machine (SVM) to ASR tasks to overcome this problem. Our proposed ASR system is less vulnerable to external interference noise and achieves a higher WRA than a conventional ASR system equipped with MFCCs and an HMM. The performance of our proposed ASR system was evaluated using a phonetically balanced word (PBW) set mixed with artificially added noise.
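As a toy sketch of the HOG-plus-SVM idea, the snippet below treats the log-spectrogram of a fixed-length word as an image, describes it with histogram of oriented gradient features, and classifies it with a linear support vector machine. The spectrogram settings, the fixed word length, the HOG parameters, and the synthetic data are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import hog
from sklearn.svm import SVC

def hog_from_audio(signal, fs=16000):
    """Treat the log-spectrogram of a word as an image and describe it with HOG."""
    _, _, spec = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
    log_spec = np.log(spec + 1e-10)
    # Fixed-length words are assumed so that HOG vectors line up across examples.
    return hog(log_spec, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical training data: two synthetic "words" with additive noise
rng = np.random.default_rng(0)
def fake_word(freq):
    t = np.linspace(0, 0.5, 8000, endpoint=False)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)

X = np.array([hog_from_audio(fake_word(f)) for f in (300, 300, 300, 900, 900, 900)])
y = [0, 0, 0, 1, 1, 1]
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([hog_from_audio(fake_word(900))]))   # expected: [1]
```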