• 제목/요약/키워드: End-to-end Automatic Speech Recognition

Search results: 17 (processing time: 0.012 s)

AI-based language tutoring systems with end-to-end automatic speech recognition and proficiency evaluation

  • Byung Ok Kang;Hyung-Bae Jeon;Yun Kyung Lee
    • ETRI Journal
    • /
    • Vol. 46, No. 1
    • /
    • pp.48-58
    • /
    • 2024
  • This paper presents the development of language tutoring systems for non-native speakers by leveraging advanced end-to-end automatic speech recognition (ASR) and proficiency evaluation. Given the frequent errors in non-native speech, high-performance spontaneous speech recognition must be applied. Our systems accurately evaluate pronunciation and speaking fluency and provide feedback on errors by relying on precise transcriptions. End-to-end ASR is implemented and enhanced by using diverse non-native speaker speech data for model training. For performance enhancement, we combine semi-supervised and transfer learning techniques using labeled and unlabeled speech data. Automatic proficiency evaluation is performed by a model trained to maximize the statistical correlation between the fluency score manually determined by a human expert and a calculated fluency score. We developed an English tutoring system for Korean elementary students called EBS AI Peng-Talk and a Korean tutoring system for foreigners called KSI Korean AI Tutor. Both systems were deployed by South Korean government agencies.
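
To make the correlation-based training concrete, here is a minimal sketch (not the authors' implementation; all names, shapes, and data are illustrative) of fitting a scorer by minimizing negative Pearson correlation against human fluency scores:

```python
# Hypothetical sketch: train a fluency scorer whose outputs maximize Pearson
# correlation with human expert scores, as the abstract describes.
import torch
import torch.nn as nn

class FluencyScorer(nn.Module):
    """Maps per-utterance feature vectors (e.g., ASR-derived) to a fluency score."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

def neg_pearson_loss(pred, target, eps: float = 1e-8):
    """Negative Pearson correlation: minimizing it maximizes the correlation."""
    pred = pred - pred.mean()
    target = target - target.mean()
    corr = (pred * target).sum() / (pred.norm() * target.norm() + eps)
    return -corr

# Toy training loop on random stand-in data.
feats = torch.randn(256, 32)   # placeholder ASR-derived features
human = torch.randn(256)       # placeholder expert fluency scores
model = FluencyScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = neg_pearson_loss(model(feats), human)
    loss.backward()
    opt.step()
```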

Hybrid CTC-Attention Network-Based End-to-End Speech Recognition System for Korean Language

  • Hosung Park;Changmin Kim;Hyunsoo Son;Soonshin Seo;Ji-Hwan Kim
    • Journal of Web Engineering
    • /
    • Vol. 21, No. 2
    • /
    • pp.265-284
    • /
    • 2021
  • In this study, an end-to-end automatic speech recognition system based on a hybrid CTC-attention network is proposed for the Korean language. Deep neural network/hidden Markov model (DNN/HMM)-based speech recognition systems have driven dramatic improvements in this area. However, it is difficult for non-experts to develop speech recognition for new applications. End-to-end approaches simplify the speech recognition system into a single-network architecture, so systems can be developed without expert knowledge. In this paper, we propose a hybrid CTC-attention network as an end-to-end speech recognition model for Korean. This model effectively utilizes a CTC objective function during attention model training, which improves both speech recognition accuracy and training speed. In most languages, end-to-end speech recognition uses characters as output labels. For Korean, however, character-based end-to-end speech recognition is not efficient because the language has 11,172 possible characters, a relatively large number compared with other languages; English has 26 characters, and Japanese has about 50. To address this problem, we use the 49 Korean graphemes as output labels. Experimental results show a 10.02% character error rate (CER) when 740 hours of Korean training data are used.
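
The hybrid objective described here is commonly written as L = λ · L_CTC + (1 - λ) · L_att. A minimal PyTorch sketch, with illustrative shapes and an arbitrarily chosen λ (the paper's value is not given here):

```python
# Sketch of the hybrid CTC-attention objective: a weighted sum of a CTC loss
# and the attention decoder's cross-entropy loss. Names/shapes are illustrative.
import torch.nn.functional as F

def hybrid_ctc_attention_loss(ctc_logits, ctc_targets, input_lens, target_lens,
                              dec_logits, dec_targets, lam: float = 0.3):
    # CTC branch: ctc_logits has shape (batch, T, vocab) over grapheme labels;
    # F.ctc_loss expects log-probs of shape (T, batch, vocab).
    log_probs = ctc_logits.log_softmax(dim=-1).transpose(0, 1)
    ctc = F.ctc_loss(log_probs, ctc_targets, input_lens, target_lens, blank=0)
    # Attention branch: standard cross-entropy over decoder outputs.
    att = F.cross_entropy(dec_logits.reshape(-1, dec_logits.size(-1)),
                          dec_targets.reshape(-1), ignore_index=-1)
    return lam * ctc + (1.0 - lam) * att
```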

Fast offline transformer-based end-to-end automatic speech recognition for real-world applications

  • Oh, Yoo Rhee;Park, Kiyoung;Park, Jeon Gue
    • ETRI Journal
    • /
    • Vol. 44, No. 3
    • /
    • pp.476-490
    • /
    • 2022
  • With recent advances in technology, automatic speech recognition (ASR) has been widely used in real-world applications. Converting large amounts of speech into text accurately with limited resources has become more vital than ever. In this study, we propose a method to rapidly recognize a large speech database via a transformer-based end-to-end model. Transformers have improved state-of-the-art performance in many fields but are not easy to use with long sequences. We propose and test various techniques to accelerate the recognition of real-world speech, including decoding via multiple-utterance-batched beam search, detecting the end of speech based on connectionist temporal classification (CTC), restricting the CTC-prefix score, and splitting long speech into short segments (see the sketch below). Experiments on the LibriSpeech dataset and real-world Korean ASR tasks verify the proposed methods. The proposed system can convert 8 h of speech from real-world meetings into text in less than 3 min with a 10.73% character error rate, which is 27.1% relatively lower than that of conventional systems.
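
One of the listed techniques, splitting long speech into short segments, can be sketched as follows; the segment length, overlap, and batching strategy here are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch: split long recordings into short overlapping segments so
# they can be decoded together as a batch.
import numpy as np

def split_long_speech(wave: np.ndarray, sr: int = 16000,
                      seg_sec: float = 20.0, overlap_sec: float = 0.5):
    """Yield overlapping fixed-length segments of a long waveform."""
    seg, hop = int(seg_sec * sr), int((seg_sec - overlap_sec) * sr)
    for start in range(0, max(len(wave) - seg, 0) + hop, hop):
        yield wave[start:start + seg]

# Segments can then be zero-padded to equal length and decoded jointly
# with multiple-utterance-batched beam search.
segments = list(split_long_speech(np.zeros(16000 * 65)))
```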

Hyperparameter experiments on end-to-end automatic speech recognition

  • Yang, Hyungwon;Nam, Hosung
    • Phonetics and Speech Sciences
    • /
    • Vol. 13, No. 1
    • /
    • pp.45-51
    • /
    • 2021
  • End-to-end (E2E) automatic speech recognition (ASR) has achieved promising performance gains with the introduction of the self-attention network, the Transformer. However, because of the training time and the number of hyperparameters, finding the optimal hyperparameter set is computationally expensive. This paper investigates the impact of hyperparameters in the Transformer network to answer two questions: which hyperparameters play a critical role in task performance, and how they affect training speed. The Transformer network consists of encoder and decoder networks combined with Connectionist Temporal Classification (CTC). We trained the model on Wall Street Journal (WSJ) SI-284 and tested on dev93 and eval92. Seventeen hyperparameters were selected from the ESPnet training configuration, and varying ranges of values were used in the experiments. The results show that the "num blocks" and "linear units" hyperparameters in the encoder and decoder networks reduce the Word Error Rate (WER) significantly, and the performance gain is more prominent when they are altered in the encoder network. Training duration also increased linearly as the values of "num blocks" and "linear units" grew. Based on the experimental results, we collected the optimal value of each hyperparameter and reduced the WER up to 2.9/1.9 on dev93 and eval92, respectively.
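
The kind of sweep described might look like the sketch below; `train_and_eval` is a hypothetical stand-in for an ESPnet training-plus-decoding run, and the grid values are illustrative, not the paper's:

```python
# Illustrative hyperparameter sweep: vary the encoder's "num_blocks" and
# "linear_units" and record the resulting WER per configuration.
from itertools import product

def train_and_eval(config: dict) -> float:
    """Placeholder: train a Transformer ASR model and return dev-set WER."""
    raise NotImplementedError

grid = {
    "encoder_num_blocks": [6, 12, 18],
    "encoder_linear_units": [1024, 2048, 4096],
}
results = {}
for num_blocks, linear_units in product(*grid.values()):
    cfg = {"encoder_num_blocks": num_blocks, "encoder_linear_units": linear_units}
    # results[(num_blocks, linear_units)] = train_and_eval(cfg)
```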

Semi-supervised domain adaptation using unlabeled data for end-to-end speech recognition

  • 정현재;구자현;김회린
    • Phonetics and Speech Sciences
    • /
    • Vol. 12, No. 2
    • /
    • pp.29-37
    • /
    • 2020
  • Recently, the application of neural-network-based deep learning algorithms has brought dramatic performance improvements over classical Gaussian mixture model based hidden Markov model (GMM-HMM) speech recognizers. Furthermore, to better exploit the advantages of deep learning, research on end-to-end speech recognition systems, which handle language modeling and decoding in an integrated fashion, is being actively pursued. In general, an end-to-end speech recognition system consists of a multi-layer encoder-decoder architecture with attention, so it requires a large amount of paired speech-text data to achieve sufficiently good performance. Collecting speech-text pairs demands considerable human labor and time, which is a major barrier to building end-to-end speech recognizers. There are prior studies that improve end-to-end performance using relatively small amounts of paired data, but most of them use only speech-only or text-only data on its own. In this study, we propose a semi-supervised learning method that uses speech-only and text-only data together so that an end-to-end speech recognizer also performs well on corpora from other domains, and we verify that the proposed method works effectively when adapting to a domain of a different character. The results show that the proposed method achieves good performance on the target domain while avoiding significant degradation on the source domain.
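
A minimal sketch of one common semi-supervised recipe consistent with this abstract: pseudo-label unlabeled target-domain audio with the source-domain model, then fine-tune on both. `asr_model` and its methods are hypothetical stand-ins, not the paper's API:

```python
# Hedged sketch of pseudo-labeling-based domain adaptation.
def adapt(asr_model, source_pairs, target_audio_only, epochs: int = 3):
    # 1) Generate pseudo-transcripts for unlabeled target-domain speech.
    pseudo_pairs = [(wav, asr_model.transcribe(wav)) for wav in target_audio_only]
    # 2) Fine-tune on the union, keeping source pairs in the mix to limit
    #    degradation on the source domain (which the paper also measures).
    for _ in range(epochs):
        for wav, text in source_pairs + pseudo_pairs:
            asr_model.train_step(wav, text)
    return asr_model
```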

Integration of WFST Language Model in Pre-trained Korean E2E ASR Model

  • Junseok Oh;Eunsoo Cho;Ji-Hwan Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 6
    • /
    • pp.1692-1705
    • /
    • 2024
  • In this paper, we present a method that integrates a Grammar Transducer as an external language model to enhance the accuracy of a pre-trained Korean end-to-end (E2E) automatic speech recognition (ASR) model. The E2E ASR model utilizes the Connectionist Temporal Classification (CTC) loss function to derive hypothesis sentences from input audio. However, this method reveals a limitation inherent in the CTC approach: it fails to capture language information from transcript data directly. To overcome this limitation, we propose a fusion approach that combines a clause-level n-gram language model, transformed into a Weighted Finite-State Transducer (WFST), with the E2E ASR model. This approach enhances the model's accuracy and allows for domain adaptation using only additional text data, avoiding further intensive training of the extensive pre-trained ASR model. This is particularly advantageous for Korean, a low-resource language that faces a significant challenge from the limited availability of speech data and ASR models. We first validate the efficacy of training the n-gram model at the clause level by contrasting its inference accuracy with that of the E2E ASR model merged with language models trained on smaller lexical units. We then demonstrate that our approach achieves better domain-adaptation accuracy than Shallow Fusion, a previously devised method for merging an external language model with an E2E ASR model without requiring additional training.
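
For contrast, the Shallow Fusion baseline mentioned above combines scores at each beam-search step; a minimal sketch (the function name and LM weight are illustrative):

```python
# Sketch of the score interpolation used in shallow fusion: the E2E model's
# token log-probability is combined with an external LM's at each step.
def fused_score(asr_logp: float, lm_logp: float, lm_weight: float = 0.3) -> float:
    """Combined token score for one beam-hypothesis extension."""
    return asr_logp + lm_weight * lm_logp

# A WFST fusion instead composes the CTC output lattice with a clause-level
# n-gram grammar transducer, applying LM constraints at the lattice level.
```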

Automatic Correction of Word-spacing Errors Using Syllable Bigrams

  • 강승식
    • Speech Sciences
    • /
    • Vol. 8, No. 2
    • /
    • pp.83-90
    • /
    • 2001
  • We propose a probabilistic approach to the word-spacing problem using syllable bigrams. Syllable bigrams are extracted, and their frequencies calculated, from a large corpus of 12 million words. Based on the syllable bigrams, we performed three experiments: (1) automatic word spacing, (2) detection and correction of word-spacing errors for a spelling checker, and (3) automatic insertion of a space at the end of a line in a character recognition system. Experimental results show accuracies of 97.7%, 82.1%, and 90.5%, respectively.
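
The bigram approach can be sketched as follows; the threshold decision rule is an illustrative simplification of the probabilistic model:

```python
# Sketch: estimate, from a correctly spaced corpus, how often a space occurs
# between each syllable pair, then insert spaces where that probability is high.
from collections import defaultdict

def train_space_model(corpus: str):
    """Count, for each adjacent syllable pair, how often a space separates it."""
    space_cnt, pair_cnt = defaultdict(int), defaultdict(int)
    for line in corpus.splitlines():
        for i, a in enumerate(line[:-1]):
            if a == " ":
                continue
            j = i + 1
            has_space = line[j] == " "
            j += has_space
            if j >= len(line) or line[j] == " ":
                continue
            pair_cnt[(a, line[j])] += 1
            space_cnt[(a, line[j])] += has_space
    return space_cnt, pair_cnt

def respace(text: str, space_cnt, pair_cnt, thresh: float = 0.5) -> str:
    """Insert a space wherever the estimated space probability exceeds thresh."""
    syls = text.replace(" ", "")
    out = list(syls[:1])
    for a, b in zip(syls, syls[1:]):
        p = space_cnt[(a, b)] / pair_cnt[(a, b)] if pair_cnt[(a, b)] else 0.0
        out.append(" " + b if p > thresh else b)
    return "".join(out)
```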

A User-friendly Remote Speech Input Method in Spontaneous Speech Recognition System

  • Suh, Young-Joo;Park, Jun;Lee, Young-Jik
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 17, No. 2E
    • /
    • pp.38-46
    • /
    • 1998
  • In this paper, we propose a remote speech input device, a new method of user-friendly speech input for spontaneous speech recognition systems. We focus on user-friendliness in terms of hands-free operation and microphone independence in speech recognition applications. Our method adopts two algorithms: automatic speech detection and microphone-array delay-and-sum beamforming (DSBF)-based speech enhancement. The automatic speech detection algorithm is composed of two stages: detection of candidate speech portions, followed by speech/nonspeech classification using pitch information on the detected candidates. The DSBF algorithm adopts the time-domain cross-correlation method for time-delay estimation. In the performance evaluation, the speech detection algorithm shows within-200 ms start-point accuracies of 93%-99% under 15 dB, 20 dB, and 25 dB signal-to-noise ratio (SNR) environments, and the corresponding end-point accuracies are 72%, 89%, and 93%, respectively. Speech/nonspeech classification for the detected start-point regions of the input signal is performed by the pitch-information-based method; the rates of correct classification for speech and nonspeech input are 99% and 90%, respectively. The eight-microphone array-based speech enhancement using the DSBF algorithm shows a maximum SNR gain of 6 dB over a single microphone and an error reduction of more than 15% in the spontaneous speech recognition domain.
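
A minimal sketch of DSBF with cross-correlation time-delay estimation (integer-sample delays only; the paper's processing details may differ):

```python
# Sketch of delay-and-sum beamforming: estimate each channel's delay relative
# to a reference microphone via cross-correlation, then align and average.
import numpy as np

def estimate_delay(ref: np.ndarray, sig: np.ndarray) -> int:
    """Integer-sample delay of `sig` relative to `ref` via cross-correlation."""
    corr = np.correlate(sig, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def delay_and_sum(channels: np.ndarray) -> np.ndarray:
    """channels: (num_mics, num_samples); returns the beamformed signal."""
    ref = channels[0]
    aligned = [ref]
    for ch in channels[1:]:
        d = estimate_delay(ref, ch)
        aligned.append(np.roll(ch, -d))  # shift each channel into alignment
    return np.mean(aligned, axis=0)
```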

Joint CTC/Attention Korean ASR with CTC Ratio Scheduling

  • 문영기;조용래;조원익;조근식
    • Korea Information Science Society, SIG on Language Engineering: Annual Conference on Human and Language Technology (Hangul and Korean Information Processing)
    • /
    • The 32nd Annual Conference on Human and Language Technology (2020)
    • /
    • pp.37-41
    • /
    • 2020
  • In this paper, we study end-to-end Korean speech recognition using CTC ratio scheduling with a Joint CTC/Attention model. Joint CTC/Attention combines the strengths of CTC and attention and outperforms single attention or CTC models, but as training progresses, the CTC component becomes a factor that hinders the attention model's learning. To address this problem, we propose CTC ratio scheduling, which gradually reduces the CTC weight as training proceeds. We confirmed that models trained with CTC ratio scheduling outperform both the conventional Joint CTC/Attention model and a single attention model.
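
A minimal sketch of the scheduling idea (the linear decay and its endpoint values are illustrative assumptions; the paper's schedule is not specified here):

```python
# Sketch of CTC ratio scheduling: the CTC weight in the joint loss starts
# high and decays as training proceeds, so the attention branch dominates
# later epochs.
def ctc_ratio(epoch: int, total_epochs: int,
              start: float = 0.5, end: float = 0.1) -> float:
    """CTC weight decaying linearly from `start` to `end` over training."""
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + (end - start) * t

# Per-epoch joint loss: loss = r * ctc_loss + (1 - r) * attention_loss
for epoch in range(30):
    r = ctc_ratio(epoch, 30)
```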

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System

  • 장한;정길도
    • Proceedings of the IEEK Conference (Institute of Electronics Engineers of Korea)
    • /
    • IEEK 2009 Information and Control Symposium
    • /
    • pp.37-39
    • /
    • 2009
  • In speech recognition research, locating the beginning and end of a speech utterance in a background of noise is of great importance. Background noise present in a recording introduces disturbances when we only want the stationary parameters that represent the corresponding speech section; in particular, a major source of error in automatic recognition systems for isolated words is inaccurate detection of the beginning and ending boundaries of test and reference templates. We therefore need a potent method to remove the unnecessary regions of a speech signal. Conventional methods for speech endpoint detection are based on two simple time-domain measurements, short-time energy and short-time zero-crossing rate, which cannot guarantee precise results in low signal-to-noise ratio environments. This paper proposes a novel approach that computes the Lyapunov exponent of the time-domain waveform. The proposed method does not require frequency-domain parameters for the endpoint detection process, e.g., Mel-scale features, which have been introduced in other papers. Compared with the conventional methods based on short-time energy and short-time zero-crossing rate, the novel approach based on time-domain Lyapunov exponents (LEs) has low complexity and is suitable for digital isolated-word recognition systems.
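
A rough, heavily simplified sketch (Rosenstein-style) of estimating a largest Lyapunov exponent from a time-domain frame, as an endpoint feature; the embedding parameters are illustrative assumptions, not the paper's values:

```python
# Sketch: per-frame largest-Lyapunov-exponent estimate. Frames whose LE
# differs from the noise floor can be treated as candidate speech regions.
import numpy as np

def largest_lyapunov(frame: np.ndarray, dim: int = 4, tau: int = 2,
                     horizon: int = 10) -> float:
    n = len(frame) - (dim - 1) * tau
    # Delay embedding of the time-domain waveform: rows are state vectors.
    emb = np.stack([frame[i:i + n] for i in range(0, dim * tau, tau)], axis=1)
    usable = n - horizon
    dists = np.linalg.norm(emb[:usable, None] - emb[None, :usable], axis=2)
    np.fill_diagonal(dists, np.inf)
    nn = dists.argmin(axis=1)  # nearest neighbor of each state vector
    # Average log divergence of initially near trajectories after `horizon` steps.
    d0 = dists[np.arange(usable), nn]
    dk = np.linalg.norm(emb[np.arange(usable) + horizon] - emb[nn + horizon], axis=1)
    mask = (d0 > 0) & (dk > 0)
    return float(np.mean(np.log(dk[mask] / d0[mask])) / horizon)
```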
