• Title/Summary/Keyword: Automatic Speech Recognition (ASR)


Semi-supervised domain adaptation using unlabeled data for end-to-end speech recognition (라벨이 없는 데이터를 사용한 종단간 음성인식기의 준교사 방식 도메인 적응)

  • Jeong, Hyeonjae; Goo, Jahyun; Kim, Hoirin
    • Phonetics and Speech Sciences / v.12 no.2 / pp.29-37 / 2020
  • Recently, neural network-based deep learning algorithms have dramatically improved performance compared to classical automatic speech recognition (ASR) systems based on Gaussian mixture model-hidden Markov models (GMM-HMM). In addition, research on end-to-end (E2E) speech recognition systems, which integrate the language modeling and decoding processes, has been actively conducted to better exploit the advantages of deep learning techniques. In general, an E2E ASR system consists of multiple encoder-decoder layers with attention, and therefore requires a large amount of paired speech-text data to achieve good performance. Obtaining paired speech-text data demands substantial human labor and time, which is a high barrier to building an E2E ASR system. Previous studies have therefore tried to improve E2E ASR performance using relatively small amounts of paired speech-text data, but most used either speech-only or text-only data. In this study, we propose a semi-supervised training method that enables an E2E ASR system to perform well on corpora from different domains by using both speech-only and text-only data. The proposed method adapts effectively to the target domain, showing good performance there while degrading little in the source domain.
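
The attention step at the heart of such encoder-decoder E2E systems can be sketched in a few lines. This is a generic scaled dot-product attention illustration, not the specific architecture of the paper; the shapes and random data are purely for demonstration.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Each decoder query attends over all encoder states and returns a context vector."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)           # similarity of each query to each encoder frame
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over encoder time steps
    return weights @ values, weights                   # context vectors and attention weights

# Toy example: 2 decoder steps attending over 5 encoder frames of dimension 4.
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 4))   # encoder hidden states
dec = rng.standard_normal((2, 4))   # decoder queries
context, weights = scaled_dot_product_attention(dec, enc, enc)
print(context.shape, weights.shape)  # (2, 4) (2, 5)
```

Each row of `weights` is a distribution over encoder frames, which is what lets the decoder align output tokens to input speech without an explicit alignment model.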

A Study on Voice Web Browsing in Automatic Speech Recognition Application System (음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 윤재석
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.5 / pp.949-954 / 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transition from today's GUI-centered web services to VUI-centered web services. Because ASP imposes restrictions on reusability and portability on the web, an Automatic Speech Recognition Application System with a JavaBeans component architecture is devised and studied. A voice web browser that can transfer voice and graphic information simultaneously is also studied, using Remote AWT (Abstract Window Toolkit).

A Study on the Language Independent Dictionary Creation Using International Phoneticizing Engine Technology (국제 음소 기술에 의한 언어에 독립적인 발음사전 생성에 관한 연구)

  • Shin, Chwa-Cheul; Woo, In-Sung; Kang, Heung-Soon; Hwang, In-Soo; Kim, Suk-Dong
    • The Journal of the Acoustical Society of Korea / v.26 no.1E / pp.1-7 / 2007
  • One result of the trend towards globalization is an increased number of projects that focus on natural language processing. Automatic speech recognition (ASR) technologies, for example, hold great promise in facilitating global communications and collaborations. Unfortunately, to date, most research projects focus on single widely spoken languages. Therefore, the cost to adapt a particular ASR tool for use with other languages is often prohibitive. This work takes a more general approach. We propose an International Phoneticizing Engine (IPE) that interprets input files supplied in our Phonetic Language Identity (PLI) format to build a dictionary. IPE is language independent and rule based. It operates by decomposing the dictionary creation process into a set of well-defined steps. These steps reduce rule conflicts, allow for rule creation by people without linguistics training, and optimize run-time efficiency. Dictionaries created by the IPE can be used with the Sphinx speech recognition system. IPE defines an easy-to-use systematic approach that can lead to internationalization of automatic speech recognition systems.
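
The ordered-rule dictionary-building idea can be illustrated with a minimal grapheme-to-phoneme sketch: rewrite rules are tried in order, longest grapheme first, so multi-letter rules win over single letters. The rule table below is purely illustrative and is not IPE's actual rule set or the PLI format.

```python
# Hypothetical ordered rewrite rules: multi-letter graphemes must come first.
RULES = [
    ("sh", "SH"), ("th", "TH"), ("ch", "CH"),
    ("a", "AA"), ("e", "EH"), ("i", "IH"), ("o", "AO"), ("u", "UH"),
    ("p", "P"), ("s", "S"), ("t", "T"), ("n", "N"), ("d", "D"),
]

def phoneticize(word, rules):
    """Apply ordered rewrite rules left to right, longest grapheme match first."""
    phones, i = [], 0
    while i < len(word):
        for graph, phone in rules:      # rules are ordered, so "sh" beats "s" + "h"
            if word.startswith(graph, i):
                phones.append(phone)
                i += len(graph)
                break
        else:
            i += 1                      # skip characters with no matching rule
    return phones

print(phoneticize("ship", RULES))  # ['SH', 'IH', 'P']
```

Ordering the rules resolves conflicts deterministically, which is the property that lets non-linguists add rules without breaking existing ones.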

Automatic Floating-Point to Fixed-Point Conversion for Speech Recognition in Embedded Device (임베디드 디바이스에서 음성 인식 알고리듬 구현을 위한 부동 소수점 연산의 고정 소수점 연산 변환 기법)

  • Yun, Sung-Rack; Yoo, Chang-D.
    • Proceedings of the IEEK Conference / 2007.07a / pp.305-306 / 2007
  • This paper proposes a method for automatically converting floating-point computations to fixed-point computations in order to implement automatic speech recognition (ASR) algorithms on embedded devices.
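
The basic float-to-fixed mapping behind such conversions can be sketched with Q-format quantization: scale by 2^n, round, and saturate to the word width. This is a generic illustration of the representation, not the paper's automatic conversion algorithm.

```python
import numpy as np

def to_fixed(x, frac_bits=15, word_bits=16):
    """Quantize float values to Qm.n fixed-point integers: scale, round, saturate."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(np.asarray(x) * scale), lo, hi).astype(np.int32)

def to_float(q, frac_bits=15):
    """Map fixed-point integers back to floats for error measurement."""
    return np.asarray(q, dtype=np.float64) / (1 << frac_bits)

x = np.array([0.5, -0.25, 0.999, -1.0])
q = to_fixed(x)                   # Q1.15 representation in 16-bit words
err = np.abs(x - to_float(q))     # rounding error is bounded by half an LSB (2**-16)
print(q, err.max())
```

Choosing `frac_bits` per variable so that the dynamic range of intermediate ASR computations never saturates is exactly the hard part an automatic conversion method has to solve.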


An automatic pronunciation evaluation system using non-native teacher's speech model (비원어민 교수자 음성모델을 이용한 자동발음평가 시스템)

  • Park, Hye-bin; Kim, Dong Heon; Joung, Jinoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.2 / pp.131-136 / 2016
  • Appropriate evaluation of a learner's pronunciation is an important part of foreign language education: learners should be evaluated and receive proper feedback to improve their pronunciation. Because human evaluation suffers from cost and consistency problems, automatic pronunciation evaluation systems have been studied, and most current systems build on underlying Automatic Speech Recognition (ASR) technology. In this work, we propose evaluating a learner's pronunciation accuracy and fluency at the word level using ASR and a non-native teacher's speech model. Performance evaluation of our system confirms that the overall accuracy and fluency scores represent the learner's English skill level quite accurately.

A Study on Voice Web Browsing in JAVA Beans Component Architecture Automatic Speech Recognition Application System. (JAVABeans Component 구조를 갖는 음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 장준식; 윤재석
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.273-276 / 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transition from today's GUI-centered web services to VUI-centered web services. Because ASP imposes restrictions on speed and implementation on the web, an Automatic Speech Recognition Application System with a JavaBeans component architecture is devised and studied. A voice web browser that can transfer voice and graphic information simultaneously is also studied, using Remote AWT (Abstract Window Toolkit).


Parameter Considering Variance Property for Speech Recognition in Noisy Environment (잡음환경에서의 음성인식을 위한 변이특성을 고려한 파라메터)

  • Park, Jin-Young; Lee, Kwang-Seok; Koh, Si-Young; Hur, Kang-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.469-472 / 2005
  • This paper proposes speech feature parameters that are robust to the effects of noise when building a speech recognition system. We consider MFCC, the basic parameter widely used in ASR (Automatic Speech Recognition), and DCTCs, which apply the DCT to the basic parameter. We also propose delta-cepstrum and delta-delta-cepstrum parameters, which reconstruct the cepstrum to carry information about the temporal variation of speech, and compare recognition performance using an HMM. The LDA algorithm is applied to reduce the dimension of each parameter, and recognition is compared again. The results show that the dimension-reduced delta-delta-cepstrum parameter obtained with LDA improves recognition performance over existing parameters under various noise conditions.
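
Delta and delta-delta cepstra are conventionally computed by linear regression over a window of neighboring frames. The sketch below uses the common HTK-style regression formula with toy data; the window size and the 13-dimensional cepstra are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def delta(features, N=2):
    """Regression-based delta features over a window of +/-N frames."""
    T = len(features)
    padded = np.pad(features, ((N, N), (0, 0)), mode="edge")  # repeat edge frames
    denom = 2 * sum(n * n for n in range(1, N + 1))
    return np.array([
        sum(n * (padded[t + N + n] - padded[t + N - n]) for n in range(1, N + 1)) / denom
        for t in range(T)
    ])

# Toy stream: 10 frames of 13-dim "cepstra"; delta-delta is the delta of the delta stream.
cep = np.random.default_rng(1).standard_normal((10, 13))
d1 = delta(cep)           # delta-cepstrum
d2 = delta(d1)            # delta-delta-cepstrum
features = np.hstack([cep, d1, d2])   # 39-dim feature vector per frame
print(features.shape)                  # (10, 39)
```

Because the deltas capture frame-to-frame variation rather than absolute spectral shape, they tend to be less sensitive to stationary additive noise, which is the motivation the abstract describes.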


Deep learning-based speech recognition for Korean elderly speech data including dementia patients (치매 환자를 포함한 한국 노인 음성 데이터 딥러닝 기반 음성인식)

  • Jeonghyeon Mun; Joonseo Kang; Kiwoong Kim; Jongbin Bae; Hyeonjun Lee; Changwon Lim
    • The Korean Journal of Applied Statistics / v.36 no.1 / pp.33-48 / 2023
  • In this paper, we consider automatic speech recognition (ASR) for Korean speech data in which elderly persons randomly speak a sequence of words, such as animals and vegetables, for one minute. Most of the speakers are over 60 years old, and some are dementia patients. The goal is to compare deep learning-based ASR models on such data and to find models with good performance. ASR is a technology that recognizes spoken words and converts them into written text by computer, and many deep learning models with good performance have recently been developed for it. Training data for such models mostly consist of sentences, and their speakers can usually pronounce them accurately. In our data, however, most speakers are over the age of 60, often pronounce words incorrectly, and say a random series of words rather than sentences for one minute. Pre-trained models based on typical training data may therefore not be suitable, so we train deep learning-based ASR models from scratch on our data. We also apply data augmentation methods because of the small data size.
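
Two augmentation methods commonly used to stretch a small speech dataset are additive noise at a controlled SNR and speed perturbation. The sketch below shows both on a toy waveform; the abstract does not specify which augmentations the authors used, so treat these as representative examples rather than their pipeline.

```python
import numpy as np

def add_noise(wave, snr_db, rng):
    """Mix white noise into the waveform at a target signal-to-noise ratio (in dB)."""
    noise = rng.standard_normal(len(wave))
    sig_power = np.mean(wave ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return wave + scale * noise

def speed_perturb(wave, factor):
    """Resample by linear interpolation: factor > 1 speeds up (shortens) the signal."""
    idx = np.arange(0, len(wave), factor)
    return np.interp(idx, np.arange(len(wave)), wave)

rng = np.random.default_rng(0)
wave = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 16000))   # toy 1-second signal at 16 kHz
noisy = add_noise(wave, snr_db=10, rng=rng)
fast = speed_perturb(wave, factor=1.1)                    # ~10% shorter
print(len(wave), len(fast))
```

Each augmented copy is a new training example with the same transcript, which is what makes augmentation effective when paired data is scarce.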

Using speech enhancement parameter for ASR (잡음환경의 ASR 성능개선을 위한 음성강조 파라미터)

  • Cha, Young-Dong; Kim, Young-Sub; Hur, Kang-In
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2006.06a / pp.63-66 / 2006
  • Speech recognition systems offer the convenience of operating a system by voice alone, without additional equipment, but various technical difficulties and low recognition rates in real environments have kept them from widespread use. Background noise in particular is cited as a cause of degraded recognition accuracy. To improve the performance of ASR (Automatic Speech Recognition) in such noisy environments, we propose a feature parameter augmented with a lateral inhibition function. Using an HMM, we compared the recognition rate of MFCC, a parameter widely used in ASR, with that of the proposed parameter. The experimental results confirm that the proposed parameter improves ASR performance in noisy environments.
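
Lateral inhibition, borrowed from auditory modeling, sharpens spectral peaks by subtracting a fraction of the energy in neighboring channels. The sketch below is one common formulation applied to a toy mel filterbank; the exact inhibition kernel and where it enters the MFCC pipeline in the paper are not specified in the abstract, so the weights here are illustrative.

```python
import numpy as np

def lateral_inhibition(filterbank, alpha=0.3):
    """Sharpen spectral peaks: subtract a fraction of adjacent channel energy."""
    out = filterbank.copy()
    out[:, 1:-1] -= alpha * 0.5 * (filterbank[:, :-2] + filterbank[:, 2:])
    return np.maximum(out, 0.0)   # keep channel energies non-negative

# Toy input: 4 frames x 8 mel channels with a spectral peak at channel 3.
fb = np.full((4, 8), 1.0)
fb[:, 3] = 5.0
sharp = lateral_inhibition(fb)
print(sharp[0])   # the peak at channel 3 stays prominent; its neighbors are suppressed
```

By boosting peak-to-valley contrast, the processed channels are less easily swamped by broadband background noise before the cepstral transform.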


Implementation of Chip and Algorithm of a Speech Enhancement for an Automatic Speech Recognition Applied to Telematics Device (텔레메틱스 단말용 음성 인식을 위한 음성향상 알고리듬 및 칩 구현)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.5 / pp.90-96 / 2008
  • This paper presents a single-chip acoustic speech enhancement algorithm for telematics devices. The algorithm consists of two stages: noise reduction and echo cancellation. An adaptive filter based on cross-spectral estimation cancels the echo, while external background noise is eliminated and clean speech is estimated using MMSE log-spectral magnitude estimation. To be suitable for consumer electronics, we also design a low-cost, high-speed, and flexible hardware architecture. The performance of the proposed speech enhancement algorithm was measured by both signal-to-noise ratio (SNR) and the recognition accuracy of an automatic speech recognition (ASR) system, and it yields better results than conventional methods.
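
The echo cancellation stage can be pictured with a time-domain adaptive filter that learns the echo path and subtracts the estimated echo from the microphone signal. The paper uses an adaptive filter based on cross-spectral estimation; the NLMS filter below is a different but standard adaptive scheme, used here purely to illustrate the echo-cancellation principle on a toy echo path.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate the echo path, subtract the echo."""
    w = np.zeros(taps)              # adaptive estimate of the echo path
    x = np.zeros(taps)              # sliding window of far-end samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = far_end[n]
        e = mic[n] - w @ x          # error = mic signal minus estimated echo
        w += mu * e * x / (x @ x + eps)   # normalized gradient update
        out[n] = e
    return out

rng = np.random.default_rng(0)
far = rng.standard_normal(4000)              # far-end (loudspeaker) signal
path = np.array([0.6, 0.3, -0.2])            # toy 3-tap echo path
mic = np.convolve(far, path)[:4000]          # microphone picks up only the echo here
residual = nlms_echo_cancel(far, mic)
# After adaptation, residual echo energy drops far below the original echo energy.
print(np.mean(mic ** 2), np.mean(residual[-500:] ** 2))
```

In a full system the residual then feeds the MMSE log-spectral noise-reduction stage, so the two stages operate in cascade.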
