• Title/Summary/Keyword: Speech-to-Text


Fast offline transformer-based end-to-end automatic speech recognition for real-world applications

  • Oh, Yoo Rhee;Park, Kiyoung;Park, Jeon Gue
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.476-490
    • /
    • 2022
  • With the recent advances in technology, automatic speech recognition (ASR) has been widely used in real-world applications. The efficiency of converting large amounts of speech into text accurately with limited resources has become more vital than ever. In this study, we propose a method to rapidly recognize a large speech database via a transformer-based end-to-end model. Transformers have improved the state-of-the-art performance in many fields. However, they are not easy to use for long sequences. In this study, various techniques for accelerating the recognition of real-world speech are proposed and tested, including decoding via multiple-utterance-batched beam search, detecting the end of speech based on connectionist temporal classification (CTC), restricting the CTC-prefix score, and splitting long speech into short segments. Experiments are conducted on the Librispeech dataset and real-world Korean ASR tasks to verify the proposed methods. The proposed system can convert 8 h of speech from real-world meetings into text in less than 3 min with a 10.73% character error rate, which is 27.1% relatively lower than that of conventional systems.
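
The abstract lists several acceleration techniques. Below is a minimal editorial sketch (not from the paper) of one of them, CTC-based end-of-speech detection, assuming frame-level CTC posteriors are available as a (T, V) array; the threshold values are illustrative only.

```python
# Editorial sketch: CTC-based end-of-speech detection. The first posterior
# column is assumed to be the blank symbol; thresholds are illustrative.
import numpy as np

def detect_eos(ctc_posteriors: np.ndarray,
               blank_id: int = 0,
               blank_threshold: float = 0.98,
               min_trailing_frames: int = 50) -> bool:
    """Return True when the tail of the utterance is dominated by CTC blanks,
    i.e. the spoken content has almost certainly ended."""
    blank_probs = ctc_posteriors[:, blank_id]
    if len(blank_probs) < min_trailing_frames:
        return False
    return bool(np.all(blank_probs[-min_trailing_frames:] > blank_threshold))

# Toy usage: 200 frames of near-certain blanks trigger end of speech.
posteriors = np.full((200, 30), 0.001)
posteriors[:, 0] = 0.999
print(detect_eos(posteriors))  # True
```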

Development of Text-to-Speech System for PC (PC용 Text-to-Speech 시스템 개발)

  • Choi Muyeol;Hwang Cholgyu;Kim Soontae;Kim Junggon;Yi Sopae;Jang Seokbok;Pyo Kyungnan;Ahn Hyesun;Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.41-44
    • /
    • 1999
  • In this paper, we develop a high-quality Korean text-to-speech (TTS) synthesis system for PC applications. The system adopts a sinusoidal-model-based synthesis method, which has advantages over the conventional PSOLA approach in pitch control, concatenation of adjacent sounds, and voice-quality control, and it uses classification and regression trees (CART), a statistical technique, for natural prosody modeling. To reduce discontinuities at phoneme boundaries, initial-plus-medial (onset-nucleus) and final-consonant (coda) units are used as synthesis units, and a voice-quality control function is provided so that various voice characteristics can be expressed. In addition, the system is implemented as a TTS engine conforming to the standard Speech Application Program Interface (SAPI), which makes it easier to develop application programs on the PC. Listening evaluations of the synthesized speech confirmed its high quality and the effectiveness of the voice-quality control function.

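As a rough illustration of the CART-based prosody modelling mentioned in the abstract, the sketch below uses scikit-learn's CART implementation to predict phone durations from contextual features; the feature set, data, and values are invented for illustration and are not the paper's.

```python
# Editorial sketch: CART-based prosody modelling, predicting phone duration
# from invented contextual features with scikit-learn's CART trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical features: [relative position in phrase, phrase-final flag, vowel flag]
X = np.array([[0.10, 0, 1],
              [0.50, 0, 0],
              [0.90, 1, 1],
              [0.95, 1, 0]])
# Toy target: phone duration in milliseconds.
y = np.array([80.0, 60.0, 140.0, 120.0])

cart = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Phrase-final vowels are predicted to lengthen, which is the kind of prosodic
# regularity the CART module in the TTS system above is meant to capture.
print(cart.predict([[0.9, 1, 1]]))
```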

Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Part-of-Speech Tagged Corpus (품사 부착 말뭉치를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선)

  • Lim, Min-Kyu;Kim, Kwang-Ho;Kim, Ji-Hwan
    • MALSORI
    • /
    • no.67
    • /
    • pp.181-193
    • /
    • 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a part-of-speech (POS) tagged corpus. We investigate the 152 POS tags defined in the Lancaster-Oslo-Bergen (LOB) corpus and word-POS tag pairs. We derive a new vocabulary through word addition. Words paired with some POS tags have to be included in vocabularies of any size, whereas the inclusion of words paired with other POS tags depends on the target vocabulary size. The 152 POS tags are categorized according to whether the word addition depends on the size of the vocabulary. Using expert knowledge, we classify the POS tags first and then apply different word-addition strategies based on the POS tags paired with the words. The performance of the proposed method is measured in terms of coverage and compared with those of vocabularies of the same size (5,000 words) derived from frequency lists. The coverage of the proposed method is 95.18% for the test short message service (SMS) text corpus, whereas the conventional vocabularies cover only 93.19% and 91.82% of the words appearing in the same SMS text corpus.

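A minimal sketch of the vocabulary-construction idea described above: word-POS pairs whose tags are treated as size-independent are always included, and the remaining slots are filled by frequency. The tag names and toy corpus are placeholders, not the LOB inventory or the paper's actual classification.

```python
# Editorial sketch of POS-dependent vocabulary construction.
from collections import Counter

ALWAYS_INCLUDE_TAGS = {"PRONOUN", "NEGATOR", "PARTICLE"}  # illustrative only

def build_vocabulary(tagged_corpus, target_size):
    """tagged_corpus: iterable of (word, pos_tag) pairs."""
    counts = Counter(tagged_corpus)
    # Pairs with size-independent tags go in regardless of vocabulary size.
    vocab = {pair for pair in counts if pair[1] in ALWAYS_INCLUDE_TAGS}
    # Fill the remaining slots with the most frequent pairs.
    for pair, _ in counts.most_common():
        if len(vocab) >= target_size:
            break
        vocab.add(pair)
    return vocab

corpus = [("i", "PRONOUN"), ("not", "NEGATOR"), ("to", "PARTICLE"),
          ("meeting", "NOUN"), ("meeting", "NOUN"), ("schedule", "NOUN")]
print(build_vocabulary(corpus, target_size=5))
```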

Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park Young-Hee;Chung Minhwa
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.83-86
    • /
    • 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Compared with written text corpora, Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. For style-based language model adaptation, we report two approaches. Our approaches focus on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 6.5% absolute and shows that n-gram-based relevance weighting captures the style difference well and that disfluencies are good predictors.

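The relevance weighting described above is based on n-gram tf*idf similarity between out-of-domain text and the in-domain conversational data. The sketch below approximates only that scoring step with scikit-learn; the subsequent weighting of n-gram counts in the adapted model is not reproduced, and the sentences are toy examples.

```python
# Editorial sketch of n-gram tf*idf relevance scoring: each out-of-domain
# document is weighted by its similarity to the in-domain conversational data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

in_domain = ["um I mean let's meet tomorrow", "yeah well that's fine I guess"]
out_of_domain = ["the committee approved the annual budget report",
                 "well um okay so we could maybe reschedule"]

tfidf = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(in_domain + out_of_domain)

# Relevance weight = maximum similarity to any in-domain document.
weights = cosine_similarity(tfidf[len(in_domain):], tfidf[:len(in_domain)]).max(axis=1)
for doc, w in zip(out_of_domain, weights):
    print(f"{w:.3f}  {doc}")
```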

Design and Implementation of a Speech Synthesis Engine and a Plug-in for Internet Web Page (인터넷 웹페이지의 음성합성을 위한 엔진 및 플러그-인 설계 및 구현)

  • Lee, Hee-Man;Kim, Ji-Yeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.461-469
    • /
    • 2000
  • In this paper, the design and implementation of a Netscape plug-in and a speech synthesis engine that generates speech from the text of web pages are described. The steps for generating speech from a web page are as follows: the speech synthesis plug-in is activated when Netscape finds the audio/xesp MIME data type embedded in the browsed web page; the HTML file referenced in the EMBED HTML tag is downloaded from the referenced URL and sent to the commander object located in the plug-in; the speech-synthesis control tags and the text characters are extracted from the downloaded HTML document by the commander object; and the synthesized speech is generated by the speech synthesis engine. The speech synthesis engine interprets the command streams from the commander object and calls the member functions that process the speech segment data in the data banks. The commander object and the speech synthesis engine are designed as independent objects to enhance flexibility and portability.

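A rough sketch of the commander-object step described above, extracting the plain text of a downloaded HTML page before it is handed to a synthesis engine; Python's standard html.parser stands in for the original plug-in code, and the control-tag handling is omitted.

```python
# Editorial sketch: strip markup from a downloaded HTML page so that only the
# text stream is passed on to the speech synthesis engine.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

page = "<html><body><h1>News</h1><p>Hello, listeners.</p></body></html>"
extractor = TextExtractor()
extractor.feed(page)
print(" ".join(extractor.chunks))  # text handed to the synthesis engine
```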

Statistical Analysis Between Size and Balance of Text Corpus by Evaluation of the effect of Interview Sentence in Language Modeling (언어모델 인터뷰 영향 평가를 통한 텍스트 균형 및 사이즈간의 통계 분석)

  • Jung Eui-Jung;Lee Youngjik
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.87-90
    • /
    • 2002
  • This paper statistically analyzes the relationship between the size and balance of a text corpus by evaluating the effect of interview sentences on the language model for a Korean broadcast news transcription system. The ultimate purpose of our Korean broadcast news transcription system is to recognize not interview speech but the anchors' and reporters' speech in broadcast news shows. However, interview sentences account for approximately 15% of the text corpus gathered for constructing the language model. The characteristics of interview sentences differ in several respects from those of anchor and reporter speech, and therefore they disturb anchor- and reporter-oriented language modeling. In this paper, we evaluate the effect of interview sentences on the language model for the Korean broadcast news transcription system and statistically analyze the relationship between the size and balance of the text corpus by repeating the same experimental procedure while varying the corpus size.

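A small sketch of the evaluation idea described above: compare language models trained with and without the interview portion on anchor/reporter-style test text. A unigram model with add-one smoothing stands in for the paper's actual n-gram language model, and all sentences are toy examples.

```python
# Editorial sketch: perplexity of a toy unigram LM trained on corpora with and
# without interview-style text, evaluated on anchor/reporter-style sentences.
from collections import Counter
import math

def unigram_perplexity(train_tokens, test_tokens):
    counts = Counter(train_tokens)
    vocab_size = len(set(train_tokens) | set(test_tokens))
    total = sum(counts.values())
    log_prob = sum(math.log((counts[w] + 1) / (total + vocab_size)) for w in test_tokens)
    return math.exp(-log_prob / len(test_tokens))

anchor_text = "the government announced the new policy today".split()
interview_text = "well you know I just think it was really hard".split()
test_text = "the minister announced the policy".split()

for name, corpus in [("anchor only", anchor_text),
                     ("anchor + interview", anchor_text + interview_text)]:
    print(name, round(unigram_perplexity(corpus, test_text), 2))
```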

Text-Prompt Speaker Verification using Variable Threshold and Sequential Decision (가변 문턱치와 순차결정법을 통한 문맥요구형 화자확인)

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.41-47
    • /
    • 2000
  • This paper concerns an effective text-prompted speaker verification method for improving verification performance. While various speaker verification methods have already been developed, their effectiveness has not yet been formally proven in terms of achieving an acceptable performance level. It is also noted that traditional methods focused primarily on a single prompted utterance for verification. This paper instead proposes a sequential decision method with a variable threshold that handles two utterances for text-prompted speaker verification. Experimental results show that the proposed method outperforms a verification scheme without the sequential decision by a factor of up to three. These results show that the proposed method is highly effective and achieves reliable performance suitable for practical applications.

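A minimal sketch of a two-utterance sequential decision with a widened threshold band, in the spirit of the method described above; the paper's exact decision rule, thresholds, and score scale may differ.

```python
# Editorial sketch: accept or reject early when the first score is clearly
# outside the threshold band; otherwise decide on the combined score of both
# prompted utterances.
def sequential_verify(score1, score2=None, theta=0.0, margin=0.5):
    if score1 > theta + margin:           # confident accept on the first utterance
        return True
    if score1 < theta - margin:           # confident reject on the first utterance
        return False
    if score2 is None:
        return None                       # undecided: prompt a second utterance
    return (score1 + score2) / 2 > theta  # final decision on the combined score

print(sequential_verify(0.9))             # True  (accepted after one utterance)
print(sequential_verify(0.1))             # None  (second utterance required)
print(sequential_verify(0.1, -0.3))       # False (combined score below threshold)
```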

A Study on Phased Reading Techniques of Mathematical Expression in the Digital Talking Book (디지털 음성 도서에서 MathML 수식의 수준별 독음 변환 기법)

  • Hwang, Jungsoo;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.8
    • /
    • pp.1025-1032
    • /
    • 2014
  • Until now, there has been little support for reading mathematical expressions aloud, other than text-based expressions, so it is important to provide such reading. There are also various obstacles for people who are not visually impaired, such as presbyopia or reading mathematical expressions while in a vehicle. Therefore, support for reading mathematical expressions in various situations is needed. In previous research, the main goal was to transform mathematical expressions into Korean text based on Content MathML. In this paper, we expand the scope of the research from people with reading disabilities to those without. We tested the appropriateness of the rules we devised for converting MathML-based expressions into speech and defined three level-based math-to-speech rules in Korean. We implemented the reading of mathematical expressions using these three rules and conducted a comprehension test to verify whether the rules are well defined.
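
A minimal sketch of converting a Content MathML fragment into readable text, in the spirit of the phased reading rules described above; the output is shown in English rather than Korean, and only a few operators are covered for illustration.

```python
# Editorial sketch: read a small Content MathML fragment aloud as text.
import xml.etree.ElementTree as ET

OPERATOR_WORDS = {"plus": "plus", "times": "times", "power": "to the power of"}

def mathml_to_speech(node):
    tag = node.tag.split("}")[-1]                 # drop any XML namespace
    if tag == "apply":
        op, *args = list(node)
        word = OPERATOR_WORDS[op.tag.split("}")[-1]]
        return f" {word} ".join(mathml_to_speech(a) for a in args)
    if tag in ("ci", "cn"):                       # identifiers and numbers
        return node.text.strip()
    raise ValueError(f"unsupported element: {tag}")

expr = ET.fromstring("<apply><power/><ci>x</ci><cn>2</cn></apply>")
print(mathml_to_speech(expr))  # "x to the power of 2"
```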

A Design and Implementation of The Deep Learning-Based Senior Care Service Application Using AI Speaker

  • Mun Seop Yun;Sang Hyuk Yoon;Ki Won Lee;Se Hoon Kim;Min Woo Lee;Ho-Young Kwak;Won Joo Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.4
    • /
    • pp.23-30
    • /
    • 2024
  • In this paper, we propose a deep learning-based personalized senior care service application. For user convenience, the proposed application uses speech-to-text technology to convert the user's speech into text, which is used as input to Autogen, an interactive multi-agent large language model framework developed by Microsoft. Autogen uses data from previous conversations between the senior and the chatbot to understand the user's intent and generate a response, and then uses a back-end agent to create a wish list, a shared calendar, and a greeting message in the other user's voice through a deep learning model for voice cloning. Additionally, the application can perform home IoT services with SKT's AI speaker (NUGU). The proposed application is expected to contribute to future AI-based senior care technology.
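
A high-level sketch of the pipeline the abstract describes: transcribe speech, pass the text to a conversational agent, and render the reply as cloned-voice audio. Every function below is a hypothetical placeholder; the actual stack (Autogen agents, the voice-cloning model, and the NUGU speaker integration) is not reproduced.

```python
# Editorial sketch of the overall flow only; all functions are placeholders.
def transcribe(audio_path: str) -> str:
    # Placeholder for the speech-to-text step.
    return "please add a doctor's appointment for Friday"

def generate_reply(user_text: str, history: list) -> str:
    # Placeholder for the multi-agent LLM that interprets the senior's intent
    # and updates the wish list / shared calendar on the back end.
    return "I have added the appointment to Friday's calendar."

def synthesize(text: str, voice_profile: str) -> str:
    # Placeholder for voice-cloning text-to-speech; returns a label, not audio.
    return f"[audio in the voice of {voice_profile}: {text}]"

history = []
user_text = transcribe("turn_001.wav")
reply = generate_reply(user_text, history)
history.extend([user_text, reply])
print(synthesize(reply, "registered family member"))
```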