Speech development


Development of Speech Recognition and Synthetic Application for the Hearing Impairment (청각장애인을 위한 음성 인식 및 합성 애플리케이션 개발)

  • Lee, Won-Ju;Kim, Woo-Lin;Ham, Hye-Won;Yun, Sang-Un
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.129-130 / 2020
  • This paper presents the implementation of an Android application system for communication by people with hearing impairments. Using the Speech-to-Text (STT) API of the Google Cloud Platform, the content of a conversation is output as text through speech recognition, and text is output as speech through speech synthesis using Text-to-Speech (TTS). In addition, a foreground Service uses the accelerometer sensor so that the application launches when the smartphone is shaken two or three times, improving the usability of the application.

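
The shake-to-launch trigger described above can be pictured as counting acceleration spikes within a short sliding window. A minimal sketch in Python, assuming threshold values and a `(timestamp, ax, ay, az)` sample layout that are illustrative only, not from the paper:

```python
# Hypothetical sketch of shake detection: count acceleration spikes
# within a short window and fire once the required number is seen.
# All constants here are assumptions, not the paper's values.
import math

SHAKE_THRESHOLD = 15.0   # m/s^2, above resting gravity (~9.8)
WINDOW_SECONDS = 1.5     # shakes must occur within this window
REQUIRED_SHAKES = 2      # "shaken two or three times" per the abstract

def detect_shake(samples):
    """samples: list of (timestamp_s, ax, ay, az) accelerometer readings.
    Returns True once REQUIRED_SHAKES spikes fall within WINDOW_SECONDS."""
    spike_times = []
    above = False
    for t, ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > SHAKE_THRESHOLD and not above:
            above = True            # rising edge counts as one shake
            spike_times.append(t)
            # keep only spikes inside the sliding window
            spike_times = [s for s in spike_times if t - s <= WINDOW_SECONDS]
            if len(spike_times) >= REQUIRED_SHAKES:
                return True
        elif magnitude < SHAKE_THRESHOLD * 0.8:
            above = False           # hysteresis avoids double counting
    return False
```

In a real Android app this logic would run inside the foreground Service's sensor callback; the sketch only shows the counting idea.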

A Study on Dialect Expression in Korean-Based Speech Recognition (한국어 기반 음성 인식에서 사투리 표현에 관한 연구)

  • Lee, Sin-hyup
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.333-335 / 2022
  • Speech recognition processing technology, together with STT and TTS, has been applied in various video and streaming services. However, dialect use and the overlap of stop words, exclamations, and similar-sounding words in actual conversation raise high barriers to producing clear written transcriptions. For ambiguous dialect input, this study proposes a speech recognition approach that applies a category-based dialect keyword dictionary and incorporates dialect prosody as a property of the speech recognition network model.

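
The category-based keyword dictionary can be pictured as a lookup pass over recognized tokens. The sketch below is an invented illustration of that idea only; the function name, categories, and word pairs are placeholders, not the paper's dictionary:

```python
# Hypothetical dictionary pass: map dialect tokens to standard forms,
# searching each category's sub-dictionary in turn. The entries below
# are romanized placeholder pairs, not real corpus data.
def normalize_dialect(tokens, dialect_dict):
    """Replace dialect tokens with standard forms; unknown tokens
    pass through unchanged."""
    out = []
    for tok in tokens:
        standard = None
        for category, mapping in dialect_dict.items():
            if tok in mapping:
                standard = mapping[tok]
                break
        out.append(standard if standard is not None else tok)
    return out
```

The prosody-as-model-property part of the proposal has no simple analogue at this level and is not sketched.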

Development of the algorithm for Korean vowel recognition (한국어 인식을 위한 알고리즘의 개발)

  • Ahn, Chang;Chin, Sang-Hyun;Rhee, Sang-Burm
    • Proceedings of the KIEE Conference / 1988.07a / pp.620-623 / 1988
  • Vowel recognition rests on phoneme recognition, so an algorithm must be programmed to achieve speech recognition at that level. In this paper, the cepstrum is used for the voiced/unvoiced decision, and speech parameters are extracted by linear predictive coding. Using these parameters, a speech understanding algorithm has been developed.

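
Linear predictive coding of the kind this abstract relies on is classically computed from the frame's autocorrelation via the Levinson-Durbin recursion. A pure-Python sketch under that standard formulation (frame handling and model order are assumptions, not the paper's settings):

```python
# Textbook LPC parameter extraction: autocorrelation followed by the
# Levinson-Durbin recursion. Illustrative only; real systems window
# the frame and pick the order from the sampling rate.
def autocorr(frame, lag):
    return sum(frame[n] * frame[n - lag] for n in range(lag, len(frame)))

def lpc(frame, order):
    """Return LPC coefficients a[1..order] and the residual error,
    so that x[n] is predicted as sum_j a[j] * x[n-j]."""
    r = [autocorr(frame, k) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                  # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)           # shrink residual prediction error
    return a[1:], e
```

For a first-order autoregressive signal the recovered coefficient approaches the generating pole, which is a quick sanity check on the recursion.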

Development of a Baseline Platform for Spoken Dialog Recognition System (대화음성인식 시스템 구현을 위한 기본 플랫폼 개발)

  • Chung Minhwa;Seo Jungyun;Lee Yong-Jo;Han Myungsoo
    • Proceedings of the KSPS conference / 2003.05a / pp.32-35 / 2003
  • This paper describes our recent work on developing a baseline platform for Korean spoken dialog recognition. We have collected a speech corpus of about 65 hours with auditory transcriptions. Linguistic information on various levels, such as morphology, syntax, semantics, and discourse, is attached to the speech database using automatic or semi-automatic tagging tools.


Development of a Reading Training Software offering Visual-Auditory Cue for Patients with Motor Speech Disorder (말운동장애인을 위한 시-청각 단서 제공 읽기 훈련 프로그램 개발)

  • Bang, D.H.;Jeon, Y.Y.;Yang, D.G.;Kil, S.K.;Kwon, M.S.;Lee, S.M.
    • Journal of Biomedical Engineering Research / v.29 no.4 / pp.307-315 / 2008
  • In this paper, we developed software that provides visual-auditory cues for the reading training of patients with motor speech disorders. Patients can use the visual and/or auditory cues to train their reading and improve their symptoms. The software presents sentences with visual-auditory cues; the training sentences were composed for modulation training according to professional advice from the speech therapy field. To improve reading training we developed two algorithms: one automatically finds the starting time of the patient's speech, and the other removes the auditory cue from speech recorded at the same time. The speech start detection was tested on 10 sentences from each of 6 subjects in four noisy environments, yielding a detection error of 7.042 ± 8.99 ms. The auditory-cue cancellation algorithm was tested on one-syllable utterances from 6 subjects and improved the speech recognition rate by 25 ± 9.547% over the uncancelled recordings. User satisfaction with the developed program was rated as good.
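
The first algorithm, finding the speech starting time, is commonly done with a short-time energy threshold. A hedged sketch of that generic approach (the paper's actual method, frame size, and threshold are not specified here):

```python
# Energy-based speech start detection: compare each frame's energy
# to a noise estimate from the first frame, assumed silent.
# Frame length and threshold ratio are illustrative assumptions.
def find_speech_start(samples, rate, frame_ms=10, threshold_ratio=4.0):
    """Return the time in seconds of the first frame whose energy
    exceeds threshold_ratio times the first frame's energy, or None."""
    frame_len = max(1, rate * frame_ms // 1000)
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    noise_energy = sum(x * x for x in frames[0]) + 1e-12
    for idx, frame in enumerate(frames):
        energy = sum(x * x for x in frame)
        if energy > threshold_ratio * noise_energy:
            return idx * frame_len / rate
    return None
```

Millisecond-scale error figures like the one reported above come from comparing this detected time with a hand-labeled onset.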

Convergent Analysis on the Speech Sound of Typically Developing Children Aged 3 to 5 : Focused on Word Level and Connected Speech Level (3-5세 일반아동의 말소리에 대한 융합적 분석: 단어와 자발화를 중심으로)

  • Kim, Yun-Joo;Park, Hyun-Ju
    • Journal of the Korea Convergence Society / v.9 no.6 / pp.125-132 / 2018
  • This study investigated the speech sound production characteristics and evaluation aspects of preschool children through a word test and a connected speech test. The authors administered the Assessment of Phonology and Articulation for Children (APAC) to 72 typically developing children (24 each of three-, four-, and five-year-olds) and analyzed differences in percentage of consonants correct (PCC) and intelligibility by age and sex, the correlation between PCC and intelligibility, and speech sound error patterns. PCC and intelligibility increased with age, but there was no difference by sex. The correlation was statistically significant in the 5-year-old group. Speech sound error patterns differed between the two tests. The study showed that children's speech sound production varied with the language unit, so both types of tests should be used to assess speech sound production ability properly. This suggests that the current standard of identifying language impairment by word-level PCC alone requires review and further study.
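
PCC in this literature is ordinarily the number of correctly produced consonants over the number of target consonants, times 100. A minimal sketch under that assumption (the function name and position-by-position comparison are illustrative simplifications):

```python
# Percentage of consonants correct (PCC), in its simplest form:
# correct target consonants divided by all target consonants.
def percent_consonants_correct(target_consonants, produced_consonants):
    """Compare produced consonants to targets position by position;
    clinical scoring also handles insertions/deletions via alignment."""
    correct = sum(1 for t, p in zip(target_consonants, produced_consonants)
                  if t == p)
    return 100.0 * correct / len(target_consonants)
```

Real scoring aligns target and production first, since omissions shift positions; the sketch assumes equal-length, aligned transcriptions.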

Digital Speech Coding Technologies for Wire and Wireless Communication (유무선망에서 사용되는 디지털 음성 부호화 기술 동향)

  • Yoon, Byungsik;Choi, Songin;Kang, Sangwon
    • Journal of Broadcast Engineering / v.10 no.3 / pp.261-269 / 2005
  • Throughout the history of digital communication, digital speech coders have been used as speech compression tools. Speech coders have developed rapidly in mobile communication systems to overcome severe channel errors and the limits of radio frequency resources. The development of high-performance communication systems demands high-quality speech coders, which can be used not only in communication services but also in digital multimedia services. In this paper, we describe digital speech coding technologies used in wire and wireless communication, summarize recent speech coding standards for narrowband and wideband applications, and introduce technical trends of the next generation of speech coders.

A Training Method for Emotionally Robust Speech Recognition using Frequency Warping (주파수 와핑을 이용한 감정에 강인한 음성 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.4 / pp.528-533 / 2010
  • This paper studied training methods that are less affected by emotional variation, toward the development of a robust speech recognition system. For this purpose, the effect of emotional variation on the speech signal and on the recognition system was studied using a speech database containing various emotions. The performance of a recognizer trained on speech without emotion deteriorates when the test speech contains emotion, because of the emotional mismatch between test and training data. We observed that the speaker's vocal tract length is affected by emotional variation and that this effect is one reason recognition performance worsens. We therefore propose a training method that covers these speech variations to build an emotionally robust recognizer. Isolated-word recognition experiments using HMMs showed that the proposed method reduced the error rate of the conventional system by 28.4% on emotional test data.
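
Since the abstract ties the mismatch to vocal tract length, the frequency warping involved is of the kind used for vocal tract length normalization: a (piecewise-)linear rescaling of the frequency axis. A sketch of the common piecewise-linear form, with breakpoint and bandwidth values that are conventional defaults rather than the paper's parameters:

```python
# Piecewise-linear VTLN-style frequency warp: scale frequencies by
# alpha up to a breakpoint, then interpolate so the band edge maps
# to itself. f_max and f_break are illustrative defaults.
def warp_frequency(f, alpha, f_max=8000.0, f_break=0.875):
    """Warp frequency f (Hz) by factor alpha, keeping f_max fixed."""
    f0 = f_break * f_max
    if f <= f0:
        return alpha * f
    # line from (f0, alpha * f0) to (f_max, f_max)
    slope = (f_max - alpha * f0) / (f_max - f0)
    return alpha * f0 + slope * (f - f0)
```

Applying such warps with several alpha values to the training data is one standard way to make the trained models cover vocal-tract variation.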

Correlation analysis of antipsychotic dose and speech characteristics according to extrapyramidal symptoms (추체외로 증상에 따른 항정신병 약물 복용량과 음성 특성의 상관관계 분석)

  • Lee, Subin;Kim, Seoyoung;Kim, Hye Yoon;Kim, Euitae;Yu, Kyung-Sang;Lee, Ho-Young;Lee, Kyogu
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.367-374 / 2022
  • In this paper, a correlation analysis between speech characteristics and the dose of antipsychotic drugs was performed. To investigate the speech characteristics of extrapyramidal symptoms (EPS), a common voice-affecting side effect of antipsychotic drugs, a Korean extrapyramidal symptom speech corpus was constructed through sentence development. Speech patterns of the EPS and non-EPS groups were compared, and a strong correlation among speech features was found in the EPS group. The type of spoken sentence was also confirmed to affect the speech feature pattern. These results suggest the possibility of early detection of antipsychotic-induced EPS based on speech features.
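
A correlation analysis of this kind reduces, at its core, to computing a correlation coefficient between a speech feature and the dose. A plain Pearson correlation sketch; the feature name and any numbers used with it are toy illustrations, not the study's data:

```python
# Pearson correlation coefficient between two equal-length series,
# e.g. a per-speaker speech feature and the corresponding drug dose.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near +1 or -1 would indicate the strong feature-dose relationships the abstract reports for the EPS group.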

Speech Data Collection for korean Speech Recognition (한국어 음성인식을 위한 음성 데이터 수집)

  • Park, Jong-Ryeal;Kwon, Oh-Wook;Kim, Do-Yeong;Choi, In-Jeong;Jeong, Ho-Young;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.14 no.4 / pp.74-81 / 1995
  • This paper describes the development of speech databases for the Korean language constructed at the Communications Research Laboratory at KAIST. The procedure and environment used to construct the databases are presented in detail, along with their phonetic and linguistic properties. The databases are intended for designing and evaluating speech recognition algorithms. They consist of five different sets of speech material: trade-related continuous speech with 3,000 words, variable-length connected digits, 75 phoneme-balanced isolated words, 500 isolated Korean provincial names, and Korean A-set words.
