• Title/Summary/Keyword: text-to-speech system


Speaker Recognition using PCA in Driving Car Environments (PCA를 이용한 자동차 주행 환경에서의 화자인식)

  • Yu, Ha-Jin
    • Proceedings of the KSPS conference / 2005.04a / pp.103-106 / 2005
  • The goal of our research is to build a text-independent speaker recognition system that can be used in any condition without any additional adaptation process. The performance of speaker recognition systems can be severely degraded under unknown, mismatched microphone and noise conditions. In this paper, we show that PCA (principal component analysis) without dimension reduction can greatly increase the performance, to a level close to that of the matched condition. The error rate is reduced further by the proposed augmented PCA, which augments the feature vectors of the most confusable pairs of speakers with an additional axis before PCA.
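
A minimal numpy sketch of the augmented-PCA idea from the abstract above; the feature dimension, offset value, and toy data are illustrative assumptions, not from the paper:

```python
import numpy as np

def pca_rotation(X):
    """Full-rank PCA: rotate features onto the principal axes without
    discarding any dimensions (no dimensionality reduction)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    return Xc @ eigvecs  # same dimension as the input

def augmented_pca(X, labels, confusable_pair, offset=1.0):
    """Augmented PCA as described in the abstract: append one extra axis
    that separates the most confusable pair of speakers, then apply the
    full-rank PCA rotation."""
    spk_a, spk_b = confusable_pair
    extra = np.zeros((X.shape[0], 1))
    extra[labels == spk_a] = +offset  # push the confusable pair apart
    extra[labels == spk_b] = -offset
    return pca_rotation(np.hstack([X, extra]))

# Toy usage: 100 feature vectors of dimension 12 from 4 speakers.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
labels = rng.integers(0, 4, size=100)
Y = augmented_pca(X, labels, confusable_pair=(1, 3))
print(Y.shape)  # (100, 13)
```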


Context-adaptive Phoneme Segmentation for a TTS Database (문자-음성 합성기의 데이터 베이스를 위한 문맥 적응 음소 분할)

  • Lee, Ki-Seung;Kim, Jeong-Su
    • The Journal of the Acoustical Society of Korea / v.22 no.2 / pp.135-144 / 2003
  • A method for the automatic segmentation of speech signals is described. The method is dedicated to the construction of a large database for a text-to-speech (TTS) synthesis system. The main issue of the work is the refinement of initial estimates of phone boundaries provided by an alignment based on a hidden Markov model (HMM). A multi-layer perceptron (MLP) was used as a phone boundary detector. To increase segmentation performance, a technique that trains an individual MLP for each phonetic transition class is proposed. The optimum partitioning of the entire phonetic transition space is constructed from the standpoint of minimizing the overall deviation from hand-labelled positions. With single-speaker stimuli, the experimental results showed that more than 95% of all phone boundaries deviate from the reference position by less than 20 ms, and that the refinement of the boundaries reduces the root mean square error by about 25%.
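
A hedged scikit-learn sketch of the boundary-refinement step: score every frame in a window around the HMM-aligned boundary with an MLP and keep the most boundary-like frame. The window size, feature dimension, and toy data are assumptions; the paper trains a separate MLP per phonetic-transition class, and a single detector stands in for that ensemble here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def refine_boundary(frames, init_idx, mlp, search=5):
    """Refine an HMM-aligned phone boundary: score each candidate frame
    in a +/-`search` window with the MLP boundary detector and return
    the index with the highest boundary probability."""
    lo = max(0, init_idx - search)
    hi = min(len(frames), init_idx + search + 1)
    probs = mlp.predict_proba(frames[lo:hi])[:, 1]  # P(boundary | frame)
    return lo + int(np.argmax(probs))

# Toy training data: 13-dim feature frames labelled boundary / non-boundary.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 13))
y_train = rng.integers(0, 2, size=500)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X_train, y_train)

frames = rng.normal(size=(100, 13))
print(refine_boundary(frames, init_idx=40, mlp=mlp))
```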

On the Development of Animated Tutoring Dialogue Agent for Elementary School Science Learning (초등과학 수업을 위한 애니메이션 기반 튜터링 다이얼로그 에이전트 개발)

  • Jeong, Sang-Mok;Han, Byeong-Rae;Song, Gi-Sang
    • Journal of The Korean Association of Information Education / v.9 no.4 / pp.673-684 / 2005
  • In this research, we developed a "computer tutor" that mimics a human tutor with an animated tutoring dialogue agent, and the agent was integrated into teaching-learning material for the elementary science subject. The developed system is a natural-language-based teaching-learning system using one-to-one dialogue. The pedagogical dialogue system analyzes a student's answer, compares it with elementary-school-level achievement standards, and then provides an appropriate answer or follow-up question. When the agent gives either a question or an answer, it uses a TTS (text-to-speech) function, and it presents an animated human tutor face for more human-like feedback. The developed dialogue interface was applied to 64 sixth-grade students. The test results show that the test group's average score is higher than the control group's by 10.797 points. This suggests that, unlike conventional web courseware, the "ask-answer" process and the animated character with a human tutor's emotional expression attract students and help them immerse themselves in the courseware.
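
The agent's spoken-feedback step could look like the following sketch; the paper does not name its TTS engine, so the offline pyttsx3 library stands in:

```python
import pyttsx3  # offline TTS engine; a stand-in for the paper's unnamed TTS module

def speak_feedback(text):
    """Speak the tutor's answer or follow-up question aloud."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak_feedback("Good answer! Now, why does ice float on water?")
```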


Singing Voice Synthesis Using HMM Based TTS and MusicXML (HMM 기반 TTS와 MusicXML을 이용한 노래음 합성)

  • Khan, Najeeb Ullah;Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information / v.20 no.5 / pp.53-63 / 2015
  • Singing voice synthesis is the generation of a song by a computer, given its lyrics and musical notes. Hidden Markov models (HMMs) have proved to be the models of choice for text-to-speech synthesis, and they have also been used in singing voice synthesis research; however, a huge database is needed to train HMMs for singing voice synthesis. Moreover, commercially available singing voice synthesis systems use piano-roll music notation, and adopting the easier-to-read standard music notation would make them more suitable for singing-learning applications. To overcome these problems, we use a speech database to train the context-dependent HMMs used for singing voice synthesis. Pitch and duration control methods are devised to modify the parameters of the HMMs trained on speech so that they can serve as synthesis units for the singing voice. This work describes a singing voice synthesis system which uses a MusicXML-based music score editor as the front-end interface for entering the notes and lyrics to be synthesized, and an HMM-based text-to-speech synthesis system as the back-end synthesizer. A perceptual test shows the feasibility of the proposed system.
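
A small sketch of the pitch/duration control idea: replace the speech-trained log-F0 mean with the score pitch and rescale the state durations so the phone fills the note length. The frame rate, state count, and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def midi_to_hz(note):
    """Convert a MIDI note number (from the MusicXML score) to F0 in Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def control_parameters(dur_means, midi_note, note_sec, frame_sec=0.005):
    """Pitch/duration control in the spirit of the abstract: derive the
    target log-F0 from the score pitch and rescale the speech-trained
    state duration means so their total matches the note length."""
    target_lf0 = np.log(midi_to_hz(midi_note))
    total_frames = note_sec / frame_sec
    scale = total_frames / np.sum(dur_means)
    return target_lf0, dur_means * scale

lf0, durs = control_parameters(dur_means=np.array([3.0, 10.0, 4.0]),
                               midi_note=67,   # G4 from the score
                               note_sec=0.5)
print(lf0, durs)
```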

Image Based Human Action Recognition System to Support the Blind (시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템)

  • Ko, ByoungChul;Hwang, Mincheol;Nam, Jae-Yeal
    • Journal of KIISE / v.42 no.1 / pp.138-143 / 2015
  • In this paper, we develop a novel human action recognition system, based on communication between an ear-mounted Bluetooth camera and an action recognition server, to aid scene recognition for the blind. First, when the blind user captures an image of a specific location with the ear-mounted camera, the captured image is transmitted to the recognition server through a smartphone synchronized with the camera. The recognition server sequentially performs human detection, object detection, and action recognition by analyzing human poses. The recognized action information is then sent back to the smartphone, and the user hears it through text-to-speech (TTS). Experimental results using the proposed system showed 60.7% action recognition performance on test data captured in indoor and outdoor environments.
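
A sketch of the smartphone-side flow under stated assumptions: the server endpoint and its JSON response format are hypothetical, and pyttsx3 stands in for the phone's TTS engine:

```python
import requests
import pyttsx3

SERVER_URL = "http://recognition-server.example/recognize"  # hypothetical endpoint

def recognize_and_speak(image_path):
    """Smartphone-side loop from the abstract: upload the camera image,
    receive the recognized action label, and read it out via TTS."""
    with open(image_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f}, timeout=10)
    action = resp.json().get("action", "unknown action")
    engine = pyttsx3.init()
    engine.say(f"The person is {action}")
    engine.runAndWait()

recognize_and_speak("scene.jpg")
```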

Rich Transcription Generation Using Automatic Insertion of Punctuation Marks (자동 구두점 삽입을 이용한 Rich Transcription 생성)

  • Kim, Ji-Hwan
    • MALSORI / no.61 / pp.87-100 / 2007
  • A punctuation generation system which combines prosodic information with acoustic and language model information is presented. Experiments were conducted first on reference text transcriptions. In these experiments, prosodic information was shown to be more useful than language model information. When these information sources are combined, an F-measure of up to 0.7830 was obtained for adding punctuation to a reference transcription. This method of punctuation generation can also be applied to the 1-best output of a speech recogniser. The 1-best output is first time-aligned. Based on the time alignment information, prosodic features are generated. As in the approach applied to punctuation generation for reference transcriptions, the best sequence of punctuation marks for the 1-best output is found using the prosodic feature model and a language model trained on texts which contain punctuation marks.
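
A minimal sketch of combining the two information sources at a single word boundary. The paper searches for the best punctuation sequence over the whole transcription; the per-boundary decision and the log-linear weight here are illustrative simplifications:

```python
import numpy as np

PUNCT = ["", ",", ".", "?"]

def best_punctuation(prosodic_logp, lm_logp, weight=0.6):
    """Log-linear combination of the prosodic-feature model and the
    punctuation language model; `weight` balances the two sources
    (the value is an illustrative assumption)."""
    combined = weight * prosodic_logp + (1.0 - weight) * lm_logp
    return PUNCT[int(np.argmax(combined))]

# Per-mark log-probabilities at one word boundary, from each model.
prosodic = np.log(np.array([0.55, 0.15, 0.25, 0.05]))
lm       = np.log(np.array([0.40, 0.10, 0.45, 0.05]))
print(repr(best_punctuation(prosodic, lm)))  # '' here; '.' wins if the LM weight dominates
```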


Design And Implementation of a Speech Recognition Interview Model based-on Opinion Mining Algorithm (오피니언 마이닝 알고리즘 기반 음성인식 인터뷰 모델의 설계 및 구현)

  • Kim, Kyu-Ho;Kim, Hee-Min;Lee, Ki-Young;Lim, Myung-Jae;Kim, Jeong-Lae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.1 / pp.225-230 / 2012
  • Opinion mining applies existing data mining techniques to text uploaded to the web, such as blog posts and product comments, to extract the author's opinion; it judges not the subject of a text but the sentiment expressed about that subject. In this paper, we propose judging emotions by applying published opinion mining algorithms to text converted from voice data through a speech recognition API. The system links the open Google Voice Recognition API with a ranking algorithm, determines polarity through an improved algorithm design, and on this basis designs and implements a speech recognition interview model.
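
A minimal lexicon-based polarity sketch as a stand-in for the paper's (unspecified) opinion mining algorithm; the lexicon entries and sample transcript are illustrative:

```python
# Toy sentiment lexicon: word -> polarity weight (illustrative values).
LEXICON = {"good": +1, "great": +2, "interesting": +1,
           "bad": -1, "boring": -2, "difficult": -1}

def polarity(transcript):
    """Score a speech-recognized interview answer: sum the word
    polarities and report positive / negative / neutral."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0)
                for w in transcript.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("The interview was interesting but a bit difficult."))  # neutral
```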

Voice-based Device Control Using oneM2M IoT Platforms

  • Jeong, Isu;Yun, Jaeseok
    • Journal of the Korea Society of Computer and Information / v.24 no.3 / pp.151-157 / 2019
  • In this paper, we present a prototype system for controlling IoT home appliances via voice commands. Voice commands have been widely deployed as an unobtrusive user interface for applications in a variety of IoT domains. However, interoperability between diverse IoT systems is limited because several dominant companies provide voice assistants, such as Amazon Alexa or Google Now, as proprietary systems. A global IoT standard, oneM2M, has been proposed to mitigate this lack of interoperability between IoT systems. In this paper, we deployed oneM2M-based platforms for a voice recording device (a wrist band) and an LED control device (a home appliance). We developed all the components for recording voices and controlling IoT devices, and we demonstrate the feasibility of the proposed method, based on oneM2M platforms and the Google STT (Speech-to-Text) API, by showing a user scenario in which the LED device is turned on and off via voice commands.
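
A sketch of pushing a recognized command into a oneM2M container as a contentInstance over the standard HTTP binding; the CSE address, originator AE-ID, and container path are hypothetical, and the transcript is assumed to come from an STT service such as Google Speech-to-Text:

```python
import requests

CSE = "http://127.0.0.1:8080/~/in-cse/in-name"  # hypothetical CSE address
LED_CONTAINER = f"{CSE}/led-device/command"     # hypothetical container path

def send_led_command(transcript):
    """Map a recognized utterance to an on/off command and create a
    oneM2M contentInstance (ty=4) that the LED device can subscribe to."""
    command = "on" if "on" in transcript.lower() else "off"
    headers = {
        "X-M2M-Origin": "S-voice-app",            # originator AE-ID (assumed)
        "X-M2M-RI": "req-001",                    # request identifier
        "Content-Type": "application/json;ty=4",  # ty=4 = contentInstance
    }
    body = {"m2m:cin": {"con": command}}
    return requests.post(LED_CONTAINER, json=body, headers=headers, timeout=5)

send_led_command("turn the LED on")
```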

Effects of the Tele-Monitoring With the Speech-to-Text Application on Occupational Balance in Healthy Adults : Feasibility Study (음성-텍스트 변환 어플리케이션을 이용한 원격 모니터링이 건강한 성인의 작업균형에 미치는 효과)

  • Na, Nam Heui;Lee, Seong A;Lee, Yeong Hyun;Lee, Sang-Heon;Hwang, Do-Yeon;Park, Jin-Hyuck
    • Therapeutic Science for Rehabilitation / v.11 no.3 / pp.93-106 / 2022
  • Objective: The COVID-19 pandemic has brought about non-face-to-face healthcare service delivery systems, but research into telehealth systems and their efficacy remains limited. Methods: Seven healthy adults participated in this study to investigate the effects of tele-monitoring with a speech-to-text (STT) application, designed to induce changes in occupational activities, on occupational balance. Subjects were asked to choose the occupational activities they wanted to change and to register them in the STT application. The application provided an alarm to check whether the pre-registered activities were performed on time, and the subjects reported by voice whether they had performed them. The subjects were followed for one week, with assessments at baseline and after one week of tele-monitoring. Results: The subjects were willing to participate in tele-monitoring with the STT application, with high adherence and satisfaction. In addition, there was a significant improvement in occupational activities related to health (p<.05). The observed adherence, satisfaction, and efficacy suggest that tele-monitoring with the STT application can improve occupational balance over a short period. Conclusion: These findings highlight that tele-monitoring with a smartphone can be considered a promising way to restore occupational balance under lockdown after the COVID-19 outbreak.

Performance Enhancement of Speaker Identification System Based on GMM Using the Modified EM Algorithm (수정된 EM알고리즘을 이용한 GMM 화자식별 시스템의 성능향상)

  • Kim, Seong-Jong;Chung, Ik-Joo
    • Speech Sciences / v.12 no.4 / pp.31-42 / 2005
  • Recently, the Gaussian mixture model (GMM), a special form of the continuous HMM (CHMM), has been applied to speaker identification, and it has been shown that the GMM performs better than the CHMM. In this paper, speaker models based on the standard GMM and on a new GMM using a modified EM algorithm are introduced and evaluated for text-independent speaker identification. Various experiments were performed to compare the identification performance of the two algorithms. The standard GMM speaker model attained 94.6% identification accuracy using 40 seconds of training data and 32 mixtures, and 97.8% accuracy using 80 seconds of training data and 64 mixtures. The new GMM speaker model achieved 95.0% and 98.2% accuracy under the same conditions, showing that the new model outperforms the standard GMM for speaker identification.
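
A sketch of GMM-based closed-set speaker identification with scikit-learn. Its `fit` uses the standard EM algorithm, standing in for the paper's modified EM (which the abstract does not detail); the feature dimension and toy data are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_mixtures=32):
    """Fit one GMM per enrolled speaker with standard EM; the paper's
    modified EM algorithm would replace this training step."""
    return {spk: GaussianMixture(n_components=n_mixtures,
                                 covariance_type="diag",
                                 random_state=0).fit(feats)
            for spk, feats in features_by_speaker.items()}

def identify(models, test_feats):
    """Closed-set identification: pick the speaker whose GMM gives the
    highest average log-likelihood on the test features."""
    return max(models, key=lambda spk: models[spk].score(test_feats))

# Toy usage: 13-dim feature vectors for three enrolled speakers.
rng = np.random.default_rng(2)
data = {f"spk{i}": rng.normal(loc=i, size=(400, 13)) for i in range(3)}
models = train_speaker_models(data, n_mixtures=4)
print(identify(models, rng.normal(loc=1, size=(50, 13))))  # likely 'spk1'
```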
