• Title/Summary/Keyword: text-to-speech system

Decision of the Korean Speech Act using Feature Selection Method (자질 선택 기법을 이용한 한국어 화행 결정)

  • 김경선;서정연
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.278-284
    • /
    • 2003
  • A speech act is the speaker's intention expressed through an utterance. It is important for understanding natural language dialogues and for generating responses. This paper proposes a two-stage method that improves the performance of Korean speech act decision. The first stage selects features from the part-of-speech (POS) results of the sentence and from the context given by previous speech acts, using the $\chi^2$ statistic (CHI), which has shown high performance in text categorization. The second stage determines the speech act from the selected features with a neural network. The proposed method shows that automatic speech act decision is possible using only POS results; it performs well by using the most informative features and runs faster by reducing the number of features. We tested the method on a Korean dialogue corpus transcribed from real-field recordings, consisting of 10,285 utterances and 17 speech acts. We trained on 8,349 utterances and tested on 1,936, obtaining the correct speech act for 1,709 utterances (88.3%), about 8% higher accuracy than without feature selection.
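
As a rough illustration of the two-stage approach described above, the sketch below chains chi-square (CHI) feature selection with a small neural network using scikit-learn. The POS-tag strings, speech-act labels, and the value of k are hypothetical stand-ins for the paper's Korean dialogue data, not its actual setup.

```python
# Stage 1: chi-square (CHI) feature selection; stage 2: neural-network
# classification. scikit-learn stands in for the paper's own implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Each "document" is an utterance represented by its POS results (the paper
# also adds context features from previous speech acts).
utterances = ["npp jco pvg ef", "mag pvg ep ef", "npp jxt paa ef"]
speech_acts = ["request", "inform", "inform"]

pipeline = make_pipeline(
    CountVectorizer(),       # bag of POS-tag features
    SelectKBest(chi2, k=2),  # keep the k most informative features (CHI)
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
pipeline.fit(utterances, speech_acts)
print(pipeline.predict(["npp jco pvg ef"]))
```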

AN ALGORITHM FOR CLASSIFYING EMOTION OF SENTENCES AND A METHOD TO DIVIDE A TEXT INTO SOME SCENES BASED ON THE EMOTION OF SENTENCES

  • Fukoshi, Hirotaka;Sugimoto, Futoshi;Yoneyama, Masahide
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.773-777
    • /
    • 2009
  • In recent years, speech synthesis has developed rapidly, and technologies such as reading e-mail aloud or the voice guidance of car navigation systems are used in many scenes of daily life. However, the synthesized voice is monotonous, like a newsreader's. A text such as a novel is better read by a voice that expresses emotions richly. We have therefore been developing a system that automatically reads aloud novels that express emotions relatively clearly, such as juvenile literature. To make a computer read a text with an emotionally expressive voice, it is first necessary to identify the emotions expressed in each sentence. A method based on semantic interpretation using artificial intelligence techniques is conceivable, but it is very difficult with current technology. We therefore propose a simpler method that determines only the emotion of each sentence in a novel. The method assigns a sentence the emotion carried by its key words: the verb in a Japanese verb sentence, or the adjective or adverb in an adjective sentence. The emotional characteristics of these words are prepared beforehand in an emotional-word dictionary. Seven emotion types are used: "joy," "sorrow," "anger," "surprise," "terror," "aversion," and "neutral."
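
A minimal sketch of this dictionary-lookup scheme, assuming a toy English word list in place of the authors' Japanese emotional-word dictionary and a whitespace tokenizer in place of Japanese morphological analysis:

```python
# Each content word carries an emotion label prepared in advance; the sentence
# takes the majority emotion of its matched words, or "neutral" on no match.
from collections import Counter

# Hypothetical emotional-word dictionary (word -> one of the seven types).
EMOTION_DICT = {
    "laugh": "joy", "weep": "sorrow", "shout": "anger",
    "sudden": "surprise", "dark": "terror", "rotten": "aversion",
}

def sentence_emotion(sentence: str) -> str:
    """Return the dominant emotion of a sentence, or 'neutral' if none match."""
    hits = [EMOTION_DICT[w] for w in sentence.lower().split() if w in EMOTION_DICT]
    return Counter(hits).most_common(1)[0][0] if hits else "neutral"

print(sentence_emotion("They weep in the dark dark night"))  # -> 'terror'
```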

Development of Korean-to-English and English-to-Korean Mobile Translator for Smartphone (스마트폰용 영한, 한영 모바일 번역기 개발)

  • Yuh, Sang-Hwa;Chae, Heung-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.3
    • /
    • pp.229-236
    • /
    • 2011
  • In this paper we present lightweight English-to-Korean and Korean-to-English mobile translators for smartphones. For natural translation and higher translation quality, the translation engines hybridize a Translation Memory (TM) with a rule-based translation engine. To maximize usability, we combined an Optical Character Recognition (OCR) engine and a Text-to-Speech (TTS) engine as the front-end and back-end of the mobile translators. Measured with the BLEU and NIST evaluation metrics, our E-K and K-E mobile translation quality reaches 72.4% and 77.7% of that of the Google translator, respectively. This shows that the quality of our mobile translators almost reaches that of server-based machine translation, demonstrating their commercial usefulness.
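
The hybrid engine can be pictured as an exact translation-memory lookup with a rule-based fallback. The sketch below is a schematic illustration with hypothetical TM entries and a stubbed rule-based engine, not the authors' implementation.

```python
# Exact TM hit first (fast, fluent, previously human-validated); otherwise
# fall back to the rule-based engine. In the paper, OCR feeds text in at the
# front-end and TTS speaks the result at the back-end.
TRANSLATION_MEMORY = {
    "안녕하세요": "Hello",  # previously translated segment pairs
}

def rule_based_translate(text: str) -> str:
    # Stand-in for the rule-based K-E engine (analysis, transfer, generation).
    return f"<RBMT translation of: {text}>"

def hybrid_translate(text: str) -> str:
    """Prefer an exact TM hit; otherwise use the rule-based engine."""
    return TRANSLATION_MEMORY.get(text) or rule_based_translate(text)

print(hybrid_translate("안녕하세요"))        # TM hit
print(hybrid_translate("오늘 날씨 어때요"))  # rule-based fallback
```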

Contents Development of IrobiQ on School Violence Prevention Program for Young Children (지능형 로봇 아이로비큐(IrobiQ)를 활용한 학교폭력 예방 프로그램 개발)

  • Hyun, Eunja;Lee, Hawon;Yeon, Hyemin
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.9
    • /
    • pp.455-466
    • /
    • 2013
  • The purpose of this study was to develop "Modujikimi," a school violence prevention program for young children, to be embedded in IrobiQ, a teacher-assistive robot. The themes of the program consisted of basic character education, bullying prevention education, and sexual violence prevention education. The activity types included large-group, individual, and small-group activities, free-choice activities, and parents' education, incorporating poems, fairy tales, music, art, and story sharing. The multimodal functions of the robot were employed: on-screen images, TTS (Text-to-Speech), touch input, sound recognition, and a recording system. The robot content was demonstrated to thirty early childhood educators, whose acceptance of the content was measured with questionnaires, and the content was also used with children in a daycare center. The majority responded positively on acceptability. The results suggest that further research is needed to improve the two-way interactivity of teacher-assistive robots.

The Study on Korean Prosody Generation using Artificial Neural Networks (인공 신경망의 한국어 운율 발생에 관한 연구)

  • Min Kyung-Joong;Lim Un-Cheon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.337-340
    • /
    • 2004
  • Accurately reproduced prosody is one of the key factors affecting the naturalness of speech synthesized by a TTS system. In general, prosody rules have been gathered either from linguistic knowledge or by analyzing the prosodic information of natural speech, but such rules cannot be perfect and some may be incorrect. We therefore propose artificial neural networks (ANNs) that can be trained to learn the prosody of natural speech and generate it. In the learning phase, the ANNs learn the pitch and energy contour of the center phoneme: a string of phonemes from a sentence is applied to the ANNs, the output pattern is compared with the target pattern, and the weights are adjusted to minimize the mean squared error between them. In the test phase, the estimation rates were computed. We found that ANNs can generate the prosody of a sentence.
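
The learning scheme can be sketched as a regression network that maps a phoneme window to the center phoneme's pitch and energy, trained under a mean-squared-error criterion. The sketch below uses scikit-learn with random placeholder arrays; a real system would use phoneme codes and F0/energy contours extracted from recorded speech.

```python
# Map a one-hot-coded phoneme window to (pitch, energy) of the center phoneme;
# MLPRegressor's training minimizes mean squared error, as in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_phonemes, window = 40, 5      # phoneme inventory size, context window length

# Input: one-hot codes of a 5-phoneme window, flattened.
# Target: (pitch, energy) of the center phoneme. Both are random stand-ins.
X = rng.integers(0, 2, size=(200, n_phonemes * window)).astype(float)
y = rng.normal(size=(200, 2))

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X, y)                   # weight updates minimize mean squared error
print(net.predict(X[:1]))      # estimated (pitch, energy) for one window
```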

Implementation of Yoga Posture Training Application Using Google ML Kit (Google ML Kit를 이용한 요가 자세 훈련 애플리케이션 구현)

  • Kim, Hyoung Min;Yoon, Jong Hyeon;Park, Su Hyun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.178-180
    • /
    • 2022
  • We introduce an application that lets users train yoga postures based on pose landmarks of professional yoga instructors obtained from the Google Firebase ML Kit. Using the ML Kit, the user's posture is classified and a landmark for each joint is obtained. A reference value for measuring the accuracy of a yoga posture is defined by the angles formed at the joints of the landmarks, and accuracy is computed by comparing the instructor's reference landmarks with the landmarks of the user's pose obtained through the ML Kit. Depending on the accuracy relative to the reference value, the user is told through Text-to-Speech (TTS) whether the motion is incorrect or correct. Users are managed with Firebase, and the system displays the amount of exercise through a counter and timer whenever the user performs a motion that meets the accuracy reference value.
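
A minimal sketch of the joint-angle accuracy check, assuming 2D landmark coordinates like those a pose detector returns; the coordinates, reference angle, and tolerance below are illustrative values, not the application's actual thresholds.

```python
# Compute the angle at a joint from three pose landmarks (e.g. shoulder-elbow-
# wrist) and compare it with the instructor's reference angle within a tolerance.
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b formed by points a-b-c (2D landmarks)."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                       - math.atan2(a[1] - b[1], a[0] - b[0]))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

shoulder, elbow, wrist = (0.1, 0.2), (0.3, 0.4), (0.5, 0.2)  # dummy landmarks
reference, tolerance = 90.0, 15.0                            # instructor's angle

angle = joint_angle(shoulder, elbow, wrist)
ok = abs(angle - reference) <= tolerance
print(f"elbow angle={angle:.1f} deg, correct={ok}")  # drives the TTS feedback
```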

Proposal of speaker change detection system considering speaker overlap (화자 겹침을 고려한 화자 전환 검출 시스템 제안)

  • Park, Jisu;Yun, Young-Sun;Cha, Shin;Park, Jeon Gue
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.466-472
    • /
    • 2021
  • Speaker Change Detection (SCD) is the task of finding the moments in a conversation where the active speaker changes from one person to the next. SCD is made difficult by overlapping speakers, inaccurate labeling, and data imbalance. To address these problems, utterances from the TIMIT corpus, widely used in speech recognition, were artificially concatenated to obtain a sufficient amount of training data, and speaker change detection was performed after first identifying overlapping speech. In this paper, we propose a speaker change detection system that takes speaker overlap into account, and we evaluate and verify its performance with several approaches. As a result, a detection system similar to the X-vector structure was adopted to remove speaker-overlap regions, and a Bi-LSTM model was selected for the speaker change detector. The experimental results show relative performance improvements of 4.6% and 13.8%, respectively, over the baseline system. These results suggest that a robust speaker change detection system can be built by extending this work to take text and speaker information into account.
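
The data-construction step can be sketched as follows: utterances from different speakers are concatenated, and frame-level labels mark the change point. The waveforms below are random stand-ins for TIMIT utterances, and real training data would also need simulated overlap regions as the paper describes.

```python
# Synthesize speaker-change training data by concatenating two utterances and
# labeling the frame where the speaker changes.
import numpy as np

rng = np.random.default_rng(0)
sr = 16000
utt_a = rng.normal(size=sr * 2)   # 2 s from speaker A (placeholder for TIMIT)
utt_b = rng.normal(size=sr * 3)   # 3 s from speaker B

audio = np.concatenate([utt_a, utt_b])
frame_len = int(0.01 * sr)        # 10 ms frames
n_frames = len(audio) // frame_len

# One label per frame: 1 at the frame containing the speaker change, else 0.
labels = np.zeros(n_frames, dtype=int)
labels[len(utt_a) // frame_len] = 1
print(audio.shape, labels.sum(), np.argmax(labels))  # change at frame 200
```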

Speaker Verification System Using Continuants and Multilayer Perceptrons (지속음 및 다층신경망을 이용한 화자증명 시스템)

  • Lee, Tae-Seung;Park, Sung-Won;Hwang, Byong-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.1015-1020
    • /
    • 2003
  • Among biometric techniques for protecting private information, speaker verification is expected to be widely used because of its convenience and low implementation cost. A speaker verification system should achieve high reliability in the verification score, flexibility in the usage of speech text, and efficiency in system complexity. Continuants have excellent speaker-discriminating power and form a category with a modest number of phonemes, while multilayer perceptrons (MLPs) have superior recognition ability and fast operation. Together, the two offer a viable way for a speaker verification system to obtain the above properties. This paper implements a system based on continuants and MLPs and evaluates it on a Korean speech database. The experimental results show that continuants and MLPs enable the system to achieve all three properties.
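
One way to picture the approach: a per-speaker MLP is trained to separate the enrolled speaker's continuant frames from impostor frames, and a test utterance is accepted when the average network output over its continuant frames exceeds a threshold. The features below are random MFCC-like placeholders, not the paper's front-end, and the threshold is illustrative.

```python
# Train a per-speaker MLP on continuant frames; score a test utterance by the
# mean posterior of the "enrolled speaker" class over its continuant frames.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
enrolled = rng.normal(loc=0.5, size=(300, 13))    # frames from the true speaker
impostor = rng.normal(loc=-0.5, size=(300, 13))   # frames from other speakers

X = np.vstack([enrolled, impostor])
y = np.array([1] * 300 + [0] * 300)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X, y)

test_frames = rng.normal(loc=0.5, size=(50, 13))  # claimed-identity frames
score = mlp.predict_proba(test_frames)[:, 1].mean()
print(f"verification score={score:.2f}, accept={score > 0.5}")
```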

Continuous Speech Recognition Using N-gram Language Models Constructed by Iterative Learning (반복학습법에 의해 작성한 N-gram 언어모델을 이용한 연속음성인식에 관한 연구)

  • 오세진;황철준;김범국;정호열;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.6
    • /
    • pp.62-70
    • /
    • 2000
  • In conventional language models (LMs), probabilities are estimated by selecting highly frequent words from a large text database. However, when adopting LMs for a specific task, it is unnecessary to follow this general approach of building them from a large text corpus, considering the various costs involved. In this paper, we propose a method for constructing LMs from a small text database for use in specific tasks. The proposed method boosts the counts of low-frequency words by applying the same sentences iteratively, which also makes the word occurrence probabilities more robust. We carried out continuous speech recognition (CSR) experiments on 200 sentences uttered by 3 speakers, using LMs built by iterative learning (IL) for an airline flight reservation task. The results indicate that CSR with the IL-based LMs achieves 20.4% higher recognition accuracy than without IL. The system also shows, on average, 13.4% higher recognition accuracy than the previous system based on a context-free grammar (CFG), confirming the effectiveness of the method.
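
The iterative-learning idea can be sketched as re-adding the same small corpus when accumulating n-gram counts. Note that under pure maximum-likelihood estimation, uniform repetition leaves the conditional probabilities unchanged; the benefit appears once count cutoffs or discounting come into play, since repeated passes keep rare task words above such thresholds. The three sentences and the iteration count below are illustrative, not the paper's flight-reservation corpus.

```python
# Accumulate bigram counts over repeated passes of a small task corpus, then
# compute maximum-likelihood bigram probabilities P(w2 | w1).
from collections import Counter

corpus = [
    "i want to book a flight",
    "book a flight to jeju",
    "i want a window seat",
]
iterations = 5                    # each pass re-adds the whole corpus

bigrams, unigrams = Counter(), Counter()
for _ in range(iterations):
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(words[:-1])                 # context counts
        bigrams.update(zip(words[:-1], words[1:]))  # bigram counts

def p(w1, w2):
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(p("book", "a"), p("a", "flight"))
```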

Subtitle Automatic Generation System using Speech to Text (음성인식을 이용한 자막 자동생성 시스템)

  • Son, Won-Seob;Kim, Eung-Kon
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.16 no.1
    • /
    • pp.81-88
    • /
    • 2021
  • Recently, many videos, such as the online lecture videos prompted by COVID-19, have been produced. However, because of limited working hours and cost, only a fraction of these videos carry subtitles, which is emerging as an obstacle to information access for deaf viewers. In this paper, we develop a system that automatically generates subtitles using speech recognition, separating sentences by their endings and timing, to reduce the time and labor required for subtitle generation.
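
A minimal sketch of the segmentation step, assuming word-level timestamps from a speech recognizer: sentence-final words and long pauses start a new subtitle. The recognizer output below is hypothetical, and in the paper Korean sentence endings would play the role of the sentence-final cues.

```python
# Group (word, start, end) recognizer output into subtitle lines, breaking at
# sentence-final punctuation or at pauses longer than MAX_PAUSE seconds.
words = [  # hypothetical STT output
    ("hello", 0.0, 0.4), ("everyone.", 0.5, 1.0),
    ("today", 2.2, 2.6), ("we", 2.7, 2.8), ("begin.", 2.9, 3.4),
]
MAX_PAUSE = 1.0  # a silence longer than this also starts a new subtitle

subtitles, current, start = [], [], None
for i, (w, s, e) in enumerate(words):
    start = s if start is None else start
    current.append(w)
    pause = (words[i + 1][1] - e) if i + 1 < len(words) else 0.0
    if w.endswith((".", "?", "!")) or pause > MAX_PAUSE:
        subtitles.append((start, e, " ".join(current)))
        current, start = [], None

for s, e, text in subtitles:
    print(f"{s:5.1f} --> {e:5.1f}  {text}")
```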