• Title/Summary/Keyword: text-to-speech system

Search Results: 246

A Korean Multi-speaker Text-to-Speech System Using d-vector (d-vector를 이용한 한국어 다화자 TTS 시스템)

  • Kim, Kwang Hyeon;Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.8 no.3 / pp.469-475 / 2022
  • Training a deep-learning-based single-speaker TTS model requires a speech DB of tens of hours and a great deal of training time, which is inefficient in time and cost when building multi-speaker or personalized TTS models. The voice cloning method instead uses a speaker encoder model to build a TTS model for a new speaker: through the trained speaker encoder, a speaker embedding vector representing the timbre of the new speaker is derived from a small amount of the new speaker's speech data that was not used for training. In this paper, we propose a multi-speaker TTS system to which voice cloning is applied. The proposed TTS system consists of a speaker encoder, a synthesizer, and a vocoder. The speaker encoder applies the d-vector technique used in the speaker recognition field, and the timbre of the new speaker is expressed by feeding the d-vector derived from the trained speaker encoder to the synthesizer as an additional input. Experimental results from MOS and timbre-similarity listening tests show that the proposed TTS system performs well.
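The d-vector idea summarized above can be sketched roughly: frame-level speaker-encoder outputs are averaged and L2-normalized into one utterance-level embedding, and timbre similarity is then the cosine similarity between embeddings. A minimal illustration (the embedding dimension and toy data are assumptions, not the paper's actual encoder):

```python
import numpy as np

def d_vector(frame_embeddings):
    """Average per-frame speaker-encoder outputs and L2-normalize.

    The utterance-level speaker embedding (d-vector) is the
    normalized mean of the frame-level embeddings.
    """
    mean = np.mean(frame_embeddings, axis=0)
    return mean / np.linalg.norm(mean)

def timbre_similarity(dvec_a, dvec_b):
    """Cosine similarity between two unit-norm d-vectors."""
    return float(np.dot(dvec_a, dvec_b))

# Toy frame embeddings for two utterances by the "same" speaker:
# a shared speaker direction plus small per-frame variation.
rng = np.random.default_rng(0)
base = rng.normal(size=256)
utt1 = base + 0.1 * rng.normal(size=(50, 256))
utt2 = base + 0.1 * rng.normal(size=(40, 256))
d1, d2 = d_vector(utt1), d_vector(utt2)
print(round(timbre_similarity(d1, d2), 3))  # close to 1.0 for the same speaker
```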

A System of English Vowel Transcription Based on Acoustic Properties (영어 모음음소의 표기체계에 관한 연구)

  • Kim, Dae-Won
    • Speech Sciences / v.10 no.4 / pp.73-79 / 2003
  • There are more than five systems for transcribing English vowels. Because of this diversity, teachers and students of English are confronted with no small number of problems with the English vowel symbols used in English-Korean dictionaries, English textbooks, and books on phonetics and phonology. This study suggests criteria for the phonemic transcription of English vowels on the basis of their phonetic properties, and a transcription system based on those criteria, in order to minimize the problems caused by inter-system differences. A speaker (phonetician) of RP English uttered a series of isolated minimal pairs containing the vowels in question. The suggested vowel symbols are as follows: (1) Simple vowels: /iː/ in beat, /ɪ/ bit, /ɛ/ bet, /æ/ bat, /ɑː/ father, /ɒ‖ɑ/ bod, /ɔː/ bawd, /ʊ/ put, /uː/ boot, /ʌ/ but, /ə/ about, and /ɜː‖ɜr/ bird. (2) Diphthongs: /aɪ/ in bite, /aʊ/ bout, /ɔɪ/ boy, /əʊ‖oʊ/ boat, /eɪ/ bait, /ɛə‖ɛr/ air, /ʊə‖ʊr/ poor, /ɪə‖ɪr/ beer. Where two symbols are shown for the vowel of a single word, the first is appropriate for most speakers of British English and the second for most speakers of American English.


Parameters Comparison in the Speaker Identification under the Noisy Environments (화자식별을 위한 파라미터의 잡음환경에서의 성능비교)

  • Choi, Hong-Sub
    • Speech Sciences / v.7 no.3 / pp.185-195 / 2000
  • This paper compares the feature parameters used in speaker identification systems under noisy environments. The parameters compared are LP cepstrum (LPCC), cepstral mean subtraction (CMS), pole-filtered CMS (PFCMS), adaptive component weighted cepstrum (ACW), and postfilter cepstrum (PF). A GMM-based text-independent speaker identification system is designed for this purpose. A series of experiments shows that the LPCC parameter is adequate for modelling the speaker when the training and test environments match. Under mismatched training and testing conditions, however, the modified parameters are preferable to LPCC. In particular, the CMS and PFCMS parameters are more effective under microphone mismatch, while the ACW and PF parameters perform better under noisier mismatches.
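Of the parameters compared, CMS is the easiest to illustrate: a stationary channel (e.g. a microphone's frequency response) appears as an additive constant in the cepstral domain, so subtracting the per-utterance mean removes it. A minimal sketch with toy data, not the paper's experimental setup:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Subtract the per-utterance mean from each cepstral coefficient.

    A stationary convolutional channel adds the same offset to every
    frame's cepstrum, so removing the time average suppresses it.
    """
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Toy example: 100 frames x 13 coefficients, plus a fixed channel offset.
rng = np.random.default_rng(1)
clean = rng.normal(size=(100, 13))
channel = rng.normal(size=13)      # constant channel distortion
observed = clean + channel
cms = cepstral_mean_subtraction(observed)
# The channel offset cancels: cms equals the mean-removed clean cepstra.
```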


Design and Implementation of Mobile Communication System for Hearing- impaired Person (청각 장애인을 위한 모바일 통화 시스템 설계 및 구현)

  • Yun, Dong-Hee;Kim, Young-Ung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.5 / pp.111-116 / 2016
  • According to the Ministry of Science, ICT and Future Planning's survey of the information gap, the smartphone ownership rate of disabled people remains at one-third that of non-disabled people, so people with disabilities have significantly less access to information. In this paper, we develop an application, CallHelper, that makes mobile voice calls more convenient for hearing-impaired people. CallHelper runs automatically when a call comes in, translates the caller's voice into text displayed on the mobile screen, and visualizes the emotion inferred from the caller's voice as emoticons. It also saves the voice, translated text, and emotion data, which can be played back later.

Hate Speech Detection Using Modified Principal Component Analysis and Enhanced Convolution Neural Network on Twitter Dataset

  • Majed, Alowaidi
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.112-119 / 2023
  • Originally used for networking computers and communications, the Internet has been evolving since its beginning and is the backbone for many things on the web, including social media. The concept of social networking, which started in the early 1990s, has grown along with the Internet. Social Networking Sites (SNSs) sprang up and became an important element of Internet usage, mainly due to the services they provide on the web. Twitter and Facebook have become the primary means by which most individuals keep in touch with others and carry on substantive conversations. These sites allow the posting of photos and videos and support audio and video storage, which can be shared among users. Although attractive, these provisions have also created problems for these sites, such as the posting of offensive material. Users of SNSs sometimes promote hate through their words or speech, which is difficult to curtail once it has been uploaded. Hence, this article outlines a process for extracting user reviews from the Twitter corpus in order to identify instances of hate speech. Through the use of Modified Principal Component Analysis (MPCA) and an Enhanced Convolutional Neural Network (ECNN), we identify instances of hate speech in the text. With natural language processing (NLP), a fully autonomous system for assessing syntax and meaning can be established. There is a strong emphasis on pre-processing, feature extraction, and classification. Normalization cleanses the text by removing extra spaces, punctuation, and stop words. The processed features are then used in feature extraction, where the MPCA algorithm takes a set of related features and extracts those that are most informative about the given dataset. The proposed categorization method is then put forth as a means of detecting instances of hate speech or abusive language. It is argued that ECNN is superior to other methods for identifying hateful content online: it can take in massive amounts of data and quickly return accurate results, especially for larger datasets. As a result, the proposed MPCA+ECNN algorithm improves not only the F-measure but also accuracy, precision, and recall.
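The abstract does not specify how MPCA modifies standard PCA, so the following sketch shows only the plain PCA baseline it builds on: center the feature matrix and project it onto the directions of largest variance obtained from an SVD. The matrix sizes are illustrative assumptions:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Plain PCA via SVD: center the data, then project onto the
    top right-singular vectors (directions of largest variance).
    The paper's MPCA modifies this step in an unspecified way; this
    is only the standard baseline.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))   # e.g. 200 tweets x 50 raw text features
Z = pca_reduce(X, 10)            # reduced representation, 10 components
print(Z.shape)                   # (200, 10)
```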

Speaker Identification Using Higher-Order Statistics In Noisy Environment (고차 통계를 이용한 잡음 환경에서의 화자식별)

  • Shin, Tae-Young;Kim, Gi-Sung;Kwon, Young-Uk;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea / v.16 no.6 / pp.25-35 / 1997
  • Most speech analysis methods developed to date are based on second-order statistics, and one of their biggest drawbacks is the dramatic performance degradation they show in noisy environments. In contrast, methods using higher-order statistics (HOS), which suppress Gaussian noise, enable robust feature extraction in noisy environments. In this paper we propose a text-independent speaker identification system using higher-order statistics and compare its performance with that of the conventional second-order-statistics-based method in both white and colored noise environments. The proposed system is based on the vector quantization approach and employs a HOS-based voiced/unvoiced detector to extract feature parameters for voiced speech only, which has a non-Gaussian distribution and is known to contain most speaker-specific characteristics. Experimental results on a 50-speaker database show that the higher-order-statistics-based method gives better identification performance than the conventional second-order-statistics-based method in noisy environments.


A Method of Recognizing and Validating Road Name Address from Speech-oriented Text (음성 기반 도로명 주소 인식 및 주소 검증 기법)

  • Lee, Keonsoo;Kim, Jung-Yeon;Kang, Byeong-Gwon
    • Journal of Internet Computing and Services / v.22 no.1 / pp.31-39 / 2021
  • Obtaining delivery addresses from calls is one of the most important processes in the TV home shopping business, and automating it can increase operational efficiency. In this paper, a method of recognizing and validating road name addresses, the address system of South Korea, from speech-oriented text is proposed. Speech-oriented text poses three challenges. First, numbers are represented in the form of their pronunciation. Second, the recorded address contains noise from repeated pronunciations of the same address or from out-of-order elements. Third, the resulting address must be readable. To resolve these problems, the proposed method enhances the existing address databases provided by Korea Post and the Ministry of the Interior and Safety: various pronounced forms of addresses are added, and heuristic rules for dividing ambiguous pronunciations are employed. The processed address is then validated by checking its existence in the official address database. Although the proposed method targets the STT result of address pronunciations, it can also be used by any third-party service that needs to validate road name addresses. The method works robustly against noise such as changes in element order or omitted elements.
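The first challenge, numbers represented as pronunciation, can be sketched as a mapping from spoken digit names to digits followed by a lookup in the official database. The digit table, addresses, and validation rule below are illustrative assumptions, not the paper's actual resources or heuristics:

```python
# Hypothetical Sino-Korean digit names mapped to digits.
SINO_KOREAN_DIGITS = {
    "공": "0", "일": "1", "이": "2", "삼": "3", "사": "4",
    "오": "5", "육": "6", "칠": "7", "팔": "8", "구": "9",
}

def spoken_digits_to_number(spoken):
    """Convert a pronounced digit sequence (e.g. '일이삼') to '123'."""
    return "".join(SINO_KOREAN_DIGITS[ch] for ch in spoken)

def validate_address(road_name, spoken_number, address_db):
    """Normalize the spoken building number, then check the candidate
    against the official road-name address database."""
    candidate = f"{road_name} {spoken_digits_to_number(spoken_number)}"
    return candidate if candidate in address_db else None

# Stand-in for the official road-name address database.
db = {"세종대로 110", "테헤란로 123"}
print(validate_address("테헤란로", "일이삼", db))  # 테헤란로 123
```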

Development of a 3-D Visualization Application for Management of Substation Equipment

  • Park, Chang-Hyun
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.23 no.3 / pp.38-44 / 2009
  • This paper presents a new Windows application based on 3-D graphics and Text-To-Speech (TTS) for effective management of substation equipment. When problems occur in a power system, inexperienced operators may have difficulty understanding the situation and finding suitable countermeasures quickly. This paper addresses an effective scheme for visualizing power system equipment under normal and abnormal conditions using 3-D graphics and animations. In addition, the state variations and the maintenance priority of substation equipment are conveyed by TTS and other intuitive methods. The proposed system can help power system operators understand the state of power system equipment more quickly, and it provides operators with suitable countermeasures for minimizing damage caused by equipment problems.

Automatic Speech Style Recognition Through Sentence Sequencing for Speaker Recognition in Bilateral Dialogue Situations (양자 간 대화 상황에서의 화자인식을 위한 문장 시퀀싱 방법을 통한 자동 말투 인식)

  • Kang, Garam;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.17-32 / 2021
  • Speaker recognition is generally divided into speaker identification and speaker verification. It plays an important role in automatic voice systems, and its importance is becoming more prominent as portable devices, voice technology, and audio content continue to expand. Previous speaker recognition studies have aimed to automatically determine who the speaker is from voice files and to improve accuracy. Speech style is an important sociolinguistic subject: it contains very useful information revealing the speaker's attitude, conversational intention, and personality, which can be an important clue for speaker recognition. The final ending used in a speaker's utterance determines the sentence type and carries information such as the speaker's intention, psychological attitude, or relationship to the listener. Because the use of sentence-final endings varies with the characteristics of the speaker, the type and distribution of the endings used by an unidentified speaker can help in recognizing that speaker. However, few existing text-based speaker recognition studies have considered speech style, and adding speech-style information to speech-signal-based speaker recognition techniques could further improve accuracy. Hence, this paper proposes a novel method that uses speech style, expressed as sentence-final endings, to improve the accuracy of Korean speaker recognition. To this end, we propose a method called sentence sequencing, which generates vector values from the type and frequency of the sentence-final endings appearing in a specific person's utterances. To evaluate the performance of the proposed method, training and performance evaluation were conducted with an actual drama script. The proposed method can also be used to improve the performance of Korean speech recognition services.
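The sentence-sequencing idea can be sketched as a normalized frequency vector over a fixed inventory of sentence-final endings. The ending inventory and example utterances below are illustrative assumptions, not the paper's actual data:

```python
from collections import Counter

# Hypothetical inventory of Korean sentence-final endings.
ENDINGS = ["니다", "어요", "지요", "잖아", "거든", "네"]

def ending_vector(sentences):
    """Represent a speaker as the normalized frequency of each
    sentence-final ending across their utterances."""
    counts = Counter()
    for s in sentences:
        tail = s.rstrip(".?! ")
        for e in ENDINGS:
            if tail.endswith(e):
                counts[e] += 1
                break
    total = sum(counts.values()) or 1
    return [counts[e] / total for e in ENDINGS]

formal = ["처음 뵙겠습니다", "잘 부탁드립니다", "알겠습니다"]
casual = ["그렇잖아", "맛있거든", "좋네"]
print(ending_vector(formal))   # all mass on the formal ending 니다
print(ending_vector(casual))   # spread across casual endings
```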

A Child Emotion Analysis System using Text Mining and Method for Constructing a Children's Emotion Dictionary (텍스트마이닝 기반 아동 감정 분석 시스템 및 아동용 감정 사전 구축 방안)

  • Young-Jun Park;Sun-Young Kim;Yo-Han Kim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.3 / pp.545-550 / 2024
  • In a society undergoing rapid change, modern individuals are facing various stresses, and there's a noticeable increase in mental health treatments for children as well. For the psychological well-being of children, it's crucial to swiftly discern their emotional states. However, this proves challenging as young children often articulate their emotions using limited vocabulary. This paper aims to categorize children's psychological states into four emotions: depression, anxiety, loneliness, and aggression. We propose a method for constructing an emotion dictionary tailored for children based on assessments from child psychology experts.
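A dictionary-based emotion analysis of the kind described can be sketched as counting, per emotion category, how many of a child's words appear in the emotion dictionary. The mini-dictionary below is a placeholder; the paper builds its dictionary from child-psychology expert assessments:

```python
# Hypothetical mini-dictionary over the paper's four target emotions.
EMOTION_DICT = {
    "depression": {"sad", "tired", "cry"},
    "anxiety": {"scared", "worried", "nervous"},
    "loneliness": {"alone", "lonely", "miss"},
    "aggression": {"angry", "hit", "hate"},
}

def analyze_emotions(text):
    """Score each of the four target emotions by counting dictionary
    hits in the child's utterance (simple bag-of-words matching)."""
    words = text.lower().split()
    return {emotion: sum(w in vocab for w in words)
            for emotion, vocab in EMOTION_DICT.items()}

scores = analyze_emotions("I feel sad and alone and I miss my friend")
print(max(scores, key=scores.get))  # loneliness
```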