• Title/Summary/Keyword: Voice transformation


The method of Voice Packet Transformation for the Improvement of Voice Quality in Embedded VoIP System (Embedded VoIP 시스템에서 음질개선을 위한 음성패킷 변환기법)

  • 강진아;양영배;임재윤
    • Proceedings of the IEEK Conference
    • /
    • 2003.07a
    • /
    • pp.430-433
    • /
    • 2003
  • In this paper, we propose a voice packet transformation method for improving voice quality in embedded VoIP system terminals. For this purpose, the RTP header was analyzed on the voice packet receiving side, and a method for handling voice packets using a jitter buffer was designed. By implementing the voice packet transformation method based on these results and testing it on a self-produced VoIP system, we confirmed that it satisfies the performance requirements of VoIP services.

  • PDF
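
The jitter-buffer handling described in the abstract can be sketched as follows; the `JitterBuffer` class, its buffer depth, and the sequence-number ordering are illustrative assumptions, not the authors' implementation.

```python
import heapq

class JitterBuffer:
    """Minimal RTP-style jitter buffer: reorders packets by sequence
    number and releases them once a playout depth is reached (sketch)."""

    def __init__(self, depth=3):
        self.depth = depth      # packets to hold before playout starts
        self.heap = []          # min-heap ordered by sequence number

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        # Release the oldest packet only when enough are buffered,
        # absorbing network jitter at the cost of added latency.
        if len(self.heap) >= self.depth:
            return heapq.heappop(self.heap)
        return None

buf = JitterBuffer(depth=2)
for seq in (3, 1, 2):           # packets arrive out of order
    buf.push(seq, b"frame")
first = buf.pop_ready()          # smallest sequence number comes out first
```

A real VoIP terminal would also use the RTP timestamp to schedule playout and adapt the depth to measured jitter; this sketch shows only the reordering core.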

Voice transformation for HTS using correlation between fundamental frequency and vocal tract length (기본주파수와 성도길이의 상관관계를 이용한 HTS 음성합성기에서의 목소리 변환)

  • Yoo, Hyogeun;Kim, Younggwan;Suh, Youngjoo;Kim, Hoirin
    • Phonetics and Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.41-47
    • /
    • 2017
  • The main advantage of statistical parametric speech synthesis is its flexibility in changing voice characteristics. A personalized text-to-speech (TTS) system can be implemented by combining a speech synthesis system with a voice transformation system, and this approach is widely used in many application areas. It is known that the fundamental frequency and the spectral envelope of a speech signal can be independently modified to convert voice characteristics, and it is also important to maintain the naturalness of the transformed speech. In this paper, a speech synthesis system based on the hidden Markov model (HMM-based speech synthesis, HTS) using the STRAIGHT vocoder is constructed, and voice transformation is conducted by modifying the fundamental frequency and spectral envelope. The fundamental frequency is transformed by scaling, and the spectral envelope is transformed through frequency warping to control the speaker's vocal tract length. In particular, this study proposes a voice transformation method using the correlation between the fundamental frequency and the vocal tract length. Subjective evaluations were conducted to assess preference and mean opinion scores (MOS) for the naturalness of the synthetic speech. Experimental results showed that the proposed voice transformation method achieved higher preference than the baseline systems while maintaining the naturalness of the speech quality.
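
The two operations this paper combines, F0 scaling and spectral-envelope frequency warping, can be sketched roughly as follows; the linear warping function and the parameter names (`alpha`, `beta`) are simplifying assumptions, since the paper ties the warping factor to vocal tract length.

```python
import numpy as np

def scale_f0(f0, alpha):
    """Scale an F0 contour by alpha; unvoiced frames (f0 = 0) stay zero."""
    return np.where(f0 > 0, f0 * alpha, 0.0)

def warp_envelope(env, beta):
    """Linear frequency warping of a spectral envelope (sketch).
    beta > 1 pulls formants downward, mimicking a longer vocal tract."""
    n = len(env)
    src_bins = np.clip(np.arange(n) * beta, 0, n - 1)  # read positions
    return np.interp(src_bins, np.arange(n), env)

f0 = np.array([0.0, 120.0, 125.0])
shifted = scale_f0(f0, 1.2)          # unvoiced frame stays 0
```

Coupling the two, as the paper proposes, would amount to choosing `beta` as a function of `alpha` according to the measured F0/vocal-tract-length correlation.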

Voice Personality Transformation Using an Optimum Classification and Transformation (최적 분류 변환을 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.400-409
    • /
    • 2004
  • In this paper, a voice personality transformation method is proposed, which makes one person's voice sound like another person's voice. To transform the voice personality, the vocal tract transfer function is used as a transformation parameter. Compared with previous methods, the proposed method makes the transformed speech closer to the target speaker's voice from both subjective and objective points of view. Conversion between vocal tract transfer functions is implemented by classification of the entire vector space followed by a linear transformation for each cluster, with the LPC cepstrum used as a feature parameter. A joint classification and transformation method is proposed, in which the optimum clusters and transformation matrices are simultaneously estimated under a minimum mean-square error criterion. To evaluate the performance of the proposed method, transformation rules were generated from 150 sentences uttered by three male and one female speakers. These rules were then applied to another 150 sentences uttered by the same speakers, and objective evaluations and subjective listening tests were performed.
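
The core idea, partitioning the feature space and fitting one linear transformation per cluster, can be sketched as below; the fixed nearest-centroid classification and separate least-squares fits are a simplified stand-in for the paper's joint MMSE estimation, and all names are illustrative.

```python
import numpy as np

def fit_cluster_transforms(X, Y, centroids):
    """Per-cluster least-squares linear maps from source features X to
    target features Y (sketch of classification + transformation)."""
    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    mats = []
    for k in range(len(centroids)):
        Xk, Yk = X[labels == k], Y[labels == k]
        mats.append(np.linalg.lstsq(Xk, Yk, rcond=None)[0])
    return mats

def convert(x, centroids, mats):
    """Classify a source frame, then apply that cluster's linear map."""
    k = np.argmin(((x - centroids) ** 2).sum(-1))
    return x @ mats[k]
```

In the paper the clusters and matrices are re-estimated jointly until the overall mean-square error stops decreasing; here the centroids are simply taken as given.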

GMM based Nonlinear Transformation Methods for Voice Conversion

  • Vu, Hoang-Gia;Bae, Jae-Hyun;Oh, Yung-Hwan
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.67-70
    • /
    • 2005
  • Voice conversion (VC) is a technique for modifying the speech signal of a source speaker so that it sounds as if it were spoken by a target speaker. Most previous VC approaches used a linear transformation function based on a GMM to convert the source spectral envelope to the target spectral envelope. In this paper, we propose several nonlinear GMM-based transformation functions in an attempt to deal with the over-smoothing effect of linear transformation. In order to obtain high-quality modifications of speech signals, our VC system is implemented using the Harmonic plus Noise Model (HNM) analysis/synthesis framework. Experimental results are reported on the English corpus MOCHA-TIMIT.

  • PDF
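
The linear GMM-based conversion function that this paper takes as its baseline, a posterior-weighted sum of per-component affine maps, can be sketched as follows; the isotropic covariances and hand-fixed parameters are simplifications for brevity.

```python
import numpy as np

def gmm_posteriors(x, weights, means, var):
    """Component responsibilities for frame x (isotropic Gaussians)."""
    d2 = ((x - means) ** 2).sum(-1)
    p = weights * np.exp(-0.5 * d2 / var)
    return p / p.sum()

def gmm_linear_convert(x, weights, means, var, A, b):
    """Baseline linear conversion F(x) = sum_k gamma_k(x) (A_k x + b_k).
    Averaging affine maps across components is what produces the
    over-smoothing effect the nonlinear variants try to avoid."""
    g = gmm_posteriors(x, weights, means, var)
    return sum(gk * (Ak @ x + bk) for gk, Ak, bk in zip(g, A, b))
```

A nonlinear variant in the paper's spirit would replace each affine term with a nonlinear function of `x` while keeping the same posterior weighting.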

A nonlinear transformation methods for GMM to improve over-smoothing effect

  • Chae, Yi Geun
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.38 no.2
    • /
    • pp.182-187
    • /
    • 2014
  • We propose nonlinear GMM-based transformation functions in an attempt to deal with the over-smoothing effects of linear transformation for voice processing. The proposed methods adopt RBF networks as a local transformation function to overcome the drawbacks of global nonlinear transformation functions. In order to obtain high-quality modifications of speech signals, our voice conversion is implemented using the Harmonic plus Noise Model analysis/synthesis framework. Experimental results are reported on the English corpus, MOCHA-TIMIT.
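
An RBF network used as a local transformation function, as the abstract describes, can be sketched like this; the Gaussian basis, center placement, and least-squares weight fitting are standard choices assumed here, not details taken from the paper.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations of each row of X at the given centers."""
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf(X, Y, centers, width):
    """Least-squares output weights of an RBF network mapping X to Y.
    Locally supported Gaussian units keep the learned map nonlinear
    without the rigidity of a single global transformation function."""
    H = rbf_design(X, centers, width)
    return np.linalg.lstsq(H, Y, rcond=None)[0]
```

With the centers placed on the training frames the network interpolates the training pairs exactly; in practice fewer centers and a tuned width trade accuracy for smoothness.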

Walking Assistance System for Sight Impaired People Based on a Multimodal Information Transformation Technique (멀티모달 정보변환을 통한 시각장애우 보행 보조 시스템)

  • Yu, Jae-Hyoung;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.465-472
    • /
    • 2009
  • This paper proposes a multimodal information transformation system that converts image information into voice information to inform sight-impaired people of the walking area and obstacles, which are extracted from an image acquired by a single CCD camera. Using a chain-code line detection algorithm, the walking area is found from the vanishing point and the boundary of the sidewalk on the edge image, and obstacles are detected with a Gabor filter that extracts vertical lines in the walking area. The proposed system voices pre-defined sentences composed of template words describing the walking area and obstacles. The multimodal information transformation system thus provides useful voice information to sight-impaired people trying to reach their destination. Experiments with the proposed algorithm were conducted in indoor and outdoor environments and verified its ability to accurately deliver the walking-guidance sentences.
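
Obstacle detection with a Gabor filter tuned to vertical lines, as used above, can be sketched as follows; the kernel parameters and the toy image are illustrative, and the loop-based correlation is for clarity only.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, lam=4.0):
    """Real Gabor kernel tuned to vertical lines (theta = 0, sketch)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x / lam)

def correlate2d(img, k):
    """Valid-mode 2-D correlation (loop version, for the sketch only)."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out
```

Running the kernel over an image containing a vertical bar gives the strongest response where the kernel center sits on the bar, which is how vertical obstacle edges stand out in the filtered walking area.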

Voice conversion using low dimensional vector mapping (낮은 차원의 벡터 변환을 통한 음성 변환)

  • Lee, Kee-Seung;Doh, Won;Youn, Dae-Hee
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.4
    • /
    • pp.118-127
    • /
    • 1998
  • In this paper, we propose a voice personality transformation method which makes one person's voice sound like another person's voice. In order to transform the voice personality, the vocal tract transfer function is used as a transformation parameter. Compared with previous methods, the proposed method can obtain high-quality transformed speech with low computational complexity. Conversion between the vocal tract transfer functions is implemented by a linear mapping based on soft clustering. In this process, the mean LPC cepstrum coefficients and the mean-removed LPC cepstrum modeled by a low-dimensional vector are used as transformation parameters. To evaluate the performance of the proposed method, mapping rules are generated from 61 Korean words uttered by two male and one female speakers. These rules are then applied to 9 sentences uttered by the same speakers, and an objective evaluation and subjective listening tests of the transformed speech are performed.

  • PDF

Voice Personality Transformation Using a Multiple Response Classification and Regression Tree (다중 응답 분류회귀트리를 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.253-261
    • /
    • 2004
  • In this paper, a new voice personality transformation method is proposed, which modifies the speaker-dependent feature variables in speech signals. The proposed method takes the cepstrum vectors and pitch as the transformation parameters, which represent the vocal tract transfer function and the excitation signal, respectively. To transform these parameters, a multiple response classification and regression tree (MR-CART) is employed. MR-CART is the vector-extended version of a conventional CART, whose response is given in vector form. We evaluated the performance of the proposed method by comparing it with a previously proposed codebook mapping method, and also quantitatively analyzed the transformation performance and complexity under various conditions. In the experimental results for four speakers, the proposed method objectively outperforms the conventional codebook mapping method, and we also observed that the transformed speech sounds closer to the target speech.
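
The vector-response idea behind MR-CART can be illustrated with a single split; this stump, the exhaustive threshold search, and the per-leaf mean vectors are a didactic reduction, not the paper's full tree-growing procedure.

```python
import numpy as np

def fit_vector_stump(X, Y):
    """One split of an MR-CART-style tree: vector-valued responses,
    best (feature, threshold) by minimum summed squared error (sketch)."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:          # keep right side nonempty
            left, right = X[:, f] <= t, X[:, f] > t
            err = sum(((Y[m] - Y[m].mean(0)) ** 2).sum() for m in (left, right))
            if best is None or err < best[0]:
                best = (err, f, t, Y[left].mean(0), Y[right].mean(0))
    return best[1:]                                # (feature, threshold, means)

def predict_stump(x, stump):
    """Route a frame down the split and return the leaf's mean vector."""
    f, t, mu_l, mu_r = stump
    return mu_l if x[f] <= t else mu_r
```

A full MR-CART grows such splits recursively, so each leaf holds the mean response vector of the training frames that reach it.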

Feature Selection-based Voice Transformation (단위 선택 기반의 음성 변환)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.1
    • /
    • pp.39-50
    • /
    • 2012
  • A voice transformation (VT) method that can make the utterance of a source speaker mimic that of a target speaker is described. Speaker individuality transformation is achieved by altering three feature parameters: the LPC cepstrum, the pitch period, and the gain. The main objective of this study is to construct an optimal sequence of features selected from a target speaker's database, so as to maximize both the correlation probabilities between the transformed and the source features and the likelihood of the transformed features with respect to the target model. A set of two-pass conversion rules is proposed, in which the feature parameters are first selected from the database, and the optimal sequence of the feature parameters is then constructed in the second pass. The conversion rules were developed using a statistical approach that employs a maximum likelihood criterion. In constructing the optimal sequence of features, a hidden Markov model (HMM) is employed to find the most likely combination of the features with respect to the target speaker's model. The effectiveness of the proposed transformation method was evaluated using objective tests and informal listening tests. We confirmed that the proposed method leads to perceptually preferred results compared with conventional methods.
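
The second-pass search for the most likely feature sequence can be sketched as a Viterbi-style dynamic program over candidate units; the cost matrices here are toy stand-ins for the paper's HMM likelihoods and correlation probabilities.

```python
import numpy as np

def select_units(target_costs, concat_cost):
    """Pick one candidate per frame minimising the sum of per-candidate
    target costs and pairwise concatenation costs (Viterbi sketch).
    target_costs: (T, K) array; concat_cost: (K, K) array."""
    T, K = target_costs.shape
    D = target_costs[0].copy()           # best cost ending in each candidate
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        new_D = np.empty(K)
        for k in range(K):
            scores = D + concat_cost[:, k]
            back[t, k] = scores.argmin()
            new_D[k] = scores.min() + target_costs[t, k]
        D = new_D
    # Backtrack from the cheapest final candidate.
    path = [int(D.argmin())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the paper the per-candidate and pairwise terms come from the target speaker's statistical model rather than hand-set matrices; the dynamic program itself is the same shape.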

Voice personality transformation using an orthogonal vector space conversion (직교 벡터 공간 변환을 이용한 음성 개성 변환)

  • Lee, Ki-Seung;Park, Kun-Jong;Youn, Dae-Hee
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.96-107
    • /
    • 1996
  • A voice personality transformation algorithm using an orthogonal vector space conversion is proposed in this paper. Voice personality transformation is the process of changing one person's acoustic features (source) into those of another person (target). In this paper, personality transformation is achieved by changing the LPC cepstrum coefficients, the excitation spectrum, and the pitch contour. An orthogonal vector space conversion technique is proposed to transform the LPC cepstrum coefficients: the LPC cepstrum transformation is implemented by principal component decomposition, applying the Karhunen-Loeve transformation and a minimum mean-square error coordinate transformation (MSECT). Additionally, we propose a pitch contour modification method to transform the prosodic characteristics of a speaker, in which reference pitch patterns are first built for the source and target speakers and the source speaker's pattern is then mapped onto the target speaker's. The experimental results show the effectiveness of the proposed algorithm in both subjective and objective evaluations.

  • PDF
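
The principal-component decomposition at the heart of the orthogonal vector space conversion can be sketched as follows; the toy data and the choice to derive the basis from the sample covariance are illustrative.

```python
import numpy as np

def klt_basis(X):
    """Karhunen-Loeve transform basis: eigenvectors of the sample
    covariance, ordered by decreasing eigenvalue. Projecting feature
    vectors onto this orthogonal basis decorrelates their components."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
    return eigvecs[:, np.argsort(eigvals)[::-1]]
```

The MSECT step described in the abstract would then map source-speaker coordinates in this basis onto the target speaker's coordinates before reconstructing the cepstrum.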