• Title/Summary/Keyword: 포만트 주파수 (formant frequency)

40 search results

Comparison and Analysis of Speech Signals for Emotion Recognition (감정 인식을 위한 음성신호 비교 분석)

  • Cho Dong-Uk;Kim Bong-Hyun;Lee Se-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.533-536 / 2006
  • This paper aims to identify the features in a speech signal that characterize emotion. In general, the elements from which emotion can be recognized include words, tone, the pitch of the speech signal, formants, speaking rate, and voice quality. Among speech-based approaches to emotion recognition, the most widely used is the pitch-based method. Since humans naturally perceive emotion through tone, words, speed, and voice quality rather than through analytic quantities such as frequency, these elements can serve as important features for classifying emotion. Accordingly, to extract emotion-dependent speech features, this paper compares and analyzes four relatively common emotions: neutrality, joy, anger, and sadness. Analyzing the speech characteristics associated with each emotion yielded consistent results in both intensity and spectrum; the experimental procedure, final results, and supporting evidence are presented. Finally, the experiments demonstrate the usefulness of the proposed method.
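The abstract reports consistent differences in intensity and spectrum across the four emotions. The paper's actual feature extraction is not specified here; as a minimal sketch, a frame-level RMS intensity contour is the kind of intensity feature such an analysis could start from (the frame and hop sizes below are illustrative, not from the paper):

```python
import math

def frame_rms(samples, frame_len=400, hop=160):
    """Frame-level RMS intensity contour of a speech signal.
    Frame/hop of 400/160 samples correspond to 25 ms / 10 ms at 16 kHz,
    a common but here purely illustrative choice."""
    out = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        out.append(math.sqrt(sum(x * x for x in frame) / frame_len))
    return out
```

Comparing such contours across recordings of the same utterance spoken with different emotions is one simple way to quantify the intensity differences the abstract mentions.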


Bilingual Voice Conversion Using Frequency Warping on Formant Space (포만트 공간에서의 주파수 변환을 이용한 이중 언어 음성 변환 연구)

  • Chae, Yi-Geun;Yun, Young-Sun;Jung, Jin Man;Eun, Seongbae
    • Phonetics and Speech Sciences / v.6 no.4 / pp.133-139 / 2014
  • This paper describes several approaches to transforming one speaker's individuality into another's by frequency warping between bilingual formant frequencies in different language environments. The proposed methods are simple, intuitive voice conversion algorithms that require no training data between the different languages. The approaches find a warping function from the source speaker's frequencies to the target speaker's frequencies in formant space, where the formant space comprises four representative monophthongs for each language. The warping functions can be represented by piecewise linear equations or an inverse matrix. The features used are pure frequency components, including magnitudes and phases, and line spectral frequencies (LSF). Experiments show that the LSF-based voice conversion methods outperform the other methods.
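One of the representations the abstract names is a piecewise linear warping function anchored at matched formant frequencies of shared monophthongs. A minimal sketch of such a function (the anchor frequencies in the usage below are hypothetical, not values from the paper):

```python
def make_warp(src_pts, tgt_pts):
    """Build a piecewise linear frequency warp from matched anchor
    frequencies (e.g. formants of monophthongs shared across speakers).
    Frequencies outside the anchor range are clamped to the end anchors."""
    pairs = sorted(zip(src_pts, tgt_pts))
    def warp(f):
        if f <= pairs[0][0]:
            return pairs[0][1]
        if f >= pairs[-1][0]:
            return pairs[-1][1]
        for (s0, t0), (s1, t1) in zip(pairs, pairs[1:]):
            if s0 <= f <= s1:
                # linear interpolation within this segment
                return t0 + (f - s0) * (t1 - t0) / (s1 - s0)
    return warp

# Hypothetical source/target formant anchors (Hz):
warp = make_warp([300, 800, 2300], [350, 900, 2500])
```

Applying `warp` to every frequency component of the source spectrum shifts the source speaker's formant structure toward the target's, which is the core idea of the warping-based conversion described above.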

Branch Algorithm for Phoneme Segmentation in Korean Speech Recognition System (한국어 음성인식 시스템에서 음소 경계 검출을 위한 Branch 알고리즘)

  • 서영완;한승진;장흥종;이정현
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.357-359 / 2000
  • Speech data segmented at the phoneme level is very important in speech recognition, synthesis, and analysis. Phonemes are generally divided into voiced and unvoiced sounds. Although voiced and unvoiced sounds differ in many characteristic ways, existing phoneme-boundary detection algorithms ignore this distinction and determine boundaries only by comparing the parameters (spectrum) of the current frame with those of the previous frame along the time axis. In this paper, we design a block-based Branch algorithm for phoneme boundary detection that takes the characteristic differences between voiced and unvoiced sounds into account. For spectral comparison within the Branch algorithm, a distance measure based on MFCCs (Mel-Frequency Cepstral Coefficients) was used, and formant frequencies were used to distinguish voiced from unvoiced sounds. Experiments on isolated words of three to four syllables achieved an accuracy of about 78%.
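The spectral-comparison step the abstract describes, an MFCC-based distance between adjacent frames, can be sketched as follows. This is only the distance measure and a naive thresholding step, not the paper's block-based Branch algorithm itself; the threshold is a tuning parameter I introduce for illustration:

```python
import math

def mfcc_distance(frame_a, frame_b):
    """Euclidean distance between two MFCC vectors; a large jump
    between adjacent frames suggests a phoneme boundary."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)))

def boundary_frames(mfcc_frames, threshold):
    """Indices of frames where the distance to the previous frame
    exceeds the (hypothetical) threshold."""
    return [i + 1 for i in range(len(mfcc_frames) - 1)
            if mfcc_distance(mfcc_frames[i], mfcc_frames[i + 1]) > threshold]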


Phoneme Separation and Establishment of Time-Frequency Discriminative Pattern on Korean Syllables (음절신호의 음소 분리와 시간-주파수 판별 패턴의 설정)

  • 류광열
    • The Journal of Korean Institute of Communications and Information Sciences / v.16 no.12 / pp.1324-1335 / 1991
  • In this paper, phoneme separation and the establishment of discriminative patterns for Korean phonemes are studied experimentally. The separation uses parameters such as pitch extraction, the glottal peak pulse width of each pitch period, speech duration, envelope, and amplitude bias. The first pitch period is extracted from deviations of the glottal peak and width, energy, and normalization with a bias at the top of the vowel envelope; adjacent pitch periods are then traced across the whole vowel. For vowels, a method that reduces gliding patterns and makes vowel distinction possible using only the second formant is proposed, and a shrunken pitch waveform independent of pitch length is estimated. Patterns of envelope, spectrum, and shrunken waveform are detected, along with an analysis method based on the mutual relations among phonemes and the manners of articulation of consonants. Experimental results show discrimination rates of 90% for vowel phonemes and 80% and 60% for initial and final consonants, respectively.
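The paper's pitch extraction tracks glottal peak deviations, which is not reproducible from the abstract alone. As a simpler stand-in that illustrates what pitch extraction computes, here is a generic autocorrelation pitch estimator (the sample rate and search range are my assumptions, not the paper's):

```python
def autocorr_pitch(samples, sr, fmin=60, fmax=400):
    """Generic autocorrelation pitch estimator: pick the lag in the
    plausible pitch range whose autocorrelation is largest.
    (The paper itself uses glottal-peak tracking, not this method.)"""
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_val = lo, float('-inf')
    for lag in range(lo, min(hi, len(samples) - 1) + 1):
        val = sum(samples[i] * samples[i + lag]
                  for i in range(len(samples) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return sr / best_lag
```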


Analyzing the Acoustic Elements and Emotion Recognition from Speech Signal Based on DRNN (음향적 요소분석과 DRNN을 이용한 음성신호의 감성 인식)

  • Sim, Kwee-Bo;Park, Chang-Hyun;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.1 / pp.45-50 / 2003
  • Recently, robot technology has developed remarkably, and emotion recognition is necessary to make an intimate robot. This paper presents a simulator, and simulation results, that recognize and classify emotions by learning pitch patterns. Because pitch alone is not sufficient for recognizing emotion, acoustic elements were added; to this end, we analyze the relation between emotion and acoustic elements. The simulator is composed of a DRNN (Dynamic Recurrent Neural Network) and a feature-extraction stage; the DRNN is the learning algorithm for the pitch pattern.

2.4kbps Speech Coding Algorithm Using the Sinusoidal Model (정현파 모델을 이용한 2.4kbps 음성부호화 알고리즘)

  • 백성기;배건성
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.3A / pp.196-204 / 2002
  • Sinusoidal Transform Coding (STC) is a vocoding scheme based on a sinusoidal model of the speech signal. Low bit-rate speech coding based on the sinusoidal model represents and synthesizes speech with the fundamental frequency and its harmonic elements, the spectral envelope, and phase in the frequency domain. In this paper, we propose a 2.4 kbps low-rate speech coding algorithm using the sinusoidal model. In the proposed coder, the pitch frequency is estimated by choosing the frequency that minimizes the mean squared error between the speech synthesized from all spectral peaks and the speech synthesized from the chosen frequency and its harmonics. The spectral envelope is estimated using the SEEVOC (Spectral Envelope Estimation VOCoder) algorithm and the discrete all-pole model. The phase information is obtained from the time of pitch-pulse occurrence, i.e., the onset time, as well as the phase of the vocal-tract system. Experimental results show that the synthetic speech preserves both the formant and phase information of the original speech very well. The performance of the coder was evaluated with a MOS test based on informal listening, achieving a MOS score of over 3.1.
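The pitch search described above fits a harmonic grid to the measured spectral peaks. A much-simplified sketch of that idea, scoring each candidate fundamental by the squared distance from every peak to its nearest harmonic (the coder's actual criterion compares synthesized waveforms, and the frequencies below are hypothetical):

```python
def best_f0(peak_freqs, candidates):
    """Pick the candidate fundamental whose harmonic grid lies closest,
    in squared error, to the measured spectral peaks. A simplified
    stand-in for the coder's analysis-by-synthesis pitch search."""
    def err(f0):
        total = 0.0
        for p in peak_freqs:
            k = max(1, round(p / f0))   # nearest harmonic index
            total += (p - k * f0) ** 2
        return total
    return min(candidates, key=err)
```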

A Method of Learning and Recognition of Vowels by Using Neural Network (신경망을 이용한 모음의 학습 및 인식 방법)

  • Shim, Jae-Hyoung;Lee, Jong-Hyeok;Yoon, Tae-Hoon;Kim, Jae-Chang;Lee, Yang-Sung
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.11 / pp.144-151 / 1990
  • In this work, the neural network model of Ohotomo et al. for learning and recognizing vowels is modified to reduce both the learning time and the possibility of incorrect recognition. In the modification, the finite bandwidths of the formant frequencies of vowels are taken into consideration when coding input patterns. Computer simulations show that the modification reduces the possibility of incorrect recognition by about 30% and the learning time by about 7%.


The Characteristics of the Vocalization of the Female News Anchors (여성 뉴스 앵커의 발성 특성 분석)

  • Kyon, Doo-Heon;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.30 no.7 / pp.390-395 / 2011
  • This paper studies common voice parameters through voice analysis of each station's weekday-evening female main news anchors, along with the relative differences in voice and sound among stations. To examine voice characteristics, six voice parameters were analyzed; the anchors of each station showed distinctive voice and phonation characteristics in all areas except speech rate, and there were also differences in sound systems. The major analysis parameters were basic pitch, the tone of the first formant and pitch ratio, the level of closeness by pitch bandwidth, the type of sentence closing via average pitch position within the pitch bandwidth, average speech rate, and acoustic tone analysis via energy distribution by frequency band. The analyzed values and results can serve as reference criteria for the phonation characteristics of domestic female news anchors.

Phoneme Segmentation in Consideration of Speech feature in Korean Speech Recognition (한국어 음성인식에서 음성의 특성을 고려한 음소 경계 검출)

  • 서영완;송점동;이정현
    • Journal of Internet Computing and Services / v.2 no.1 / pp.31-38 / 2001
  • A speech database built at the phoneme level is significant for studies of speech recognition, synthesis, and analysis. Phonemes consist of voiced and unvoiced sounds. Although there are many feature differences between voiced and unvoiced sounds, traditional algorithms for detecting the boundary between phonemes do not reflect them, and determine the boundary by comparing the parameters of the current frame with those of the previous frame in the time domain. In this paper, we propose the assort algorithm, a block-based method that reflects the feature differences between voiced and unvoiced sounds for phoneme segmentation. The assort algorithm uses an MFCC (Mel-Frequency Cepstral Coefficient) based distance as its spectral comparison measure, and uses energy, zero-crossing rate, spectral energy ratio, and formant frequency to separate voiced sounds from unvoiced sounds. In our experiment, the proposed system showed about 79% precision on isolated words of three or four syllables, an improvement of about 8% in precision over existing phoneme-segmentation systems.
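Two of the voiced/unvoiced cues the abstract lists, short-time energy and zero-crossing rate, can be sketched as a simple frame classifier. Voiced speech tends to be high-energy and low-ZCR; the thresholds below are tuning parameters I introduce for illustration, not values from the paper (which additionally uses spectral energy ratio and formant frequency):

```python
def is_voiced(frame, energy_thresh, zcr_thresh):
    """Classify a frame as voiced using short-time energy and
    zero-crossing rate: voiced speech is high-energy, low-ZCR.
    Both thresholds are hypothetical tuning parameters."""
    energy = sum(x * x for x in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:])
              if (a >= 0) != (b >= 0)) / (len(frame) - 1)
    return energy > energy_thresh and zcr < zcr_thresh
```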


A Study on Fuzziness Parameter Selection in Fuzzy Vector Quantization for High Quality Speech Synthesis (고음질의 음성합성을 위한 퍼지벡터양자화의 퍼지니스 파라메타선정에 관한 연구)

  • 이진이
    • Journal of the Korean Institute of Intelligent Systems / v.8 no.2 / pp.60-69 / 1998
  • This paper proposes a speech synthesis method using fuzzy vector quantization (FVQ), and studies how to choose the fuzziness value that optimizes the performance of FVQ so that the synthesized speech is closer to the original. When FVQ is used to synthesize speech, the analysis stage generates membership values representing the degree to which an input speech pattern matches each pattern in the codebook, and the synthesis stage reproduces the speech using those membership values, the fuzziness value, and the fuzzy c-means operation. Simulation comparing FVQ and VQ synthesizers shows that, although the FVQ codebook is half the size of the VQ codebook, the performance of FVQ is almost equal to that of VQ; this implies that FVQ can halve the codebook memory while matching VQ's performance in speech synthesis. We also found that, to optimize FVQ for maximum SQNR in the synthesized speech, the fuzziness value should be small when the variance of the analysis frame is relatively large, and large when it is small. Comparing the frequency-domain spectrograms of speech synthesized by VQ and FVQ, the spectral bands (formant and pitch frequencies) of FVQ-synthesized speech are closer to those of the original speech than those obtained with VQ.
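The analysis/synthesis pair the abstract describes can be sketched with standard fuzzy-c-means memberships: each input vector gets a soft membership in every codeword, and synthesis is the membership-weighted sum of codewords. The codebook and fuzziness value below are illustrative, not the paper's:

```python
def fvq_memberships(x, codebook, m=2.0):
    """Fuzzy-c-means style membership of vector x in each codeword.
    The fuzziness m controls how soft the assignment is (m -> 1
    approaches hard VQ); m=2.0 here is an illustrative default."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) or 1e-12
    d = [dist2(x, c) for c in codebook]
    expo = 1.0 / (m - 1.0)
    return [1.0 / sum((dj / dk) ** expo for dk in d) for dj in d]

def fvq_synthesize(x, codebook, m=2.0):
    """Reconstruct x as the membership-weighted sum of codewords."""
    u = fvq_memberships(x, codebook, m)
    dim = len(codebook[0])
    return [sum(u[j] * codebook[j][i] for j in range(len(codebook)))
            for i in range(dim)]
```

Because soft memberships interpolate between codewords, a smaller codebook can still cover the pattern space, which is consistent with the abstract's finding that FVQ matches VQ with half the codebook size.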
