• Title/Abstract/Keyword: voice quality features

Search results: 43 items (processing time 0.016 sec)

정상 음성의 목소리 특성의 정성적 분류와 음성 특징과의 상관관계 도출 (Qualitative Classification of Voice Quality of Normal Speech and Derivation of its Correlation with Speech Features)

  • 김정민;권철홍
    • 말소리와 음성과학 / Vol. 6, No. 1 / pp.71-76 / 2014
  • In this paper, the voice quality of normal speech is qualitatively classified along five components: breathy, creaky, rough, nasal, and thin/thick voice. To determine whether a correlation exists between subjective and objective measures of voice, each voice is perceptually evaluated on a 1/2/3 scale by speech processing specialists and acoustically analyzed with speech analysis tools such as Praat, MDVP, and VoiceSauce. The speech parameters include features related to the speech source and the vocal tract filter. Statistical analysis uses a two-independent-samples non-parametric test. Experimental results show significant correlations between the speech feature parameters and the components of voice quality.
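The "two-independent-samples non-parametric test" mentioned above is commonly the Mann-Whitney U test; the abstract does not name the test, so that choice, and the spectral-tilt values below, are assumptions for illustration only.

```python
import math

def mann_whitney_u(xs, ys):
    """Two-independent-samples non-parametric test (Mann-Whitney U),
    two-sided p-value via the normal approximation."""
    u = sum((x > y) + 0.5 * (x == y) for x in xs for y in ys)
    n1, n2 = len(xs), len(ys)
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean_u) / sd_u
    return u, math.erfc(abs(z) / math.sqrt(2))  # two-sided p

# Hypothetical spectral-tilt values for voices rated breathy vs. modal
breathy = [9.8, 11.2, 10.5, 12.1, 9.3, 10.9]
modal = [4.1, 5.6, 3.9, 6.2, 5.0, 4.7]
u, p = mann_whitney_u(breathy, modal)
print(u, round(p, 4))
```

A small p-value suggests the acoustic feature separates the two perceptual groups, which is the kind of correlation the paper tests for.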

음성감정인식에서 음색 특성 및 영향 분석 (Analysis of Voice Quality Features and Their Contribution to Emotion Recognition)

  • 이정인;최정윤;강홍구
    • 방송공학회논문지 / Vol. 18, No. 5 / pp.771-774 / 2013
  • This study examines the relationship between emotional state and voice quality characteristics, and additionally performs emotion recognition by combining them with cepstral features. Features including open quotient, harmonic-to-noise ratio, spectral tilt, and spectral sharpness were applied for voice quality detection, along with commonly used prosodic features based on pitch and energy. ANOVA was used to examine the validity of each feature vector, and the sequential forward selection method was applied to analyze the final emotion recognition performance. The results confirm that the proposed features improve performance, with error reductions especially for anger and happiness. Recognition performance also improved when the voice quality features were combined with cepstral features.
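Sequential forward selection, as used above, greedily adds whichever feature most improves a score until no remaining feature helps. A minimal sketch; the feature names and the additive toy scoring function (standing in for recognition accuracy) are invented, not taken from the paper:

```python
def sequential_forward_selection(features, score):
    """Greedily add the feature that most improves score(subset);
    stop when no candidate improves on the current best."""
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining:
        cand = max(remaining, key=lambda f: score(selected + [f]))
        cand_score = score(selected + [cand])
        if cand_score <= best:
            break
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected, best

# Invented per-feature gains: prosody helps most, voice-quality features
# add smaller gains, and one redundant feature adds nothing.
GAINS = {"pitch": 0.40, "energy": 0.10, "OQ": 0.08, "HNR": 0.05, "redundant": 0.0}
def toy_score(subset):
    return sum(GAINS[f] for f in subset)

sel, acc = sequential_forward_selection(list(GAINS), toy_score)
print(sel, acc)
```

In practice the score would be cross-validated recognition accuracy, so each step retrains the classifier; the greedy loop itself is unchanged.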

음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류 (Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features)

  • 여은정;김선희;정민화
    • 말소리와 음성과학 / Vol. 13, No. 2 / pp.57-66 / 2021
  • This paper focuses on the automatic classification of dysarthria severity based on speech intelligibility. Speech intelligibility is affected by various speech-function features such as respiration, phonation, resonance, articulation, and prosody, yet most previous studies used only a single speech-function feature for automatic severity classification. To capture the disordered characteristics of speech effectively, this paper incorporates diverse speech-function features of voice quality, prosody, and pronunciation into automatic dysarthria severity classification. Voice quality consists of jitter, shimmer, HNR, the number of voice breaks, and the degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean, standard deviation, minimum, maximum, median, 25th and 75th percentiles), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation covers phoneme accuracy (consonant accuracy, vowel accuracy, total phoneme accuracy) and vowel distortion [VSA (vowel space area), FCR (formant centralization ratio), VAI (vowel articulatory index), F2 ratio]. Automatic severity classification was performed using various feature combinations. Experimental results show that the highest performance, an F1-score of 80.15%, was obtained when all three kinds of speech-function features (voice quality, prosody, and pronunciation) were used together. This suggests that voice quality, prosody, and pronunciation features should all be considered jointly in automatic dysarthria severity classification.
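Of the voice-quality measures listed, local jitter and shimmer have simple definitions: the mean absolute difference between consecutive pitch periods (or cycle peak amplitudes), normalized by the mean. A sketch with invented period and amplitude values; the helper name `local_perturbation` is ours, not the paper's:

```python
def local_perturbation(values):
    """Local jitter/shimmer: mean absolute difference between consecutive
    cycle values, divided by the mean value (cf. Praat's 'local' variants)."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

periods = [0.0100, 0.0102, 0.0098, 0.0101, 0.0099]  # pitch periods in seconds
amplitudes = [0.80, 0.84, 0.78, 0.82]               # cycle peak amplitudes

jitter = local_perturbation(periods)
shimmer = local_perturbation(amplitudes)
print(f"jitter = {jitter:.4f}, shimmer = {shimmer:.4f}")
```

Dysarthric voices tend to show elevated jitter and shimmer relative to typical speech, which is why these measures carry information about severity.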

음향 파라미터에 의한 정서적 음성의 음질 분석 (Analysis of the Voice Quality in Emotional Speech Using Acoustical Parameters)

  • 조철우;리타오
    • 대한음성학회지:말소리 / Vol. 55 / pp.119-130 / 2005
  • The aim of this paper is to investigate acoustical characteristics of voice quality features in an emotional speech database. Six parameters are measured and compared across six emotions (normal, happiness, sadness, fear, anger, boredom) and six speakers. Both inter-speaker and intra-speaker variability are measured. Some intra-speaker consistency in parameter changes across emotions is observed, but inter-speaker consistency is not.


음원 파라미터 모델과 인공신경망을 이용한 음성장애 검출 (Screening of Voice Disorder using Source Parameter Model and Artificial Neural Network)

  • 파벨시틸;조철우;미샤파벨
    • 음성과학 / Vol. 15, No. 2 / pp.89-97 / 2008
  • A number of clinical conditions directly or indirectly affect the physical properties of the vocal folds and thereby the pressure waveforms of elicited sounds. If the relationships between these clinical conditions and voice quality are sufficiently reliable, it should be possible to detect such diseases or disorders from the voice. The focus of this paper is to determine the set of features, and their values, that characterize the state of a speaker's vocal folds. To the extent that these features capture the anatomical, physiological, and neurological aspects of the speaker, they can potentially mediate an unobtrusive approach to diagnosis. We show a new approach to this problem, supported by results obtained from two disordered-voice corpora.


국악(판소리) 발성법 (The Vocalization for Korean Traditional Song "Pansori")

  • 홍기환
    • 대한후두음성언어의학회지 / Vol. 22, No. 2 / pp.111-114 / 2011
  • All singers can develop voice trouble secondary to vocal abuse and overuse, but it is well known that traditional Korean (pansori) singers develop voice disorders more frequently than western-style singers. While laryngological concern for voice disorders arising in professional singers has received some attention, empirically motivated investigations of the underlying acoustic features of the singing voice have been relatively limited. Since singers have a good knowledge of the voice and voice training, they would hardly consent to treatment by a doctor unless he understood their desire to maximize their voice quality. This report covers breathing, the basic element, and vocalization, the essential factor, for achieving a perfect voice for pansori. The breathing is based on hypogastric breathing; its main functions are supplying the energy and power of utterance, setting the tempo of the rhythm, separating paragraphs, and controlling feelings according to the dramatic situation. Vocalization is based on general vocalization; its main uses are maintaining the singer's tone and the harmony of the cosmic dual forces.


잡음환경에 강인한 음성분류기반의 패킷손실 은닉 알고리즘 (Packet Loss Concealment Algorithm Based on Robust Voice Classification in Noise Environment)

  • 김형국;류상현
    • 한국음향학회지 / Vol. 33, No. 1 / pp.75-80 / 2014
  • Real-time VoIP networks suffer quality degradation from network impairments such as delay, jitter, and packet loss. To improve VoIP speech quality, this paper proposes a packet loss concealment algorithm based on voice classification that is robust in noisy environments. In the proposed method, various features extracted from the speech signal are analyzed, and packets arriving at the receiver are classified using adaptive thresholds obtained from this analysis. The accurate classification results are then used for packet loss concealment. Linear-prediction-based concealment of lost packets provides high-quality speech by removing the metallic artifacts that arise when packets are concealed consecutively or when lost packets are restored.
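Linear-prediction-based concealment extrapolates lost samples by running an LPC predictor, fitted to the last good samples, forward through the gap. A minimal sketch (autocorrelation method with the Levinson-Durbin recursion); real PLC adds pitch repetition, muting, and overlap-add smoothing, all omitted here:

```python
import math

def lpc(x, order):
    """LPC coefficients a[0..order] (a[0] = 1) via the autocorrelation
    method and the Levinson-Durbin recursion."""
    r = [sum(x[n] * x[n + k] for n in range(len(x) - k)) for k in range(order + 1)]
    a, err = [1.0], r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= 1.0 - k * k
    return a

def conceal(history, n_missing, order=8):
    """Extrapolate n_missing lost samples: x[n] = -sum_j a[j] * x[n-j]."""
    a = lpc(history, order)
    buf = list(history)
    for _ in range(n_missing):
        buf.append(-sum(a[j] * buf[-j] for j in range(1, order + 1)))
    return buf[len(history):]

# Demo: conceal a 2 ms gap in a synthetic 'voiced' 200 Hz tone at 8 kHz
tone = [math.sin(2 * math.pi * 200 * n / 8000) for n in range(160)]
gap = conceal(tone, 16)
print([f"{v:.3f}" for v in gap[:4]])
```

The extrapolation tracks periodic (voiced) signals well but decays for unvoiced ones, which is one reason the paper classifies packets before choosing how to conceal them.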

동일 후적자가 산출하는 기관식도 발성(PROVOX® 발성)과 식도 발성에 대한 음향학적 및 공기역학적 특성 비교 (The Comparison of the Acoustic and Aerodynamic Characteristics of PROVOX® Voice and Esophageal Voice Produced by the Same Laryngectomee)

  • 표화영;최홍식;임성은;최성희
    • 음성과학 / Vol. 5, No. 1 / pp.121-139 / 1999
  • Our experimental subject was a laryngectomee who had undergone total laryngectomy with PROVOX® insertion and learned esophageal speech after the surgery, so he could produce both PROVOX® voice and esophageal voice. With this subject's productions, we compared the acoustic and aerodynamic characteristics of the two voices under the same physical conditions in the same person. The fundamental frequency of esophageal voice was 137.2 Hz, and that of PROVOX® voice was 97.5 Hz. PROVOX® voice showed lower jitter, shimmer, and NHR than esophageal voice, indicating better voice quality. In spectrographic analysis, the formation of formants and pseudo-formants was more distinct in esophageal voice, while several temporal aspects of acoustic features, such as VOT and closure duration, were closer to normal voice in PROVOX® voice. During sentence utterance, esophageal voice showed longer pause or silence durations than PROVOX® voice. Maximum phonation time and mean flow rate of PROVOX® voice were much longer and larger than those of esophageal voice, but the mean and range of sound pressure level, subglottic pressure, and voice efficiency were similar for the two voices. Glottal resistance of esophageal voice was much larger than that of PROVOX® voice, which in turn showed larger glottal resistance than normal voice.


운율이식을 통해 나타난 감정인지 양상 연구 (A Study on the Perceptual Aspects of an Emotional Voice Using Prosody Transplantation)

  • 이서배
    • 대한음성학회지:말소리 / No. 62 / pp.19-32 / 2007
  • This study investigated the perception of emotional voices by transplanting some or all of the prosodic aspects (pitch, duration, and intensity) of utterances produced with emotional voices onto those produced with normal voices, and vice versa. A listening evaluation by 24 raters revealed that prosody had a greater effect than segmental and vocal quality on the perception of emotion. The relative influence of prosody versus segments and vocal quality varied by emotion type: for fear, prosodic elements had far greater influence than segmental and vocal quality elements, whereas for happy voices segmental and vocal elements had as much effect as prosody. Among the prosodic features, contribution to the perception of emotion followed the descending order of pitch, duration, and intensity. As for utterance length, emotion was perceived more effectively in long utterances than in short ones.


Kernel PCA를 이용한 GMM 기반의 음성변환 (GMM Based Voice Conversion Using Kernel PCA)

  • 한준희;배재현;오영환
    • 대한음성학회지:말소리 / No. 67 / pp.167-180 / 2008
  • This paper describes a novel spectral envelope conversion method based on a Gaussian mixture model (GMM). The core idea is rearranging source feature vectors in the input space into transformed feature vectors in a feature space, for better GMM modeling of the source and target features. The quality of statistical modeling depends on the distribution and the dimension of the data; the proposed method transforms both, allowing the same data to be modeled under different configurations. Because the converted feature vectors must lie in the input space, only the source feature vectors are rearranged in the feature space using kernel PCA, while the target feature vectors remain unchanged in the joint pdf of source and target features. Experimental results show that the proposed method outperforms the conventional GMM-based conversion method in various training environments.
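GMM-based conversion maps a source spectral vector to the conditional mean of the target given the source under a joint density. A sketch of the single-Gaussian special case (one mixture component, no kernel PCA, and invented "parallel corpus" data); with M components this generalizes to the usual responsibility-weighted GMM mapping:

```python
import numpy as np

def fit_converter(X, Y):
    """Fit a joint Gaussian over [x; y] and return the regression
    x -> E[y | x] = mu_y + C_yx C_xx^{-1} (x - mu_x)."""
    Z = np.hstack([X, Y])
    mu = Z.mean(axis=0)
    C = np.cov(Z, rowvar=False)
    d = X.shape[1]
    mu_x, mu_y = mu[:d], mu[d:]
    A = C[d:, :d] @ np.linalg.inv(C[:d, :d])
    return lambda x: mu_y + A @ (x - mu_x)

# Invented parallel features: target is an affine map of source plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Y = X @ np.array([[2.0, 0.0], [0.0, -1.0]]) + 1.0 + 0.01 * rng.normal(size=(500, 2))
convert = fit_converter(X, Y)
print(convert(np.array([0.5, 0.5])))
```

The paper's contribution sits one step earlier in this pipeline: KPCA re-represents the source vectors before the joint density is fitted, so the Gaussian components model the data in a more favorable space.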
