• Title/Summary/Keyword: Voice classification


A Study on the Correlation Between Sasang Constitution and Sound Characteristics Used Harmonics and Formant Bandwidth (Harmonics(배음)와 Formant Bandwidth(포먼트 폭)를 이용한 음성특성(音聲特性)과 사상체질간(四象體質間)의 상관성(相關性) 연구(硏究))

  • Park, Sung-Jin;Kim, Dal-Rae
    • Journal of Sasang Constitutional Medicine
    • /
    • v.16 no.1
    • /
    • pp.61-73
    • /
    • 2004
  • This study investigates the correlation between Sasang constitutional groups and voice characteristics using a voice analysis system (CSL), focusing on harmonics, formant frequency, and formant bandwidth. The subjects were 71 males, classified into three groups: Soeumin, Soyangin, and Taeumin. Constitution was determined in two ways: the QSCC II (Questionnaire for the Sasang Constitution Classification II) and an interview with a specialist in Sasang constitution. The 71 subjects were categorized into 31 Soeumin, 18 Soyangin, and 22 Taeumin. Pitch approximates the fundamental frequency (F0) of the voice. Shimmer in dB evaluates the period-to-period variability of the peak-to-peak amplitude within the analyzed voice sample. The FFT (Fast Fourier Transform) function in CSL displays the sampled voice as harmonics; H1 is the first peak and H2 the second. The amplitude difference H1-H2 characterizes the speaker's phonation type, while formant frequency and bandwidth characterize the speaker's vocal tract, so these were used as the voice parameters. The vowel /e/ was recorded from every subject with a microphone and analyzed with CSL; the Power Spectrum and Formant History menus in CSL display harmonics and formant frequency and bandwidth. The results on the correlation between Sasang constitutional groups and voice parameters are as follows: 1. There is no significant difference in the harmonic amplitude difference (H1-H2) among the three groups. 2. There is a significant difference between the Soeumin and Soyangin groups in formant frequency 1 and formant bandwidth 1 (p<0.05); no other parameter reached significance.
Based on the formant bandwidth difference, the Soyangin group appears to have a clearer and brighter voice than the Soeumin group, a result consistent with the context of "Dongyi Suse Bowon" and "Sasangimhejinam".
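
The H1-H2 measurement described above is straightforward to sketch numerically: take the FFT of a windowed vowel segment, read off the amplitudes of the first two harmonic peaks, and subtract them in dB. A minimal illustration on a synthetic tone follows; this is not the CSL software used in the study, and the helper name `h1_h2_db` is hypothetical:

```python
import numpy as np

def h1_h2_db(signal, fs, f0_est):
    """Amplitude difference (dB) between the first two harmonics,
    read off an FFT magnitude spectrum (illustrative helper)."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    def peak_db(f):
        # search a +/-20 Hz window around the expected harmonic
        band = (freqs > f - 20) & (freqs < f + 20)
        return 20 * np.log10(spec[band].max())

    return peak_db(f0_est) - peak_db(2 * f0_est)

# synthetic /e/-like tone: F0 = 120 Hz, second harmonic at half amplitude
fs, f0 = 16000, 120.0
t = np.arange(fs) / fs
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
print(round(h1_h2_db(x, fs, f0), 1))  # 6.0 (i.e. 20*log10(1.0/0.5) dB)
```

A 2:1 amplitude ratio between H1 and H2 comes out as about 6 dB, which is the scale on which phonation-type differences are usually discussed.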


Voice Activity Detection Based on Real-Time Discriminative Weight Training (실시간 변별적 가중치 학습에 기반한 음성 검출기)

  • Chang, Sang-Ick;Jo, Q-Haing;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.4
    • /
    • pp.100-106
    • /
    • 2008
  • In this paper, we apply discriminative weight training employing the power spectral flatness measure (PSFM) to statistical model-based voice activity detection (VAD) in various noise environments. In our approach, the VAD decision rule is expressed as the geometric mean of an optimally weighted likelihood ratio test (LRT) based on a minimum classification error (MCE) method, which differs from previous work in that different weights are assigned to each frequency bin and noise environment depending on the PSFM. Experimental results show the proposed approach to be effective for statistical model-based VAD using the LRT.
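
The two ingredients of this decision rule can be sketched as follows: a spectral flatness measure, and a weighted geometric mean of per-bin likelihood ratios compared against a threshold. The weights here are simply passed in as an argument; in the paper they would come from MCE training conditioned on the PSFM, which this illustrative sketch omits:

```python
import numpy as np

def psfm(power_spec):
    """Power spectral flatness measure: geometric mean over arithmetic
    mean of the power spectrum, in (0, 1]. Near 1 for flat (noise-like)
    spectra, near 0 for peaky (harmonic) ones."""
    p = np.maximum(np.asarray(power_spec, float), 1e-12)
    return np.exp(np.log(p).mean()) / p.mean()

def vad_decide(lr_bins, weights, threshold=1.0):
    """Weighted geometric mean of per-bin likelihood ratios, thresholded.
    Sketch of an LRT-based VAD decision; weights are given, not trained."""
    lr = np.maximum(np.asarray(lr_bins, float), 1e-12)
    w = np.asarray(weights, float)
    gmean = np.exp((w * np.log(lr)).sum() / w.sum())
    return gmean > threshold
```

A flat spectrum yields a PSFM near 1 and a strongly harmonic one a PSFM near 0, which is why the PSFM is a natural quantity on which to condition the per-bin weights.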

Musical Characteristics Analysis of Lee Mi-Ja and Pati Kim by Sasang Constitution Classification Method in Appearance & Manner of Speaking (용모사기론적 사상체질 분류를 통한 이미자, 패티김의 음악적 특징 분석)

  • Lee, Se-Hwan;Kim, Bong-Hyun;Cho, Dong-Uk
    • The KIPS Transactions:PartB
    • /
    • v.17B no.1
    • /
    • pp.23-30
    • /
    • 2010
  • This paper analyzes the musical characteristics of Lee Mi-Ja and Pati Kim, two female singers who have each marked the 50th anniversary of their careers as popular performers, by applying a Sasang constitution analysis method. Among the Sasang constitution classification methods, we apply the appearance-and-manner-of-speaking approach, which classifies constitution from facial and voice features. Constitution was classified quantitatively through a technological application of this approach, using visual comparison of face and voice at the same ages. Based on the resulting constitutional classification of the two singers, their musical characteristics were then analyzed by comparing facial shape and the features of their tone.

A Study on the sound characteristic and B.M.I by Sasang Constitution (사상체질별 음향특성(音響特性)과 신체질량지수(BMI)에 관(關)한 연구(硏究))

  • Kim, Dal-Rae
    • Journal of Sasang Constitutional Medicine
    • /
    • v.16 no.1
    • /
    • pp.53-60
    • /
    • 2004
  • Purpose: This study aims to find the characteristics of voice quality by classifying sound characteristics and B.M.I. according to Sasang constitution. Methods: To build a consensus notion of each Sasang constitution's voice, voices were classified along four axes: clear/hoarse, high/low, powerful/powerless, fast/slow. Results: The voice quality of the Soyangin group was classified as powerful and fast, that of the Taeumin group as powerful, hoarse, and low, and that of the Soeumin group as powerless and clear. The mean B.M.I. was 21.4 for the Soeumin group and 26.3 for the Taeumin group. Conclusions: 1. Taeumin B.M.I. was significantly higher than Soeumin B.M.I. 2. A high B.M.I. (26.3) suggests Taeumin. 3. A low B.M.I. (21.4) suggests Soeumin. 4. The voice quality of the Soyangin group was classified as clear and fast, or strong and clear, that of the Taeumin group as powerful and hoarse, and that of the Soeumin group as powerless and low.
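
The B.M.I. figures above translate into a simple rule of thumb. The sketch below uses the reported group means (21.4 and 26.3) directly as cut-offs; this is an illustrative simplification, not the paper's statistical procedure, and the function names are my own:

```python
def bmi(weight_kg, height_m):
    # body mass index: weight in kilograms over height in metres squared
    return weight_kg / height_m ** 2

def constitution_hint(b):
    """Toy lookup based on the group means reported above; values
    between the two means are left undecided."""
    if b >= 26.3:
        return "Taeumin"
    if b <= 21.4:
        return "Soeumin"
    return "indeterminate"

print(constitution_hint(bmi(80, 1.70)))  # 80/2.89 ~ 27.7 -> Taeumin
```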


Analysis of Voice Quality Features and Their Contribution to Emotion Recognition (음성감정인식에서 음색 특성 및 영향 분석)

  • Lee, Jung-In;Choi, Jeung-Yoon;Kang, Hong-Goo
    • Journal of Broadcast Engineering
    • /
    • v.18 no.5
    • /
    • pp.771-774
    • /
    • 2013
  • This study investigates the relationship between voice quality measurements and emotional states, in addition to conventional prosodic and cepstral features. Open quotient, harmonics-to-noise ratio, spectral tilt, spectral sharpness, and band energy were analyzed as voice quality features, and prosodic features related to fundamental frequency and energy were also examined. ANOVA tests and Sequential Forward Selection were used to evaluate significance and verify performance. Classification experiments show that using the proposed features increases overall accuracy; in particular, confusion between happy and angry decreases. The results also show that adding voice quality features to conventional cepstral features leads to an increase in performance.

Application of Machine Learning on Voice Signals to Classify Body Mass Index - Based on Korean Adults in the Korean Medicine Data Center (머신러닝 기반 음성분석을 통한 체질량지수 분류 예측 - 한국 성인을 중심으로)

  • Kim, Junho;Park, Ki-Hyun;Kim, Ho-Seok;Lee, Siwoo;Kim, Sang-Hyuk
    • Journal of Sasang Constitutional Medicine
    • /
    • v.33 no.4
    • /
    • pp.1-9
    • /
    • 2021
  • Objectives The purpose of this study was to determine whether an individual's Body Mass Index (BMI) class can be predicted by machine-learning analysis of the voice data constructed at the Korean Medicine Data Center (KDC). Methods We propose a convolutional neural network (CNN)-based BMI classification model. The subjects were Korean adults who had completed voice recording and BMI measurement in 2006-2015 among the data established at the Korean Medicine Data Center. Of these, 2,825 samples were used to train the model and 566 to assess its performance. Mel-frequency cepstral coefficients (MFCC) extracted from vowel utterances were used as the CNN input features. The model predicts four groups defined by gender and BMI criteria: overweight male, normal male, overweight female, and normal female. Results & Conclusions Performance was evaluated using F1-score and accuracy. For the four-group prediction, the average accuracy was 0.6016 and the average F1-score was 0.5922. Although the model discriminated gender well, follow-up studies are needed to improve the discrimination of BMI within gender. As research on deep learning is active, performance improvement is expected in future work.
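
The MFCC features used as CNN input can be computed roughly as follows: power spectrum of a windowed frame, a mel-spaced triangular filterbank, a log, then a DCT. This is a bare-bones single-frame version for illustration (real pipelines, e.g. librosa, add pre-emphasis, framing, and liftering); it is not the KDC preprocessing code:

```python
import numpy as np

def mfcc_frame(frame, fs, n_mels=26, n_mfcc=13):
    """Minimal MFCC for one frame: |FFT|^2 -> mel filterbank -> log -> DCT."""
    mag = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    power = mag ** 2 / len(frame)

    # mel filterbank: triangular filters evenly spaced on the mel scale
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(fs / 2.0), n_mels + 2))
    bins = np.floor((len(frame) + 1) * hz_pts / fs).astype(int)
    fbank = np.zeros((n_mels, len(mag)))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_energy = np.log(fbank @ power + 1e-10)
    # DCT-II to decorrelate the log filterbank energies
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return dct @ log_energy

fs = 16000
frame = np.sin(2 * np.pi * 200 * np.arange(512) / fs)  # toy vowel-like frame
coeffs = mfcc_frame(frame, fs)
print(coeffs.shape)  # (13,)
```

For the model input described above, such 13-dimensional vectors would be stacked over the frames of a vowel utterance.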

Voice Activity Detection in Noisy Environment using Speech Energy Maximization and Silence Feature Normalization (음성 에너지 최대화와 묵음 특징 정규화를 이용한 잡음 환경에 강인한 음성 검출)

  • Ahn, Chan-Shik;Choi, Ki-Ho
    • Journal of Digital Convergence
    • /
    • v.11 no.6
    • /
    • pp.169-174
    • /
    • 2013
  • In speech recognition, a major source of performance degradation is the mismatch between the training and recognition environments. Silence feature normalization is used to reduce this environmental mismatch; however, existing silence feature normalization raises the energy level of silence intervals at low signal-to-noise ratios, lowering the accuracy of voice/non-voice classification and degrading recognition performance. This paper proposes a speech detection method robust in noisy environments, using speech energy maximization and silence feature normalization. At high signal-to-noise ratios, the proposed method maximizes the speech energy so that the features are less affected by noise; at low signal-to-noise ratios, it improves recognition performance through the cepstral feature distributions of the voice and non-voice classes. Recognition experiments show improved performance compared to the conventional method.

Speech/Music Signal Classification Based on Spectrum Flux and MFCC For Audio Coder (오디오 부호화기를 위한 스펙트럼 변화 및 MFCC 기반 음성/음악 신호 분류)

  • Sangkil Lee;In-Sung Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.239-246
    • /
    • 2023
  • In this paper, we propose an open-loop algorithm to classify speech and music signals using spectral flux parameters and Mel-Frequency Cepstral Coefficient (MFCC) parameters for an audio coder. MFCCs were used as short-term feature parameters to increase responsiveness, and spectral fluxes as long-term feature parameters to improve accuracy. The overall speech/music classification decision combines the short-term and long-term classification methods. A Gaussian Mixture Model (GMM) was used for pattern recognition, with the optimal GMM parameters estimated by the Expectation-Maximization (EM) algorithm. The proposed combined long-term/short-term speech/music classification method showed an average classification error rate of 1.5% on various audio sources, improving the error rate by 0.9% over the short-term-only method and by 0.6% over the long-term-only method. Compared with the Unified Speech and Audio Coding (USAC) audio classification method, the proposed method improved the classification error rate by 9.1% on percussive music signals with attacks and by 5.8% on speech signals.
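
Of the two feature families, spectral flux is the simpler to illustrate: it measures how much the normalized magnitude spectrum changes from frame to frame, and speech typically shows larger, more irregular flux than steady music. The function name and normalization below are my own; the paper's exact definition may differ:

```python
import numpy as np

def spectral_flux(frames):
    """L2 distance between successive normalized magnitude spectra.
    frames: 2-D array with one time frame per row."""
    specs = np.abs(np.fft.rfft(frames, axis=1))
    specs /= np.maximum(specs.sum(axis=1, keepdims=True), 1e-12)
    return np.linalg.norm(np.diff(specs, axis=0), axis=1)

# steady tone -> near-zero flux; frequency-hopping tone -> larger flux
fs, n = 8000, 256
t = np.arange(n) / fs
steady = np.vstack([np.sin(2 * np.pi * 440 * t)] * 4)
moving = np.vstack([np.sin(2 * np.pi * f * t) for f in (300, 600, 900, 1200)])
print(spectral_flux(steady).max() < spectral_flux(moving).min())  # True
```

In a long-term feature, such per-frame flux values would be aggregated (e.g. averaged or thresholded) over a window of many frames before the classification decision.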

A Phonetic Study of 'Sasang Constitution' (음성학적으로 본 사상체질)

  • Moon Seung-Jae;Tak Ji-Hyun;Hwang Hyejeong
    • MALSORI
    • /
    • v.55
    • /
    • pp.1-14
    • /
    • 2005
  • Sasang Constitution, one branch of oriental medicine, claims that people can be classified into four different 'constitutions': Taeyang, Taeum, Soyang, and Soeum. This study investigates whether the classification of the constitutions could be made accurately based solely on people's voices, by analyzing data from 46 different voices whose constitutions were already determined. Seven source-related parameters and four filter-related parameters were analyzed phonetically, and a GMM (Gaussian mixture model) was tried on the data. Both the phonetic analyses and the GMM showed that all the parameters (except one) failed to distinguish the constitutions successfully, and even the single exception, B2 (the bandwidth of the second formant), did not provide sufficient reason to serve as a basis for distinction. This result suggests one of two conclusions: either the Sasang constitutions cannot be substantiated with phonetic characteristics of people's voices with reliable accuracy, or other parameters, not conventionally proposed, have yet to be found.
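
The GMM tried on the data can be illustrated with a toy one-dimensional EM fit. This is a from-scratch teaching sketch for a single feature such as B2; in practice one would use e.g. scikit-learn's GaussianMixture on the full parameter set:

```python
import numpy as np

def gmm_em_1d(x, k=2, iters=100):
    """EM for a one-dimensional Gaussian mixture (teaching sketch).
    Means are initialized at evenly spaced quantiles of the data."""
    x = np.asarray(x, float)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
    return mu, var, pi
```

Fitting one such mixture per constitutional group and comparing likelihoods is the standard way a GMM is turned into a classifier; the study's negative result means those likelihoods did not separate the groups.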


Analysis of the Relationship Between Sasang Constitutional Groups and Speech Features Based on a Listening Evaluation of Voice Characteristics (목소리 특성의 청취 평가에 기초한 사상체질과 음성 특징의 상관관계 분석)

  • Kwon, Chulhong;Kim, Jongyeol;Kim, Keunho;Jang, Junsu
    • Phonetics and Speech Sciences
    • /
    • v.4 no.4
    • /
    • pp.71-77
    • /
    • 2012
  • Sasang constitution experts use voice characteristics as an auxiliary measure when deciding a person's constitutional group. This study aims to establish a relationship between speech features and the constitutional groups through subjective listening evaluation of voice characteristics. A speech database of 841 speakers, whose constitutional groups had already been diagnosed by Sasang constitution experts, was constructed. Speech features related to the speech source and the vocal tract filter were extracted from five vowels and one sentence. Speech features statistically significant for classifying the groups were analyzed using SPSS. The features that contributed to constitution classification were speaking rate, Energy, A1, A2, A3, H1, H2, H4, and CPP for males in their 20s; F0_mean, CPP, SPI, HNR, Shimmer, Energy, A1, A2, A3, H1, H2, and H4 for females in their 20s; Energy, A1, A2, A3, H1, H2, H4, and CPP for males in their 60s; and Jitter, HNR, CPP, and SPI for females in their 60s. Experimental results show that speech technology is useful in classifying constitutional groups.