• Title/Summary/Keyword: cepstral

Search Results: 297

Monophthong Recognition Optimizing Muscle Mixing Based on Facial Surface EMG Signals (안면근육 표면근전도 신호기반 근육 조합 최적화를 통한 단모음인식)

  • Lee, Byeong-Hyeon;Ryu, Jae-Hwan;Lee, Mi-Ran;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.3 / pp.143-150 / 2016
  • In this paper, we propose a Korean monophthong recognition method that optimizes muscle mixing based on facial surface EMG signals. We observed that EMG signal patterns and muscle activity may vary according to the Korean monophthong being pronounced. As feature extraction algorithms we use RMS, VAR, MMAV1, and MMAV2, which showed high recognition accuracy in a previous study, together with cepstral coefficients. We then classify Korean monophthongs with QDA (Quadratic Discriminant Analysis) and HMM (Hidden Markov Model). The muscle mixing is optimized using the input data in the training phase, and the optimized result is applied in the recognition phase, where new data are input and the Korean monophthong is finally recognized. Experimental results show average recognition accuracies of 85.7% with QDA and 75.1% with HMM.
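The feature-extraction and classification pipeline described above can be sketched as follows. The EMG frames, frame size, and class labels are hypothetical stand-ins, and only the RMS and VAR features with scikit-learn's QDA are shown, not the full MMAV/cepstral feature set or the muscle-mixing optimization.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def emg_features(frame):
    """Two of the time-domain features named in the abstract."""
    rms = np.sqrt(np.mean(frame ** 2))  # root mean square
    var = np.var(frame)                 # variance
    return np.array([rms, var])

# Hypothetical data: 60 EMG frames of 256 samples, 3 vowel classes
# whose amplitude differs by class so QDA has something to separate
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 60)
frames = rng.standard_normal((60, 256)) * (labels[:, None] + 1)

X = np.array([emg_features(f) for f in frames])
clf = QuadraticDiscriminantAnalysis().fit(X, labels)
acc = clf.score(X, labels)
```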

Whale Sound Reconstruction using MFCC and L2-norm Minimization (MFCC와 L2-norm 최소화를 이용한 고래소리의 재생)

  • Chong, Ui-Pil;Jeon, Seo-Yun;Hong, Jeong-Pil;Jo, Se-Hyung
    • Journal of the Institute of Convergence Signal Processing / v.19 no.4 / pp.147-152 / 2018
  • Underwater transient signals are complex, variable, and nonlinear, making accurate modeling with reference patterns difficult. We analyze one type of underwater transient signal, whale sounds, using MFCC (Mel-Frequency Cepstral Coefficients) and synthesize the sounds from the MFCCs using weighted L2-norm minimization. The whales in these experiments are Humpback whales, Right whales, Blue whales, Gray whales, and Minke whales. Twenty MFCC coefficients are extracted from the original signals in MATLAB, and the signals are reconstructed using weighted L2-norm minimization with the inverse MFCC. Finally, we find that a weighting factor of 3~4 is optimal for the reconstruction of whale sounds.
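The weighted minimum-norm step can be illustrated with a toy mel-style filterbank. The filterbank shape, sizes, and the diagonal weight vector below are illustrative assumptions, not the paper's actual MATLAB setup; note that the weights must vary across bins, since a scalar weight cancels out of the closed-form solution.

```python
import numpy as np

# Toy "mel" filterbank: 5 triangular bands over a 32-bin magnitude spectrum
n_bins, n_bands = 32, 5
M = np.zeros((n_bands, n_bins))
for i, c in enumerate(np.linspace(3, n_bins - 4, n_bands)):
    M[i] = np.maximum(0.0, 1.0 - np.abs(np.arange(n_bins) - c) / 4.0)

rng = np.random.default_rng(1)
s_true = np.abs(rng.standard_normal(n_bins))
e = M @ s_true  # band energies: all that survives the mel analysis

# Weighted minimum-L2-norm solution: argmin ||W s||_2 subject to M s = e
w = 1.0 + np.arange(n_bins) / 8.0      # hypothetical per-bin weights
W_inv2 = np.diag(1.0 / w ** 2)
s_hat = W_inv2 @ M.T @ np.linalg.solve(M @ W_inv2 @ M.T, e)
```

The solution reproduces the band energies exactly while keeping the weighted norm of the reconstructed spectrum minimal; the underdetermined bins are filled in by the weighting.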

Many-to-many voice conversion experiments using a Korean speech corpus (다수 화자 한국어 음성 변환 실험)

  • Yook, Dongsuk;Seo, HyungJin;Ko, Bonggu;Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.351-358 / 2022
  • Recently, Generative Adversarial Networks (GAN) and Variational AutoEncoders (VAE) have been applied to voice conversion that can make use of non-parallel training data. In particular, Conditional Cycle-Consistent Generative Adversarial Networks (CC-GAN) and Cycle-Consistent Variational AutoEncoders (CycleVAE) show promising results in many-to-many voice conversion among multiple speakers. However, the number of speakers has been relatively small in conventional voice conversion studies using CC-GANs and CycleVAEs. In this paper, we extend the number of speakers to 100 and analyze the performance of the many-to-many voice conversion methods experimentally. The experiments show that the CC-GAN yields 4.5% lower Mel-Cepstral Distortion (MCD) for a small number of speakers, whereas the CycleVAE yields 12.7% lower MCD in a limited training time for a large number of speakers.
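Mel-Cepstral Distortion, the evaluation metric used above, has a standard closed form; this sketch assumes already time-aligned mel-cepstrum sequences and excludes the 0th (energy) coefficient, as is conventional.

```python
import numpy as np

def mel_cepstral_distortion(mc_ref, mc_conv):
    """MCD in dB between aligned mel-cepstra (frames x order),
    skipping the 0th coefficient."""
    diff = mc_ref[:, 1:] - mc_conv[:, 1:]
    const = (10.0 / np.log(10.0)) * np.sqrt(2.0)
    return const * np.mean(np.sqrt(np.sum(diff ** 2, axis=1)))
```

In practice the converted and reference sequences are first aligned, e.g. with dynamic time warping, before the per-frame distances are averaged.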

Dialect classification based on the speed and the pause of speech utterances (발화 속도와 휴지 구간 길이를 사용한 방언 분류)

  • Jonghwan Na;Bowon Lee
    • Phonetics and Speech Sciences / v.15 no.2 / pp.43-51 / 2023
  • In this paper, we propose an approach to dialect classification based on the speed and pauses of speech utterances as well as the age and gender of the speakers. Dialect classification is an important technique for speech analysis; for example, an accurate dialect classification model can potentially improve the performance of speaker or speech recognition. According to previous studies, deep learning with Mel-Frequency Cepstral Coefficient (MFCC) features has been the dominant approach. We instead focus on the acoustic differences between regions and classify dialects using features derived from those differences: the underexplored speech rate and pause lengths, together with metadata including the age and gender of the speakers. Experimental results show that the proposed approach yields higher accuracy than using MFCC features alone, especially with the speech rate feature: incorporating all the proposed features improves accuracy from 91.02% to 97.02%.
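A minimal way to obtain pause-related quantities like those above is energy-based voice activity detection over frames; the frame sizes and threshold below are illustrative assumptions, and a real speech-rate estimate would additionally need syllable counts.

```python
import numpy as np

def voicing_profile(signal, frame_len=400, hop=160, thresh=0.02):
    """Per-frame RMS energy gating; returns (speech fraction,
    pause fraction). Threshold and frame sizes are illustrative."""
    n = (len(signal) - frame_len) // hop + 1
    rms = np.array([np.sqrt(np.mean(signal[i*hop:i*hop+frame_len] ** 2))
                    for i in range(n)])
    active = rms > thresh
    return active.mean(), 1.0 - active.mean()
```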

A study on the clinical utility of voiced sentences in acoustic analysis for pathological voice evaluation (장애음성의 음향학적 분석에서 유성음 문장의 임상적 유용성에 관한 연구)

  • Ji-sung Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.298-303 / 2023
  • This study aimed to investigate the clinical utility of voiced sentence tasks for voice evaluation. To this end, we analyzed the correlation between perturbation-based acoustic measurements [jitter percent (jitter), shimmer percent (shimmer), Noise-to-Harmonic Ratio (NHR)] from sustained vowel phonation and cepstrum-based acoustic measurements [Cepstral Peak Prominence (CPP), Low/High spectral ratio (L/H ratio)] from voiced sentences. Analyzing data collected from 65 patients with voice disorders, we found significant correlations between the CPP and jitter (r = -.624, p = .000), shimmer (r = -.530, p = .000), and NHR (r = -.469, p = .000). This suggests that cepstrum measurement of voiced sentences can serve as an alternative when perturbation-based analysis of the pathological voice is limited, for example when perturbation measurement is not possible or when results differ according to the analysis section.
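Cepstral Peak Prominence can be sketched directly from its definition: the height of the cepstral peak, in the quefrency range of plausible F0, above a linear regression fit to the cepstrum over that range. This is a bare-bones frame-level version with assumed parameter values, without the windowing and averaging of clinical implementations.

```python
import numpy as np

def cepstral_peak_prominence(x, sr, f0_min=60.0, f0_max=300.0):
    """CPP sketch: cepstral peak height above a linear trend fitted
    over the quefrency range corresponding to f0_min..f0_max."""
    log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-12)
    cep = np.abs(np.fft.fft(log_mag)) / len(x)
    q = np.arange(len(cep))                    # quefrency in samples
    lo, hi = int(sr / f0_max), int(sr / f0_min)
    peak = lo + int(np.argmax(cep[lo:hi]))
    slope, intercept = np.polyfit(q[lo:hi], cep[lo:hi], 1)
    return cep[peak] - (slope * q[peak] + intercept)
```

A strongly periodic voice yields a prominent peak (high CPP), while a breathy or aperiodic voice flattens the cepstrum, which is why CPP works where jitter and shimmer cannot be measured.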

A PCA-based MFDWC Feature Parameter for Speaker Verification System (화자 검증 시스템을 위한 PCA 기반 MFDWC 특징 파라미터)

  • Hahm Seong-Jun;Jung Ho-Youl;Chung Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.25 no.1 / pp.36-42 / 2006
  • A Principal Component Analysis (PCA)-based Mel-Frequency Discrete Wavelet Coefficients (MFDWC) feature parameter for a speaker verification system is presented in this paper. In this method, we use the first eigenvector obtained from PCA to calculate the energy of each node of the level approximated by the mel scale. This eigenvector satisfies the constraint of a general weighting function, namely that the squared sum of the components of the weighting function is unity, and is considered to represent a speaker's characteristics closely because the first eigenvector of each speaker is fairly different from the others. For verification, we use the Universal Background Model (UBM) approach, which compares the claimed speaker's model with the UBM at the frame level. We performed experiments to test the effectiveness of the PCA-based parameter and found that our proposed parameters obtained average performance improvements of 0.80% compared to MFCC, 5.14% compared to LPCC, and 6.69% compared to the existing MFDWC.
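The unit-norm constraint on the weighting function is exactly what the first eigenvector satisfies. A sketch with a hypothetical matrix of mel-scaled subband energies:

```python
import numpy as np

# Hypothetical frame-level energies: 200 frames x 24 mel-scaled nodes
rng = np.random.default_rng(2)
E = rng.random((200, 24))

# First principal direction = eigenvector of the largest eigenvalue
eigvals, eigvecs = np.linalg.eigh(np.cov(E, rowvar=False))
w = eigvecs[:, -1]            # eigh sorts eigenvalues ascending
w = w / np.linalg.norm(w)     # already unit-length; kept explicit so the
                              # squared components visibly sum to unity
```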

Voice-Based Gender Identification Employing Support Vector Machines (음성신호 기반의 성별인식을 위한 Support Vector Machines의 적용)

  • Lee, Kye-Hwan;Kang, Sang-Ick;Kim, Deok-Hwan;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.26 no.2 / pp.75-79 / 2007
  • We propose an effective voice-based gender identification method using a Support Vector Machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding an optimal nonlinear boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian Mixture Model (GMM) using Mel-Frequency Cepstral Coefficients (MFCC). A novel feature fusion scheme based on a combination of the MFCCs and pitch is proposed with the aim of improving the performance of gender identification with the SVM. Experimental results indicate that gender identification with the SVM is significantly better than with the GMM, and that the performance is substantially improved when the proposed feature fusion technique is applied.
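A minimal version of the SVM classifier on fused features, with synthetic stand-ins for the MFCC+pitch vectors (real features would come from a speech front end):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical fused vectors: 12 MFCCs + 1 pitch value per sample
X = np.vstack([rng.normal(0.0, 1.0, (100, 13)),    # one gender cluster
               rng.normal(1.0, 1.0, (100, 13))])   # the other cluster
y = np.repeat([0, 1], 100)

clf = SVC(kernel="rbf").fit(X, y)  # RBF kernel: nonlinear boundary
acc = clf.score(X, y)
```

The RBF kernel is what lets the SVM draw the nonlinear boundary the abstract refers to; a linear kernel would restrict it to a hyperplane.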

Automatic Control of Horizontal-moving Stereoscopic Camera by Disparity Compensation

  • Kwon, Ki-Chul;Choi, Jae-Kwang;Kim, Nam;Young-Soo
    • Journal of the Optical Society of Korea / v.6 no.4 / pp.150-155 / 2002
  • A horizontally-moving method (HMM) stereoscopic camera has a linear relationship between vergence and focus control. We introduce an automatic control method for a stereoscopic camera system that uses this relationship between vergence and focus. The automatic control method uses disparity compensation of the image pair acquired from the stereoscopic camera. For faster extraction of disparity information, a binocular disparity estimation method based on a one-dimensional cepstral filter algorithm is investigated. The suggested system substantially reduces the control time and error ratio, making it possible to obtain natural and clear images.
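The one-dimensional cepstral idea can be sketched as follows: concatenating matching scanlines from the two cameras turns the pair into a signal plus a delayed echo, and the power cepstrum peaks at the echo delay, from which the disparity is read off. The line length and the synthetic shift below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def cepstral_disparity(left, right):
    """Peak of the 1-D power cepstrum of the concatenated scanlines,
    searched just past the concatenation offset n: a disparity of d
    produces an echo at quefrency n + d."""
    n = len(left)
    x = np.concatenate([left, right])
    log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-12)
    cep = np.abs(np.fft.fft(log_mag))
    search = cep[n + 1 : n + n // 2]
    return 1 + int(np.argmax(search))

rng = np.random.default_rng(4)
line = rng.standard_normal(128)
d = cepstral_disparity(line, np.roll(line, 7))  # shift of 7 pixels
```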

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea / v.24 no.4E / pp.144-151 / 2005
  • We evaluate the performance of emotion recognition via speech signals when a plain speaker talks to an entertainment robot. For each frame of a speech utterance, we extract the frame-based features: pitch, energy, formant, band energies, mel frequency cepstral coefficients (MFCCs), and velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: support vector machine (SVM) and hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that the accuracy is significant compared to the performance of foreign human listeners.
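Turning variable-length frame features into the fixed-length vector a discriminative classifier needs can be as simple as stacking statistics; means and standard deviations are shown here as an illustrative choice of statistics.

```python
import numpy as np

def utterance_vector(frame_feats):
    """Fixed-length utterance descriptor from per-frame features
    (n_frames x n_dims): per-dimension mean and standard deviation."""
    return np.concatenate([frame_feats.mean(axis=0),
                           frame_feats.std(axis=0)])
```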

Channel Compensation for Cepstrum-Based Detection of Laryngeal Diseases (켑스트럼 기반의 후두암 감별을 위한 채널보상)

  • Kim Young Kuk;Kim Su Mi;Kim Hyung Soon;Wang Soo-Geun;Jo Cheol-Woo;Yang Byung-Gon
    • MALSORI / no.50 / pp.111-122 / 2004
  • Automatic detection of laryngeal diseases from voice is attractive because of its non-intrusive nature. A cepstrum-based approach to detecting laryngeal cancer shows reliable performance even when the periodicity of the voice signal is severely lost, but it has the drawback of not being robust to channel mismatch caused by different microphone characteristics. In this paper, to deal with mismatched training and test microphone conditions, we investigate channel compensation techniques such as Cepstral Mean Subtraction (CMS) and Pole-Filtered CMS (PFCMS). In our experiments, PFCMS yields better performance than CMS: using PFCMS, we obtained 12% and 40% error reduction over the baseline and CMS, respectively.
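Plain CMS is a one-liner over the cepstral trajectory; PFCMS refines the mean estimate by pole filtering, which is not shown since its details are model-specific. A sketch under the assumption of a frames-by-coefficients array:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """CMS: subtract the per-utterance mean of every cepstral
    coefficient. A stationary channel multiplies the spectrum, so it
    adds a constant offset in the cepstral domain, and subtracting
    the mean cancels it."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```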
