• Title/Summary/Keyword: Speech Feature Analysis


Emotion Recognition Method based on Feature and Decision Fusion using Speech Signal and Facial Image (음성 신호와 얼굴 영상을 이용한 특징 및 결정 융합 기반 감정 인식 방법)

  • Joo, Jong-Tae;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.11-14
    • /
    • 2007
  • Emotion recognition is essential for interaction between humans and computers. In this paper, speech signals and facial images are applied to Bayesian Learning (BL) and Principal Component Analysis (PCA) to classify patterns into five emotions (Normal, Happy, Sad, Anger, Surprise). To compensate for the weaknesses of each signal and raise the recognition rate, emotion fusion is carried out using a decision fusion method and a feature fusion method. In the decision fusion method, the recognition scores obtained from each recognition system are applied to fuzzy membership functions for emotion fusion; in the feature fusion method, strong features are selected through Sequential Forward Selection (SFS) and then applied to a Multi-Layer Perceptron (MLP) based neural network for emotion fusion.

  • PDF
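The decision-fusion step above can be sketched as follows: two recognizers each output per-emotion confidence scores, the scores pass through fuzzy membership functions, and the fused memberships are compared. The triangular membership shape, its width, and the equal weighting are illustrative assumptions, not the paper's exact functions.

```python
# Sketch of decision-level fusion with fuzzy membership functions
# (membership shape and weights are assumptions, not the paper's).

EMOTIONS = ["Normal", "Happy", "Sad", "Anger", "Surprise"]

def triangular_membership(x, center, width=0.5):
    """Triangular fuzzy membership centered on `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuse_decisions(speech_scores, face_scores, w_speech=0.5):
    """Fuse per-emotion confidence scores from two recognizers.

    Each recognizer's raw score is mapped through a fuzzy membership
    function before being combined by a weighted average.
    """
    fused = {}
    for emo in EMOTIONS:
        mu_s = triangular_membership(speech_scores[emo], center=1.0)
        mu_f = triangular_membership(face_scores[emo], center=1.0)
        fused[emo] = w_speech * mu_s + (1.0 - w_speech) * mu_f
    return max(fused, key=fused.get)

speech = {"Normal": 0.2, "Happy": 0.9, "Sad": 0.1, "Anger": 0.3, "Surprise": 0.4}
face   = {"Normal": 0.3, "Happy": 0.7, "Sad": 0.2, "Anger": 0.8, "Surprise": 0.1}
print(fuse_decisions(speech, face))  # prints "Happy"
```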

Speaker Identification Using PCA Fuzzy Mixture Model (PCA 퍼지 혼합 모델을 이용한 화자 식별)

  • Lee, Ki-Yong
    • Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.149-157
    • /
    • 2003
  • In this paper, we propose the principal component analysis (PCA) fuzzy mixture model for speaker identification. A PCA fuzzy mixture model is derived from the combination of PCA and the fuzzy version of a mixture model with diagonal covariance matrices. In this method, the feature vectors are first transformed by each speaker's PCA transformation matrix to reduce the correlation among their elements. Then, the fuzzy mixture model for each speaker is obtained from these transformed feature vectors with reduced dimensions. The orthogonal Gaussian Mixture Model (GMM) can be derived as a special case of the PCA fuzzy mixture model. In our experiments, with the number of mixtures held equal, the proposed method requires less training time and less storage, and shows a better speaker identification rate than the conventional GMM. It also shows identification performance equal to or better than that of the orthogonal GMM.

  • PDF
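The core pipeline above, projecting feature vectors through a speaker-specific PCA matrix and then scoring them under a diagonal-covariance mixture, can be sketched in a toy 2-D case. The rotation matrix and mixture parameters below are hand-picked illustrations, not trained models.

```python
import math

# Sketch: decorrelate a feature vector with a (precomputed) PCA matrix,
# then score it under a diagonal-covariance mixture. All numbers are
# illustrative assumptions, not the paper's models.

def pca_transform(x, pca_rows):
    """Project x onto the rows of the PCA matrix (each row a component)."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in pca_rows]

def diag_gauss_logpdf(x, mean, var):
    """Log density of a Gaussian with diagonal covariance."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def mixture_loglik(x, weights, means, vars_):
    """Log-likelihood under a diagonal-covariance mixture (log-sum-exp)."""
    logs = [math.log(w) + diag_gauss_logpdf(x, m, v)
            for w, m, v in zip(weights, means, vars_)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(l - mx) for l in logs))

# Toy 2-D example: a 45-degree rotation stands in for the PCA matrix.
pca = [[0.7071, 0.7071], [-0.7071, 0.7071]]
y = pca_transform([1.0, 1.0], pca)          # ≈ [1.414, 0.0]
score = mixture_loglik(y, [0.5, 0.5],
                       [[1.4, 0.0], [0.0, 0.0]],
                       [[1.0, 1.0], [1.0, 1.0]])
```

At identification time, one such log-likelihood would be computed per enrolled speaker and the highest-scoring speaker selected.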

A Study on the Consonant Classification Using Fuzzy Inference (퍼지추론을 이용한 한국어 자음분류에 관한 연구)

  • 박경식
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1992.06a
    • /
    • pp.71-75
    • /
    • 1992
  • This paper proposes an algorithm to classify Korean consonant phonemes such as plosives, fricatives, and affricates into lax sounds, glottalized sounds, and aspirated sounds. This three-way contrast is one of the distinctive characteristics of the Korean language that does not exist in languages such as English. The work classifies 14 Korean consonants (k, t, p, s, c, k', t', p', s', c', kh, ph, ch) as a preliminary stage for Korean phone recognition. LPC cepstral analysis is used to obtain the feature sets for classification. The experiments consist of two stages. First, using short-time speech signal analysis and the Mahalanobis distance, consonant segments are detected from the original speech signal; the consonants are then classified by fuzzy inference. In computer simulations, the classification rate on the speech data reached 93.75%.

  • PDF
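The first stage, flagging consonant-candidate frames by their Mahalanobis distance from a reference model, can be sketched as below. A diagonal covariance, a silence model as the reference, and the threshold value are all illustrative assumptions.

```python
import math

# Sketch of the segment-detection stage: frames whose (diagonal)
# Mahalanobis distance from a silence model exceeds a threshold are
# flagged as consonant-candidate frames.

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance matrix."""
    return math.sqrt(sum((xi - m) ** 2 / v for xi, m, v in zip(x, mean, var)))

def detect_segments(frames, silence_mean, silence_var, threshold=3.0):
    """Return indices of frames that deviate strongly from the silence model."""
    return [i for i, f in enumerate(frames)
            if mahalanobis_diag(f, silence_mean, silence_var) > threshold]

frames = [[0.1, 0.0], [0.2, 0.1], [2.5, 3.0], [2.8, 2.9], [0.0, 0.1]]
print(detect_segments(frames, silence_mean=[0.1, 0.1],
                      silence_var=[0.04, 0.04]))  # → [2, 3]
```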

Computer-Based Fluency Evaluation of English Speaking Tests for Koreans (한국인을 위한 영어 말하기 시험의 컴퓨터 기반 유창성 평가)

  • Jang, Byeong-Yong;Kwon, Oh-Wook
    • Phonetics and Speech Sciences
    • /
    • v.6 no.2
    • /
    • pp.9-20
    • /
    • 2014
  • In this paper, we propose an automatic fluency evaluation algorithm for English speaking tests. In the proposed algorithm, acoustic features are extracted from an input spoken utterance and a fluency score is then computed using support vector regression (SVR). We estimate the parameters of the feature modeling and the SVR from speech signals and the corresponding scores given by human raters. The correlation analysis shows that speech rate, articulation rate, and mean length of runs are the best features for fluency evaluation. Experimental results show that the correlation between the human score and the SVR score is 0.87 across 3 speaking tests, which suggests the potential of the proposed algorithm as a secondary fluency evaluation tool.
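The three features the correlation analysis singled out can be computed from a syllable/pause segmentation as sketched below. The segmentation format and the numbers are illustrative assumptions; the SVR regression stage is omitted.

```python
# Sketch of the three fluency features named above, computed from a toy
# syllable/pause segmentation (format and values are assumptions).

def fluency_features(runs, pause_durations):
    """Compute speech rate, articulation rate, and mean length of runs.

    runs: list of (syllable_count, duration_sec) for each pause-free run.
    pause_durations: list of pause lengths in seconds.
    """
    syllables = sum(n for n, _ in runs)
    phonation = sum(d for _, d in runs)
    total = phonation + sum(pause_durations)
    speech_rate = syllables / total            # syll/sec incl. pauses
    articulation_rate = syllables / phonation  # syll/sec excl. pauses
    mean_run_length = syllables / len(runs)    # syllables per run
    return speech_rate, articulation_rate, mean_run_length

# Two runs (6 syllables in 2 s, 4 syllables in 1 s) and one 1-second pause.
sr, ar, mlr = fluency_features([(6, 2.0), (4, 1.0)], [1.0])
print(sr, ar, mlr)  # 2.5 syll/s overall, ~3.33 while speaking, 5 per run
```

These feature values would then be fed to a trained regressor to produce the final fluency score.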

Analysis of Error Patterns in Korean Connected Digit Telephone Speech Recognition (연결숫자음 전화음성 인식에서의 오인식 유형 분석)

  • Kim Min Sung;Jung Sung Yun;Son Jong Mok;Bae Keun Sung;Kim Sang Hun
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.115-118
    • /
    • 2003
  • Channel distortion and coarticulation effects make connected digit telephone speech difficult to recognize and degrade recognition performance in the telephone environment. In this paper, as basic research toward improving recognition performance for Korean connected digits over the telephone, error patterns are investigated and analyzed. A telephone digit speech database released by SITEC is used with the HTK system for the recognition experiments. DWFBA and MRTCN are used for feature extraction and channel compensation, respectively. Experimental results are discussed with our findings.

  • PDF
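A basic form of the error-pattern analysis described above is tallying which digits are substituted for which. The sketch below handles only substitutions between equal-length reference and recognized strings (insertions and deletions would need an alignment step); the digit strings are made-up examples.

```python
from collections import Counter

# Sketch of error-pattern tallying: count (reference, recognized)
# substitution pairs over a set of recognition results.

def substitution_counts(pairs):
    """Count (reference_digit, recognized_digit) substitution errors."""
    errors = Counter()
    for ref, hyp in pairs:
        for r, h in zip(ref, hyp):
            if r != h:
                errors[(r, h)] += 1
    return errors

results = [("1234", "1239"), ("5678", "5678"),
           ("9012", "9013"), ("2222", "2223")]
print(substitution_counts(results).most_common(1))  # [(('2', '3'), 2)]
```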

Design of a variable rate speech codec for the W-CDMA system (W-CDMA 시스템을 위한 가변율 음성코덱 설계)

  • 정우성
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.142-147
    • /
    • 1998
  • Recently, the 8 kb/s CS-ACELP coder of G.729 was standardized by ITU-T SG15, and the speech quality of G.729 has been reported to be better than or equal to that of 32 kb/s ADPCM. However, G.729 is a fixed rate speech coder and does not exploit the voice activity pattern of mutual conversation. By using voice activity, the average bit rate can be cut roughly in half without any degradation of speech quality. In this paper, we propose an efficient variable rate algorithm for G.729. It consists of two main parts: the rate determination algorithm and the sub rate coders. For rate determination, we combine the energy-thresholding method, a phonetic segmentation method that integrates various feature parameters obtained through the analysis procedure, and a variable hangover period method. Based on an analysis of noise features, a 1 kb/s sub rate coder is designed for coding the background noise signal, and a 4 kb/s sub rate coder is designed for the unvoiced parts. The performance of the variable rate algorithm is evaluated by comparing speech quality and average bit rate with G.729; subjective quality is also assessed by an MOS test. In conclusion, the proposed variable rate CS-ACELP coder is verified to produce the same speech quality as G.729 at an average bit rate of 4.4 kb/s.

  • PDF
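One ingredient of the rate determination scheme, an energy threshold combined with a hangover period, can be sketched as below. The threshold and hangover length are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of an energy-threshold rate decision with a hangover period:
# after energy drops below the threshold, frames stay at full rate for
# a few more frames to avoid clipping trailing speech.

def rate_decision(frame_energies, threshold=0.1, hangover=3):
    """Mark each frame as active (full rate) or inactive (sub rate)."""
    decisions = []
    hang = 0
    for e in frame_energies:
        if e >= threshold:
            decisions.append(True)   # speech: full-rate coding
            hang = hangover
        elif hang > 0:
            decisions.append(True)   # hangover: keep full rate briefly
            hang -= 1
        else:
            decisions.append(False)  # background: sub-rate coding
    return decisions

energies = [0.02, 0.5, 0.6, 0.05, 0.04, 0.03, 0.02, 0.01]
print(rate_decision(energies))
# [False, True, True, True, True, True, False, False]
```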

Blind Classification of Speech Compression Methods using Structural Analysis of Bitstreams (비트스트림의 구조 분석을 이용한 음성 부호화 방식 추정 기법)

  • Yoo, Hoon;Park, Cheol-Sun;Park, Young-Mi;Kim, Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.1
    • /
    • pp.59-64
    • /
    • 2012
  • This paper addresses a blind estimation and classification algorithm for speech compression methods based on analysis of the structure of compressed bitstreams. Various speech compression methods, including vocoders, have been developed to transmit or store speech signals at very low bitrates. As a key feature, vocoders inevitably contain a block structure. To classify each compression method, we use the Measure of Inter-Block Correlation (MIBC) to check whether the bitstream includes a block structure and to estimate the block length. Moreover, for compression methods with the same block length, the proposed algorithm identifies the correct method by exploiting the fact that each compression method has different correlation characteristics at each bit location. Experimental results indicate that the proposed algorithm classifies speech compression methods robustly for various types and lengths of speech signals in noisy environments.
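The block-length estimation idea can be sketched with a simple proxy for the inter-block correlation: for each candidate length L, measure how often a bit agrees with the bit L positions later; a block-structured stream peaks at its true block length. This agreement measure is a simplification assumed for illustration, not the paper's exact MIBC formula.

```python
import random

# Sketch of blind block-length estimation: agreement at lag L serves as
# a proxy for inter-block correlation (a simplifying assumption).

def lag_agreement(bits, lag):
    """Fraction of bit positions that agree with the position `lag` later."""
    n = len(bits) - lag
    return sum(bits[i] == bits[i + lag] for i in range(n)) / n

def estimate_block_length(bits, candidates):
    """Return the candidate lag with the highest agreement."""
    return max(candidates, key=lambda lag: lag_agreement(bits, lag))

# Toy stream: a fixed 3-bit pattern every 8 bits, with random payload.
random.seed(0)
bits = []
for _ in range(200):
    bits.extend([1, 0, 1] + [random.randint(0, 1) for _ in range(5)])

print(estimate_block_length(bits, candidates=[5, 6, 7, 8, 9]))  # 8
```

At the true lag of 8, the fixed pattern bits always agree while the payload bits agree about half the time, so the agreement clearly exceeds that of the other candidate lags.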

Extraction of MFCC feature parameters based on the PCA-optimized filter bank and Korean connected 4-digit telephone speech recognition (PCA-optimized 필터뱅크 기반의 MFCC 특징파라미터 추출 및 한국어 4연숫자 전화음성에 대한 인식실험)

  • 정성윤;김민성;손종목;배건성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.279-283
    • /
    • 2004
  • In general, triangular filters are used in the filter bank when MFCC feature parameters are extracted from the spectrum of the speech signal. A different approach, which uses filter shapes optimized to the spectrum of the training speech data, was proposed by Lee et al. to improve the recognition rate; a principal component analysis method is used to obtain the optimized filter coefficients. In this paper, using a large 4-digit telephone speech database, we compute MFCCs based on the PCA-optimized filter bank and compare their recognition performance with conventional MFCCs and with MFCCs based on a directly weighted filter bank. Experimental results show that the MFCCs based on the PCA-optimized filter bank give a slight improvement in recognition rate over the conventional MFCCs, but fail to outperform the MFCCs based on the directly weighted filter bank analysis. Experimental results are discussed with our findings.
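For reference, the conventional triangular filter bank that the PCA-optimized filters replace can be sketched as below. The center frequencies here are arbitrary FFT bin indices rather than a real mel scale (an illustrative assumption).

```python
# Sketch of a conventional triangular filter bank over FFT bins.
# Bin placements are arbitrary illustrations, not a real mel scale.

def triangular_filter(n_bins, left, center, right):
    """One triangular filter rising over [left, center], falling over [center, right]."""
    w = [0.0] * n_bins
    for i in range(left, center + 1):
        if center > left:
            w[i] = (i - left) / (center - left)
    for i in range(center, right + 1):
        if right > center:
            w[i] = (right - i) / (right - center)
    return w

def apply_filterbank(spectrum, filters):
    """Filter-bank energies: weighted sums of the power spectrum."""
    return [sum(wi * si for wi, si in zip(f, spectrum)) for f in filters]

filters = [triangular_filter(8, 0, 2, 4), triangular_filter(8, 2, 4, 6)]
energies = apply_filterbank([1.0] * 8, filters)
print(energies)  # on a flat spectrum, each triangle sums its weights: [2.0, 2.0]
```

In the PCA-optimized variant, these hand-designed triangular weights would be replaced by filter shapes learned from the training spectra.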

A Merging Algorithm with the Discrete Wavelet Transform to Extract Valid Speech-Sounds (이산 웨이브렛 변환을 이용한 유효 음성 추출을 위한 머징 알고리즘)

  • Kim, Jin-Ok;Hwang, Dae-Jun;Paek, Han-Wook;Chung, Chin-Hyun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.3
    • /
    • pp.289-294
    • /
    • 2002
  • A valid speech-sound block can be classified to provide important information for speech recognition. The classification of the speech-sound block is based on the multi-resolution analysis (MRA) property of the discrete wavelet transform (DWT), which is used to reduce the computational time of the pre-processing for speech recognition. The merging algorithm is proposed to extract valid speech-sounds in terms of position and frequency range. It requires some numerical methods for an adaptive DWT implementation and performs unvoiced/voiced classification and denoising. Since the merging algorithm can decide the processing parameters relating to voice only and is independent of system noises, it is useful for extracting valid speech-sounds. The merging algorithm adapts to arbitrary system noises and achieves an excellent denoising SNR (signal-to-noise ratio).
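One level of a Haar DWT, the simplest instance of the multi-resolution analysis the merging algorithm builds on, can be sketched as follows. The paper's actual wavelet choice and adaptation scheme are not specified here, so this is only a minimal illustration of the decomposition step.

```python
import math

# One level of a Haar DWT: split a signal into approximation (low-pass)
# and detail (high-pass) coefficient bands.

def haar_dwt_level(signal):
    """Return (approximation, detail) coefficients for one DWT level."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A step edge concentrates energy in the detail band at the transition,
# which is the kind of localization the MRA exploits.
a, d = haar_dwt_level([1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0])
print([round(x, 3) for x in d])  # only the pair spanning the edge is nonzero
```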

An Emotion Recognition Method using Facial Expression and Speech Signal (얼굴표정과 음성을 이용한 감정인식)

  • 고현주;이대종;전명근
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.6
    • /
    • pp.799-807
    • /
    • 2004
  • In this paper, we deal with an emotion recognition method using facial images and speech signals. Six basic human emotions, including happiness, sadness, anger, surprise, fear, and dislike, are investigated. Emotion recognition from facial expressions is performed using a multi-resolution analysis based on the discrete wavelet transform, and the feature vectors are then extracted by the linear discriminant analysis method. Emotion recognition from the speech signal, on the other hand, runs the recognition algorithm independently for each wavelet subband, and the final recognition is obtained from a multi-decision-making scheme.