• Title/Summary/Keyword: Speech Feature Analysis


Classification of Sasang Constitution Taeumin by Comparative of Speech Signals Analysis (음성 분석 정보값 비교를 통한 사상체질 태음인의 분류)

  • Kim, Bong-Hyun;Lee, Se-Hwan;Cho, Dong-Uk
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.17-24
    • /
    • 2008
  • This paper proposes a method for classifying Sasang constitutions by comparing speech signal analysis values. To provide an objective index for Sasang constitution, we propose a method that classifies Taeumin from the output values of speech signal analysis, complementing the classification of Soeumin through skin diagnosis performed as the first step in the overall system configuration. We first extract the phonetic elements that clearly characterize each Sasang constitution group's voice, and then classify Taeumin based on the differences and similarities among the constitution groups' analysis values. Finally, the effectiveness of this method is verified through experiments.
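
A minimal illustration of the comparison idea described above, assuming hypothetical features and invented group template values (the abstract does not publish the paper's actual analysis values): each measured utterance is assigned to the constitution group whose characteristic speech-analysis values it most closely matches.

```python
import numpy as np

# Hypothetical per-group templates of speech-analysis values
# (e.g., mean pitch in Hz, first/second formants in Hz, intensity in dB).
# These numbers are invented for illustration, not taken from the paper.
group_templates = {
    "Taeumin":  np.array([120.0, 680.0, 1100.0, 72.0]),
    "Soeumin":  np.array([145.0, 720.0, 1250.0, 65.0]),
    "Soyangin": np.array([160.0, 750.0, 1400.0, 70.0]),
}

def classify_constitution(measured: np.ndarray) -> str:
    """Assign the group whose template values are closest to the measurement."""
    return min(group_templates,
               key=lambda g: np.linalg.norm(measured - group_templates[g]))

print(classify_constitution(np.array([125.0, 690.0, 1120.0, 71.0])))  # "Taeumin"
```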

Effective Combination of Temporal Information and Linear Transformation of Feature Vector in Speaker Verification (화자확인에서 특징벡터의 순시 정보와 선형 변환의 효과적인 적용)

  • Seo, Chang-Woo;Zhao, Mei-Hua;Lim, Young-Hwan;Jeon, Sung-Chae
    • Phonetics and Speech Sciences
    • /
    • v.1 no.4
    • /
    • pp.127-132
    • /
    • 2009
  • The feature vectors used in conventional speaker recognition (SR) systems may have strong correlations between neighboring frames. To improve SR performance, many researchers have adopted linear transformation methods such as principal component analysis (PCA). In general, the linear transformation of the feature vectors is applied to the concatenation of the static features and their dynamic (delta) features. However, a linear transformation based on both the static and dynamic features is more complex than one based on the static features alone, because of the higher feature dimensionality. To overcome this problem, we propose an efficient method that applies the linear transformation and the temporal information of the features separately, reducing complexity and improving performance in speaker verification (SV). The proposed method first performs a linear transformation with PCA coefficients; the delta parameters carrying temporal information are then obtained from the transformed features. The proposed method requires only a quarter of the covariance matrix size needed when PCA is applied to the concatenated static and dynamic features, and the delta parameters are extracted after the dimensionality of the static features has been reduced. Compared with the conventional PCA-based methods in terms of equal error rate (EER) in SV, the proposed method shows better performance while requiring less storage space and computation.
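
The ordering is the crux: PCA is fit on the static features alone, and the delta parameters are computed afterwards from the PCA-transformed frames, so the covariance matrix stays at the static dimensionality (doubling the dimension would quadruple the covariance matrix, hence the quarter-size claim). A minimal sketch of that ordering on toy data, not the authors' exact front end; in a real SV system the PCA would be estimated on training data rather than per utterance.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_then_delta(static_feats: np.ndarray, n_components: int = 12) -> np.ndarray:
    """PCA on static features first, then deltas from the transformed frames.

    static_feats: (num_frames, dim) matrix of per-frame static features.
    """
    # PCA sees only the static features, so its covariance matrix is
    # (dim x dim) rather than (2*dim x 2*dim) for concatenated
    # static+delta features -- a quarter of the size.
    transformed = PCA(n_components=n_components).fit_transform(static_feats)

    # Simple two-frame delta computed from the *transformed* features.
    delta = np.zeros_like(transformed)
    delta[1:-1] = (transformed[2:] - transformed[:-2]) / 2.0
    return np.hstack([transformed, delta])

frames = np.random.randn(200, 24)      # e.g., 200 frames of 24-dim cepstra
print(pca_then_delta(frames).shape)    # (200, 24): 12 PCA dims + 12 deltas
```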


Robust Speech Hash Function

  • Chen, Ning;Wan, Wanggen
    • ETRI Journal
    • /
    • v.32 no.2
    • /
    • pp.345-347
    • /
    • 2010
  • In this letter, we present a new speech hash function based on the non-negative matrix factorization (NMF) of linear prediction coefficients (LPCs). First, linear prediction analysis is applied to the speech to obtain its LPCs, which represent the frequency-shaping attributes of the vocal tract. Then, NMF is performed on the LPCs to capture the local features of the speech, which are used for hash vector generation. Experimental results demonstrate the effectiveness of the proposed hash function in terms of discrimination and robustness against various types of content-preserving signal processing manipulations.
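
A rough sketch of the pipeline the letter describes: frame-wise LPC analysis, NMF on the coefficient matrix, then binarization into a hash vector. The frame layout, the non-negativity handling (plain magnitudes, since NMF requires non-negative input), and the median binarization below are assumptions; the abstract does not spell out those details.

```python
import numpy as np
import librosa
from sklearn.decomposition import NMF

def speech_hash(y: np.ndarray, order: int = 12, n_frames: int = 32) -> np.ndarray:
    """Binary hash vector from the NMF of frame-wise LPCs (illustrative)."""
    frame_len = len(y) // n_frames
    # Frame-wise linear prediction coefficients: the frequency-shaping
    # attributes of the vocal tract (drop the leading 1 of each filter).
    lpcs = np.stack([
        librosa.lpc(y[i * frame_len:(i + 1) * frame_len], order=order)[1:]
        for i in range(n_frames)
    ])
    # NMF needs non-negative input; magnitudes are used here as a stand-in.
    activations = NMF(n_components=1, max_iter=500).fit_transform(np.abs(lpcs))
    # Binarize against the median to obtain a compact, comparable hash.
    return (activations.ravel() > np.median(activations)).astype(np.uint8)

y = np.random.default_rng(0).standard_normal(16000)  # stand-in for speech
print(speech_hash(y))
```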

Acoustic, Intraoral Air Pressure and EMG Studies of Vowel Devoicing in Korean

  • Kim, Hyun-Gi;Niimi, Sei-Ji
    • Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.3-13
    • /
    • 2003
  • Vowel devoicing is a phonological process in which the contrast in sonority is lost or reduced in a particular phonetic environment. Phonetically, vocal fold vibration originates from the abduction/adduction of the glottis in relation to supraglottal articulatory movements. The purpose of this study is to investigate Korean vowel devoicing by means of experimental instruments. The interrelated laryngeal adjustments and aerodynamic effects of this voicing can clarify the redundant articulatory gestures relevant to the distinctive feature of sonority. Five test words were selected, each containing the high vowel /i/ between a fricative and a strongly aspirated or lenis affricated consonant. The subjects uttered the test words successively at normal and faster speeds. The EMG, the Gaeltec S7b sensing tube, the High-Speech Analysis system, and the MSL II were used in these studies. Acoustically, three different types of speech waveforms and spectrograms were classified based on the voicing variation. The intraoral air pressure curves showed differences depending on the voicing variations. The activity patterns of the PCA and the CT for devoiced vowels differed from those for partially devoiced and voiced vowels.


Evaluation of Frequency Warping Based Features and Spectro-Temporal Features for Speaker Recognition (화자인식을 위한 주파수 워핑 기반 특징 및 주파수-시간 특징 평가)

  • Choi, Young Ho;Ban, Sung Min;Kim, Kyung-Wha;Kim, Hyung Soon
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.3-10
    • /
    • 2015
  • In this paper, different frequency scales for cepstral feature extraction are evaluated for text-independent speaker recognition. To this end, mel-frequency cepstral coefficients (MFCCs), linear frequency cepstral coefficients (LFCCs), and bilinear warped frequency cepstral coefficients (BWFCCs) are applied in speaker recognition experiments. In addition, the spectro-temporal features extracted by the cepstral-time matrix (CTM) are examined as an alternative to the delta and delta-delta features. Experiments on the NIST speaker recognition evaluation (SRE) 2004 task are carried out using the Gaussian mixture model-universal background model (GMM-UBM) method and the joint factor analysis (JFA) method, both based on the ALIZE 3.0 toolkit. Experimental results with both methods show that BWFCCs with an appropriate warping factor yield better performance than MFCCs and LFCCs. It is also shown that a feature set including the spectro-temporal information from the CTM outperforms the conventional feature set with delta and delta-delta features.
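
BWFCCs replace the fixed mel scale with a first-order all-pass (bilinear) frequency warping governed by a single factor, which is the knob the abstract says must be chosen appropriately. The warping function itself can be written down directly; a sketch of it follows (the full BWFCC front end is not reproduced here):

```python
import numpy as np

def bilinear_warp(omega: np.ndarray, alpha: float) -> np.ndarray:
    """First-order all-pass (bilinear) frequency warping.

    omega: normalized angular frequency in [0, pi].
    alpha: warping factor; 0 leaves the scale linear, while positive
    values stretch low frequencies (mel-like around 0.4-0.6 at 16 kHz).
    """
    return omega + 2.0 * np.arctan(alpha * np.sin(omega)
                                   / (1.0 - alpha * np.cos(omega)))

omega = np.linspace(0.0, np.pi, 5)
for alpha in (0.0, 0.42, 0.55):
    print(alpha, np.round(bilinear_warp(omega, alpha), 3))
```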

An analysis of Speech Acts for Korean Using Support Vector Machines (지지벡터기계(Support Vector Machines)를 이용한 한국어 화행분석)

  • En, Jongmin;Lee, Songwook;Seo, Jungyun
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.365-368
    • /
    • 2005
  • We propose a speech act analysis method for Korean dialogue using Support Vector Machines (SVM). We use the lexical form of each word, its part-of-speech (POS) tags, and bigrams of POS tags as sentence features, and the context of the previous utterance as context features. We select informative features by chi-square statistics. After training the SVM with the selected features, SVM classifiers determine the speech act of each utterance. In experiments, we achieved an overall accuracy of 90.54% on a dialogue corpus for the hotel reservation domain.
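
The described setup maps naturally onto a scikit-learn pipeline: token and bigram features, chi-square selection, then a linear SVM. A toy sketch follows; the utterances, POS tags, and speech-act labels are invented stand-ins, and the previous-utterance context features are omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy utterances as "word/POS" tokens (hypothetical morphological tags).
utterances = [
    "예약/NNG 하/XSV 고/EC 싶/VX 어요/EF",
    "언제/MAG 도착/NNG 하/XSV 세요/EF",
    "네/IC 감사/NNG 합니다/XSV",
]
speech_acts = ["request", "ask-ref", "thank"]

clf = Pipeline([
    # Unigrams give lexical/POS features; bigrams give POS-bigram-like features.
    ("feats", CountVectorizer(token_pattern=r"\S+", ngram_range=(1, 2))),
    ("select", SelectKBest(chi2, k=10)),   # chi-square feature selection
    ("svm", LinearSVC()),                  # linear SVM speech-act classifier
])
clf.fit(utterances, speech_acts)
print(clf.predict(["예약/NNG 하/XSV 고/EC 싶/VX 어요/EF"]))  # ['request']
```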

Development of a Phoneme and Tone Labeling Program (음소 및 성조 레이블링 프로그램 개발)

  • Lee, Yun-Kyung;Kwak, Chul;Kwon, Oh-Wook
    • Proceedings of the KIEE Conference
    • /
    • 2007.10a
    • /
    • pp.435-436
    • /
    • 2007
  • Although previous speech analysis programs usually provide speech analysis and phoneme labeling functionalities, they require considerable time for manual labeling and support only the English alphabet. To solve these problems, we developed a new Windows-based program with an improved phoneme and tone labeling method as well as the conventional speech analysis functionalities. The program's distinctive feature is semi-automatic phoneme and tone labeling based on hidden Markov models.
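
Semi-automatic labeling of this kind usually reduces to forced alignment: given frame-wise likelihoods from the HMMs and the known phoneme sequence, Viterbi decoding places the boundaries. A minimal left-to-right alignment sketch, assuming per-frame log-likelihoods are already available (the program's actual acoustic models and tone handling are more elaborate):

```python
import numpy as np

def forced_align(log_probs: np.ndarray) -> list:
    """Viterbi alignment of T frames to S left-to-right phoneme states.

    log_probs: (T, S) frame-wise log-likelihood of each phoneme.
    Returns the phoneme index assigned to each frame, allowing only
    'stay' or 'advance by one phoneme' transitions.
    """
    T, S = log_probs.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0, 0] = log_probs[0, 0]
    for t in range(1, T):
        for s in range(S):
            stay = score[t - 1, s]
            move = score[t - 1, s - 1] if s > 0 else -np.inf
            back[t, s] = s if stay >= move else s - 1
            score[t, s] = max(stay, move) + log_probs[t, s]
    # Trace back from the final phoneme to recover the boundaries.
    path, s = [], S - 1
    for t in range(T - 1, -1, -1):
        path.append(s)
        s = back[t, s]
    return path[::-1]

np.random.seed(0)
print(forced_align(np.log(np.random.dirichlet(np.ones(3), size=12))))
```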


EFFICIENCY OF SPEECH FEATURES (음성 특징의 효율성)

  • 황규웅
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1995.06a
    • /
    • pp.225-227
    • /
    • 1995
  • This paper compares waveform, cepstrum, and spline wavelet features using nonlinear discriminant analysis. This measure reflects the efficiency of speech parametrization better than older linear separability criteria and can be used to measure the efficiency of each layer of a given system. The spline wavelet transform shows larger gaps between classes, while the cepstrum is clustered more tightly than the spline wavelet features. Neither feature has good properties for classification, and we will go on to compare the Gabor wavelet transform, mel cepstrum, delta cepstrum, and other features.
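
For contrast with the paper's nonlinear measure, the classical linear separability criterion it improves upon can be written as a ratio of between-class to within-class scatter. A minimal sketch on synthetic data (this is the older baseline criterion, not the paper's nonlinear discriminant analysis):

```python
import numpy as np

def fisher_separability(X: np.ndarray, y: np.ndarray) -> float:
    """Classical linear separability: trace(Sw^{-1} Sb) of class scatter.

    Larger values mean class means sit far apart relative to the
    within-class spread of the feature vectors.
    """
    dim = X.shape[1]
    Sw, Sb = np.zeros((dim, dim)), np.zeros((dim, dim))
    mean = X.mean(axis=0)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
    return float(np.trace(np.linalg.pinv(Sw) @ Sb))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.repeat([0, 1], 50)
print(fisher_separability(X, y))
```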


Speaker Identification Using Augmented PCA in Unknown Environments (부가 주성분분석을 이용한 미지의 환경에서의 화자식별)

  • Yu, Ha-Jin
    • MALSORI
    • /
    • no.54
    • /
    • pp.73-83
    • /
    • 2005
  • The goal of our research is to build a text-independent speaker identification system that can be used in any condition without an additional adaptation process. The performance of speaker recognition systems can be severely degraded under unknown, mismatched microphone and noise conditions. In this paper, we show that PCA (principal component analysis) can improve performance in such situations. We also propose an augmented PCA process, which augments the original feature vectors with class-discriminative information before the PCA transformation and selects the best direction for each pair of highly confusable speakers. The proposed method reduced the relative recognition error by 21%.
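
The abstract only sketches how the class-discriminative information is augmented; one plausible reading (an assumption, not necessarily the paper's exact construction) is to append a discriminant projection to each feature vector before PCA, so that directions separating confusable speakers survive the transformation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def augmented_pca(X: np.ndarray, y: np.ndarray, n_components: int = 10):
    """Append an LDA score to each vector, then apply PCA (a sketch)."""
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    X_aug = np.hstack([X, lda.transform(X)])   # features + discriminative coord
    return PCA(n_components=n_components).fit_transform(X_aug)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i, 1.0, (30, 12)) for i in range(3)])  # 3 "speakers"
y = np.repeat([0, 1, 2], 30)
print(augmented_pca(X, y).shape)   # (90, 10)
```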


Authentication Performance Optimization for Smart-phone based Multimodal Biometrics (스마트폰 환경의 인증 성능 최적화를 위한 다중 생체인식 융합 기법 연구)

  • Moon, Hyeon-Joon;Lee, Min-Hyung;Jeong, Kang-Hun
    • Journal of Digital Convergence
    • /
    • v.13 no.6
    • /
    • pp.151-156
    • /
    • 2015
  • In this paper, we propose a personal multimodal biometric authentication system based on face detection, face recognition, and speaker verification for the smart-phone environment. The proposed system detects the face with the Modified Census Transform algorithm and then locates the eye positions within the face using a Gabor filter and the k-means algorithm. After preprocessing the detected face and eye positions, recognition is performed with the Linear Discriminant Analysis algorithm. In the speaker verification stage, we extract features from the endpoint-detected speech data as Mel Frequency Cepstral Coefficients. Because speech features vary in real time, the speaker is verified with the Dynamic Time Warping algorithm. The proposed multimodal biometric system fuses the face and speech features (optimizing internal operations through integer representations) for smart-phone based real-time face detection, recognition, and speaker verification, and forms a reliable system with reasonable performance.
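
The speaker-verification leg of the pipeline (MFCC features compared by DTW against an enrollment utterance) is straightforward to sketch. Endpoint detection is omitted here, and the acceptance threshold is illustrative rather than taken from the paper:

```python
import numpy as np
import librosa

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping cost between two MFCC sequences (frames x dim)."""
    T, U = len(a), len(b)
    D = np.full((T + 1, U + 1), np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for u in range(1, U + 1):
            cost = np.linalg.norm(a[t - 1] - b[u - 1])
            D[t, u] = cost + min(D[t - 1, u], D[t, u - 1], D[t - 1, u - 1])
    return D[T, U] / (T + U)   # length-normalized alignment cost

def verify(enroll: np.ndarray, test: np.ndarray, sr: int = 16000,
           threshold: float = 50.0) -> bool:
    """Accept the claimed speaker if the warped MFCC distance is small.

    The threshold is a placeholder; a real system tunes it on held-out data.
    """
    m_enroll = librosa.feature.mfcc(y=enroll, sr=sr, n_mfcc=13).T
    m_test = librosa.feature.mfcc(y=test, sr=sr, n_mfcc=13).T
    return dtw_distance(m_enroll, m_test) < threshold

rng = np.random.default_rng(0)
print(verify(rng.standard_normal(16000), rng.standard_normal(16000)))
```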