• Title/Summary/Keyword: Speech Feature Analysis


Effective Feature Vector for Isolated-Word Recognizer using Vocal Cord Signal (성대신호 기반의 명령어인식기를 위한 특징벡터 연구)

  • Jung, Young-Giu;Han, Mun-Sung;Lee, Sang-Jo
    • Journal of KIISE:Software and Applications / v.34 no.3 / pp.226-234 / 2007
  • In this paper, we develop a speech recognition system using a throat microphone. The use of this kind of microphone minimizes the impact of environmental noise. However, because of the absence of high frequencies and the partial loss of formant frequencies, previous systems developed with these devices have shown lower recognition rates than systems using standard microphone signals. This problem has led researchers to use throat microphone signals only as supplementary data sources supporting standard microphone signals. In this paper, we present a high-performance ASR system developed using only a throat microphone, by taking advantage of Korean phonological feature theory and a detailed analysis of the throat signal. Analyzing the spectrum and the FFT of the throat microphone signal, we find that the conventional MFCC feature vector, which uses a critical-band filter, does not characterize throat microphone signals well. We also describe the conditions a feature extraction algorithm must satisfy to be best suited to throat microphone signal analysis: (1) a sensitive band-pass filter and (2) a feature vector suitable for voice/non-voice classification. We show experimentally that the ZCPA algorithm, designed to meet these conditions, improves the recognizer's performance by approximately 16%, and that an additional noise-canceling algorithm such as RASTA yields a further 2% improvement.
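
The ZCPA idea the abstract credits for the 16% gain can be sketched as follows: band-pass channels, upward zero-crossing intervals as frequency estimates, and log peak amplitudes accumulated into a frequency histogram. The filter-bank layout, channel count, and bin spacing below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import butter, lfilter

def zcpa_features(x, fs, n_channels=8, fmin=100.0, fmax=3000.0, n_bins=16):
    """Sketch of ZCPA (Zero-Crossings with Peak Amplitudes): in each
    band-pass channel, the interval between successive upward zero
    crossings gives a frequency estimate, and the log peak amplitude in
    that interval is accumulated into a frequency histogram."""
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_channels + 1)
    bins = np.logspace(np.log10(fmin), np.log10(fmax), n_bins + 1)
    hist = np.zeros(n_bins)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        y = lfilter(b, a, x)
        up = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]   # upward zero crossings
        for i0, i1 in zip(up[:-1], up[1:]):
            f_est = fs / (i1 - i0)                      # interval -> frequency
            peak = np.max(np.abs(y[i0:i1]))
            k = np.searchsorted(bins, f_est) - 1
            if 0 <= k < n_bins:
                hist[k] += np.log1p(peak)               # log peak amplitude
    return hist
```

For a pure 500 Hz tone, the histogram peaks in the bin containing 500 Hz regardless of channel, which is the noise-robustness property exploited here.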

A Study on Wavelet Application for Signal Analysis (신호 해석을 위한 웨이브렛 응용에 관한 연구)

  • Bae, Sang-Bum;Ryu, Ji-Goo;Kim, Nam-Ho
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.302-305 / 2005
  • Recently, many methods for analyzing signals have been proposed; representative ones are the Fourier transform and the wavelet transform. The Fourier transform represents a signal as a combination of cosines and sines over the whole frequency domain. However, it provides no information about the time at which a particular frequency occurs in the signal and depends only on the signal's global features. To improve on this, the wavelet transform, which is capable of multiresolution analysis, has been applied to many fields such as speech processing, image processing, and computer vision. The wavelet transform, which uses a window that changes according to a scale parameter, provides time-frequency localization. In this paper, we propose a new approach using a wavelet of cosine and sine type and analyze signal features at a restricted point in the time-frequency plane.
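
The scale-dependent windowing the abstract describes can be sketched with a Morlet wavelet, a cosine/sine pair under a Gaussian envelope; the `n_cycles` width and the frequency grid are illustrative assumptions, not the paper's wavelet.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, n_cycles=6.0):
    """Minimal continuous wavelet transform sketch. Each row of the result
    is |CWT| at one analysis frequency; the Gaussian window length scales
    inversely with frequency, giving multiresolution time-frequency
    localization (unlike the global Fourier transform)."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)              # window width ~ 1/f
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        w = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma ** 2))
        w /= np.sqrt(np.sum(np.abs(w) ** 2))            # unit energy
        out[i] = np.abs(np.convolve(x, w, mode="same"))
    return out
```

On a signal that switches from 50 Hz to 200 Hz halfway through, the 50 Hz row lights up only in the first half and the 200 Hz row only in the second, which is exactly the time information the Fourier transform discards.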


Robust Speaker Identification using Independent Component Analysis (독립성분 분석을 이용한 강인한 화자식별)

  • Jang, Gil-Jin;Oh, Yung-Hwan
    • Journal of KIISE:Software and Applications / v.27 no.5 / pp.583-592 / 2000
  • This paper proposes a feature parameter transformation method using independent component analysis (ICA) for speaker identification. The proposed method assumes that cepstral vectors from speech under various channel conditions are constructed as a linear combination of characteristic functions with random channel noise added, and transforms them into new vectors using ICA. The resulting vector space emphasizes the recurring speaker information and suppresses the random channel distortions. Experimental results show that the transformation method is effective in improving the speaker identification system.
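
The abstract does not specify which ICA variant is used, so as a generic illustration of the transformation idea, here is a minimal symmetric FastICA (tanh nonlinearity) that unmixes linearly combined signals; it is a sketch, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def fastica(X, n_iter=200):
    """Minimal symmetric FastICA sketch (tanh nonlinearity).
    X: (n_components, n_samples). Returns (W, K) such that
    W @ K @ (X - mean) approximates the independent sources."""
    Xc = X - X.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(Xc))
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ Xc
    n = Z.shape[0]
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # Fixed-point update: w <- E[z g(w.z)] - E[g'(w.z)] w
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation keeps the rows orthonormal.
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W, K
```

Applied to cepstral vectors, the learned unmixing would play the role of the paper's transformation: directions carrying repetitive (speaker) structure are separated from directions dominated by random channel noise.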


Comparison of Feature Extraction Methods for the Telephone Speech Recognition (전화 음성 인식을 위한 특징 추출 방법 비교)

  • 전원석;신원호;김원구;이충용;윤대희
    • The Journal of the Acoustical Society of Korea / v.17 no.7 / pp.42-49 / 1998
  • In this paper, we study processing methods at the feature vector extraction stage to improve speech recognition performance over the telephone network. First, in an isolated-word recognition system, channel distortion compensation methods were tested on word models and context-independent phoneme models. Cepstral mean subtraction, RASTA processing, and the cepstrum-time matrix were evaluated, and the performance of each algorithm was compared across recognition models. Second, to improve the recognition system based on context-independent phoneme models, linear transformation methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) were applied to the static feature vectors, improving recognition performance by mapping them into a more discriminative vector space. Combining the linear transformation methods with cepstral mean subtraction yielded even better performance.
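
Cepstral mean subtraction, the first channel-compensation method tested above, relies on a stationary channel multiplying the spectrum and therefore adding a constant vector in the cepstral domain. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def cepstral_mean_subtraction(C):
    """Cepstral mean subtraction (CMS): a stationary channel adds a
    constant vector to every frame's cepstrum, so subtracting the
    per-utterance mean of each coefficient cancels the channel."""
    return C - C.mean(axis=0, keepdims=True)
```

Because the channel offset is identical in every frame, CMS of the distorted cepstra equals CMS of the clean cepstra, which is exactly the compensation property exploited here.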


Convergence Characteristics of Ant Colony Optimization with Selective Evaluation in Feature Selection (특징 선택에서 선택적 평가를 사용하는 개미 군집 최적화의 수렴 특성)

  • Lee, Jin-Seon;Oh, Il-Seok
    • The Journal of the Korea Contents Association / v.11 no.10 / pp.41-48 / 2011
  • In feature selection, a selective evaluation scheme for Ant Colony Optimization (ACO) has recently been proposed, which reduces computational load by excluding unnecessary or less promising candidate solutions from actual evaluation. Its superiority was supported by experimental results, but the experiment does not seem statistically sufficient, since it used only one dataset. The aim of this paper is to analyze the convergence characteristics of the selective evaluation scheme and to make that conclusion more convincing. We chose three datasets from the UCI repository, covering the handwriting, medical, and speech domains, with feature set sizes ranging from 256 to 617. For each of them, we executed 12 independent runs in order to obtain statistically stable data, and each run was given 72 hours so that long-term convergence could be observed. Based on an analysis of the experimental data, we explain the reason for the scheme's superiority and describe where it can be applied.
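
A compact sketch of the selective-evaluation idea: candidates whose pheromone-based promise falls below the running mean of already-evaluated candidates are skipped before the expensive evaluation. The promise measure, update rule, and parameters here are illustrative assumptions, not the cited scheme's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)

def aco_select(score_fn, n_feats, n_ants=10, n_iters=30, k=5, rho=0.1):
    """Sketch of ACO feature selection with selective evaluation.
    Each ant samples a k-feature subset with probability proportional to
    pheromone; the subset is passed to the (expensive) score_fn only if
    its relative pheromone level is at least the running mean over the
    candidates evaluated so far, otherwise it is skipped."""
    tau = np.ones(n_feats)
    best, best_score = None, -np.inf
    promises = []
    n_evals = 0
    for _ in range(n_iters):
        for _ in range(n_ants):
            subset = rng.choice(n_feats, size=k, replace=False,
                                p=tau / tau.sum())
            promise = tau[subset].mean() / tau.mean()
            if promises and promise < np.mean(promises):
                continue                      # selective evaluation: skip
            score = score_fn(subset)
            n_evals += 1
            promises.append(promise)
            if score > best_score:
                best, best_score = subset, score
        tau *= 1 - rho                        # pheromone evaporation
        tau[best] += rho * best_score         # reinforce the best subset
    return set(best.tolist()), best_score, n_evals
```

With a toy objective that rewards overlap with a known good feature set, the search concentrates evaluations on promising subsets while skipping a large fraction of candidates, which is the source of the computational saving analyzed in the paper.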

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with a written form does not change, so foreign learners can approach the language relatively easily. However, when one pronounces words, phrases, or sentences, the pronunciation varies widely and in complex ways at syllable boundaries, and the association between notation and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to learn standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is possible, because, unlike other languages including English, the relationship between Korean notation and pronunciation can be described as a set of firm rules without exceptions. In this paper, we propose a visualization framework that displays the differences between standard and erroneous pronunciations as quantitative measures on the computer screen. Previous research has only shown color representations and 3D graphics of speech properties, or animated views of the changing shapes of the lips and mouth cavity; moreover, the features used in the analysis were only point data such as the average of a speech range. In this study, we propose a method that directly uses the time-series data instead of summarized or distorted data. This was realized with a deep learning-based technique combining a self-organizing map, a variational autoencoder, and a Markov model, and we achieved a substantial performance improvement compared to methods using point-based data.
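
The paper's SOM + variational-autoencoder + Markov pipeline is too large to sketch here, but its key point, comparing whole time series of acoustic features rather than point summaries, can be illustrated with a plain dynamic-time-warping distance between a standard and a learner feature sequence (DTW is a simple stand-in, not the paper's method):

```python
import numpy as np

def dtw_distance(A, B):
    """Dynamic time warping distance between two feature sequences
    (one row per frame). The warping path aligns sequences of
    different lengths, so the whole trajectory is compared rather
    than a point summary such as a range average."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A time-stretched rendition of the same utterance scores far closer to the reference than a genuinely different sequence, which is what makes a sequence-level distance usable as the quantitative measure on screen.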

Research on Chinese Microblog Sentiment Classification Based on TextCNN-BiLSTM Model

  • Haiqin Tang;Ruirui Zhang
    • Journal of Information Processing Systems / v.19 no.6 / pp.842-857 / 2023
  • Currently, most sentiment classification models on microblogging platforms analyze sentence parts of speech and emoticons without comprehending users' emotional inclinations or grasping moral nuances. This study proposes a hybrid sentiment analysis model. Given the distinct nature of microblog comments, the model employs a combined stop-word list and word2vec for word vectorization. To mitigate local information loss, the TextCNN model, devoid of pooling layers, is employed for local feature extraction, while BiLSTM is utilized for contextual feature extraction. Microblog comment sentiments are then categorized by a classification layer. Given the binary classification task at the output layer and the numerous hidden layers within BiLSTM, the Tanh activation function is adopted in this model. Experimental findings demonstrate that the enhanced TextCNN-BiLSTM model attains a precision of 94.75%, a 1.21%, 1.25%, and 1.25% improvement in precision, recall, and F1 over the standalone TextCNN model, and a 0.78%, 0.9%, and 0.9% improvement over BiLSTM.
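
The pooling-free TextCNN stage described above can be sketched as a sliding tanh convolution over word embeddings whose full output sequence (rather than a max-pooled summary) would feed the BiLSTM. Shapes and weights below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

def textcnn_layer(E, W, b):
    """Pooling-free TextCNN layer sketch. E: (seq_len, emb_dim) word
    embeddings; W: (n_filters, win, emb_dim) convolution filters; b:
    (n_filters,) biases. Each filter slides over windows of embeddings
    with a tanh activation (the activation the paper adopts), producing a
    (seq_len - win + 1, n_filters) local-feature map. No pooling is
    applied, so positional information survives for the BiLSTM."""
    n_filters, win, _ = W.shape
    L = E.shape[0] - win + 1
    out = np.empty((L, n_filters))
    for t in range(L):
        window = E[t:t + win]                            # (win, emb_dim)
        out[t] = np.tanh(np.tensordot(W, window, axes=([1, 2], [0, 1])) + b)
    return out
```

Omitting the pooling step is the design choice the abstract highlights: the local features for every position are retained instead of collapsing each filter's response to a single maximum.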

A Study on Spoken Digits Analysis and Recognition (숫자음 분석과 인식에 관한 연구)

  • 김득수;황철준
    • Journal of Korea Society of Industrial Information Systems / v.6 no.3 / pp.107-114 / 2001
  • This paper describes connected-digit recognition for Korean with acoustic features taken into account. The recognition rate for connected digits is usually lower than for isolated words. Therefore, speech feature parameters and acoustic features are employed to build robust digit models, and we confirmed the effect of the acoustic features through recognition experiments. We used the KLE 4 connected-digit corpus as the database and 19 continuous-density HMMs as phoneme-like units (PLUs) derived from phonetic rules. Two cases were tested. In the first, phoneme models were built in the usual way, using mel-cepstrum and regression coefficients. In the second, phoneme models were built with expanded feature parameters and acoustic features. In both cases, OPDP (one-pass dynamic programming) and an FSA (finite state automaton) were employed for the recognition tests, and various acoustic features were applied when using the FSN for recognition. As a result, we obtained a 55.4% recognition rate for mel-cepstrum, 67.4% for mel-cepstrum with regression coefficients, 74.3% for the expanded feature parameters, and 75.4% with the acoustic features applied. Since applying acoustic features produced better results than the earlier methods, we confirmed that the suggested method is effective for Korean connected-digit recognition.
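
OPDP extends the Viterbi dynamic-programming recursion to connected-word decoding. The core recursion can be sketched as a generic Viterbi decoder over HMM states (not the paper's full OPDP/FSA decoder):

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Viterbi decoding sketch: the DP recursion that OPDP extends to
    connected-word recognition. log_A: (S, S) transition log-probs;
    log_B: (T, S) per-frame state log-likelihoods; log_pi: (S,) initial
    log-probs. Returns the best state path and its log score."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A      # (prev state, current state)
        psi[t] = scores.argmax(axis=0)       # best predecessor per state
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(delta.max())
```

In connected-digit decoding, the FSA supplies the allowed transitions between digit models, and the same recursion runs over the concatenated state space in one pass.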


Speech Recognition Using Noise Robust Features and Spectral Subtraction (잡음에 강한 특징 벡터 및 스펙트럼 차감법을 이용한 음성 인식)

  • Shin, Won-Ho;Yang, Tae-Young;Kim, Weon-Goo;Youn, Dae-Hee;Seo, Young-Joo
    • The Journal of the Acoustical Society of Korea / v.15 no.5 / pp.38-43 / 1996
  • This paper compares the recognition performance of feature vectors known to be robust to environmental noise, and combines the spectral subtraction technique with the noise-robust features for further performance enhancement. Experiments using SMC (Short-time Modified Coherence) analysis, root-cepstral analysis, LDA (Linear Discriminant Analysis), PLP (Perceptual Linear Prediction), and RASTA (RelAtive SpecTrAl) processing are carried out. An isolated-word recognition system is built using semi-continuous HMMs. Noisy-environment experiments with two types of noise, exhibition hall and computer room, are carried out at 0, 10, and 20 dB SNR. The results show that SMC and root-based mel cepstrum (root_mel cepstrum) yield 9.86% and 12.68% recognition improvement at 10 dB compared to LPCC (Linear Prediction Cepstral Coefficients). When combined with spectral subtraction, mel cepstrum and root_mel cepstrum show 16.7% and 8.4% improvements, reaching recognition rates of 94.91% and 94.28% at 10 dB.
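
The spectral subtraction step combined with the features above can be sketched in its basic magnitude form; the subtraction factor and spectral floor are illustrative choices, not the paper's settings.

```python
import numpy as np

def spectral_subtract(X_mag, noise_mag, alpha=1.0, floor=0.02):
    """Basic magnitude spectral subtraction sketch: subtract an estimate
    of the noise magnitude spectrum (e.g. averaged over speech-free
    frames) from each frame, clamping to a spectral floor so magnitudes
    never go negative (which also limits musical noise)."""
    S = X_mag - alpha * noise_mag
    return np.maximum(S, floor * X_mag)
```

Frames where the signal dominates pass through almost unchanged, while noise-only bins are pushed down to the floor.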


Performance of analysis and extraction of speech feature using characteristics of basilar membrane (기저막 특성을 이용한 새로운 음성 특징 추출 및 성능 분석)

  • 이철희;신유식;정성환;김종교
    • Proceedings of the IEEK Conference / 2000.09a / pp.153-156 / 2000
  • This paper presents one approach to extracting speech feature parameters among the various methods for improving speech recognition rates. We describe MFCC (mel-frequency cepstral coefficients), which is based on auditory characteristics, present GFCC (gammatone-filter frequency cepstral coefficients) as a way to improve performance, and analyze both through speech recognition experiments. Instead of the triangular filters conventionally used as critical-band filters in MFCC, feature parameters were extracted using gammatone band-pass filters that model the characteristics of the basilar membrane in the auditory system. Recognition-rate analysis with the DTW algorithm showed that the gammatone band-pass filter extraction method improved performance by about 2-3% over the triangular band filters.
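
The gammatone impulse response underlying GFCC can be sketched directly from its standard form, t^(n-1) e^(-2*pi*b*t) cos(2*pi*fc*t), with the bandwidth b tied to the ERB scale; the order, duration, and normalisation here are conventional choices, not necessarily the paper's.

```python
import numpy as np

def gammatone_ir(fc, fs, n=4, duration=0.05):
    """Sketch of a gammatone filter impulse response, modeling the
    band-pass behaviour of the basilar membrane around centre frequency
    fc. Bandwidth follows the Glasberg & Moore ERB formula."""
    t = np.arange(0, duration, 1 / fs)
    erb = 24.7 * (4.37 * fc / 1000 + 1)        # ERB of fc in Hz
    b = 1.019 * erb                            # gammatone bandwidth
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.sqrt(np.sum(g ** 2))         # unit-energy normalisation
```

A bank of such filters at mel- or ERB-spaced centre frequencies replaces the triangular filters in the MFCC pipeline to produce GFCC: a tone at the centre frequency passes nearly unattenuated, while an off-band tone is strongly suppressed.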
