• Title/Summary/Keyword: 음성인식률 (speech recognition rate)


Mel-Frequency Cepstral Coefficients Using Formants-Based Gaussian Distribution Filterbank (포만트 기반의 가우시안 분포를 가지는 필터뱅크를 이용한 멜-주파수 켑스트럴 계수)

  • Son, Young-Woo;Hong, Jae-Keun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.8
    • /
    • pp.370-374
    • /
    • 2006
  • Mel-frequency cepstral coefficients (MFCCs) are widely used as features for speech recognition. In the MFCC extraction process, the spectrum obtained by the Fourier transform of the input speech signal is divided into mel-frequency bands, and the energy of each band is extracted. The coefficients are then obtained by the discrete cosine transform of the band energies. In this paper, we calculate the output energy of each bandpass filter by applying a weighting function to the mel-frequency-scaled bandpass filters. The weighting function is a Gaussian distribution centered at the formant frequency. In the experiments, the proposed method shows performance comparable to standard MFCCs in clean conditions and better performance in degraded conditions.
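
The band-weighting step described above can be sketched as follows, assuming hypothetical formant center frequencies and a fixed Gaussian bandwidth (the paper's actual formant tracking and filter spacing are not reproduced here):

```python
import numpy as np

def gaussian_filterbank(center_freqs, sigma, n_fft_bins, sample_rate):
    """Build Gaussian-shaped bandpass weights, one filter per center frequency."""
    freqs = np.linspace(0, sample_rate / 2, n_fft_bins)
    # Each row is a Gaussian weighting function centered on one (formant) frequency.
    return np.exp(-0.5 * ((freqs[None, :] - np.asarray(center_freqs)[:, None]) / sigma) ** 2)

def band_energies_to_cepstrum(spectrum, filterbank, n_coeffs):
    """Weight the power spectrum, take log band energies, then a DCT-II."""
    energies = np.log(filterbank @ spectrum + 1e-10)
    n = len(energies)
    k = np.arange(n_coeffs)[:, None]
    m = np.arange(n)[None, :]
    dct = np.cos(np.pi * k * (2 * m + 1) / (2 * n))   # DCT-II basis
    return dct @ energies

# Hypothetical formant centers and bandwidth, for illustration only.
sr, n_bins = 16000, 257
fb = gaussian_filterbank([500, 1500, 2500, 3500], sigma=300.0,
                         n_fft_bins=n_bins, sample_rate=sr)
spec = np.abs(np.random.default_rng(0).standard_normal(n_bins)) ** 2
coeffs = band_energies_to_cepstrum(spec, fb, n_coeffs=4)
```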

A Study on a Non-Voice Section Detection Model among Speech Signals using CNN Algorithm (CNN(Convolutional Neural Network) 알고리즘을 활용한 음성신호 중 비음성 구간 탐지 모델 연구)

  • Lee, Hoo-Young
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.6
    • /
    • pp.33-39
    • /
    • 2021
  • Speech recognition technology is being combined with deep learning and is developing at a rapid pace. In particular, voice recognition services are connected to various devices such as artificial-intelligence speakers, in-vehicle voice recognition, and smartphones, and voice recognition technology is being used in many places, not only in specific areas of industry. In this situation, research to meet the high expectations for the technology is also being actively conducted. Among these efforts, in the field of natural language processing (NLP), there is a need for research on removing ambient noise and unnecessary voice signals, which strongly influence the speech recognition rate. Many domestic and foreign companies are already applying the latest AI technology to such research, and research using the convolutional neural network (CNN) algorithm is particularly active. The purpose of this study is to distinguish non-voice sections from the user's speech sections with a convolutional neural network. Voice files (wav) of 5 speakers were collected to generate training data, and a convolutional neural network was used to build a classification model that discriminates speech sections from non-voice sections. An experiment was then conducted to detect non-speech sections with the generated model, and an accuracy of 94% was obtained.
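
A speech/non-speech frame classifier of the kind described can be illustrated with a minimal convolution-plus-logistic sketch; the kernel, weights, and signals below are toy values, not the trained CNN from the study:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution, the core operation of a CNN front end."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def classify_frame(frame, kernel, w, b):
    """Conv -> ReLU -> global average pool -> logistic output (speech probability)."""
    h = np.maximum(conv1d(frame, kernel), 0.0)
    pooled = h.mean()
    return 1.0 / (1.0 + np.exp(-(w * pooled + b)))

rng = np.random.default_rng(1)
speech_like = np.sin(np.linspace(0, 40 * np.pi, 400))   # periodic, voiced-like signal
noise_like = 0.05 * rng.standard_normal(400)            # low-energy noise
kernel = np.ones(16) / 16                               # toy smoothing kernel
p_speech = classify_frame(np.abs(speech_like), kernel, w=8.0, b=-2.0)
p_noise = classify_frame(np.abs(noise_like), kernel, w=8.0, b=-2.0)
```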

Gender Recognition Algorithm Using LPC Cepstrum and FFT Spectrum (LPC 켑스트럼 및 FFT 스펙트럼에 의한 성별 인식 알고리즘)

  • Choe, Jae-Seung;Jeong, Byeong-Gu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.10a
    • /
    • pp.63-65
    • /
    • 2012
  • This paper proposes a gender recognition algorithm that determines whether the input speech comes from a male or a female speaker, using FFT spectrum and LPC cepstrum inputs. In particular, we compare and analyze the feature vectors of male and female speakers, and use the differences between these acoustic feature vectors to perform gender recognition experiments with a neural network. When the feature vectors of the 12th-order LPC cepstrum and the 8th-order low-band FFT spectrum were used, good gender recognition rates were obtained for both male and female speakers.

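
The feature layout described in the abstract (12th-order LPC cepstrum plus 8th-order low-band FFT spectrum) can be sketched as a 20-dimensional input to a single-layer scorer; the weights here are random stand-ins for the paper's trained neural network:

```python
import numpy as np

def gender_score(lpc_cepstrum, low_fft, weights, bias):
    """Concatenate the 12 LPC cepstral and 8 low-band FFT features, then score
    with a single-layer unit (a stand-in for the paper's neural network)."""
    x = np.concatenate([lpc_cepstrum, low_fft])          # 20-dimensional feature vector
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))   # e.g. >0.5 -> one gender class

rng = np.random.default_rng(0)
w = rng.standard_normal(20)                              # untrained illustrative weights
score = gender_score(rng.standard_normal(12), rng.standard_normal(8), w, 0.0)
```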

Phoneme Classification using the Modified LVQ2 Algorithm (수정된 LVQ2 알고리즘을 이용한 음소분류)

  • 김홍국;이황수
    • The Journal of the Acoustical Society of Korea
    • /
    • v.12 no.1E
    • /
    • pp.71-77
    • /
    • 1993
  • A speech recognition system based on pattern-matching techniques largely consists of a clustering stage and a labeling stage. In this paper, we build a phoneme recognition system that uses Kohonen's feature map algorithm as the clusterer and the LVQ2 algorithm as the labeler. To improve the performance of this system, we propose a modified LVQ2 algorithm (MLVQ2). MLVQ2 consists of four training stages: selective learning, LVQ2, perturbed LVQ2, and the conventional LVQ2. To evaluate the proposed phoneme recognition algorithm, feature maps for six Korean phoneme groups were built using LVQ2 and MLVQ2. In the phoneme recognition experiments, recognition rates of 60.5% and 65.4% were obtained with LVQ2 and MLVQ2, respectively.

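
The core LVQ2 step that the paper builds on can be sketched as follows; the codebook, window width, and learning rate are illustrative values, and the selective/perturbed stages of MLVQ2 are not shown:

```python
import numpy as np

def lvq2_update(x, label, codebook, codebook_labels, lr=0.1, window=0.3):
    """One LVQ2 step: if the two nearest codewords straddle the class boundary
    and x lies inside the window, pull the correct one in and push the wrong one out."""
    d = np.linalg.norm(codebook - x, axis=1)
    i, j = np.argsort(d)[:2]                          # two nearest codewords
    if codebook_labels[i] == codebook_labels[j]:
        return codebook                               # no class boundary: no LVQ2 update
    if min(d[i] / d[j], d[j] / d[i]) < (1 - window) / (1 + window):
        return codebook                               # x falls outside the window
    cb = codebook.copy()
    for k in (i, j):
        sign = 1.0 if codebook_labels[k] == label else -1.0
        cb[k] += sign * lr * (x - cb[k])
    return cb

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
x = np.array([0.45, 0.45])                            # near the boundary, class 0
updated = lvq2_update(x, 0, codebook, labels)
```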

On a Method Which Improves Text Independent Speaker Verification Performance through Limiting Speech Production Loudness (성량제한을 적용한 어구독립 화자증명 성능향상 방안)

  • 이태승;최호진
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.457-459
    • /
    • 2001
  • We propose a speaker recognition method that achieves a higher recognition rate by limiting the loudness of the input speech, within a text-independent speaker verification scheme that discriminates between speakers on the basis of continuants.

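
Loudness limiting of the kind described can be sketched as a per-frame RMS cap; `max_rms` and the frame values are arbitrary choices for illustration, not the paper's actual constraint:

```python
import numpy as np

def limit_loudness(frames, max_rms):
    """Scale down any frame whose RMS exceeds the limit, so that all
    utterances are processed at a bounded loudness."""
    out = []
    for f in frames:
        rms = np.sqrt(np.mean(f ** 2))
        out.append(f * (max_rms / rms) if rms > max_rms else f)
    return out

frames = [np.full(8, 2.0), np.full(8, 0.5)]   # one loud frame, one quiet frame
limited = limit_loudness(frames, max_rms=1.0)
```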

An Improvement of the Online Mode Error Backpropagation Algorithm Learning Speed for Pattern Recognition (패턴인식에서 온라인 오류역전파 알고리즘의 학습속도 향상방법)

  • 이태승;황병원
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04b
    • /
    • pp.616-618
    • /
    • 2002
  • The MLP (multilayer perceptron) has several advantages over other pattern recognition methods and is used in a wide range of problem domains. However, the EBP (error backpropagation) algorithm commonly used to train MLPs has the drawback of relatively long training times, which is a limitation for problems requiring real-time processing or for problems where large data sets and large MLP structures make training very slow. Since the training data used in pattern recognition contains abundant redundancy, online learning schemes that update the internal MLP parameters after each pattern are quite effective in improving speed. The usual online EBP algorithm applies a fixed learning rate when updating the internal weights. Although a well-chosen fixed learning rate gives a considerable speedup in pattern recognition applications, fixing the rate fails to fully exploit the per-pattern updates of the online scheme. Moreover, during training the patterns divide into those already learned and those not yet learned, and although the learned patterns need not be included in further training computations, conventional online EBP uniformly includes every pattern assigned to an epoch in the computation. To address this problem, this paper proposes a method with a per-pattern variable learning rate and learning omission (COIL), which applies an appropriate learning rate to each pattern as training progresses and includes only the necessary patterns in training. To demonstrate the performance of COIL, we present experiments on speaker verification and speech recognition.

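
The per-pattern variable learning rate and learning omission described above can be sketched with a single logistic unit standing in for the MLP; the thresholds and data are made-up, and this is not the paper's COIL formulation in detail:

```python
import numpy as np

def coil_epoch(patterns, targets, w, base_lr=0.5, skip_threshold=0.01):
    """One online epoch: each pattern gets its own learning rate (scaled by its
    error), and patterns already learned (error below threshold) are skipped."""
    for x, t in zip(patterns, targets):
        y = 1.0 / (1.0 + np.exp(-(w @ x)))   # single logistic unit as MLP stand-in
        err = t - y
        if abs(err) < skip_threshold:
            continue                          # learning omission: pattern already learned
        lr = base_lr * abs(err)               # per-pattern variable learning rate
        w = w + lr * err * y * (1 - y) * x    # online EBP weight update
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
t = np.array([1.0, 0.0, 1.0, 0.0])            # a linearly separable toy task
w = np.zeros(2)
for _ in range(200):
    w = coil_epoch(X, t, w)
```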

A Study on Robust Speech Emotion Feature Extraction Under the Mobile Communication Environment (이동통신 환경에서 강인한 음성 감성특징 추출에 대한 연구)

  • Cho Youn-Ho;Park Kyu-Sik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.6
    • /
    • pp.269-276
    • /
    • 2006
  • In this paper, we propose an emotion recognition system that can discriminate the human emotional state, neutral or anger, from speech captured by a cellular phone in real time. In general, speech transmitted over the mobile network contains environmental noise and network noise, which can cause serious system performance degradation by distorting the emotional features of the query speech. In order to minimize the effect of this noise and thus improve system performance, we adopt a simple MA (moving average) filter, which has a relatively simple structure and low computational complexity, to alleviate the distortion of the emotional feature vector. An SFS (sequential forward selection) feature optimization method is then implemented to further improve and stabilize system performance. Two pattern recognition methods, k-NN and SVM, are compared for emotional state classification. The experimental results indicate that the proposed method provides very stable and successful emotional classification performance of 86.5%, so that it will be very useful in application areas such as customer call centers.
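
The MA filtering and SFS feature optimization mentioned above can be sketched as follows, with a made-up utility function standing in for the real classification score:

```python
import numpy as np

def moving_average(seq, width=3):
    """Simple MA filter over a feature trajectory to suppress noisy fluctuations."""
    kernel = np.ones(width) / width
    return np.convolve(seq, kernel, mode="valid")

def sfs(n_features, score_fn, n_select):
    """Sequential forward selection: greedily add the feature that most
    improves the scoring function until n_select features are chosen."""
    chosen = []
    while len(chosen) < n_select:
        best = max((f for f in range(n_features) if f not in chosen),
                   key=lambda f: score_fn(chosen + [f]))
        chosen.append(best)
    return chosen

# Toy per-feature utilities; a real score_fn would run the k-NN/SVM classifier.
utility = np.array([0.6, 0.1, 0.9, 0.3])
selected = sfs(4, lambda feats: utility[feats].sum(), n_select=2)
```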

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.35-41
    • /
    • 2001
  • This paper describes linear discriminant analysis and common vector extraction for speech recognition. A voice signal contains the psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different. This property of the speech signal makes it very difficult to extract common properties within the same speech class (word or phoneme). Linear algebra methods like the KLT (Karhunen-Loeve transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction suggested by M. Bilginer et al. Their method extracts the optimized common vector from the speech signals used for training, and it achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals that can be used for training is limited, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, together with a novel method to normalize the size of the common vector. The results show improved performance of the algorithm, with recognition accuracy 2% better than the conventional method.

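
The common vector idea, removing the within-class difference subspace so that every training sample of a class maps to the same residual, can be sketched like this (toy 3-dimensional data, not real speech features):

```python
import numpy as np

def common_vector(class_samples):
    """Common vector of one class: remove the components spanned by the
    within-class difference vectors, keeping what all samples share."""
    X = np.asarray(class_samples, dtype=float)
    diffs = X[1:] - X[0]                          # difference vectors of the class
    u, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    basis = u[:, s > 1e-10]                       # orthonormal basis of the difference subspace
    proj = basis @ (basis.T @ X[0])               # component inside that subspace
    return X[0] - proj                            # identical for every sample in the class

# Three samples that agree except in the third coordinate.
samples = [[1.0, 2.0, 3.0], [1.0, 2.0, 4.0], [1.0, 2.0, 5.0]]
cv = common_vector(samples)
```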

Syllable Recognition of HMM using Segment Dimension Compression (세그먼트 차원압축을 이용한 HMM의 음절인식)

  • Kim, Joo-Sung;Lee, Yang-Woo;Hur, Kang-In;Ahn, Jum-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.2
    • /
    • pp.40-48
    • /
    • 1996
  • In this paper, 40-dimensional segment vectors with 4-frame and 7-frame widths in each monosyllable interval were compressed into 10-, 14-, and 20-dimensional vectors using the K-L expansion and neural networks, and these were used as speech recognition feature parameters for a CHMM. We also compared them with CHMMs augmented with discrete duration time, regression coefficients, and mixture distributions as feature parameters. In a recognition test on 100 monosyllables, the recognition rates of CHMM + ΔMCEP, CHMM + MIX, and CHMM + DD improved by 1.4%, 2.36%, and 2.78%, respectively, over the 85.19% of the CHMM. Rates using vectors compressed by the K-L expansion are lower than those of MCEP + ΔMCEP, but those of K-L + MCEP and K-L + ΔMCEP are almost the same. Neural networks reflect the dynamic variety of speech better than the K-L expansion because they use the sigmoid function as a non-linear transform. Recognition rates using vectors compressed by neural networks are higher than those using the K-L expansion and the other methods.

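
The K-L expansion compression step can be sketched as a covariance eigen-decomposition; the 40-dimensional segment vectors here are random stand-ins for real MCEP frames:

```python
import numpy as np

def kl_compress(vectors, out_dim):
    """K-L expansion: project segment vectors onto the leading eigenvectors of
    their covariance matrix, compressing the dimensionality to out_dim."""
    X = np.asarray(vectors, dtype=float)
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:out_dim]]   # leading eigenvectors
    return X @ top

rng = np.random.default_rng(0)
segments = rng.standard_normal((200, 40))             # e.g. 10 MCEPs x 4 frames
compressed = kl_compress(segments, out_dim=10)
```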

A Study on the Voice Dialing using HMM and Post Processing of the Connected Digits (HMM과 연결 숫자음의 후처리를 이용한 음성 다이얼링에 관한 연구)

  • Yang, Jin-Woo;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.5
    • /
    • pp.74-82
    • /
    • 1995
  • This paper is a study of voice dialing using HMMs and post-processing of connected digits. The HMM algorithm is widely used in speech recognition with good results. However, the maximum likelihood estimation used in HMM (hidden Markov model) training does not lead to values that maximize the recognition rate. To address this problem, we applied post-processing to the segmental K-means procedure in the recognition experiments. Korean connected digits are influenced by prolongation more than English connected digits, so to reduce segmentation errors in the level-building algorithm, word models that can be produced by prolongation are added. Rules for the added models are applied to the recognition result, which is then updated. The recognition system was implemented with a DSP board carrying a TMS320C30 processor and an IBM PC. The reference patterns were made by 3 male speakers in a noisy laboratory. The recognition experiment was performed on 21 kinds of telephone numbers, 252 data items. The recognition rate was 6% in the speaker-dependent test and 80.5% in the speaker-independent test.

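
The HMM decoding that underlies such a recognizer (before any post-processing rules are applied) can be sketched with a toy Viterbi decoder; the two-state model and probabilities below are illustrative, not the paper's digit models:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Viterbi decoding for a discrete-state HMM: find the most likely state
    sequence given per-frame log-likelihoods log_B[t, n]."""
    T, N = log_B.shape
    delta = log_pi + log_B[0]                    # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)            # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: best path into j via i
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(N)] + log_B[t]
    state = int(np.argmax(delta))
    path = [state]
    for t in range(T - 1, 0, -1):                # backtrack through the pointers
        state = int(psi[t][state])
        path.append(state)
    return path[::-1]

# Toy 2-state left-to-right model; emissions favor state 0 early, state 1 late.
log_pi = np.log(np.array([0.9, 0.1]))
log_A = np.log(np.array([[0.7, 0.3], [0.1, 0.9]]))
log_B = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]))
path = viterbi(log_pi, log_A, log_B)
```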