• Title/Summary/Keyword: LPC (Linear Predictive Coefficient)


Speaker Recognition using LPC cepstrum Coefficients and Neural Network (LPC 켑스트럼 계수와 신경회로망을 사용한 화자인식)

  • Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.12
    • /
    • pp.2521-2526
    • /
    • 2011
  • This paper proposes a speaker recognition algorithm using a perceptron neural network and LPC (Linear Predictive Coding) cepstrum coefficients. The proposed algorithm first detects the voiced sections in each frame. The LPC cepstrum coefficients, which carry speaker characteristics, are then obtained by linear predictive analysis of the detected voiced sections. A neural network is trained on the LPC cepstrum coefficients in order to classify them. In the experiments, the performance of the proposed algorithm was evaluated in terms of recognition rates obtained with the LPC cepstrum coefficients and the neural network.
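
Several entries in this listing rely on LPC cepstrum coefficients as features. As a hedged illustration (not code from any of the papers), the sketch below derives LPC coefficients for one frame by the autocorrelation method (Levinson-Durbin) and converts them to cepstrum coefficients with the standard recursion c_n = a_n + Σ_{k=1}^{n-1} (k/n) c_k a_{n-k}, using the convention x[n] ≈ Σ_k a_k x[n-k]. The frame length, LPC order, and synthetic test signal are assumptions made only for the example; with real speech the same per-frame routine would run over voiced frames, as the abstract describes.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Levinson-Durbin recursion on the frame autocorrelation.

    Returns prediction coefficients a_1..a_order in the convention
    x[n] ~ sum_k a_k * x[n-k].
    """
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    e = r[0]
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / e
        a_new = a.copy()
        a_new[i] = k
        a_new[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a = a_new
        e *= (1.0 - k * k)
    return a[1:]

def lpc_to_cepstrum(a, n_ceps):
    """Standard recursion c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k}."""
    p = len(a)
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k] * a[n - k - 1]
        c[n] = acc
    return c[1:]

# Example on one 30 ms frame of a synthetic signal (values assumed for illustration).
rng = np.random.default_rng(0)
sr = 16000
t = np.arange(int(0.03 * sr)) / sr
frame = (np.sin(2 * np.pi * 180 * t) + 0.05 * rng.standard_normal(t.size)) * np.hamming(t.size)
ceps = lpc_to_cepstrum(lpc_coefficients(frame, order=12), n_ceps=12)
print(np.round(ceps, 3))
```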

A LSF Quantizer for the Wideband Speech Using the Predictive VQ-Pyramid VQ (예측 VQ-Pyramid VQ를 이용한 광대역 음성용 LSF 양자학기 설계)

  • 이강은;이인성;강상원
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.4
    • /
    • pp.333-339
    • /
    • 2004
  • This paper proposes the vector quantizer-pyramid vector quantizer (VQ-PVQ) structure. Both a predictive structure and the safety-net concept are combined with the VQ-PVQ to quantize the LPC parameters of a wideband speech codec. The performance is compared with that of the LPC vector quantizer used in AMR-WB (ITU-T G.722.2), demonstrating reductions in both spectral distortion and encoding memory.
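
The abstract combines a predictive quantizer with a safety-net path. The sketch below illustrates only that general idea: a first-order predictive VQ and a memoryless (safety-net) VQ compete per frame, and the lower-distortion path is kept. It does not reproduce the paper's pyramid VQ stage, and the codebooks, prediction coefficient, and dimensions are invented for the example (real codebooks would be trained, e.g. with LBG).

```python
import numpy as np

def nearest(codebook, x):
    """Index and entry of the nearest codebook vector (squared Euclidean)."""
    idx = np.argmin(np.sum((codebook - x) ** 2, axis=1))
    return idx, codebook[idx]

def predictive_safety_net_encode(lsf, res_cb, abs_cb, rho=0.6):
    """Per frame, choose between predictive and safety-net (memoryless) quantization.

    lsf    : (n_frames, dim) parameter vectors to quantize
    res_cb : codebook for the prediction residual (predictive path)
    abs_cb : codebook for the raw vector (safety-net path)
    rho    : first-order prediction coefficient (assumed, not from the paper)
    """
    mean = lsf.mean(axis=0)
    prev_q = mean.copy()                 # decoder state: last quantized vector
    out = []
    for x in lsf:
        pred = mean + rho * (prev_q - mean)          # first-order AR prediction
        i_res, q_res = nearest(res_cb, x - pred)     # predictive path
        i_abs, q_abs = nearest(abs_cb, x)            # safety-net path
        cand_pred = pred + q_res
        use_pred = np.sum((x - cand_pred) ** 2) < np.sum((x - q_abs) ** 2)
        q = cand_pred if use_pred else q_abs
        out.append((bool(use_pred), int(i_res if use_pred else i_abs)))
        prev_q = q                                   # update predictor memory
    return out

# Toy example with random "codebooks"; sizes and data are assumed for illustration.
rng = np.random.default_rng(0)
dim, n_frames = 10, 50
lsf = np.cumsum(rng.normal(scale=0.02, size=(n_frames, dim)), axis=0) + np.linspace(0.2, 3.0, dim)
res_cb = rng.normal(scale=0.05, size=(256, dim))
abs_cb = lsf[rng.integers(0, n_frames, size=256)] + rng.normal(scale=0.05, size=(256, dim))
print(predictive_safety_net_encode(lsf, res_cb, abs_cb)[:5])
```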

Neural-network-based Driver Drowsiness Detection System Using Linear Predictive Coding Coefficients and Electroencephalographic Changes (선형예측계수와 뇌파의 변화를 이용한 신경회로망 기반 운전자의 졸음 감지 시스템)

  • Chong, Ui-Pil;Han, Hyung-Seob
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.13 no.3
    • /
    • pp.136-141
    • /
    • 2012
  • One of the main causes of serious road accidents is driving while drowsy. For this reason, drowsiness detection and warning systems for drivers have recently become a very important issue. Monitoring physiological signals offers a way to detect features of drowsiness and fatigue in drivers; one effective approach is to measure electroencephalogram (EEG) and electrooculogram (EOG) signals. The aim of this study is to extract drowsiness-related features from a set of EEG signals and to classify them into three states: alertness, drowsiness, and sleepiness. This paper proposes a neural-network-based drowsiness detection system using Linear Predictive Coding (LPC) coefficients as feature vectors and a Multi-Layer Perceptron (MLP) as the classifier. Samples of EEG data from each predefined state were used to train the MLP with the proposed feature extraction algorithms. The trained MLP was tested on unclassified EEG data and subsequently reviewed against manual classification. The classification rate of the proposed system is over 96.5% using only a very small number of samples (250 ms, 64 samples), so it can be applied to real driving situations in which an incident can occur within a split second.
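
As a rough, hedged sketch of the pipeline described above (LPC coefficients of short EEG windows classified by an MLP), the code below uses synthetic 64-sample windows in place of EEG, solves the LPC normal equations with scipy, and trains scikit-learn's MLPClassifier. The 64-sample window matches the figure quoted in the abstract, while the LPC order, the class-generating signals, and the network size are assumptions made only for the example.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.neural_network import MLPClassifier

def lpc_features(window, order=8):
    """LPC coefficients of one window via the autocorrelation (Toeplitz) equations."""
    r = np.correlate(window, window, mode="full")[len(window) - 1:]
    return solve_toeplitz(r[:order], r[1:order + 1])   # a_1..a_order

# Synthetic stand-in for labelled EEG windows (64 samples at 256 Hz = 250 ms);
# real data would be band-passed EEG segments labelled alert / drowsy / sleepy.
rng = np.random.default_rng(1)
def make_class(freq, n):
    t = np.arange(64) / 256.0
    return [np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(64) for _ in range(n)]

windows = make_class(10, 100) + make_class(6, 100) + make_class(2, 100)
labels = [0] * 100 + [1] * 100 + [2] * 100      # 0=alert, 1=drowsy, 2=sleepy
X = np.array([lpc_features(w) for w in windows])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```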

GMM-Based Gender Identification Employing Group Delay (Group Delay를 이용한 GMM기반의 성별 인식 알고리즘)

  • Lee, Kye-Hwan;Lim, Woo-Hyung;Kim, Nam-Soo;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.6
    • /
    • pp.243-249
    • /
    • 2007
  • We propose an effective voice-based gender identification method using group delay (GD). Generally, features for speech recognition are composed of magnitude information rather than phase information. In our approach, we address the difference between male and female speech in GD, which is a derivative of the Fourier transform phase. We also propose a novel feature fusion scheme based on a combination of GD and magnitude-based information such as mel-frequency cepstral coefficients (MFCC), linear predictive coding (LPC) coefficients, reflection coefficients, and formants. The experimental results indicate that GD is effective in discriminating gender, and that the performance is significantly improved when the proposed feature fusion technique is applied.
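
Group delay is the negative derivative of the Fourier transform phase. One common way to compute it per frame without explicit phase unwrapping uses GD(ω) = Re{Y(ω)·conj(X(ω))} / |X(ω)|², where X is the DFT of x[n] and Y is the DFT of n·x[n]. The sketch below illustrates only that formula, not the paper's feature pipeline; the frame, signal, and FFT size are arbitrary choices for the example.

```python
import numpy as np

def group_delay(frame, n_fft=512, eps=1e-8):
    """Group delay via GD(w) = Re{ Y(w) * conj(X(w)) } / |X(w)|^2,
    where X = FFT(x[n]) and Y = FFT(n * x[n]); avoids phase unwrapping."""
    n = np.arange(len(frame))
    X = np.fft.rfft(frame, n_fft)
    Y = np.fft.rfft(n * frame, n_fft)
    return np.real(Y * np.conj(X)) / (np.abs(X) ** 2 + eps)

# Toy usage on one Hamming-windowed frame of a synthetic two-tone signal.
sr = 16000
t = np.arange(400) / sr                      # 25 ms frame (assumed length)
frame = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
gd = group_delay(frame * np.hamming(len(frame)))
print(gd.shape, np.round(gd[:5], 3))
```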

A MFCC-based CELP Speech Coder for Server-based Speech Recognition in Network Environments (네트워크 환경에서 서버용 음성 인식을 위한 MFCC 기반 음성 부호화기 설계)

  • Lee, Gil-Ho;Yoon, Jae-Sam;Oh, Yoo-Rhee;Kim, Hong-Kook
    • MALSORI
    • /
    • no.54
    • /
    • pp.27-43
    • /
    • 2005
  • Existing standard speech coders can provide speech communication of high quality, but they degrade the performance of speech recognition systems that operate on the speech reconstructed by the coders. The main cause of the degradation is that the spectral envelope parameters in speech coding are optimized for speech quality rather than for speech recognition performance. For example, mel-frequency cepstral coefficients (MFCC) are generally known to give better speech recognition performance than linear prediction coefficients (LPC), which are the typical parameter set in speech coding. In this paper, we propose a speech coder using MFCC instead of LPC to improve the performance of a server-based speech recognition system in network environments. The main difficulty in using MFCC, however, is developing efficient MFCC quantization at a low bit rate. First, we explore the interframe correlation of MFCCs, which leads to predictive quantization of MFCC. Second, a safety-net scheme is proposed to make the MFCC-based speech coder robust to channel errors. As a result, we propose an 8.7 kbps MFCC-based CELP coder. A PESQ test shows that the proposed speech coder has speech quality comparable to 8 kbps G.729, while the speech recognition performance obtained with the proposed coder is better than that obtained with G.729.
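
The first step the abstract mentions, measuring the interframe correlation of MFCCs that motivates predictive quantization, can be illustrated in a few lines. The sketch below assumes librosa is available and uses a synthetic signal, so the printed lag-1 correlations are only illustrative; with real speech they are typically high, which is exactly what a predictive quantizer exploits.

```python
import numpy as np
import librosa

# Synthetic 1-second signal stands in for speech; real speech would be loaded instead.
rng = np.random.default_rng(0)
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 220 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
y = (y + 0.01 * rng.standard_normal(sr)).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # shape (13, n_frames)

# Lag-1 correlation per coefficient: corr(c_k[t], c_k[t-1]).
lag1 = np.array([np.corrcoef(mfcc[k, 1:], mfcc[k, :-1])[0, 1] for k in range(mfcc.shape[0])])
print(np.round(lag1, 3))
```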

A Design of Speech Feature Vector Extractor using TMS320C31 DSP Chip (TMS DSP 칩을 이용한 음성 특징 벡터 추출기 설계)

  • 예병대;이광명;성광수
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2212-2215
    • /
    • 2003
  • In this paper, we propose a speech feature vector extractor for embedded systems using the TMS320C31 DSP chip. The extractor uses cepstrum coefficients derived from LPC (Linear Predictive Coding), a reliable feature that is widely used for speech recognition. Because the system extracts the speech feature vectors in real time, it can be used in mobile devices that implement speech recognition, such as cellular phones, PDAs, and electronic organizers.
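
A real-time extractor of this kind is essentially a streaming front end: pre-emphasis, framing, and windowing, followed by the per-frame LPC-cepstrum computation (see the earlier sketch). The fragment below illustrates only that framing front end; the frame length, hop, and pre-emphasis constant are typical values, not taken from the paper, and an actual DSP implementation would use fixed-point arithmetic.

```python
import numpy as np

def frame_stream(samples, frame_len=256, hop=128, preemph=0.97):
    """Yield pre-emphasized, Hamming-windowed frames one at a time,
    as a streaming feature-extraction front end would produce them."""
    window = np.hamming(frame_len)
    emphasized = np.append(samples[0], samples[1:] - preemph * samples[:-1])
    for start in range(0, len(emphasized) - frame_len + 1, hop):
        yield emphasized[start:start + frame_len] * window

# Toy usage: count frames and report the energy of the first one for a synthetic signal.
rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 200 * t) + 0.05 * rng.standard_normal(sr)
energies = [float(np.dot(f, f)) for f in frame_stream(speech)]
print(len(energies), round(energies[0], 2))
```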

Vocabulary Recognition Post-Processing System using Phoneme Similarity Error Correction (음소 유사율 오류 보정을 이용한 어휘 인식 후처리 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.7
    • /
    • pp.83-90
    • /
    • 2010
  • In a vocabulary recognition system, the recognition rate is reduced by unrecognized-word errors caused by confusions between similar phonemes and by an inaccurate vocabulary being provided. When an inaccurate vocabulary is given, feature extraction yields either an unrecognized result or a similar-phoneme misrecognition, and features cannot be extracted properly when similar phonemes are confused. In this paper, we propose a vocabulary recognition post-processing system that corrects errors using phoneme similarity based on phoneme features. The phoneme similarity is obtained from monophone training data using MFCC and LPC feature extraction methods. By guiding similar phonemes toward the correct phoneme, the error rate caused by unrecognized results from an inaccurate vocabulary is reduced. Error correction is performed on vocabulary items identified as erroneous, using the phoneme similarity together with a confidence measure. Compared with a system using error patterns and a system using semantic information, the recognition rate improved by 7.5% with MFCC and 5.3% with LPC.
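
One simple way to realize phoneme-similarity-based error correction of the kind described above is a weighted edit distance whose substitution cost is low for acoustically similar phonemes, used to snap a recognized phoneme string to the closest lexicon entry. The sketch below is a toy illustration under that assumption; the lexicon, the similarity costs, and the omission of a confidence measure are simplifications, not the paper's method.

```python
import numpy as np

def weighted_edit_distance(seq, ref, sub_cost, ins_del=1.0):
    """Dynamic-programming edit distance with phoneme-dependent substitution cost."""
    n, m = len(seq), len(ref)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1) * ins_del
    d[0, :] = np.arange(m + 1) * ins_del
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            same = seq[i - 1] == ref[j - 1]
            cost = sub_cost.get((seq[i - 1], ref[j - 1]), 0.0 if same else 1.0)
            d[i, j] = min(d[i - 1, j] + ins_del,
                          d[i, j - 1] + ins_del,
                          d[i - 1, j - 1] + cost)
    return d[n, m]

def correct(recognized, lexicon, sub_cost):
    """Replace the recognized phoneme string with the closest lexicon entry."""
    return min(lexicon, key=lambda ref: weighted_edit_distance(recognized, ref, sub_cost))

# Toy example: 'p'/'b' are assumed acoustically similar, so substituting them is cheap.
sub_cost = {("p", "b"): 0.2, ("b", "p"): 0.2}
lexicon = [("b", "a", "n"), ("m", "a", "n"), ("t", "a", "n")]
print(correct(("p", "a", "n"), lexicon, sub_cost))   # -> ('b', 'a', 'n')
```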

Phoneme-Boundary-Detection and Phoneme Recognition Research using Neural Network (음소경계검출과 신경망을 이용한 음소인식 연구)

  • 임유두;강민구;최영호
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.224-229
    • /
    • 1999
  • In the field of speech recognition, research can be divided into two categories: the development of phoneme-level recognition systems and the efficiency of word-level recognition systems. A reasonable phoneme-level recognition system should detect phonemic boundaries appropriately and thereby achieve improved recognition ability. Traditional LPC methods detect phoneme boundaries with the Itakura-Saito measure, which computes the distance between the LPC of standard phoneme data and that of the target speech frame. MFCC methods, which treat spectral transitions as phonemic boundaries, show a lack of adaptability. In this paper, we present a new speech recognition system that uses an autocorrelation method in the phonemic-boundary-detection process and a multi-layered feed-forward neural network in the recognition process. The proposed system outperforms the traditional methods in terms of adaptability, and a further advantage is that the feature-extraction part is independent of the recognition process. The results show that a frame-level phonemic recognition system can feasibly be implemented.
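
A minimal sketch of an autocorrelation-based boundary detector in the spirit of the abstract: each frame is summarized by its normalized autocorrelation over a few lags, and frames where this vector changes sharply from the previous frame are flagged as candidate phoneme boundaries. The frame size, number of lags, distance measure, and threshold are assumptions for illustration, and the neural-network recognition stage is omitted.

```python
import numpy as np

def frame_autocorr(frame, n_lags=16):
    """Normalized autocorrelation r[1..n_lags]/r[0] of one frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    return r[1:n_lags + 1] / (r[0] + 1e-12)

def boundary_candidates(signal, frame_len=256, hop=128, threshold=0.5):
    """Flag frames whose autocorrelation vector changes sharply
    relative to the previous frame as candidate phoneme boundaries."""
    feats = [frame_autocorr(signal[s:s + frame_len] * np.hamming(frame_len))
             for s in range(0, len(signal) - frame_len + 1, hop)]
    dist = np.array([np.linalg.norm(feats[i] - feats[i - 1]) for i in range(1, len(feats))])
    return np.where(dist > threshold)[0] + 1          # frame indices after a sharp change

# Toy usage: a signal that switches "phoneme" (frequency) halfway through.
rng = np.random.default_rng(0)
sr = 8000
t1 = np.arange(sr // 2) / sr
sig = np.concatenate([np.sin(2 * np.pi * 150 * t1), np.sin(2 * np.pi * 400 * t1)])
sig += 0.01 * rng.standard_normal(len(sig))
print(boundary_candidates(sig))
```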

A Practical Implementation of the LTJ Adaptive Filter and Its Application to the Adaptive Echo Canceller (LTJ 적응필터의 실용적 구현과 적응반향제거기에 대한 적용)

  • Yoo, Jae-Ha
    • Speech Sciences
    • /
    • v.11 no.2
    • /
    • pp.227-235
    • /
    • 2004
  • In this paper, we propose a new, practical implementation of the lattice transversal joint (LTJ) adaptive filter that uses information from the speech codec. It is applied to the adaptive echo cancellation problem to verify the efficiency of the proposed method. Real-time implementation of the LTJ adaptive filter is very difficult because of the high computational complexity of compensating the filter coefficients. When a speech codec is used, however, the complexity can be reduced, since the linear predictive coding (LPC) coefficients are updated every frame or sub-frame instead of every sample. Furthermore, the LPC coefficients can be obtained from the speech decoder and transformed into reflection coefficients, so the computational complexity of updating the reflection coefficients is also reduced. The effectiveness of the proposed LTJ adaptive filter was verified by experiments on the convergence and tracking performance of the adaptive echo canceller.
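
The conversion the abstract relies on, turning decoder LPC coefficients into reflection (PARCOR) coefficients, is the standard step-down (backward Levinson) recursion. The sketch below implements it for the convention A(z) = 1 + a₁z⁻¹ + … + a_p z⁻ᵖ and checks it against the forward step-up recursion; it is a generic utility, not the paper's echo-canceller code.

```python
import numpy as np

def lpc_to_reflection(a):
    """Step-down (backward Levinson) recursion.

    a : LPC coefficients a_1..a_p for A(z) = 1 + a_1 z^-1 + ... + a_p z^-p (assumed convention).
    Returns reflection coefficients k_1..k_p.
    """
    a = np.asarray(a, dtype=float).copy()
    p = len(a)
    k = np.zeros(p)
    for m in range(p, 0, -1):
        k[m - 1] = a[m - 1]
        if m > 1:
            km = k[m - 1]
            a = (a[:m - 1] - km * a[m - 2::-1]) / (1.0 - km * km)
    return k

def reflection_to_lpc(k):
    """Forward step-up recursion, used here only as a self-check."""
    a = np.array([])
    for km in k:
        a = np.concatenate([a + km * a[::-1], [km]]) if a.size else np.array([km])
    return a

k_true = np.array([0.5, -0.3, 0.2])
print(np.allclose(lpc_to_reflection(reflection_to_lpc(k_true)), k_true))   # True
```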

Speech synthesis using acoustic Doppler signal (초음파 도플러 신호를 이용한 음성 합성)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.35 no.2
    • /
    • pp.134-142
    • /
    • 2016
  • In this paper, a method for synthesizing a speech signal from 40 kHz ultrasonic signals reflected from the articulatory muscles is introduced and its performance is evaluated. When ultrasound is radiated toward the articulating face, Doppler effects caused by movements of the lips, jaw, and chin are observed: signals whose frequencies differ from that of the transmitted signal appear in the received signal. These acoustic Doppler signals (ADS) were used to estimate the speech parameters in this study. Before synthesizing the speech signal, a quantitative correlation analysis between the ADS and the speech signal was carried out for each frequency bin; the results support the feasibility of ADS-based speech synthesis. The ADS-to-speech transformation was achieved by joint Gaussian mixture model (GMM)-based conversion rules. Experimental results from five subjects showed that filter-bank energies and LPC (Linear Predictive Coefficient) cepstrum coefficients are the optimal features for ADS and speech, respectively. In a subjective evaluation in which the synthesized speech signals were obtained using excitation sources extracted from the original speech signals, the ADS-to-speech conversion method yielded a 72.2% average recognition rate.
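
The joint-GMM conversion rule mentioned above is commonly implemented as an MMSE mapping: fit a full-covariance GMM on stacked [source; target] vectors, then convert with ŷ = Σ_m p(m|x)[μ_y^(m) + Σ_yx^(m)(Σ_xx^(m))⁻¹(x − μ_x^(m))]. The sketch below shows that generic mapping on synthetic paired features standing in for ADS and speech parameters; the feature dimensions, number of mixtures, and data are assumptions, and refinements such as dynamic features are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

def train_joint_gmm(X, Y, n_comp=4):
    """Fit a full-covariance GMM on stacked [x; y] vectors."""
    return GaussianMixture(n_components=n_comp, covariance_type="full",
                           random_state=0).fit(np.hstack([X, Y]))

def convert(gmm, x, dx):
    """MMSE conversion: y_hat = sum_m p(m|x) [mu_y_m + S_yx_m S_xx_m^-1 (x - mu_x_m)]."""
    w, mu, S = gmm.weights_, gmm.means_, gmm.covariances_
    # Mixture responsibilities from the marginal GMM over x.
    px = np.array([wm * multivariate_normal.pdf(x, mean=mum[:dx], cov=Sm[:dx, :dx])
                   for wm, mum, Sm in zip(w, mu, S)])
    resp = px / px.sum()
    y_hat = np.zeros(mu.shape[1] - dx)
    for rm, mum, Sm in zip(resp, mu, S):
        gain = Sm[dx:, :dx] @ np.linalg.inv(Sm[:dx, :dx])
        y_hat += rm * (mum[dx:] + gain @ (x - mum[:dx]))
    return y_hat

# Toy paired data: "ADS features" X and "speech features" Y with a noisy linear relation.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
A = np.array([[1.0, 0.2, 0.0], [0.0, 0.8, 0.1], [0.3, 0.0, 1.2]])
Y = X @ A.T + 0.1 * rng.normal(size=(2000, 3))
gmm = train_joint_gmm(X, Y, n_comp=4)
print(np.round(convert(gmm, X[0], dx=3), 3), np.round(Y[0], 3))
```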