• Title/Summary/Keyword: cepstrum coefficients

Speaker Verification Using Hidden LMS Adaptive Filtering Algorithm and Competitive Learning Neural Network (Hidden LMS 적응 필터링 알고리즘을 이용한 경쟁학습 화자검증)

  • Cho, Seong-Won;Kim, Jae-Min
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.51 no.2
    • /
    • pp.69-77
    • /
    • 2002
  • Speaker verification can be classified into two categories: text-dependent and text-independent speaker verification. In this paper, we discuss text-dependent speaker verification. A text-dependent speaker verification system determines whether the sound characteristics of the speaker match those of a specific person. We obtain the speaker data using a sound card under various noisy conditions, apply a new Hidden LMS (Least Mean Square) adaptive algorithm to the data, and extract LPC (Linear Predictive Coding) cepstrum coefficients as feature vectors. Finally, we use a competitive learning neural network for speaker verification. The proposed hidden LMS adaptive filter using a neural network reduces noise and enhances the features under various noisy conditions. We construct a separate neural network for each speaker, which makes it unnecessary to retrain the whole network when a new speaker is added and thus makes system expansion easy. We experimentally show that the proposed method improves speaker verification performance.
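
The paper's hidden-LMS variant is not spelled out in the abstract, but the core LMS update it builds on can be sketched in a few lines of numpy (all parameter values and names here are illustrative, not from the paper):

```python
import numpy as np

def lms_filter(noisy, reference, n_taps=8, mu=0.01):
    """Adapt an FIR filter so its output tracks `reference`
    from the `noisy` input (plain LMS, not the paper's hidden-LMS)."""
    w = np.zeros(n_taps)
    out = np.zeros(len(noisy))
    for n in range(n_taps, len(noisy)):
        x = noisy[n - n_taps:n][::-1]   # most recent samples first
        out[n] = w @ x                  # filter output
        e = reference[n] - out[n]       # estimation error
        w += 2 * mu * e * x             # LMS weight update
    return out

# toy check: recover a sinusoid buried in white noise
t = np.arange(2000)
clean = np.sin(2 * np.pi * 0.05 * t)
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=t.size)
filtered = lms_filter(noisy, clean)
```

After the weights converge, the filtered signal sits much closer to the clean reference than the raw noisy input does.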

A Hybrid Neural Network model for Enhancement of Speaker Recognition in Video Stream (비디오 화자 인식 성능 향상을 위한 복합 신경망 모델)

  • Lee, Beom-Jin;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06b
    • /
    • pp.396-398
    • /
    • 2012
  • Since most real-world data is temporal in nature, machine learning methods that can analyze temporal data are very important. From this viewpoint, video data is a representative kind of temporal data in which multiple modalities are combined, so machine learning methods targeting video data are of great significance. In this paper, as a preliminary study on audio-channel-based video data analysis, we introduce a simple method for recognizing the speakers who appear in video data. The proposed method is characterized by a hybrid neural network model that analyzes the distribution of human voice characteristics using MFCC (mel-frequency cepstrum coefficients) and feeds the analysis results into a neural network to recognize the target speaker. Comparing the speaker recognition performance of a Gaussian mixture model, a Gaussian mixture neural network model, and the proposed method on real TV drama data, we confirmed that the proposed method achieves the best recognition performance.

A Study on EMG functional Recognition Using Neural Network (신경 회로망을 이용한 EMG신호 기능 인식에 관한 연구)

  • Jo, Jeong-Ho;Choi, Joon-Ho;Wang, Moon-Sung;Park, Sang-Hui
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1990 no.05
    • /
    • pp.73-78
    • /
    • 1990
  • In this study, LPC cepstrum coefficients extracted from an AR model of the EMG signal are used as feature vectors, and a reduced-connection network, which has fewer connections between nodes, is constructed to classify and recognize EMG functional classes. The proposed network reduces learning time and improves system stability. It is therefore shown that the proposed network is appropriate for recognizing the function of the EMG signal.

Classification of Pornographic Videos Based on the Audio Information (오디오 신호에 기반한 음란 동영상 판별)

  • Kim, Bong-Wan;Choi, Dae-Lim;Lee, Yong-Ju
    • MALSORI
    • /
    • no.63
    • /
    • pp.139-151
    • /
    • 2007
  • As the Internet becomes prevalent in our lives, harmful content such as pornographic videos has been increasing on the Internet, which has become a very serious problem. To prevent such exposure, there are many filtering systems, mainly based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on the audio information. As feature vectors we use the mel-cepstrum modulation energy (MCME), a modulation energy calculated on the time trajectory of the mel-frequency cepstral coefficients (MFCC), as well as the MFCC themselves. For the classifier, we use the well-known Gaussian mixture model (GMM). The experimental results showed that the proposed system correctly classified 98.3% of pornographic data and 99.8% of non-pornographic data. We expect the proposed method can be applied to a more accurate classification system that uses both video and audio information.

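
The MCME feature above is a modulation energy computed on the time trajectories of the MFCCs; a simplified numpy sketch of that idea (the frame rate and modulation band are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def modulation_energy(mfcc, frame_rate=100.0, band=(2.0, 16.0)):
    """For each cepstral coefficient, take the spectrum of its time
    trajectory and sum the energy inside a modulation-frequency band."""
    traj = mfcc - mfcc.mean(axis=0)                 # drop the DC component
    spec = np.abs(np.fft.rfft(traj, axis=0)) ** 2   # trajectory spectra
    freqs = np.fft.rfftfreq(mfcc.shape[0], d=1.0 / frame_rate)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].sum(axis=0)                   # one energy per coefficient

# toy trajectories: coefficient 0 modulates at 4 Hz (inside the band),
# coefficient 1 at 40 Hz (outside it)
t = np.arange(200) / 100.0
mfcc = np.stack([np.sin(2 * np.pi * 4 * t), np.sin(2 * np.pi * 40 * t)], axis=1)
energies = modulation_energy(mfcc)
```

Only the coefficient whose trajectory modulates inside the band accumulates significant energy.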
Speech Recognition Using HMM Based on Fuzzy (피지에 기초를 둔 HMM을 이용한 음성 인식)

  • 안태옥;김순협
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.12
    • /
    • pp.68-74
    • /
    • 1991
  • This paper proposes a fuzzy-based HMM model as a method for speaker-independent speech recognition. In this recognition method, multi-observation sequences are obtained by assigning appropriate probabilities, through a fuzzy rule, according to the rank order of distance from the VQ codebook. An HMM model is then generated from these multi-observation sequences, and at recognition time the word with the highest probability is selected as the recognized word. The vocabulary for the recognition experiments consists of 146 DDD area names, and the feature parameters are 10th-order LPC cepstrum coefficients. Besides the speech recognition experiments with the proposed model, for comparison we performed experiments with DP, MSVQ, and a conventional HMM under the same conditions and data. The experimental results show that the proposed fuzzy-based HMM model is superior to the DP method, MSVQ, and the conventional HMM model in both recognition rate and computation time.

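
The paper's exact fuzzy rule is not given in the abstract, but the multi-observation idea above, replacing the single nearest VQ codeword with several ranked codewords carrying graded weights, can be sketched as follows (inverse-distance memberships are an assumption made here for illustration):

```python
import numpy as np

def fuzzy_vq_observations(frame, codebook, k=3):
    """Return the k nearest codewords with graded membership weights
    (inverse-distance, normalized to sum to 1) instead of a single
    hard nearest-neighbour symbol."""
    d = np.linalg.norm(codebook - frame, axis=1)   # distance to each codeword
    order = np.argsort(d)[:k]                      # k closest symbols, ranked
    w = 1.0 / (d[order] + 1e-12)
    return order, w / w.sum()

codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
symbols, weights = fuzzy_vq_observations(np.array([0.1, 0.1]), codebook)
```

The closest codeword still receives the largest weight, but nearby codewords contribute observations as well, which is what softens the hard VQ decision.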
A Study on Speech Recognition by One Stage MSVQ/DP (One stage MSVQ/DP를 이용한 음성 인식에 관한연구)

  • Jeoung, Eui-Bung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.2
    • /
    • pp.5-12
    • /
    • 1994
  • This paper proposes a One Stage MSVQ/DP method for a word recognition system. University administration branch names are selected for the recognition experiment, and 10 LPC cepstrum coefficients are used as the feature parameters. Besides the speech recognition experiments with the proposed method, for comparison we performed experiments on the same data with the Level Building DTW (LB-DTW) and One Stage DP methods. The recognition rates with LB-DTW and One Stage DP are 83.3% and 87.5%, respectively, while the recognition rate with the proposed method is 91.6%.

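
Both baselines above are dynamic-programming template matchers; the core DTW recurrence they share can be sketched as follows (a toy example, not the paper's one-stage variant):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming alignment cost between two feature
    sequences (frames x dims) with the usual three-way recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref    = np.array([[0.0], [1.0], [2.0], [3.0]])
warped = np.array([[0.0], [0.0], [1.0], [2.0], [2.0], [3.0]])  # same word, slower
other  = np.array([[3.0], [2.0], [1.0], [0.0]])                # different pattern
```

Time warping absorbs the tempo difference, so the slowed-down copy of the reference scores far lower than a genuinely different pattern.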
Engine Fault Diagnosis Using Sound Source Analysis Based on Hidden Markov Model (HMM기반 소음분석에 의한 엔진고장 진단기법)

  • Le, Tran Su;Lee, Jong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.5
    • /
    • pp.244-250
    • /
    • 2014
  • The most serious engine faults are those that occur within the engine. Traditional engine fault diagnosis is highly dependent on the engineer's technical skill and has a high failure rate. Neural networks and support vector machines have been proposed for use in diagnosis models. In this paper, noisy sound from faulty engines is represented by mel-frequency cepstrum coefficients, zero-crossing rate, mean square, and fundamental frequency features, which are used in a hidden Markov model for diagnosis. Our experimental results indicate that the proposed method performs the diagnosis with a high accuracy rate of about 98% for all eight fault types.

Audio-Visual Integration based Multi-modal Speech Recognition System (오디오-비디오 정보 융합을 통한 멀티 모달 음성 인식 시스템)

  • Lee, Sahng-Woon;Lee, Yeon-Chul;Hong, Hun-Sop;Yun, Bo-Hyun;Han, Mun-Sung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11a
    • /
    • pp.707-710
    • /
    • 2002
  • This paper proposes a multi-modal speech recognition system based on the fusion of audio and video information. By fusing speech features with visual features, the system recognizes human speech efficiently in noisy environments. Mel-frequency cepstrum coefficients (MFCC) are used as the speech features, and feature vectors obtained through principal component analysis are used as the visual features. In addition, to improve the recognition rate from the visual information itself, the face region is located using a skin-color model and facial shape information, and the lip region is then detected with a robust lip-region extraction method. Audio-visual fusion is performed as early fusion using a modified time-delay neural network. Experiments show that fusing the audio and visual information improves performance by roughly 5-20% over using the audio information alone.

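
The visual features above come from principal component analysis; a minimal sketch of projecting image vectors onto the top principal components (the data and dimensions are illustrative, not the paper's lip images):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X (samples x pixels) onto the top principal
    components via the SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are PCs
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X[:, 0] *= 10.0                     # one direction dominates the variance
Z = pca_project(X, n_components=2)
```

The first projected coordinate captures the dominant variance direction, which is why a handful of components can stand in for a whole lip-region image.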
A Signal Processing Technique for Predictive Fault Detection based on Vibration Data (진동 데이터 기반 설비고장예지를 위한 신호처리기법)

  • Song, Ye Won;Lee, Hong Seong;Park, Hoonseok;Kim, Young Jin;Jung, Jae-Yoon
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.2
    • /
    • pp.111-121
    • /
    • 2018
  • Many problems in rotating machinery such as aircraft engines, wind turbines, and motors are caused by bearing defects. Abnormalities of a bearing can be detected by analyzing signal data such as vibration or noise, but proper pre-processing through a few signal processing techniques is required before their frequencies can be analyzed. In this paper, we introduce a condition monitoring method that diagnoses failures of rotating machines by analyzing the vibration signal of the bearing. From the collected signal data, the normal states are trained, and data are then classified as normal or abnormal based on the trained normal state. For pre-processing, a Hamming window is applied to eliminate the leakage generated in this process, and cepstrum analysis is performed to recover the envelope of the original signal, called the formant. From the vibration data of the IMS bearing dataset, we extracted six statistical indicators from the cepstral coefficients and showed that applying a Mahalanobis distance classifier can monitor the bearing status and detect failures in advance.
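
The cepstrum analysis step can be sketched directly with the FFT: the real cepstrum is the inverse transform of the log magnitude spectrum, and a periodic fault signature shows up as a peak at the matching quefrency (the signal below is a toy harmonic series, not the IMS bearing data):

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.
    A periodic component in the signal appears as a peak at the
    corresponding quefrency (measured in samples)."""
    spectrum = np.abs(np.fft.fft(x))
    return np.fft.ifft(np.log(spectrum + 1e-12)).real

# toy fault signature: a 50 Hz harmonic series sampled at 1 kHz,
# so the cepstral peak is expected near quefrency 1000/50 = 20 samples
fs = 1000
t = np.arange(1024) / fs
x = sum(np.sin(2 * np.pi * 50 * k * t) for k in range(1, 10))
c = real_cepstrum(x * np.hamming(len(x)))   # Hamming window, as in the paper
peak = 5 + np.argmax(c[5:100])              # skip the low-quefrency envelope
```

The low-quefrency bins hold the spectral envelope (the formant structure the paper refers to), while the peak further out marks the harmonic spacing.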

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services
    • /
    • v.24 no.2
    • /
    • pp.11-18
    • /
    • 2023
  • Recently, in the field of speech emotion recognition (SER), many studies have been conducted to improve accuracy through voice features and modeling. In addition to modeling studies that improve the accuracy of existing voice emotion recognition, various studies using voice features are being carried out. In this paper, focusing on the fact that vocal emotion is related to the flow of time, voice files are separated by time interval in a time-series manner. After separating the voice files, we propose a model that classifies the emotion of the speech data by extracting the speech features Mel, Chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstrum coefficients (MFCC) and applying them to recurrent neural network models used for sequential data processing. In the proposed method, voice features were extracted from all files using the 'librosa' library and applied to the neural network models. The experiments compared and analyzed the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) English dataset.
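
Two of the frame-level features listed above, ZCR and RMS, reduce to a few lines of numpy (the frame length and hop are illustrative choices; the paper extracts its features with librosa):

```python
import numpy as np

def frame_features(x, frame_len=400, hop=160):
    """Per-frame zero-crossing rate and RMS energy."""
    n_frames = 1 + (len(x) - frame_len) // hop
    zcr = np.zeros(n_frames)
    rms = np.zeros(n_frames)
    for i in range(n_frames):
        f = x[i * hop:i * hop + frame_len]
        zcr[i] = np.mean(np.abs(np.diff(np.sign(f))) > 0)  # sign-change rate
        rms[i] = np.sqrt(np.mean(f ** 2))                  # frame energy
    return zcr, rms

# quiet low-frequency segment followed by a louder, noisier one
rng = np.random.default_rng(1)
sig = np.concatenate([0.1 * np.sin(2 * np.pi * 5 * np.arange(8000) / 16000),
                      rng.normal(scale=0.5, size=8000)])
zcr, rms = frame_features(sig)
```

Framing the signal like this yields the per-chunk feature sequences that the recurrent models (RNN/LSTM/GRU) consume.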