• Title/Summary/Keywords: Speech recognition systems

Search results: 358 items

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.5 No.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, and the feature vectors are obtained through ICA (Independent Component Analysis). For the speech signal, the recognition algorithm is performed independently for each wavelet subband, and the final result is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than the individual methods.
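
The decision-level fusion step described above can be sketched as follows. This is a minimal illustration assuming each modality outputs a posterior over the six emotions; the weighted product rule shown is one common combination choice, not necessarily the paper's exact merging scheme.

```python
# Decision-level fusion of facial and speech emotion classifiers.
# Assumption: each modality yields a probability vector over six emotions.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fuse(face_probs, speech_probs, face_weight=0.5):
    """Combine per-modality posteriors with a weighted product rule."""
    face = np.asarray(face_probs, dtype=float)
    speech = np.asarray(speech_probs, dtype=float)
    fused = face ** face_weight * speech ** (1.0 - face_weight)
    return fused / fused.sum()  # renormalize to a probability vector

face_p = [0.6, 0.1, 0.1, 0.1, 0.05, 0.05]
speech_p = [0.3, 0.3, 0.1, 0.1, 0.1, 0.1]
fused_p = fuse(face_p, speech_p)
predicted = EMOTIONS[int(np.argmax(fused_p))]
```

With these illustrative posteriors, the fused decision follows the modality agreement on "happiness".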

지능형 홈네트워크 시스템을 위한 가변어휘 연속음성인식시스템에 관한 연구 (A Study on Vocabulary-Independent Continuous Speech Recognition System for Intelligent Home Network System)

  • 이호웅;정희석
    • The Journal of the Korea Institute of Intelligent Transport Systems / Vol.7 No.2 / pp.37-42 / 2008
  • In this paper, we developed a vocabulary-independent continuous speech recognition system for voice control of an intelligent home network. To recognize natural spoken commands, we composed interactive scenarios for keyword-based natural continuous speech, and we built a keyword-based recognition engine and database to optimize the engine's performance.
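
A keyword-based command interpreter of the kind described above can be sketched as follows. The keyword table and device names are hypothetical; a real system would operate on the recognizer's output hypotheses rather than plain strings.

```python
# Minimal keyword spotting over a recognized word sequence.
# KEYWORDS maps hypothetical command words to hypothetical device actions.
KEYWORDS = {
    "light": "livingroom.light",
    "on": "turn_on",
    "off": "turn_off",
}

def interpret(words):
    """Pick out known command keywords from a recognized utterance."""
    return [KEYWORDS[w] for w in words if w in KEYWORDS]

actions = interpret("please turn the light on now".split())
```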

소프트컴퓨팅 기법을 이용한 다음절 단어의 음성인식 (Speech Recognition of Multi-Syllable Words Using Soft Computing Techniques)

  • 이종수;윤지원
    • Transactions of the Society of Information Storage Systems / Vol.6 No.1 / pp.18-24 / 2010
  • The performance of speech recognition mainly depends on uncertain factors such as the speaker's condition and environmental effects. The present study deals with the recognition of a number of multi-syllable isolated Korean words using soft computing techniques such as the back-propagation neural network, fuzzy inference system, and fuzzy neural network. Feature patterns for speech recognition are analyzed as thirty frames of 12th-order coefficients, normalized using linear predictive coding and cepstrums. Using four speech recognizer models, experiments are conducted for both single speakers and multiple speakers. The recognizers combining fuzzy logic with the back-propagation neural network, and the fuzzy neural network, show better performance at identifying speech.
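
The linear predictive coding step behind the feature patterns above can be sketched with the autocorrelation method and the Levinson-Durbin recursion. This is a minimal illustration of 12th-order LPC analysis; the paper's exact framing and normalization are not reproduced here.

```python
# Autocorrelation-method LPC via the Levinson-Durbin recursion.
import numpy as np

def lpc(frame, order=12):
    """Return LPC coefficients a (with a[0] = 1) and the prediction error."""
    # Autocorrelation of the frame at lags 0..order.
    full = np.correlate(frame, frame, mode="full")
    r = full[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for model order i.
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a, err

# Sanity check on a synthetic AR(1) signal x[n] = 0.9 x[n-1] + e[n]:
# the first LPC coefficient should come out near -0.9.
rng = np.random.default_rng(0)
e = rng.standard_normal(8000)
x = np.empty_like(e)
x[0] = e[0]
for n in range(1, len(e)):
    x[n] = 0.9 * x[n - 1] + e[n]
coeffs, pred_err = lpc(x, order=12)
```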

음성 변환을 사용한 감정 변화에 강인한 음성 인식 (Emotion Robust Speech Recognition using Speech Transformation)

  • 김원구
    • Journal of Korean Institute of Intelligent Systems / Vol.20 No.5 / pp.683-687 / 2010
  • In this paper, frequency warping, one of several speech transformation methods, is studied to implement a speech recognition system robust to changes in the speaker's emotion. Using a speech database containing various emotions, we observed that the speech spectrum changes with the speaker's emotional state and that this change is one of the causes of degraded speech recognition performance. To reduce this variation, we propose applying frequency warping during training, implement an emotion-robust speech recognition system, and compare its performance with a method based on vocal tract length normalization. In isolated-word recognition experiments using HMMs, the proposed training method reduced the recognition error on emotional speech data compared with the conventional method.
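
The frequency-warping operation above can be sketched as a resampling of the magnitude spectrum's frequency axis. This is a minimal illustration with a simple linear warp; the warping function and factor used in the paper are not reproduced here.

```python
# Linear frequency warping of a magnitude spectrum.
import numpy as np

def warp_spectrum(spec, alpha):
    """Resample a magnitude spectrum so bin k takes the value at k/alpha."""
    n = len(spec)
    src = np.clip(np.arange(n) / alpha, 0.0, n - 1)
    return np.interp(src, np.arange(n), spec)

# A spectral peak at bin 100 moves to bin 200 under alpha = 2.
spec = np.zeros(512)
spec[100] = 1.0
warped = warp_spectrum(spec, alpha=2.0)
```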

Pattern Recognition Methods for Emotion Recognition with Speech Signal

  • Park Chang-Hyun;Sim Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.6 No.2 / pp.150-154 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are the artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section.
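
The LBG algorithm named above can be sketched as codebook growth by splitting followed by k-means-style refinement. This is a minimal illustration; the split factor and iteration count are illustrative, not taken from the paper.

```python
# Linde-Buzo-Gray (LBG) vector quantization codebook training.
import numpy as np

def lbg(data, codebook_size, eps=0.01, n_iter=20):
    """Train a VQ codebook of the requested (power-of-two) size."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        # Split every codeword into a perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            # Assign each vector to its nearest codeword, then re-center.
            dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(axis=1)
            for k in range(len(codebook)):
                members = data[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Two well-separated clusters should yield one codeword per cluster.
data = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
book = lbg(data, codebook_size=2)
```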

Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser;Almasganj, Farshad
    • ETRI Journal / Vol.35 No.1 / pp.100-108 / 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike the conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for a new input speech frame, a posterior-probability-based feature vector is evaluated, which represents the similarity between the embedded frame and the learned speech attractors. We conduct experiments for a speech recognition task utilizing a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. Through the proposed FE method, we gain 3.11% absolute phoneme error rate improvement in comparison to the baseline system, which exploits the mel-frequency cepstral coefficient FE method.
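
The phase-space reconstruction step underlying the method above can be sketched as time-delay embedding of a scalar signal. The GMM attractor modeling and posterior-feature computation from the paper are not reproduced; the embedding dimension and delay here are illustrative.

```python
# Time-delay embedding into a reconstructed phase space (RPS).
import numpy as np

def embed(signal, dim=3, delay=5):
    """Stack delayed copies of the signal into phase-space points."""
    n = len(signal) - (dim - 1) * delay
    cols = [signal[i * delay:i * delay + n] for i in range(dim)]
    return np.stack(cols, axis=1)  # shape: (n_points, dim)

points = embed(np.arange(20.0), dim=3, delay=5)
```

Each row of `points` is one point of the embedded trajectory; a ramp signal embeds to evenly spaced delay vectors.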

자동 음성 인식기를 위한 단채널 음질 향상 알고리즘의 성능 분석 (Performance Analysis of a Class of Single Channel Speech Enhancement Algorithms for Automatic Speech Recognition)

  • 송명석;이창헌;이석필;강홍구
    • The Journal of the Acoustical Society of Korea / Vol.29 No.2E / pp.86-99 / 2010
  • This paper analyzes the performance of various single channel speech enhancement algorithms when they are applied to automatic speech recognition (ASR) systems as a preprocessor. The functional modules of speech enhancement systems are first divided into four major modules: a gain estimator, a noise power spectrum estimator, an a priori signal-to-noise ratio (SNR) estimator, and a speech absence probability (SAP) estimator. We investigate the relationship between speech recognition accuracy and the role of each module. Simulation results show that the Wiener filter outperforms other gain functions, such as the minimum mean square error-short time spectral amplitude (MMSE-STSA) and minimum mean square error-log spectral amplitude (MMSE-LSA) estimators, when a perfect noise estimator is applied. When the performance of the noise estimator degrades, however, the MMSE methods, which include the decision-directed module for estimating the a priori SNR and the SAP estimation module, help to improve the performance of the enhancement algorithm for speech recognition systems.
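
Two of the modules discussed above can be sketched directly: the Wiener gain as a function of the a priori SNR, and the decision-directed a priori SNR estimator. This is a minimal per-bin illustration; the smoothing constant `alpha` is illustrative.

```python
# Wiener gain and decision-directed a priori SNR estimation.
import numpy as np

def wiener_gain(xi):
    """Wiener suppression gain for a priori SNR xi."""
    xi = np.asarray(xi, dtype=float)
    return xi / (1.0 + xi)

def decision_directed_xi(prev_clean_power, noise_power, post_snr, alpha=0.98):
    """Blend the previous frame's clean-speech power estimate with the
    current (a posteriori SNR - 1), floored at zero."""
    return (alpha * prev_clean_power / noise_power
            + (1.0 - alpha) * np.maximum(post_snr - 1.0, 0.0))

gain = wiener_gain(1.0)  # 0 dB a priori SNR -> gain 0.5
xi_hat = decision_directed_xi(2.0, 1.0, 3.0, alpha=0.5)
```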

자동차 잡음환경에서의 음성인식에 적용된 두 종류의 일반화된 감마분포 기반의 음성추정 알고리즘 비교 (Comparison of Two Speech Estimation Algorithms Based on Generalized-Gamma Distribution Applied to Speech Recognition in Car Noisy Environment)

  • 김형국;이진호
    • The Journal of the Korea Institute of Intelligent Transport Systems / Vol.8 No.4 / pp.28-32 / 2009
  • This paper compares two speech estimation algorithms based on the generalized-Gamma distribution, applied to DFT-based single-microphone speech enhancement. In the enhancement scheme, a noise estimate derived from the recursive average spectrum of the minimum noise components is combined with Gamma-distribution-based speech estimation for the cases κ=1 and κ=2 to improve speech quality. The speech signals enhanced by each method were then applied to speech recognition in a car environment, and the performance of the two approaches was compared.
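
The difference between the two cases compared above comes down to the shape of the Gamma prior. As a minimal illustration, the density below shows the κ=1 (exponential-shaped) and κ=2 shapes; the full MMSE amplitude estimator derived from these priors is not reproduced here.

```python
# Gamma density for the two shape parameters compared in the paper.
import math

def gamma_pdf(x, kappa, theta=1.0):
    """Gamma density; kappa = 1 reduces to the exponential density."""
    return x ** (kappa - 1) * math.exp(-x / theta) / (math.gamma(kappa) * theta ** kappa)

p1 = gamma_pdf(0.5, kappa=1)  # exponential-shaped prior
p2 = gamma_pdf(0.5, kappa=2)  # prior whose mode lies away from zero
```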

음성신호를 이용한 감성인식에서의 패턴인식 방법 (The Pattern Recognition Methods for Emotion Recognition with Speech Signal)

  • 박창현;심귀보
    • Journal of Institute of Control, Robotics and Systems / Vol.12 No.3 / pp.284-288 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are the artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature: the selection of emotion features and of a suitable classification method are both open problems, so we hope this paper can serve as a reference in these discussions.

음성신호를 이용한 감성인식에서의 패턴인식 방법 (The Pattern Recognition Methods for Emotion Recognition with Speech Signal)

  • 박창현;심귀보
    • Korean Institute of Intelligent Systems Conference Proceedings / 2006 Spring Conference of the Korea Fuzzy Logic and Intelligent Systems Society, Vol.16 No.1 / pp.347-350 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are the artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature: the selection of emotion features and of a suitable classification method are both open problems, so we hope this paper can serve as a reference in these discussions.
