• Title/Abstract/Keyword: Audio recognition

166 search results (processing time: 0.013 s)

Design and Implementation of a Bimodal User Recognition System using Face and Audio

  • 김명훈;이지근;소인미;정성태
    • 한국컴퓨터정보학회논문지 / Vol. 10, No. 5 / pp. 353-362 / 2005
  • Research on bimodal recognition has recently been very active. In this paper, we implemented a bimodal system using voice and face information. Face recognition was tested in two parts: face detection and face recognition. In the face detection stage, candidate face regions were detected using AdaBoost, and the dimensionality of the feature vectors was then reduced by PCA. The feature vectors extracted by PCA were used to detect and recognize faces with an SVM classifier. For speech recognition, MFCC was used to extract speech features, and an HMM was used for recognition. The results show that using face and voice together improves the recognition rate compared to using a single modality, with an even larger gain in noisy environments.


Music Recognition Using Audio Fingerprint: A Survey

  • 이동현;임민규;김지환
    • 말소리와 음성과학 / Vol. 4, No. 1 / pp. 77-87 / 2012
  • Interest in music recognition has grown dramatically since NHN and Daum released their mobile music-recognition applications in 2010. Methods for music recognition based on audio analysis fall into two categories: music recognition using an audio fingerprint and query-by-singing/humming (QBSH). While fingerprint-based music recognition takes recorded music as its input, QBSH takes a melody hummed by the user. In this paper, research trends in fingerprint-based music recognition are described, focusing on two methods: one that generates fingerprints from energy differences between consecutive bands, and one that generates hash keys from pairs of peak points. The details presented in the representative papers of each method are introduced.
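The first of the two surveyed methods, deriving fingerprint bits from energy differences between consecutive bands, can be sketched in a few lines of Python. The integer band-energy matrix below is a toy stand-in for a real spectral analysis; only the bit-derivation rule is meant literally:

```python
def fingerprint_bits(energies):
    """Derive one fingerprint bit per (frame, band) pair.

    Bit F(n, m) is 1 when the energy difference between bands m and m+1
    increases from frame n-1 to frame n:
    F(n, m) = 1 if (E(n,m) - E(n,m+1)) - (E(n-1,m) - E(n-1,m+1)) > 0.
    """
    bits = []
    for n in range(1, len(energies)):
        frame_bits = []
        for m in range(len(energies[n]) - 1):
            cur = energies[n][m] - energies[n][m + 1]
            prev = energies[n - 1][m] - energies[n - 1][m + 1]
            frame_bits.append(1 if cur - prev > 0 else 0)
        bits.append(frame_bits)
    return bits

# Toy example: 3 frames x 4 bands of hypothetical band energies.
energies = [
    [9, 5, 4, 2],
    [3, 6, 5, 1],
    [8, 2, 6, 3],
]
print(fingerprint_bits(energies))  # → [[0, 0, 1], [1, 0, 0]]
```

Matching then reduces to comparing bit strings (e.g., by Hamming distance) against a fingerprint database.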

A Novel Integration Scheme for Audio Visual Speech Recognition

  • Pham, Than Trung;Kim, Jin-Young;Na, Seung-You
    • 한국음향학회지 / Vol. 28, No. 8 / pp. 832-842 / 2009
  • Automatic speech recognition (ASR) has been successfully applied to many real human-computer interaction (HCI) applications; however, its performance degrades significantly in noisy environments. Audio-visual speech recognition (AVSR), which combines the acoustic signal with lip motion, has recently attracted attention for its robustness to noise. In this paper, we describe a novel integration scheme for AVSR based on a late-integration approach. First, we introduce robust reliability measures for the audio and visual modalities using model-based and signal-based information: the model-based source measures the confusability of the vocabulary, while the signal-based source estimates the noise level. Second, the output probabilities of the audio and visual speech recognizers are normalized before the final integration step, which combines the normalized output spaces using the estimated weights. We evaluate the proposed method on a Korean isolated-word recognition task. The experimental results demonstrate the effectiveness and feasibility of the proposed system compared to conventional systems.
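A minimal sketch of the late-integration idea: each recognizer scores every vocabulary word, and the scores are combined with a reliability weight before the final decision. The per-word log-likelihoods and the fixed weight values here are hypothetical stand-ins for the paper's normalized outputs and estimated weights:

```python
def fuse_late(audio_scores, visual_scores, audio_weight):
    """Late integration: combine per-word log-likelihoods from the audio
    and visual recognizers with a reliability weight in [0, 1], then
    return the best-scoring word."""
    fused = {}
    for word in audio_scores:
        fused[word] = (audio_weight * audio_scores[word]
                       + (1.0 - audio_weight) * visual_scores[word])
    return max(fused, key=fused.get)

# Hypothetical normalized log-likelihoods for a 3-word vocabulary.
audio = {"open": -1.2, "close": -0.8, "stop": -2.5}
visual = {"open": -0.5, "close": -1.9, "stop": -2.0}

print(fuse_late(audio, visual, 0.9))  # audio trusted → "close"
print(fuse_late(audio, visual, 0.2))  # visual trusted → "open"
```

The same toy shows why reliability estimation matters: shifting the weight toward the cleaner modality changes the recognized word.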

Improvement of Rejection Performance using the Lip Image and the PSO-NCM Optimization in Noisy Environment

  • 김병돈;최승호
    • 말소리와 음성과학 / Vol. 3, No. 2 / pp. 65-70 / 2011
  • Recently, audio-visual speech recognition (AVSR) has been studied to cope with noise problems in speech recognition. In this paper, we propose a novel method for determining the weighting factors used in audio-visual information fusion, adopting particle swarm optimization (PSO) for the weighting-factor determination. AVSR experiments show that PSO-based normalized confidence measures (NCM) improve the rejection performance for mis-recognized words by 33%.

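The role of PSO in weight determination can be illustrated with a toy run. The `validation_error` objective below is a hypothetical stand-in for the validation error the fusion weight would be tuned against; only the PSO mechanics are meant literally:

```python
import random

random.seed(0)

def validation_error(w):
    """Stand-in for AVSR validation error as a function of the
    audio/visual fusion weight; the toy optimum is w = 0.7."""
    return (w - 0.7) ** 2

def pso_weight(fitness, n_particles=10, n_iters=50):
    """Minimize fitness over a scalar weight in [0, 1] with basic PSO."""
    pos = [random.random() for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # personal best positions
    gbest = min(pos, key=fitness)       # global best position
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.5 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i]
    return gbest

w = pso_weight(validation_error)
print(round(w, 3))  # close to 0.7 in this toy run
```

In the paper's setting the fitness would be a rejection/recognition criterion measured on held-out data rather than a closed-form function.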

Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information

  • 이종석;박철훈
    • 제어로봇시스템학회논문지 / Vol. 13, No. 8 / pp. 719-725 / 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike typical speech recognition systems, our system uses the visual signal containing the speaker's lip movements along with the acoustic signal to achieve recognition performance that is robust to environmental noise. The procedures for acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly improves recognition performance in noisy conditions compared to acoustic-only recognition.

Robust Speech Recognition in the Car Interior Environment having Car Noise and Audio Output

  • 박철호;배재철;배건성
    • 대한음성학회지:말소리 / No. 62 / pp. 85-96 / 2007
  • In this paper, we carried out recognition experiments on noisy speech containing various levels of car noise and audio-system output, using the proposed speech interface. The speech interface consists of three parts: pre-processing, an acoustic echo canceller, and post-processing. First, a high-pass filter is employed in the pre-processing stage to remove engine noise. Then, an echo canceller, implemented as an FIR-type filter with an NLMS adaptive algorithm, removes the music or speech coming from the car's audio system. Finally, an MMSE-STSA-based speech enhancement method is applied to the output of the echo canceller to further suppress residual noise. For the recognition experiments, we generated test signals by adding music to the noisy car speech from the Aurora 2 database. An HTK-based continuous HMM system was constructed for recognition. Experimental results show that the proposed speech interface is very promising for robust speech recognition in a noisy car environment.

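The NLMS echo canceller at the core of such an interface can be sketched as follows. The filter length, step size, and simulated two-tap echo path are illustrative choices, not the paper's settings:

```python
import random

def nlms_echo_cancel(reference, mic, n_taps=4, mu=0.5, eps=1e-8):
    """Cancel the audio-system output (reference) picked up by the car
    microphone, using an FIR filter adapted with the NLMS algorithm.
    Returns the error signal (mic minus estimated echo)."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(mic)):
        # Tap-delay line of the reference (loudspeaker) signal.
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        echo_est = sum(wk * xk for wk, xk in zip(w, x))
        e = mic[n] - echo_est
        norm = sum(xk * xk for xk in x) + eps   # input power normalization
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]
        out.append(e)
    return out

# Simulate a mic signal that is purely echo through a known 2-tap path.
random.seed(1)
ref = [random.uniform(-1, 1) for _ in range(2000)]
mic = [0.5 * ref[n] + (0.2 * ref[n - 1] if n > 0 else 0.0)
       for n in range(2000)]
residual = nlms_echo_cancel(ref, mic)
print(max(abs(e) for e in residual[-100:]))  # small residual after adaptation
```

With real signals the error would contain the driver's speech plus residual noise, which is why the MMSE-STSA enhancement stage follows the canceller.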

Improved Bimodal Speech Recognition Study Based on Product Hidden Markov Model

  • Xi, Su Mei;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 3 / pp. 164-170 / 2013
  • Recent years have seen growing demand for automatic speech recognition (ASR) systems that can operate robustly in acoustically noisy environments. This paper proposes an improved product hidden Markov model (HMM) for bimodal speech recognition. A two-dimensional training model is built from separately trained audio and visual HMMs, reflecting the asynchronous characteristics of the audio and video streams. A weight coefficient is introduced to adjust the relative weights of the video and audio streams automatically according to the noise environment. Experimental results show that, compared with other bimodal speech recognition approaches, this approach achieves better recognition performance.
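A minimal sketch of stream-weighted observation scoring in the spirit of a weighted product HMM: the state's combined log-likelihood is a weighted sum of the per-stream log-likelihoods. The linear SNR-to-weight mapping is an assumption for illustration, not the paper's adaptation rule:

```python
def combined_log_likelihood(log_b_audio, log_b_video, snr_db,
                            snr_low=0.0, snr_high=20.0):
    """State observation log-probability of a stream-weighted product HMM:
    log b = lam * log b_audio + (1 - lam) * log b_video,
    with the audio weight lam rising linearly with the estimated SNR."""
    lam = min(1.0, max(0.0, (snr_db - snr_low) / (snr_high - snr_low)))
    return lam * log_b_audio + (1.0 - lam) * log_b_video

# Clean speech (20 dB): trust audio; heavy noise (0 dB): trust video.
print(combined_log_likelihood(-2.0, -5.0, 20.0))  # → -2.0
print(combined_log_likelihood(-2.0, -5.0, 0.0))   # → -5.0
```

Because the weighting happens per state, the product model can also absorb modest asynchrony between the audio and video streams.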

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems / Vol. 17, No. 4 / pp. 754-771 / 2021
  • In continuous dimensional emotion recognition, the parts that carry emotional expression differ across modalities, and different modalities influence the estimated emotional state to different degrees. This paper therefore studies the fusion of the two most important modalities in emotion recognition (voice and facial expression) and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, audio features are first extracted using prior knowledge. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions are 0.729 and 0.718, respectively, which are superior to several comparison algorithms.
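A toy sketch of attention-weighted fusion of two modality feature vectors: softmax over per-modality relevance scores yields the fusion weights. The scores and two-dimensional features are hypothetical; in the actual model the attention weights are learned rather than supplied as inputs:

```python
import math

def attention_fuse(audio_feat, video_feat, audio_score, video_score):
    """Fuse two modality feature vectors using softmax attention weights
    derived from per-modality relevance scores."""
    m = max(audio_score, video_score)        # subtract max for stability
    ea = math.exp(audio_score - m)
    ev = math.exp(video_score - m)
    wa, wv = ea / (ea + ev), ev / (ea + ev)
    return [wa * a + wv * v for a, v in zip(audio_feat, video_feat)]

# Audio scored higher → the fused vector leans toward the audio features.
fused = attention_fuse([1.0, 0.0], [0.0, 1.0], 2.0, 0.0)
print([round(x, 3) for x in fused])  # → [0.881, 0.119]
```

When one modality is missing, its score can be driven down so the softmax shifts nearly all weight to the remaining stream, which is the intuition behind handling the missing-modality problem.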

Speech Recognition by Integrating Audio, Visual and Contextual Features Based on Neural Networks

  • 김명원;한문성;이순신;류정우
    • 전자공학회논문지CI / Vol. 41, No. 3 / pp. 67-77 / 2004
  • Recently, methods that fuse audio and visual information for reliable speech recognition in noisy environments have been actively studied. This paper describes a speech recognition technique that recognizes words in noisy environments by fusing various sources of information, such as audio, visual, and contextual information, based on a neural network model suited to fusing heterogeneous information. We propose a bimodal neural network (BMNN) that uses audio and visual features. BMNN has a four-layer multilayer perceptron structure in which each layer abstracts its input features; the third layer integrates the audio and visual features to compensate for the loss of audio information caused by noise. We also propose a post-processing method that uses contextual information, namely the sequential patterns of the words spoken by the user, to improve the recognition rate in noisy environments. In noisy environments, BMNN outperforms audio-only recognition, confirming its validity; in particular, with context-based post-processing, it achieves a recognition rate of over 90%. This study shows that robust speech recognition in noise can be improved by exploiting various additional sources of information.

Noise Robust Automatic Speech Recognition Scheme with Histogram of Oriented Gradient Features

  • Park, Taejin;Beack, SeungKwan;Lee, Taejin
    • IEIE Transactions on Smart Processing and Computing / Vol. 3, No. 5 / pp. 259-266 / 2014
  • In this paper, we propose a novel technique for noise-robust automatic speech recognition (ASR). Advances in ASR have made it possible to recognize isolated words with a near-perfect word recognition rate; however, in highly noisy environments, the mismatch between the training speech and the test data significantly degrades word recognition accuracy (WRA). Unlike conventional ASR systems employing Mel-frequency cepstral coefficients (MFCCs) and a hidden Markov model (HMM), this study applies histogram of oriented gradients (HOG) features and a support vector machine (SVM) to ASR tasks to overcome this problem. The proposed ASR system is less vulnerable to external interference noise and achieves a higher WRA than a conventional ASR system based on MFCCs and an HMM. Performance was evaluated using a phonetically balanced word (PBW) set mixed with artificially added noise.
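The HOG computation underlying such features can be sketched for a single cell: estimate gradients with central differences, then accumulate gradient magnitudes into orientation bins. The 4x4 patch is a toy image standing in for whatever 2-D representation of the speech the features are computed over:

```python
import math

def hog_cell(patch, n_bins=9):
    """Orientation histogram for one cell of a grayscale patch (a list of
    rows), using central differences and unsigned gradients (0-180 deg)."""
    hist = [0.0] * n_bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist

# A vertical edge puts all gradient energy in the 0-degree bin.
patch = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(hog_cell(patch))  # → [36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Concatenating such cell histograms (usually with block normalization) yields the feature vector that would be fed to the SVM classifier.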