• Title/Abstract/Keyword: Speech recognition systems


이기종 음성 인식 시스템에 독립적으로 적용 가능한 특징 보상 기반의 음성 향상 기법 (Speech Enhancement Based on Feature Compensation for Independently Applying to Different Types of Speech Recognition Systems)

  • 김우일
    • 한국정보통신학회논문지 / Vol. 18, No. 10 / pp.2367-2374 / 2014
  • This paper proposes a speech enhancement technique that can be applied independently of the target speech recognition system. For feature compensation, which is known to be effective for speech recognition in noisy environments, to work, the feature extraction method and acoustic model must match those of the recognition system. When no information about the recognizer is available, as when a preprocessing step is added to a commercial recognition system, conventional feature compensation is therefore hard to apply. This paper proposes a speech enhancement technique that uses the gains obtained from an existing PCGMM-based feature compensation method. Experimental results confirm that, when applied to an unknown speech recognition system, the proposed technique delivers substantially better recognition performance than conventional preprocessing techniques under a variety of noise and SNR conditions. (A minimal sketch of applying such gains in the spectral domain follows.)
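
The gain-based enhancement idea can be illustrated at the signal level. Below is a minimal sketch, assuming the PCGMM feature-compensation module is available as a black box returning per-bin gains; here it is replaced by a toy spectral-subtraction-style estimator, so the function names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft, istft

def enhance_with_gains(noisy, fs, gain_fn, nperseg=512):
    """Apply per-frame spectral gains (e.g., from a PCGMM-style
    feature-compensation module) to a noisy signal."""
    f, t, Z = stft(noisy, fs=fs, nperseg=nperseg)
    gains = gain_fn(np.abs(Z))          # shape: (freq_bins, frames)
    Z_hat = gains * Z                   # scale magnitude, keep phase
    _, enhanced = istft(Z_hat, fs=fs, nperseg=nperseg)
    return enhanced

# Hypothetical stand-in for the PCGMM gain estimator: a crude
# spectral-subtraction-style gain from a rough noise-floor estimate.
def toy_gain(mag, floor=0.1):
    noise = 0.5 * mag.mean(axis=1, keepdims=True)
    return np.maximum(1.0 - noise / np.maximum(mag, 1e-8), floor)

# Usage: enhanced = enhance_with_gains(noisy, 16000, toy_gain)
```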

응급상황에서의 음성인식을 위한 필터기 구현 (Implementation of Speech Recognition Filtering at Emergency)

  • 조영임;장성순
    • 한국지능시스템학회논문지 / Vol. 20, No. 2 / pp.208-213 / 2010
  • Background noise is generally one of the biggest obstacles to using speech recognition systems: it degrades recognition performance and consequently restricts where such systems can be used. To counter this, this paper targets speech quality enhancement by removing noise from the signal level up. Based on the typical human speech frequency band and the noise band extracted with an FIR band-pass filter, a Wiener filter is implemented and its performance improved, so that over the transmitted speech signal noise is handled flexibly according to whether each interval contains noise or speech. (A rough sketch of this two-stage front end appears below.)
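
A rough sketch of such a two-stage front end, assuming a 300-3400 Hz speech band and generic SciPy filters; the paper's actual filter orders and band estimates are not reproduced here.

```python
import numpy as np
from scipy.signal import firwin, lfilter, wiener

def denoise(x, fs, lo=300.0, hi=3400.0, taps=257):
    """FIR band-pass to the typical speech band, then a Wiener
    filter on the band-limited signal. Band edges, tap count, and
    Wiener window size are assumptions for illustration."""
    bp = firwin(taps, [lo, hi], pass_zero=False, fs=fs)
    band = lfilter(bp, [1.0], x)        # isolate the speech band
    return wiener(band, mysize=29)      # adaptive noise smoothing
```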

Multimodal audiovisual speech recognition architecture using a three-feature multi-fusion method for noise-robust systems

  • Sanghun Jeon;Jieun Lee;Dohyeon Yeo;Yong-Ju Lee;SeungJun Kim
    • ETRI Journal / Vol. 46, No. 1 / pp.22-34 / 2024
  • Exposure to varied noisy environments impairs the recognition performance of artificial-intelligence-based speech recognition technologies. Services with degraded performance can be deployed as limited systems that assure good performance only in certain environments, which impairs the general quality of speech recognition services. This study introduces an audiovisual speech recognition (AVSR) model that is robust to various noise settings and mimics the elements of human dialogue recognition. The model converts word embeddings and log-Mel spectrograms into feature vectors for audio recognition, and a dense spatiotemporal convolutional neural network extracts features for visual-based recognition. This approach exhibits improved aural and visual recognition capabilities. We assess the signal-to-noise ratio in nine synthesized noise environments, with the proposed model exhibiting lower average error rates: the error rate of the AVSR model using the three-feature multi-fusion method is 1.711%, compared with the general rate of 3.939%. Owing to its enhanced stability and recognition rate, the model is applicable in noise-affected environments. (A toy fusion head is sketched below.)
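
The three-stream fusion can be illustrated with a toy PyTorch head. All dimensions, the project-then-concatenate scheme, and the class count are assumptions for illustration only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TriFusion(nn.Module):
    """Toy three-stream fusion: audio (log-Mel), visual (lip-CNN),
    and text (word-embedding) vectors are projected to a shared
    size, concatenated, and classified."""
    def __init__(self, d_audio=80, d_visual=512, d_text=300,
                 d=256, n_classes=500):
        super().__init__()
        self.pa = nn.Linear(d_audio, d)
        self.pv = nn.Linear(d_visual, d)
        self.pt = nn.Linear(d_text, d)
        self.cls = nn.Linear(3 * d, n_classes)

    def forward(self, a, v, t):
        z = torch.cat([self.pa(a), self.pv(v), self.pt(t)], dim=-1)
        return self.cls(torch.relu(z))   # word-level logits
```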

순환 신경망 모델을 이용한 한국어 음소의 음성인식에 대한 연구 (A Study on the Speech Recognition of Korean Phonemes Using Recurrent Neural Network Models)

  • 김기석;황희영
    • 대한전기학회논문지 / Vol. 40, No. 8 / pp.782-791 / 1991
  • In pattern recognition fields such as speech recognition, several new techniques using artificial neural network models have been proposed and implemented. In particular, the multilayer perceptron model has been shown to be effective for static speech pattern recognition. Speech, however, has dynamic, temporal characteristics, and the most important point in implementing continuous speech recognition with artificial neural network models is learning those dynamic characteristics together with the distributed cues and contextual effects that result from them. The recurrent multilayer perceptron model is known to be able to learn sequences of patterns. This paper presents the results of applying this recurrent model, which can learn the temporal characteristics of speech, to phoneme recognition. The test data consist of 144 vowel + consonant + vowel speech chains made up of 4 Korean monophthongs and 9 Korean plosive consonants. The input parameters of the neural network model are FFT coefficients, residual error, and zero-crossing rates. The baseline model showed recognition rates of 91% for vowels and 71% for plosive consonants of one male speaker. Various further experiments yielded better recognition rates than the existing multilayer perceptron model, showing the recurrent model to be better suited to speech recognition, and its possibilities were explored by changing the configuration of the baseline model. (A minimal recurrent classifier over such frame features is sketched below.)
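
A minimal recurrent classifier over per-frame features of the kind the paper uses (FFT coefficients, residual error, zero-crossing rate); the feature and hidden sizes are assumptions, and the 13 output classes correspond to the 4 vowels plus 9 plosives.

```python
import torch
import torch.nn as nn

class PhonemeRNN(nn.Module):
    """Minimal recurrent phoneme classifier over per-frame
    acoustic features; sizes are illustrative assumptions."""
    def __init__(self, n_feats=34, hidden=128, n_phonemes=13):
        super().__init__()
        self.rnn = nn.RNN(n_feats, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_phonemes)

    def forward(self, x):          # x: (batch, frames, n_feats)
        h, _ = self.rnn(x)         # hidden state carries context
        return self.out(h)         # per-frame phoneme logits
```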

음성의 감성요소 추출을 통한 감성 인식 시스템 (The Emotion Recognition System through The Extraction of Emotional Components from Speech)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지 / Vol. 10, No. 9 / pp.763-770 / 2004
  • The important issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the information essential for classifying emotions, and feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch of the speech signal carries much information about emotion. Accordingly, this paper examines the relation of emotion to features such as loudness and pitch, and classifies emotions using statistics of the collected data. The most important emotional component of sound is tone, and the inference ability of the brain also takes part in emotion recognition. This paper empirically identifies the emotional components of speech, experiments on emotion recognition, and proposes a recognition method using these emotional components and transition probabilities. (A sketch of extracting such prosodic statistics follows.)
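
A sketch of extracting the kind of utterance-level prosodic statistics the paper works with (pitch and loudness); the use of librosa and the specific pitch search range are assumptions.

```python
import numpy as np
import librosa

def emotion_features(y, sr):
    """Utterance-level pitch/loudness statistics of the sort a
    statistical emotion classifier could consume. Feature choice
    here is an assumption, not the paper's exact set."""
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)   # frame pitch
    rms = librosa.feature.rms(y=y)[0]               # frame loudness
    return np.array([f0.mean(), f0.std(), f0.max(),
                     rms.mean(), rms.std(), rms.max()])
```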

Robust Speech Detection Based on Useful Bands for Continuous Digit Speech over Telephone Networks

  • Ji, Mi-Kyongi;Suh, Young-Joo;Kim, Hoi-Rin;Kim, Sang-Hun
    • The Journal of the Acoustical Society of Korea / Vol. 22, No. 3E / pp.113-123 / 2003
  • One of the most important problems in speech recognition is detecting the presence of speech in adverse environments: accurate detection of speech boundaries is critical to recognition performance. The speech detection problem becomes more severe when recognition systems are used over telephone networks, especially wireless networks and noisy environments. This paper therefore describes various speech detection algorithms for a continuous digit recognition system used over wired/wireless telephone networks, and proposes an algorithm that improves the robustness of speech detection by using useful band selection under noisy telephone networks. We compare several speech detection algorithms with the proposed one and present experimental results obtained at various SNRs. The results show that the new algorithm outperforms the other speech detection methods. (A toy band-energy detector is sketched below.)
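
A toy useful-band voice-activity detector in the spirit of the proposal; the band choices, noise-floor estimate, and threshold are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.signal import stft

def band_vad(x, fs, bands=((300, 1000), (1000, 2500)), thresh_db=6.0):
    """Frame-level speech detection from energy restricted to a few
    'useful' sub-bands, thresholded above a crude noise floor."""
    f, t, Z = stft(x, fs=fs, nperseg=int(0.025 * fs))
    power = np.abs(Z) ** 2
    sel = np.zeros(len(f), dtype=bool)
    for lo, hi in bands:                 # keep only useful bands
        sel |= (f >= lo) & (f < hi)
    e = 10 * np.log10(power[sel].sum(axis=0) + 1e-12)
    floor = np.percentile(e, 20)         # crude noise-floor estimate
    return e > floor + thresh_db         # True = speech frame
```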

Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • 대한임베디드공학회논문지 / Vol. 16, No. 3 / pp.89-97 / 2021
  • A voice interface can not only make daily life more convenient through artificial intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authorized-worker authentication by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification using our model was almost perfect, and the command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model. (A minimal call sketch for the Google recognizer follows.)
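
One of the two recognizers compared is Google's Speech-to-Text API; below is a minimal call sketch using the standard google-cloud-speech client, with credentials setup and the paper's command post-processing omitted.

```python
from google.cloud import speech

def transcribe_command(wav_bytes, sample_rate=16000):
    """Send one utterance to Google Speech-to-Text and return the
    top transcript hypotheses (language code is an assumption)."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return [r.alternatives[0].transcript for r in response.results]
```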

음성명령기반 26관절 보행로봇 실시간 작업동작제어에 관한 연구 (A Study on Real-Time Walking Action Control of Biped Robot with Twenty Six Joints Based on Voice Command)

  • 조상영;김민성;양준석;구영목;정양근;한성현
    • 제어로봇시스템학회논문지 / Vol. 22, No. 4 / pp.293-300 / 2016
  • Voice recognition is one of the most convenient ways for humans to communicate with robots. This study proposes a speech recognition method using recognizers based on hidden Markov models (HMMs), combined with supporting techniques, to enhance biped robot control. Artificial neural networks (ANN) and dynamic time warping (DTW) were used in the past but are now less commonly applied to speech recognition systems. This research confirms that the HMM, an accepted high-performance technique, can successfully model speech signals, and high recognition accuracy can be obtained with HMMs. Apart from speech modeling, multiple feature extraction methods were studied to capture speech stresses caused by emotion and the environment and thereby improve recognition rates. The procedure consists of two parts: recognizing robot commands using multiple HMM recognizers, and sending the recognized commands to control the robot. The proposed practical voice recognition system, which can recognize many task commands, consists of a general-purpose microprocessor and a voice recognition processor that recognizes a limited number of voice patterns. Simulation and experiment demonstrated the reliability of the voice recognition rates for application to the manufacturing process. (A minimal bank-of-HMMs recognizer is sketched below.)
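
A minimal bank-of-HMMs command recognizer in the spirit of the paper, using the hmmlearn package on MFCC sequences; the state count and Gaussian observation model are assumptions.

```python
import numpy as np
from hmmlearn import hmm

class CommandRecognizer:
    """One GaussianHMM per voice command; at test time the model
    with the highest log-likelihood wins. A stand-in sketch for the
    paper's multiple HMM recognizers, not its implementation."""
    def __init__(self, n_states=5):
        self.n_states = n_states
        self.models = {}

    def fit(self, command, mfcc_list):
        X = np.vstack(mfcc_list)               # stacked frames
        lengths = [len(m) for m in mfcc_list]  # per-utterance lengths
        m = hmm.GaussianHMM(n_components=self.n_states,
                            covariance_type="diag")
        m.fit(X, lengths)
        self.models[command] = m

    def predict(self, mfcc):
        return max(self.models,
                   key=lambda c: self.models[c].score(mfcc))
```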

주파수 와핑을 이용한 감정에 강인한 음성 인식 학습 방법 (A Training Method for Emotionally Robust Speech Recognition using Frequency Warping)

  • 김원구
    • 한국지능시스템학회논문지 / Vol. 20, No. 4 / pp.528-533 / 2010
  • This paper studies a training method for speech recognition systems that are less affected by changes in the speaker's emotional state. Using a speech database containing various emotions, we first examined how emotional variation affects speech signals and recognizer performance: when emotional speech is fed to a recognizer trained on neutral speech, the emotion-induced changes in the speech degrade recognition performance. We observed that the speaker's vocal tract length varies with emotion, and that this variation is one cause of the degradation. We propose a training method that incorporates this variation through frequency warping, producing a recognizer robust to emotional change. In isolated-word recognition experiments using HMMs, the proposed training method reduced errors on emotional data by 28.4% compared with the conventional method. (A simple warping routine is sketched below.)
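
A simple linear frequency-warping routine of the vocal-tract-length flavor described above, usable to synthesize warped training spectra; the interpolation scheme and warp factors are assumptions, not the paper's method.

```python
import numpy as np

def warp_spectrum(mag, alpha):
    """Linearly warp a 1-D magnitude spectrum by factor alpha,
    emulating a vocal-tract-length change for data augmentation."""
    n = len(mag)
    src = np.clip(np.arange(n) * alpha, 0, n - 1)  # warped bin positions
    return np.interp(src, np.arange(n), mag)

# Augment training data with several warp factors around 1.0
# (values are illustrative).
warp_factors = [0.92, 0.96, 1.0, 1.04, 1.08]
```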

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2004년도 춘계학술대회 학술발표 논문집, Vol. 14, No. 1 / pp.470-473 / 2004
  • In this paper, speech feature parameters for emotion recognition from speech signals are compared. For this purpose, a corpus of emotional speech, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the mean, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with and without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. A vector-quantization-based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the VQ-based recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification. (A toy VQ classifier is sketched below.)
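
A toy VQ-based emotion classifier: one codebook per emotion is trained on MFCC frames, and the emotion whose codebook gives the lowest average quantization distortion wins. The codebook size is an assumption.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

class VQEmotionRecognizer:
    """Minimum-distortion VQ classification over per-frame MFCCs,
    a sketch of the paper's VQ-based recognizer."""
    def __init__(self, codebook_size=32):
        self.k = codebook_size
        self.codebooks = {}

    def fit(self, emotion, mfcc_frames):
        cb, _ = kmeans(mfcc_frames.astype(float), self.k)
        self.codebooks[emotion] = cb

    def predict(self, mfcc_frames):
        def distortion(cb):
            _, dists = vq(mfcc_frames.astype(float), cb)
            return dists.mean()          # average quantization error
        return min(self.codebooks,
                   key=lambda e: distortion(self.codebooks[e]))
```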
