• Title/Abstract/Keyword: Speech recognition systems

358 results (processing time: 0.028 s)

On the Evaluation of Speech Recognition Systems

  • 유하진;김동현;육동석
    • The Korean Society of Phonetic Sciences: Proceedings of the 2005 Autumn Conference / pp.201-206 / 2005
  • We present a survey of evaluation methods for speech recognition technology and propose a procedure for evaluating Korean speech recognition systems. Various evaluation campaigns are currently conducted by NIST and ELDA every year; we introduce these activities and then propose an evaluation procedure of our own. In designing the procedure, we consider the characteristics of the Korean language as well as trends in the Korean speech technology industry.

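The standard metric in the evaluation campaigns surveyed here is the word error rate (WER): the word-level edit distance between the reference transcript and the recognizer's hypothesis, divided by the reference length. A minimal sketch (not from the paper itself):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

NIST-style evaluations report exactly this quantity, usually as a percentage over a whole test set rather than per utterance.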

A Study on Design and Implementation of an Embedded System for Speech Recognition Processing

  • Kim, Jung-Hoon;Kang, Sung-In;Ryu, Hong-Suk;Lee, Sang-Bae
    • Journal of the Korean Institute of Intelligent Systems / Vol. 14, No. 2 / pp.201-206 / 2004
  • This study attempted to develop a speech recognition module applied to a wheelchair for the physically handicapped. In the proposed module, a TMS320C32 was used as the main processor, and 12th-order mel-cepstrum features were applied in the pre-processing step to increase the recognition rate in noisy environments. DTW (Dynamic Time Warping) was used for the speaker-dependent recognition part and proved to give excellent results. To utilize this algorithm more efficiently, the reference data was compressed to 1/12 of its original size using vector quantization so as to reduce memory usage. The paper also addresses the diverse techniques (end-point detection, DMA processing, etc.) required to run the speech recognition system in real time.
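The DTW matching described above can be sketched as follows; `dtw_distance` and `recognize` are illustrative names, the feature sequences stand in for the 12th-order mel-cepstrum frames, and the vector quantization of the reference templates is omitted:

```python
import numpy as np

def dtw_distance(ref, test):
    """Dynamic Time Warping distance between two feature sequences.

    ref, test: 2-D arrays of shape (frames, coefficients),
    e.g. 12th-order mel-cepstrum vectors per frame.
    """
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])  # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],          # insertion
                                 cost[i, j - 1],          # deletion
                                 cost[i - 1, j - 1])      # match
    return cost[n, m]

def recognize(templates, test):
    """Pick the reference word whose template is closest under DTW."""
    return min(templates, key=lambda w: dtw_distance(templates[w], test))
```

Speaker-dependent recognition of this kind stores one template per vocabulary word and word speaker, which is why compressing the references matters on an embedded target.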

Rule-based Named Entity (NE) Recognition from Speech

  • 김지환
    • Malsori (Journal of the Korean Society of Phonetic Sciences) / No. 58 / pp.45-66 / 2006
  • In this paper, a rule-based (transformation-based) NE recognition system is proposed. The system uses Brill's rule inference approach. Its performance is compared with that of IdentiFinder, one of the most successful stochastic systems. In the baseline case (no punctuation and no capitalisation), both systems show almost equal performance. They also perform similarly when additional information such as punctuation, capitalisation, and name lists is available. The performance of both systems degrades linearly with the number of speech recognition errors, and their rates of degradation are almost equal. These results show that automatic rule inference is a viable alternative to the HMM-based approach to NE recognition, while retaining the advantages of a rule-based approach.

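A transformation-based tagger of the kind the paper builds on starts from an initial tagging and applies an ordered list of learned rewrite rules. A simplified sketch with an assumed rule format (a left-neighbour word triggers a tag change), not the paper's actual rule templates:

```python
def apply_rules(words, tags, rules):
    """Apply Brill-style transformation rules in order.

    Each rule rewrites a word's tag when its left neighbour matches:
    (left_word, old_tag, new_tag).
    """
    tags = list(tags)
    for left_word, old_tag, new_tag in rules:
        for i in range(1, len(words)):
            if words[i - 1].lower() == left_word and tags[i] == old_tag:
                tags[i] = new_tag
    return tags
```

In Brill's inference procedure, the rule list itself is learned greedily: at each step the candidate rule that most reduces tagging errors on the training data is appended.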

An MFCC-based CELP Speech Coder for Server-based Speech Recognition in Network Environments

  • 이길호;윤재삼;오유리;김홍국
    • Malsori (Journal of the Korean Society of Phonetic Sciences) / No. 54 / pp.27-43 / 2005
  • Existing standard speech coders provide high-quality speech communication, but they degrade the performance of speech recognition systems that operate on the reconstructed speech. The main cause of the degradation is that the spectral envelope parameters in speech coding are optimized for speech quality rather than for recognition performance. For example, mel-frequency cepstral coefficients (MFCCs) are generally known to give better recognition performance than the linear prediction coefficients (LPCs) typically used in speech coding. In this paper, we propose a speech coder that uses MFCCs instead of LPCs to improve the performance of a server-based speech recognition system in network environments. The main challenge in using MFCCs, however, is developing efficient MFCC quantization at a low bit rate. First, we exploit the interframe correlation of MFCCs, which leads to predictive quantization of the MFCC vectors. Second, a safety-net scheme is proposed to make the MFCC-based coder robust to channel errors. The result is an 8.7 kbps MFCC-based CELP coder. A PESQ test shows that the proposed coder has speech quality comparable to 8 kbps G.729, while speech recognition using the proposed coder outperforms recognition using G.729.

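The interframe prediction exploited by the coder can be sketched as follows: the previous quantized MFCC frame predicts the current one, and only the prediction residual is quantized. The uniform scalar quantizer and the prediction coefficient `alpha` below are illustrative assumptions, not the paper's actual codebooks or bit allocation:

```python
import numpy as np

def quantize(x, step=0.5):
    """Toy uniform scalar quantizer standing in for the paper's codebooks."""
    return np.round(x / step) * step

def predictive_encode(mfcc_frames, alpha=0.9, step=0.5):
    """Quantize MFCC frames by coding first-order prediction residuals.

    alpha: prediction coefficient (assumed value).
    Returns the quantized residuals and the decoder-side reconstruction.
    """
    pred = np.zeros(mfcc_frames.shape[1])
    residuals, recon = [], []
    for frame in mfcc_frames:
        r = quantize(frame - alpha * pred, step)  # code only what prediction misses
        frame_hat = alpha * pred + r              # decoder reconstruction
        residuals.append(r)
        recon.append(frame_hat)
        pred = frame_hat                          # predict from the *quantized* frame
    return np.array(residuals), np.array(recon)
```

Because consecutive MFCC frames are strongly correlated, the residuals have a much smaller dynamic range than the raw coefficients, which is what makes low-bit-rate quantization feasible; a safety-net mode would fall back to non-predictive coding when channel errors corrupt the predictor state.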

User-customized Interaction using both Speech and Face Recognition

  • 김성일
    • Proceedings of the 2007 Spring Conference of the Korea Fuzzy and Intelligent Systems Society, Vol. 17, No. 1 / pp.397-400 / 2007
  • In this paper, we discuss user-customized interaction for intelligent home environments. The interactive system is based on integrated speech and face recognition techniques. As the essential modules, speech recognition and synthesis provide the basic interaction between the user and the proposed system. In the experiments, a real-time speech recognizer based on the HM-Net (Hidden Markov Network) was incorporated into the integrated system, and face identification was adopted to customize the home environment for a specific user. The evaluation showed that the proposed system was easy to use in intelligent home environments, even though the speech recognizer did not achieve satisfactory performance owing to the noisy conditions.


Pre-Processing for Performance Enhancement of Speech Recognition in Digital Communication Systems

  • 서진호;박호종
    • The Journal of the Acoustical Society of Korea / Vol. 24, No. 7 / pp.416-422 / 2005
  • Speech recognition in digital communication systems suffers severe performance degradation due to the distortion of the speech signal introduced by speech codecs. This paper analyzes the spectral distortion caused by speech codecs and proposes a pre-processing method that improves recognition performance by compensating for the distorted frequency information. The distortion introduced by three widely used standard codecs, IS-127 EVRC, ITU G.729 CS-ACELP, and IS-96 QCELP, is analyzed, and a compensation method that can be applied to all of the codecs in common is developed. Applying the proposed compensation method to each of the three codecs yielded a recognition-rate improvement of up to 15.6% over recognition on the distorted speech.

The Performance Improvement of Speech Recognition System based on Stochastic Distance Measure

  • Jeon, B.S.;Lee, D.J.;Song, C.K.;Lee, S.H.;Ryu, J.W.
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 4, No. 2 / pp.254-258 / 2004
  • In this paper, we propose a robust speech recognition system for noisy environments. Since the presence of noise severely degrades recognition performance, it is important to design recognition methods that are robust to noise. The proposed method adopts a new distance measure based on stochastic probability instead of the conventional minimum-error measure. To evaluate the proposed method, we compared it with the conventional distance measure on ten isolated Korean digits with car noise. The proposed method showed a better recognition rate than the conventional distance measure across the various car-noise environments.

A Comparative Study of Voice Activity Detection Algorithms in Adverse Environments

  • 양경철;육동석
    • The Korean Society of Phonetic Sciences: Proceedings of the 2006 Spring Conference / pp.45-48 / 2006
  • As speech recognition systems are used in many emerging applications, robust performance under extremely noisy conditions becomes more important. Voice activity detection (VAD) is regarded as one of the important factors for robust speech recognition. In this paper, we survey conventional VAD algorithms and analyze the strengths and weaknesses of each.

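As a point of reference, the simplest conventional VAD covered by such comparisons is a frame-energy threshold detector; the frame length and the relative threshold below are illustrative values, not taken from the paper:

```python
import numpy as np

def energy_vad(signal, frame_len=160, threshold_db=-30.0):
    """Mark each frame as speech (True) or silence (False) by log energy.

    frame_len: samples per frame (160 = 10 ms at 16 kHz, assumed).
    threshold_db: threshold relative to the loudest frame's energy.
    """
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames ** 2, axis=1) + 1e-12      # avoid log(0)
    log_e = 10.0 * np.log10(energy)
    return log_e > (log_e.max() + threshold_db)       # relative threshold
```

Energy thresholds fail at low SNR, which is exactly why the more elaborate statistical-model and long-term-feature VADs compared in such studies exist.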

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 8, No. 2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that combine such information in a similar way. In this paper, we recognize five emotions (neutral, happiness, anger, surprise, sadness) from the speech signal and the facial image, and propose a multimodal method that fuses the two results into a single emotion decision. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal stage fuses the two results using a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed fusion method, the average recognition rate is 70.4%, showing that decision fusion offers a better recognition rate than either the facial image or the speech signal alone.
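The decision-fusion step can be sketched as follows: each modality's per-emotion score passes through an S-type (sigmoid) membership function, and the memberships are combined with a weight favouring the stronger modality. The parameters and weighting below are illustrative assumptions, not the paper's exact design:

```python
import math

EMOTIONS = ["neutral", "happiness", "anger", "surprise", "sadness"]

def s_membership(score, a=0.5, b=10.0):
    """S-type (sigmoid) membership: maps a raw classifier score to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-b * (score - a)))

def fuse(speech_scores, face_scores, w_speech=0.6):
    """Fuse per-emotion scores from the two modalities.

    w_speech: weight favouring the speech modality, which performed
    better (63% vs 53.4%) in the experiments above. Value is assumed.
    """
    fused = {}
    for emo in EMOTIONS:
        mu_s = s_membership(speech_scores[emo])
        mu_f = s_membership(face_scores[emo])
        fused[emo] = w_speech * mu_s + (1.0 - w_speech) * mu_f
    return max(fused, key=fused.get)
```

Because the sigmoid squashes scores toward 0 or 1, a confident decision from one modality can dominate a weak, ambiguous decision from the other, which is the intended effect of decision-level (as opposed to feature-level) fusion.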

Speech Estimators Based on Generalized Gamma Distribution and Spectral Gain Floor Applied to Automatic Speech Recognition

  • 김형국;신동;이진호
    • The Journal of the Korea Institute of Intelligent Transport Systems / Vol. 8, No. 3 / pp.64-70 / 2009
  • This paper proposes a speech enhancement method based on a generalized Gamma distribution to obtain noise-robust speech recognition performance. For high-quality enhancement, the proposed approach combines speech estimation based on the generalized Gamma distribution and a spectral gain floor with noise estimation derived from recursively averaged spectral values of the minimum noise components, and the enhanced speech is then fed to the recognizer. The performance of the proposed enhancement methods, based on the spectral component, spectral amplitude, and log spectral amplitude, was measured by applying them to speech recognition in noisy environments.
