• Title/Summary/Keyword: 음성인식률 (speech recognition rate)

Search Result 549, Processing Time 0.023 seconds

CHMM Modeling using LMS Algorithm for Continuous Speech Recognition Improvement (연속 음성 인식 향상을 위해 LMS 알고리즘을 이용한 CHMM 모델링)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.10 no.11
    • /
    • pp.377-382
    • /
    • 2012
  • In this paper, an echo-noise-robust CHMM learning model using an echo-cancellation average-estimator LMS algorithm is proposed so that the model can adapt to changing echo noise. To improve continuous speech recognition performance, CHMM models were constructed using the echo-noise-cancellation average-estimator LMS algorithm. As a result, the SNR of speech obtained by removing the changing environmental noise improved by an average of 1.93 dB, and the recognition rate improved by 2.1%.
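The abstract does not spell out the average-estimator variant, but the standard LMS update it builds on is well defined. A minimal sketch with a toy echo path (the signal names and the simple attenuation echo are illustrative assumptions, not the paper's setup):

```python
import random

def lms_cancel(reference, noisy, taps=4, mu=0.05):
    """Standard LMS adaptive filter: estimate the echo component of
    `noisy` from the `reference` signal and subtract it."""
    w = [0.0] * taps
    out = []
    for n in range(len(noisy)):
        # sliding window of the reference signal (zero-padded at the start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # echo estimate
        e = noisy[n] - y                                # echo-cancelled output
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        out.append(e)
    return out

random.seed(0)
ref = [random.uniform(-1, 1) for _ in range(2000)]
echo = [0.5 * r for r in ref]   # toy echo path: plain attenuation
cleaned = lms_cancel(ref, echo)
```

As the weights converge, the residual in `cleaned` shrinks, which is the "echo cancellation" the abstract relies on before CHMM training.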

Effective Recognition of Velopharyngeal Insufficiency (VPI) Patient's Speech Using DNN-HMM-based System (DNN-HMM 기반 시스템을 이용한 효과적인 구개인두부전증 환자 음성 인식)

  • Yoon, Ki-mu;Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.1
    • /
    • pp.33-38
    • /
    • 2019
  • This paper proposes an effective method for recognizing VPI patients' speech employing a DNN-HMM-based speech recognition system, and evaluates its recognition performance against a GMM-HMM-based system. The proposed method employs speaker adaptation to improve VPI speech recognition. To make effective use of the small amount of VPI speech available for model adaptation, this paper proposes using simulated VPI speech to generate a prior model for speaker adaptation and to selectively train the weight matrices of the DNN. We also apply Linear Input Network (LIN)-based model adaptation to the DNN model. The proposed speaker adaptation method brings a 2.35% improvement in average accuracy over the GMM-HMM-based ASR system. The experimental results demonstrate that the proposed DNN-HMM-based speech recognition system is effective for VPI speech with small-sized speech data, compared to the conventional GMM-HMM system.
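The Linear Input Network (LIN) mentioned above is a per-speaker linear transform trained on the input features while the recognizer itself stays fixed. A minimal sketch under toy assumptions (2-D features, a speaker whose features are simply scaled versions of the canonical ones, plain gradient descent):

```python
def lin_adapt(pairs, dim, lr=0.1, epochs=200):
    """Fit a Linear Input Network: a linear map applied to a speaker's
    input features, trained to move them toward canonical features.
    `pairs` holds (speaker_feature, canonical_feature) vectors."""
    # start from the identity map
    W = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for _ in range(epochs):
        for x, target in pairs:
            y = [sum(W[i][j] * x[j] for j in range(dim)) for i in range(dim)]
            err = [yi - ti for yi, ti in zip(y, target)]
            # gradient step on squared error
            for i in range(dim):
                for j in range(dim):
                    W[i][j] -= lr * err[i] * x[j]
    return W

# toy speaker: features are the canonical ones scaled by 2,
# so the learned map should approach 0.5 * identity
pairs = [([2.0, 0.0], [1.0, 0.0]), ([0.0, 2.0], [0.0, 1.0])]
W = lin_adapt(pairs, dim=2)
```

In the paper's setting the transform feeds a DNN acoustic model; here the point is only that adaptation touches a small linear layer rather than the whole network, which suits small adaptation sets like VPI speech.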

Noise filtering method based on voice frequency correlation to increase STT efficiency (STT 효율 증대를 위한 음성 주파수 correlation 기반 노이즈 필터링 방안)

  • Lim, Jiwon;Hwang, Yonghae;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.176-179
    • /
    • 2021
  • Speech recognition technology is currently used in various fields such as AI assistants, automated telephone response, and navigation, and it requires Speech-To-Text (STT) technology, which converts speech signals into text in order to deliver human speech to a device. Most early STT technology was based on the probabilistic Hidden Markov Model (HMM); with the development of deep learning, using Recurrent Neural Network (RNN) and Deep Neural Network (DNN) techniques together with the HMM reduced word recognition errors and achieved a 20% performance improvement over earlier systems. However, recognition accuracy varies under interference from noisy environments such as multiple speakers, everyday noise, or music. To solve this problem, this paper proposes a method to raise the STT recognition rate by extracting the speech signal, analyzing its frequency components, and distinguishing the speech signal from noise through a frequency-domain correlation operation between audio signals, so that the voice signal can be fed into STT more efficiently.
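A minimal sketch of the core idea in this abstract, correlating magnitude spectra of audio frames to judge how closely a frame matches a voice reference; the frame data, naive DFT, and Pearson correlation below are illustrative assumptions, not the paper's implementation:

```python
import cmath, math

def mag_spectrum(frame):
    """Naive DFT magnitude spectrum (fine for short frames)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def correlation(a, b):
    """Pearson correlation between two magnitude spectra."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# toy frames: a 5-cycle tone standing in for voice, the same tone at
# lower gain, and a 13-cycle tone standing in for an interfering signal
n = 64
tone5  = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
tone5b = [0.8 * math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
tone13 = [math.sin(2 * math.pi * 13 * t / n) for t in range(n)]

same = correlation(mag_spectrum(tone5), mag_spectrum(tone5b))
diff = correlation(mag_spectrum(tone5), mag_spectrum(tone13))
```

Frames whose spectra correlate strongly with the voice reference would be kept for STT; weakly correlated frames would be treated as noise.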


A Study on The Improvement of Emotion Recognition by Gender Discrimination (성별 구분을 통한 음성 감성인식 성능 향상에 대한 연구)

  • Cho, Youn-Ho;Park, Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.4
    • /
    • pp.107-114
    • /
    • 2008
  • In this paper, we constructed a speech emotion recognition system that classifies four emotions from speech - neutral, happy, sad, and anger - based on male/female gender discrimination. The proposed system first distinguishes whether a queried speech is male or female, and then improves performance by using feature vectors optimized separately for each gender in the emotion classification. As the emotion feature vector, this paper adopts ZCPA (Zero Crossings with Peak Amplitudes), well known in the speech recognition area for its noise-robust characteristics, and the features are optimized using the SFS method. For emotion pattern classification, k-NN and SVM classifiers are compared experimentally. Simulation results show the proposed system to be highly efficient for speech emotion classification, with about 85.3% accuracy over the four emotion states. This promises the use of the proposed system in various applications such as call centers, humanoid robots, and ubiquitous computing.
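Of the two classifiers compared above, k-NN is simple enough to sketch; the toy 2-D vectors below merely stand in for the paper's SFS-optimized ZCPA features:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Plain k-NN: majority label among the k nearest training vectors."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# hypothetical 2-D "feature vectors" for two of the four emotion classes
train = [([0.1, 0.2], "neutral"), ([0.2, 0.1], "neutral"),
         ([0.9, 0.8], "anger"),   ([0.8, 0.9], "anger")]
label = knn_classify(train, [0.85, 0.85])
```

With gender-specific feature sets, the same classifier would simply be run with the feature vector chosen for the detected gender.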

Error Correction for Korean Speech Recognition using a LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won;Lee, A-Hyeon;Chae, Ye-Jin;Park, Su-Hyun;Kang, Yu-Jin;Lee, Soowon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.10
    • /
    • pp.1-7
    • /
    • 2021
  • Recently, most research on correcting speech recognition errors has been based on English, so there is not enough research on Korean speech recognition. Compared to English, however, Korean speech recognition produces many errors due to linguistic characteristics of Korean such as fortis and liaison, so research targeted at Korean is needed. Furthermore, earlier works primarily focused on edit-distance algorithms and syllable-restoration rules, making it difficult to correct the error types caused by Korean fortis and liaison. In this paper, we propose a context-sensitive post-processing model for speech recognition that uses an LSTM-based sequence-to-sequence model with the Bahdanau attention mechanism to correct Korean speech recognition errors caused by pronunciation. Experiments showed that the model improved recognition performance from 64% to 77% for fortis, from 74% to 90% for liaison, and from 69% to 84% on average. Based on these results, it seems possible to apply the proposed model to real-world applications based on speech recognition.
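The LSTM sequence-to-sequence model is too large to sketch here, but the edit-distance machinery that earlier post-processing work relied on can be shown in a few lines (a generic Levenshtein implementation, not the cited works' exact rules):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# toy recognition error: fortis (교→꾜) and a vowel confusion (갔→같)
d = edit_distance("학교에 갔다", "학꾜에 같다")
```

Edit distance measures how far a hypothesis is from the reference, but by itself it cannot say which Korean pronunciation rule produced the error, which is the gap the sequence-to-sequence model addresses.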

Crossword Game Using Speech Technology (음성기술을 이용한 십자말 게임)

  • Yu, Il-Soo;Kim, Dong-Ju;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.213-218
    • /
    • 2003
  • In this paper, we implement a crossword game operated by speech. The CAA (Cross Array Algorithm) produces the crossword array randomly and automatically using a domain dictionary; for producing the array, we constructed seven domain dictionaries. The crossword game can be operated by mouse and keyboard and also by speech. For the speech user interface, we use a speech recognizer and a speech synthesizer, which provide a more comfortable interface to the user. The efficiency of the CAA was evaluated by measuring the processing time for producing the crossword array and the generation ratio of the array. The processing time was about 10 ms and the generation ratio was about 50%. Also, the recognition rates were 95.5%, 97.6%, and 96.2% for the window sizes of "$7{\times}7$", "$9{\times}9$", and "$11{\times}11$", respectively.

A Study on Korean Digit Recognition Using Syllable Based Neural Network (음절 기반 신경망을 이용한 한국어 숫자음 인식에 관한 연구)

  • Kum Ji Soo;Lee Hyon Soo
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.78-81
    • /
    • 1999
  • This paper proposes a syllable-based neural-network speech recognition method that uses neural networks, which imitate human information processing, together with the compositional characteristics of Korean syllables. In the proposed method, a threshold ratio is defined to separate the initial, medial, and final sounds (초성, 중성, 종성) that make up a Korean syllable, and features from partial segments of the separated syllable are used as the feature patterns for training and recognition, reducing the overall number of processing stages in the speech recognition system. In a performance evaluation on Korean digit recognition with male and female speakers in their twenties, a recognition rate of 96.5% was obtained in the speaker-dependent case and 93% in the speaker-independent case.
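The paper segments the acoustic signal, but the same initial/medial/final (초성/중성/종성) syllable structure can be illustrated on text through the Unicode Hangul syllable layout; this is a sketch of the structure only, not the paper's method:

```python
CHO  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    """Split one precomposed Hangul syllable into (cho, jung, jong)
    using the Unicode Hangul syllable block (U+AC00..U+D7A3), where
    code = cho * 588 + jung * 28 + jong."""
    code = ord(syllable) - 0xAC00
    assert 0 <= code < 11172, "not a precomposed Hangul syllable"
    return (CHO[code // (21 * 28)], JUNG[(code // 28) % 21], JONG[code % 28])

cho, jung, jong = decompose("한")
```

The acoustic counterpart in the paper is segmenting a syllable's waveform into these three regions by a threshold ratio and using features from partial segments of each.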


Feature Extraction by Optimizing the Cepstral Resolution of Frequency Sub-bands (주파수 부대역의 켑스트럼 해상도 최적화에 의한 특징추출)

  • 지상문;조훈영;오영환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.1
    • /
    • pp.35-41
    • /
    • 2003
  • Feature vectors for conventional speech recognition are usually extracted over the full frequency band, so each sub-band contributes equally to the final recognition result. In this paper, feature vectors are extracted independently in each sub-band, and the cepstral resolution of each sub-band feature is controlled for optimal speech recognition. For this purpose, cepstral vectors of different dimension are extracted for each sub-band, following the multi-band approach of extracting features independently per sub-band. Speech recognition rate and clustering quality are suggested as the criteria for finding the optimal combination of sub-band feature dimensions. In connected-digit recognition experiments on the TIDIGITS database, the proposed method gave a string accuracy of 99.125%, percent correct of 99.775%, and percent accuracy of 99.705%, corresponding to relative error-rate reductions of 38%, 32%, and 37% over the baseline full-band feature vector.
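A minimal sketch of per-sub-band cepstra with a different number of coefficients per band, which is the "cepstral resolution" being optimized; the band split, coefficient counts, and toy spectrum are illustrative assumptions, not the paper's configuration:

```python
import math

def dct2(x, n_coef):
    """DCT-II of a sequence, keeping only the first n_coef coefficients."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n)) for k in range(n_coef)]

def subband_cepstra(log_spectrum, bands, coefs):
    """Extract cepstral coefficients independently per frequency band.
    `bands` are (start, end) bin ranges; `coefs` sets how many DCT
    coefficients to keep in each band (the per-band resolution)."""
    return [dct2(log_spectrum[s:e], c) for (s, e), c in zip(bands, coefs)]

# toy 16-bin log-magnitude spectrum, split into two bands with
# 4 coefficients for the low band and 2 for the high band
spec = [math.log(1.0 + b) for b in range(16)]
feats = subband_cepstra(spec, bands=[(0, 8), (8, 16)], coefs=[4, 2])
```

The paper's search is over combinations like `coefs`, scored by recognition rate and clustering quality.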

Effective Feature Extraction in the Individual frequency Sub-bands for Speech Recognition (음성인식을 위한 주파수 부대역별 효과적인 특징추출)

  • 지상문
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.4
    • /
    • pp.598-603
    • /
    • 2003
  • This paper presents a sub-band feature extraction approach in which the feature extraction method for each frequency sub-band is chosen in terms of speech recognition accuracy. As in the multi-band paradigm, features are extracted independently in frequency sub-regions of the speech signal. Since the spectral shape is well structured in the low-frequency region, the all-pole model is effective for feature extraction there; in the high-frequency region, a nonparametric transform, the discrete cosine transform, is effective for extracting the cepstrum. Using this sub-band-specific feature extraction, the linguistic information in the individual frequency sub-bands can be extracted effectively for automatic speech recognition. The validity of the proposed method is shown by comparing speech recognition results for our method with those obtained using a full-band feature extraction method.

A Speech Translation System for Hotel Reservation (호텔예약을 위한 음성번역시스템)

  • 구명완;김재인;박상규;김우성;장두성;홍영국;장경애;김응인;강용범
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.24-31
    • /
    • 1996
  • In this paper, we present KT-STS (Korea Telecom Speech Translation System), a speech translation system for hotel reservation. KT-STS is a speech-to-speech translation system that translates a spoken utterance in Korean into one in Japanese. The system has been designed around the task of hotel reservation (dialogues between a Korean customer and a hotel reservation desk in Japan). It consists of a Korean speech recognition system, a Korean-to-Japanese machine translation system, and a Korean speech synthesis system. The Korean speech recognizer is an HMM (Hidden Markov Model)-based speaker-independent continuous speech recognizer with a vocabulary of about 300 words. A bigram language model is used as the forward language model, and a dependency grammar is used as the backward language model. For machine translation, we use dependency grammar and the direct transfer method. The Korean speech synthesizer uses demiphones as the synthesis unit and a method of periodic waveform analysis and reallocation. KT-STS runs in nearly real time on a SPARC20 workstation with one TMS320C30 DSP board. In speech recognition tests we achieved a word recognition rate of 94.68% and a sentence recognition rate of 82.42%. On Korean-to-Japanese translation tests, we achieved a translation success rate of 100%. We also carried out an international joint experiment in which our system was connected over a leased line with a system developed by KDD in Japan.
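Among the components listed, the forward bigram language model is easy to sketch; the toy corpus and plain maximum-likelihood estimate below are assumptions for illustration, not the system's actual training data or smoothing:

```python
from collections import Counter

def bigram_model(sentences):
    """Maximum-likelihood bigram probabilities P(w2 | w1) from a corpus,
    with sentence-boundary markers <s> and </s>."""
    pairs, unigrams = Counter(), Counter()
    for sent in sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(words[:-1])          # every word that has a successor
        pairs.update(zip(words, words[1:]))  # adjacent word pairs
    return lambda w1, w2: pairs[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

# hypothetical hotel-reservation utterances (English stand-ins)
corpus = ["i want a room", "i want a single room", "book a room"]
p = bigram_model(corpus)
```

In the recognizer, such forward probabilities constrain the search over word sequences, while the dependency grammar re-scores candidates as the backward model.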
