• Title/Summary/Keyword: Speech recognition model

Semantic Ontology Speech Recognition Performance Improvement using ERB Filter (ERB 필터를 이용한 시맨틱 온톨로지 음성 인식 성능 향상)

  • Lee, Jong-Sub
    • Journal of Digital Convergence / v.12 no.10 / pp.265-270 / 2014
  • Existing speech recognition algorithms have trouble distinguishing the order of vocabulary items, voice detection loses accuracy as the noise environment changes, and retrieval systems return results that mismatch the user's request because keywords carry multiple meanings. In this article, we propose an event-based semantic ontology inference model whose front end extracts speech recognition features using an ERB filter. The proposed model was evaluated on train-station and train noise: denoising was performed on signals at SNRs of -10 dB and -5 dB, and distortion measurements confirmed performance improvements of 2.17 dB and 1.31 dB, respectively (a minimal filterbank sketch follows).
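
The abstract does not specify the ERB filter design; a common realization is a gammatone filterbank with ERB-spaced center frequencies (the Glasberg & Moore approximation). A minimal sketch in Python, assuming SciPy's `gammatone` filter design is an acceptable stand-in; the band count and frequency range are illustrative:

```python
import numpy as np
from scipy.signal import gammatone, lfilter

def erb_center_freqs(f_low, f_high, n):
    """ERB-rate-spaced center frequencies (Glasberg & Moore, 1990)."""
    erb_rate = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)   # Hz -> ERB rate
    erb_inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 4.37e-3  # ERB rate -> Hz
    return erb_inv(np.linspace(erb_rate(f_low), erb_rate(f_high), n))

def erb_filterbank(x, fs, n_bands=32):
    """Pass signal `x` through a bank of 4th-order gammatone IIR filters."""
    bands = []
    for fc in erb_center_freqs(100.0, 0.9 * fs / 2.0, n_bands):
        b, a = gammatone(fc, 'iir', fs=fs)
        bands.append(lfilter(b, a, x))
    return np.stack(bands)  # shape: (n_bands, n_samples)
```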

An Implementation of Speech Recognition System for Car's Control (자동차 제어용 음성 인식시스템 구현)

  • 이광석;김현덕
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.3 / pp.451-458 / 2001
  • In this paper, we propose a real-time speech control system for the various control devices in a car. The system detects start and end points in the A/D-converted speech data (a sketch of a baseline detector follows) and recognizes commands with the one-pass dynamic programming method; the results are shown on a monitor and the control data are passed to the device interfaces. The HMM models continuous control speech, consisting of command words and digits, for controlling the various in-car devices. The recognition rate averages 97.3% for word and command speech and 96.3% for digit speech.
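
The paper does not detail its start/end-point detector; a common baseline is short-time energy thresholding. A minimal sketch, with the frame size, hop, and the -35 dB relative threshold as illustrative assumptions:

```python
import numpy as np

def detect_endpoints(x, fs, frame_ms=25, hop_ms=10, thresh_db=-35.0):
    """Return (start, end) sample indices of the detected speech region,
    using short-time log energy relative to the loudest frame."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + (len(x) - frame) // hop
    energy = np.array([np.sum(x[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    log_e = 10.0 * np.log10(energy + 1e-12)
    active = np.where(log_e > log_e.max() + thresh_db)[0]
    if active.size == 0:
        return 0, len(x)  # no frame crossed the threshold
    return active[0] * hop, active[-1] * hop + frame
```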

Improvement of Speech Reconstructed from MFCC Using GMM (GMM을 이용한 MFCC로부터 복원된 음성의 개선)

  • Choi, Won-Young;Choi, Mu-Yeol;Kim, Hyung-Soon
    • MALSORI / no.53 / pp.129-141 / 2005
  • The goal of this research is to improve the quality of reconstructed speech in a Distributed Speech Recognition (DSR) system. For the extended DSR, we estimate the variable Maximum Voiced Frequency (MVF) from the Mel-Frequency Cepstral Coefficients (MFCC) using a Gaussian Mixture Model (GMM), to implement a realistic harmonic-plus-noise model for the excitation signal. For the standard DSR, we also make the voiced/unvoiced decision from the MFCC using a GMM, because pitch information is not available in that case (a sketch of this decision follows). A perceptual test reveals that speech reconstructed by the proposed method is preferred over speech reconstructed by the conventional methods.
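
A minimal sketch of a GMM-based voiced/unvoiced decision from MFCC frames, using scikit-learn as a stand-in for whatever GMM implementation the authors used; the component count and diagonal covariances are assumptions:

```python
from sklearn.mixture import GaussianMixture

def train_vuv_gmms(mfcc_voiced, mfcc_unvoiced, n_components=8):
    """Fit one GMM per class on labelled MFCC frames of shape (n_frames, n_ceps)."""
    gmm_v = GaussianMixture(n_components, covariance_type='diag').fit(mfcc_voiced)
    gmm_u = GaussianMixture(n_components, covariance_type='diag').fit(mfcc_unvoiced)
    return gmm_v, gmm_u

def voiced_mask(mfcc, gmm_v, gmm_u):
    """True for frames where the voiced GMM gives the higher log-likelihood."""
    return gmm_v.score_samples(mfcc) > gmm_u.score_samples(mfcc)
```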

On a robust text-dependent speaker identification over telephone channels (전화음성에 강인한 문장종속 화자인식에 관한 연구)

  • Jung, Eu-Sang;Choi, Hong-Sub
    • Speech Sciences / v.2 / pp.57-66 / 1997
  • This paper studies the effect of CMS (Cepstral Mean Subtraction), which compensates for some of the speech distortion caused by telephone channels, on the performance of a text-dependent speaker identification system. The system is based on VQ (Vector Quantization) and HMM (Hidden Markov Model) methods and uses LPC-Cepstrum and Mel-Cepstrum feature vectors extracted from speech transmitted through telephone channels, so the identification rates of the two features can be compared. The experimental results show that the Mel-Cepstrum parameter is superior to the LPC-Cepstrum, and that recognition performance improves by about 10% when the telephone channel is compensated with CMS (a minimal CMS sketch follows).
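
CMS itself is essentially a one-line operation: a stationary convolutional channel becomes an additive constant in the cepstral domain, so subtracting the per-utterance mean removes much of its effect. A minimal sketch:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """cepstra: array of shape (n_frames, n_coeffs). Removing the
    per-utterance mean from each cepstral dimension cancels the additive
    offset that a stationary telephone channel introduces."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```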

A Speech Recognition System based on a New Endpoint Estimation Method jointly using Audio/Video Informations (음성/영상 정보를 이용한 새로운 끝점추정 방식에 기반을 둔 음성인식 시스템)

  • 이동근;김성준;계영철
    • Journal of Broadcast Engineering / v.8 no.2 / pp.198-203 / 2003
  • We develop a method for estimating the endpoints of speech by jointly using the lip motion (visual speech) and the audio included in multimedia data, and propose a new speech recognition system (SRS) based on that method. The endpoints of noisy speech are estimated as follows: for each test word, endpoints are detected from the visual speech and from clean speech, and their difference is added to the visual-speech endpoints to estimate the endpoints for noisy speech. This endpoint (speech interval) estimation method is built into a new SRS, which differs from the conventional one in that each word model in the recognizer is given a speech interval estimated individually for the corresponding word rather than one shared interval (a sketch of the fusion rule follows). Simulation results show that the proposed method estimates the endpoints accurately regardless of the amount of noise and consequently achieves an 8% improvement in recognition rate.
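
A sketch of the endpoint-fusion rule as we read the abstract: the per-word offset between clean-audio and visual endpoints is measured on reference speech, then applied to the visual endpoints detected on the noisy test utterance. The function and variable names are assumptions:

```python
def estimate_noisy_endpoints(visual_test, visual_ref, clean_ref):
    """All arguments are (start, end) sample pairs for one word. The offset
    between clean-audio and visual endpoints, measured on reference data,
    is added to the visual endpoints found on the noisy test utterance."""
    d_start = clean_ref[0] - visual_ref[0]
    d_end = clean_ref[1] - visual_ref[1]
    return visual_test[0] + d_start, visual_test[1] + d_end
```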

GMM based Speaker Identification using Pitch Information (피치 정보를 이용한 GMM 기반의 화자 식별)

  • Park Taesun;Hahn Minsoo
    • MALSORI / no.47 / pp.121-129 / 2003
  • This paper describes the use of pitch information for speaker identification. The recognition system is GMM-based and uses a speech database of four connected Korean digits. The mean of the pitch period in the voiced sections of speech is shown to be useful for discriminating between speakers. Adding this feature to the Gaussian mixture model speaker identification system gave a marked improvement of up to 6% over the baseline Gaussian mixture model (a pitch-extraction sketch follows).
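
A minimal sketch of extracting the mean pitch period with a simple autocorrelation method; the paper does not specify its pitch tracker, and the frame size, search range, and voicing threshold here are illustrative assumptions:

```python
import numpy as np

def mean_pitch_period_ms(x, fs, frame_ms=40, fmin=60.0, fmax=400.0):
    """Mean pitch period (ms) over frames judged voiced by a crude
    autocorrelation peak test."""
    n = int(fs * frame_ms / 1000)
    lo, hi = int(fs / fmax), min(int(fs / fmin), n - 1)
    periods = []
    for i in range(0, len(x) - n, n):
        frame = x[i:i + n] * np.hanning(n)
        ac = np.correlate(frame, frame, mode='full')[n - 1:]  # lags 0..n-1
        if ac[0] <= 0.0:
            continue  # silent frame
        lag = lo + int(np.argmax(ac[lo:hi]))
        if ac[lag] / ac[0] > 0.3:  # crude voicing decision
            periods.append(1000.0 * lag / fs)
    return float(np.mean(periods)) if periods else 0.0
```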

Performance Improvement of Continuous Digits Speech Recognition using the Transformed Successive State Splitting and Demi-syllable pair (반음절쌍과 변형된 연쇄 상태 분할을 이용한 연속 숫자음 인식의 성능 향상)

  • Kim Dong-Ok;Park No-Jin
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.8 / pp.1625-1631 / 2005
  • This paper describes optimizations of a language model and an acoustic model that improve the recognition of Korean unit digits. Recognition errors in the language model are reduced by analyzing the grammatical features of Korean unit digits, from which an FSN is built with disyllable nodes. The acoustic model uses demi-syllable pairs to reduce the recognition errors caused by inaccurate phone and syllable segmentation due to monosyllables, short pronunciations, and coarticulation. We used the k-means clustering algorithm with transformed successive state splitting at the feature level for efficient modeling of the recognition units (a k-means sketch follows). In the experiments, the proposed language model raised the recognition rate by 10.5%; demi-syllable pairs in the acoustic model raised it by 12.5%, and transformed successive state splitting improved it by a further 1.5%.
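
A minimal sketch of the k-means step only, clustering frame-level feature vectors pooled for one recognition unit, with scikit-learn as a stand-in; the cluster count is an assumption, and the transformed successive state splitting itself is not reproduced here:

```python
from sklearn.cluster import KMeans

def cluster_state_features(features, n_clusters=4):
    """features: array of shape (n_frames, dim) for one recognition unit;
    returns the cluster centroids and the per-frame cluster labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    return km.cluster_centers_, km.labels_
```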

Visual analysis of attention-based end-to-end speech recognition (어텐션 기반 엔드투엔드 음성인식 시각화 분석)

  • Lim, Seongmin;Goo, Jahyun;Kim, Hoirin
    • Phonetics and Speech Sciences / v.11 no.1 / pp.41-49 / 2019
  • An end-to-end speech recognition model consisting of a single integrated neural network was recently proposed. The end-to-end model does not need several training steps, and its structure is easy to understand. However, it is difficult to understand how the model recognizes speech internally. In this paper, we visualize and analyze an attention-based end-to-end model to elucidate its internal mechanisms. We compare the acoustic model of a BLSTM-HMM hybrid system with the encoder of the end-to-end model, and visualize both using t-SNE to examine the differences between neural network layers (a visualization sketch follows). As a result, we delineate the difference between the acoustic model and the end-to-end encoder, and we additionally analyze the end-to-end decoder from a language model perspective. Finally, we find that improving the end-to-end model's decoder is necessary to achieve higher performance.
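
A minimal sketch of the t-SNE visualization step, assuming per-frame hidden activations and frame-level phone labels have already been extracted from a given layer; scikit-learn and matplotlib are stand-ins for the authors' tooling:

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_layer_tsne(activations, phone_labels, title):
    """activations: (n_frames, hidden_dim) from one network layer;
    phone_labels: (n_frames,) integer labels used only for coloring."""
    emb = TSNE(n_components=2, perplexity=30).fit_transform(activations)
    plt.scatter(emb[:, 0], emb[:, 1], c=phone_labels, s=4, cmap='tab20')
    plt.title(title)
    plt.show()
```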

Speech Recognition Model Based on CNN using Spectrogram (스펙트로그램을 이용한 CNN 음성인식 모델)

  • Won-Seog Jeong;Haeng-Woo Lee
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.4 / pp.685-692 / 2024
  • In this paper, we propose a new CNN model to improve the recognition of voice commands. The method computes a short-time Fourier transform (STFT) of the input signal to obtain a spectrogram image, then performs supervised multi-class learning on these images with a CNN. Converting the time-domain voice signal to the frequency domain expresses its characteristics well, so deep learning on the spectrogram images classifies commands effectively (a minimal sketch follows). To verify the performance of the proposed speech recognition system, a simulation program was written with the TensorFlow and Keras libraries, and a simulation experiment was performed. The experiment confirmed an accuracy of 92.5% with the proposed deep learning algorithm.
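
A minimal sketch of the STFT-spectrogram-plus-CNN pipeline in TensorFlow/Keras, which the paper names as its toolkit; the layer sizes, STFT parameters, and input shape are illustrative assumptions, not the paper's architecture:

```python
import tensorflow as tf

def to_spectrogram(wave, frame_length=256, frame_step=128):
    """Magnitude STFT of a 1-D waveform batch, with a channel axis for the CNN."""
    stft = tf.signal.stft(wave, frame_length=frame_length, frame_step=frame_step)
    return tf.abs(stft)[..., tf.newaxis]

def build_model(input_shape, n_commands):
    """Small CNN classifier over spectrogram 'images'."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(n_commands, activation='softmax'),
    ])

# Shapes assumed for 1 s of 16 kHz audio: 124 frames x 129 frequency bins.
model = build_model(input_shape=(124, 129, 1), n_commands=10)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```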

Primary Study for dialogue based on Ordering Chatbot

  • Kim, Ji-Ho;Park, JongWon;Moon, Ji-Bum;Lee, Yulim;Yoon, Andy Kyung-yong
    • Journal of Multimedia Information System / v.5 no.3 / pp.209-214 / 2018
  • Today is the era of artificial intelligence. With its development, machines have begun to emulate various human characteristics. A chatbot is one instance of such interactive artificial intelligence: a computer program that can conduct natural conversations with people. Chatbots have traditionally conversed in text, but the chatbot in this study evolves to perform commands based on speech recognition. For a chatbot to emulate human dialogue well, it must analyze each sentence correctly and extract an appropriate response; to accomplish this, the sentence is classified into three types: objects, actions, and preferences (a toy slot-classification sketch follows). This study shows how objects are analyzed and processed, and demonstrates the possibility of evolving from an elementary model to an advanced intelligent system. It evaluates whether a speech-recognition-based chatbot improves order-processing time compared to a text-based chatbot. Speech-recognition-based chatbots have the potential to automate customer service and reduce human effort.
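
A toy illustration of splitting an order sentence into object/action/preference slots by lexicon lookup; the lexicons and function names are invented for the example and are not from the paper:

```python
# Hypothetical slot lexicons for an ordering domain (not from the paper).
LEXICON = {
    "object": {"coffee", "latte", "sandwich"},
    "action": {"order", "cancel", "add"},
    "preference": {"large", "small", "iced", "hot"},
}

def classify_tokens(sentence):
    """Assign each known token in the sentence to its slot type."""
    slots = {"object": [], "action": [], "preference": []}
    for tok in sentence.lower().split():
        for slot, words in LEXICON.items():
            if tok in words:
                slots[slot].append(tok)
    return slots

print(classify_tokens("Order a large iced latte"))
# {'object': ['latte'], 'action': ['order'], 'preference': ['large', 'iced']}
```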