• Title/Summary/Keyword: Speech recognition model


Implementation of a Multimodal Controller Combining Speech and Lip Information (음성과 영상정보를 결합한 멀티모달 제어기의 구현)

  • Kim, Cheol;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.6
    • /
    • pp.40-45
    • /
    • 2001
  • In this paper, we implemented a multimodal system combining speech and lip information, and evaluated its performance. We designed a speech recognizer using speech information and a lip recognizer using image information, both based on an HMM recognition engine. As the combining method, we adopted late integration with a weighting ratio of 8:2 for speech and lip information. The constructed multimodal recognition system was ported to the DARC system and used to control Comdio of DARC; the interface between DARC and our system was implemented with TCP/IP sockets. The experimental results of controlling Comdio showed that lip recognition can serve as an auxiliary means for the speech recognizer by improving the recognition rate. We also expect that the multimodal system will be successfully applied to traffic information systems and CNS (Car Navigation System).

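The late-integration scheme described above amounts to a weighted sum of per-word scores from the two recognizers. A minimal sketch, assuming each recognizer yields comparable log-likelihood scores per word (the score dictionaries and function name are illustrative, not the paper's code):

```python
def late_fusion(speech_scores, lip_scores, w_speech=0.8, w_lip=0.2):
    """Combine per-word scores from two recognizers by weighted sum
    (late integration). Scores are assumed to be comparable
    log-likelihoods; the 8:2 weighting follows the paper."""
    fused = {}
    for word in speech_scores:
        fused[word] = w_speech * speech_scores[word] + w_lip * lip_scores[word]
    # Return the best-scoring word along with the full fused table.
    best = max(fused, key=fused.get)
    return best, fused
```

With the 8:2 weighting, the lip scores only tip the decision when the speech scores are close, which matches their auxiliary role in the paper.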

Construction of Customer Appeal Classification Model Based on Speech Recognition

  • Sheng Cao;Yaling Zhang;Shengping Yan;Xiaoxuan Qi;Yuling Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.258-266
    • /
    • 2023
  • To address the problems of poor customer satisfaction and low accuracy in customer classification, this paper proposes a customer classification model based on speech recognition. First, the paper analyzes the temporal characteristics of customer demand data, identifies the factors influencing customer demand behavior, and defines the feature-extraction process for customer voice signals. Then, emotional association rules for customer demands are designed, and a classification model of customer demands is constructed through cluster analysis. Next, the Euclidean distance method is used to preprocess customer behavior data, and the fuzzy clustering characteristics of customer demands are obtained with a fuzzy clustering method. Finally, on the basis of the naive Bayes algorithm, a customer demand classification model based on speech recognition is completed. Experimental results show that the proposed method improves the accuracy of customer demand classification to more than 80% and raises customer satisfaction to more than 90%, solving the problems of poor customer satisfaction and low classification accuracy in existing methods and giving the model practical application value.
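The final stage rests on the naive Bayes algorithm. A minimal sketch of Gaussian naive Bayes on one-dimensional toy features (the actual customer-demand features and data in the paper are not public, so the samples and labels below are illustrative):

```python
import math
from collections import defaultdict

def train_gnb(samples):
    """Fit per-class mean, variance, and prior for a 1-D Gaussian
    naive Bayes. `samples` is a list of (feature, label) pairs."""
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append(x)
    params = {}
    for y, xs in by_label.items():
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs) or 1e-6
        params[y] = (mean, var, len(xs) / len(samples))
    return params

def classify(params, x):
    """Pick the label maximizing log prior + Gaussian log-likelihood."""
    def score(y):
        mean, var, prior = params[y]
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - mean) ** 2 / (2 * var))
    return max(params, key=score)
```

In the paper, the inputs to this stage would be the fuzzy-clustering features rather than raw values.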

Implementation of A Fast Preprocessor for Isolated Word Recognition (고립단어 인식을 위한 빠른 전처리기의 구현)

  • Ahn, Young-Mok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.1
    • /
    • pp.96-99
    • /
    • 1997
  • This paper proposes a very fast preprocessor for isolated word recognition. The proposed preprocessor extracts candidate words at a small computational cost by using a feature-sorting algorithm instead of vector quantization. To show its effectiveness, we compared it with a speech recognition system based on a semi-continuous hidden Markov model and with a VQ-based preprocessor, measuring their recognition performance on speaker-independent isolated word recognition. For the experiments, we used a speech database of 244 words uttered by 40 male speakers; the utterances of 20 speakers were used for training and the rest for testing. As a result, the accuracy of the proposed preprocessor was 99.9% with a 90% reduction rate on the speech database.

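The preprocessor's job, pruning the vocabulary to a short candidate list before the full HMM search, might be sketched as below. The abstract does not specify the features or the sorting criterion, so the frame energies and sorted-profile distance here are stand-in assumptions, not the paper's algorithm:

```python
def sorted_profile(frames):
    """Collapse a variable-length sequence of per-frame feature values
    into a sorted profile (a stand-in for the paper's feature-sorting
    step; the real features are unspecified in the abstract)."""
    return sorted(frames, reverse=True)

def prune_candidates(query_frames, vocabulary, keep=2):
    """Rank vocabulary entries by distance between sorted profiles and
    keep only the top candidates for the full recognizer. Avoids the
    codebook search a VQ-based preprocessor would need."""
    q = sorted_profile(query_frames)
    def dist(entry):
        p = sorted_profile(vocabulary[entry])
        n = min(len(q), len(p))
        return sum((q[i] - p[i]) ** 2 for i in range(n))
    return sorted(vocabulary, key=dist)[:keep]
```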

A study on the Method of the Keyword Spotting Recognition in the Continuous speech using Neural Network (신경 회로망을 이용한 연속 음성에서의 keyword spotting 인식 방식에 관한 연구)

  • Yang, Jin-Woo;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.43-49
    • /
    • 1996
  • This research proposes a speaker-independent Korean continuous speech recognition system for 247 DDD area names using a keyword spotting technique. The recognition algorithm is the Dynamic Programming Neural Network (DPNN), an integration of DP and a multi-layer perceptron, as a model that handles time-axis distortion and spectral pattern variation in speech. To improve performance, we divide the word models into keyword models and non-keyword models, and we experiment with a postprocessing procedure to evaluate system performance. The results are as follows: the isolated word recognition rate is 93.45% in the speaker-dependent case and 84.05% in the speaker-independent case, and the recognition rate for simple dialogic sentences in the keyword spotting experiment is 77.34% speaker-dependent and 70.63% speaker-independent.

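The DP half of the DPNN is a dynamic-programming alignment over time, of which classic dynamic time warping (DTW) is the textbook instance. A minimal sketch on scalar feature sequences (the DPNN replaces this local frame distance with a perceptron score, which is not modeled here):

```python
def dtw_distance(a, b):
    """Dynamic-programming alignment cost between two feature
    sequences, absorbing time-axis distortion by letting one sequence
    stretch against the other."""
    INF = float('inf')
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Step pattern: match, insertion, or deletion.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A repeated frame costs nothing under this alignment, which is exactly the time-axis tolerance the abstract refers to.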

Keyword Retrieval-Based Korean Text Command System Using Morphological Analyzer (형태소 분석기를 이용한 키워드 검색 기반 한국어 텍스트 명령 시스템)

  • Park, Dae-Geun;Lee, Wan-Bok
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.2
    • /
    • pp.159-165
    • /
    • 2019
  • With deep learning technology, speech recognition has begun to be applied in commercial products, but it remains difficult to use in VR content, since there is no easy and efficient way to process the text produced by the speech recognition module. In this paper, we propose a Korean text command system that can efficiently recognize and respond to Korean speech commands. The system consists of two components: a morphological analyzer that analyzes sentence morphemes, and a retrieval-based model of the kind commonly used to build chatbot systems. Experimental results show that the proposed system requires only 16% of the commands to achieve the same level of performance as the conventional string comparison method, and that it achieved a 60.1% success rate when working with the Google Cloud Speech module, making it more efficient than conventional string comparison.
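The retrieval-based idea, scoring commands by keyword overlap instead of exact string comparison, can be sketched as below. The morpheme lists and command keyword sets are hypothetical; a real system would obtain the morphemes from the morphological analyzer rather than a pre-tokenized list:

```python
def match_command(utterance_morphemes, commands):
    """Retrieval-style matching: score each command by the fraction of
    its keywords found among the morphemes of the recognized
    utterance, so paraphrases still hit the right command."""
    tokens = set(utterance_morphemes)
    def score(name):
        keys = commands[name]
        return len(tokens & keys) / len(keys)
    best = max(commands, key=score)
    # No keyword overlap at all means no command fires.
    return best if score(best) > 0 else None
```

Exact string comparison would need one entry per surface form; keyword retrieval covers many phrasings with one keyword set, which is the efficiency gain the abstract reports.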

HearCAM Embedded Platform Design (히어 캠 임베디드 플랫폼 설계)

  • Hong, Seon Hack;Cho, Kyung Soon
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.4
    • /
    • pp.79-87
    • /
    • 2014
  • In this paper, we implemented the HearCAM platform on the Raspberry Pi B+ model, an open-source platform. The Raspberry Pi B+ consists of a dual step-down (buck) power supply with polarity protection and hot-swap protection, a Broadcom BCM2835 SoC running at 700 MHz with 512 MB of RAM soldered on top of the Broadcom chip, and a Pi camera serial connector. We used the Google speech recognition engine to recognize voice characteristics, implemented pattern matching with OpenCV, and extended the system with speech output via SVOX TTS (text-to-speech), speaking the matching result back to the user at the microphone. We thus implemented the HearCAM functions of identifying the voice and pattern characteristics of a target image scanned with the Pi camera while gathering temperature sensor data in an IoT environment; speech recognition, pattern matching, and temperature sensor data logging operate over Wi-Fi wireless communication. Finally, we designed and fabricated the HearCAM enclosure with 3D printing technology.

A Training Method for Emotion Recognition using Emotional Adaptation (감정 적응을 이용한 감정 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.998-1003
    • /
    • 2020
  • In this paper, a training method using emotional adaptation is proposed to improve the performance of existing emotion recognition systems. For emotion adaptation, an emotional speech model was created from a neutral (emotion-free) speech model using a small number of emotional training utterances and emotion adaptation methods. The method showed superior performance even when using fewer emotional utterances than the existing method. Since it is not easy to obtain enough emotional speech for training, using only a small number of emotional utterances is very practical in real situations. In experiments on a Korean database containing four emotions, the proposed method using emotional adaptation outperformed the existing method.
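A common way to adapt a neutral model with few samples is MAP-style mean interpolation; the abstract does not name the exact adaptation method used, so the formula below is a generic illustration, not the paper's procedure:

```python
def map_adapt_mean(prior_mean, adaptation_samples, tau=10.0):
    """MAP-style mean adaptation: interpolate a neutral-speech model
    mean toward the mean of a small set of emotional samples. `tau`
    weighs the prior; with few samples the result stays close to the
    neutral model, with many it approaches the emotional data."""
    n = len(adaptation_samples)
    if n == 0:
        return prior_mean
    sample_mean = sum(adaptation_samples) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)
```

This is why a small number of emotional utterances suffices: the prior carries most of the model, and the adaptation data only shifts it.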

Application of sinusoidal model to perception of electrical hearing in cochlear implants (인공와우 전기 청각 인지에 대한 정현파 모델 적용에 관한 연구)

  • Lee, Sungmin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.52-57
    • /
    • 2022
  • Speech consists of a sum of complex sine waves. This study investigated the perception of electrical hearing by applying the sinusoidal model to cochlear implant simulation. Fourteen adults with normal hearing participated. Sentence recognition tests were conducted using sentence lists processed by a sinusoidal model that extracts 2, 4, 6, or 8 sine-wave components, and sentence lists processed by the same sinusoidal model combined with a cochlear implant simulation (8-channel vocoder). The results showed lower speech recognition for the sentence lists processed by the sinusoidal model plus cochlear implant simulation than for those processed by the sinusoidal model alone; notably, the difference was largest with the smallest number of sine-wave components (two). This study characterizes how cochlear implant listeners perceive sine-wave speech through electrical hearing and provides basic data for the development of speech processing algorithms in cochlear implants.
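The analysis step of a sinusoidal model, keeping only the strongest sine-wave components of a frame, can be sketched with a naive DFT magnitude scan. Real systems use an FFT with peak interpolation and frame-by-frame tracking; this single-frame, brute-force version is only an illustration:

```python
import math

def top_sine_components(signal, sample_rate, n_components=2):
    """Return the frequencies (Hz) of the n strongest sine-wave
    components of a signal, found by scanning DFT bin magnitudes."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2):  # skip DC, stay below Nyquist
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append((math.hypot(re, im), k * sample_rate / n))
    mags.sort(reverse=True)  # strongest components first
    return sorted(freq for _, freq in mags[:n_components])
```

Resynthesizing from only these components yields the sparse "sine-wave speech" the sentence lists were built from.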

A study on the new hybrid recurrent TDNN-HMM architecture for speech recognition (음성인식을 위한 새로운 혼성 recurrent TDNN-HMM 구조에 관한 연구)

  • Jang, Chun-Seo
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.699-704
    • /
    • 2001
  • In this paper, a new hybrid modular recurrent TDNN (time-delay neural network)-HMM (hidden Markov model) architecture for speech recognition is studied. In a TDNN, the recognition rate can be increased by extending the signal window; to obtain this effect in the neural network, a high-level memory generated through feedback within the first hidden layer of the network unit is used. To better handle the temporal structure of phonemic features, the input layer of the network is divided into multiple states in time sequence, with feature detectors for each state. To scale the network from a small recognition task to a full speech recognition system, a modular construction method is also used. Furthermore, the neural network and HMM are integrated by feeding output vectors from the neural network to the HMM, and a new parameter smoothing method applicable to this hybrid system is suggested.

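The "time-delay" part of a TDNN input layer amounts to stacking each feature frame with delayed copies of its successors, so each unit sees a short span of the sequence rather than a single frame. A minimal sketch of that windowing step (the delays and frame format are illustrative):

```python
def time_delay_windows(frames, delays=(0, 1, 2)):
    """Stack each feature frame with its delayed successors, the
    windowing a TDNN input layer performs so each unit sees a short
    span of the feature sequence instead of one frame."""
    out = []
    last = len(frames) - max(delays)
    for t in range(last):
        window = []
        for d in delays:
            window.extend(frames[t + d])  # concatenate delayed frames
        out.append(window)
    return out
```

Extending `delays` widens the effective signal window; the recurrent feedback described in the abstract is a way to get a similar effect without growing this window explicitly.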

Optimizing Multiple Pronunciation Dictionary Based on a Confusability Measure for Non-native Speech Recognition (타언어권 화자 음성 인식을 위한 혼잡도에 기반한 다중발음사전의 최적화 기법)

  • Kim, Min-A;Oh, Yoo-Rhee;Kim, Hong-Kook;Lee, Yeon-Woo;Cho, Sung-Eui;Lee, Seong-Ro
    • MALSORI
    • /
    • no.65
    • /
    • pp.93-103
    • /
    • 2008
  • In this paper, we propose a method for optimizing a multiple pronunciation dictionary used for modeling pronunciation variations of non-native speech. The proposed method removes some confusable pronunciation variants in the dictionary, resulting in a reduced dictionary size and less decoding time for automatic speech recognition (ASR). To this end, a confusability measure is first defined based on the Levenshtein distance between two different pronunciation variants. Then, the number of phonemes for each pronunciation variant is incorporated into the confusability measure to compensate for ASR errors due to words of a shorter length. We investigate the effect of the proposed method on ASR performance, where Korean is selected as the target language and Korean utterances spoken by Chinese native speakers are considered as non-native speech. It is shown from the experiments that an ASR system using the multiple pronunciation dictionary optimized by the proposed method can provide a relative average word error rate reduction of 6.25%, with 11.67% less ASR decoding time, as compared with that using a multiple pronunciation dictionary without the optimization.

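The Levenshtein distance underlying the confusability measure can be sketched as below; the length normalization shown is only an illustration of the paper's phoneme-count compensation, not its exact formula:

```python
def levenshtein(a, b):
    """Edit distance between two phoneme sequences, computed with a
    rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            # Insertion, deletion, or (mis)match.
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (pa != pb)))
        prev = cur
    return prev[-1]

def confusability(v1, v2):
    """Higher value = easier to confuse: small edit distance relative
    to variant length. Dividing by length keeps short variants from
    looking artificially confusable (an illustrative normalization)."""
    d = levenshtein(v1, v2)
    return 1.0 - d / max(len(v1), len(v2))
```

Pruning the variants with the highest pairwise confusability shrinks the dictionary and the decoding time, which is the effect the paper measures.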