• Title/Summary/Keyword: Speech recognition model

Smart Home Personalization Service based on Context Information using Speech (음성인식을 이용한 상황정보 기반의 스마트 홈 개인화 서비스)

  • Kim, Jong-Hun;Song, Chang-Woo;Kim, Ju-Hyun;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • The Journal of the Korea Contents Association / v.9 no.11 / pp.80-89 / 2009
  • Personalized services have attracted growing attention in smart home environments with the development of ubiquitous computing. In this paper, we propose a smart home personalized service system based on context information using speech recognition. The proposed service consists of an OSGi framework-based service mobile manager, service manager, voice recognition manager, and location manager. This study also defines the smart home space and configures the device commands, sensor information, and user information commonly used in that space as context information. In particular, the service identifies users who occupy the same space, which is difficult to do with RFID alone, through a training model and pattern matching in voice recognition, and thereby supports personalized smart home applications. The experimental results verified that OSGi-based automated and personalized services can be achieved by identifying users in the same space.

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal / v.44 no.4 / pp.672-685 / 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network is adopted to recognize these signals individually in seven EEG frequency bands and nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s, which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band recognizes SI better than the other EEG frequency bands. To reinforce these findings, the above work is compared with models based on the gated recurrent unit (GRU), convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% higher average accuracy in the alpha band and 74.54% lower average NPT than the CNN. The maximum accuracy of the GRU was 8.34% less than that of the LSTM network. Deep networks performed better than traditional classifiers.
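The gated memory that lets an LSTM track a signal's dynamics over time can be sketched as a single scalar cell update (a minimal illustration only; the gate weights `W` and the scalar state are invented for clarity and are not the paper's 32-channel network):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for a scalar input and state.
    W maps each gate name to (input weight, recurrent weight, bias)."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2]) # candidate
    c = f * c_prev + i * g        # cell state: keep part of the past, write new
    h = o * math.tanh(c)          # hidden state exposed to the next layer
    return h, c

def run_lstm(xs, W):
    """Fold a sequence (e.g., one EEG channel) through the cell."""
    h = c = 0.0
    for x in xs:
        h, c = lstm_step(x, h, c, W)
    return h
```

In the full model this final hidden state would feed a classifier over the five imagined-speech classes; here it simply shows why the cell state makes the output depend on the whole sequence, not just the last sample.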

Conformer with lexicon transducer for Korean end-to-end speech recognition (Lexicon transducer를 적용한 conformer 기반 한국어 end-to-end 음성인식)

  • Son, Hyunsoo;Park, Hosung;Kim, Gyujin;Cho, Eunsoo;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.530-536 / 2021
  • Recently, owing to the development of deep learning, end-to-end speech recognition, which directly maps speech signals to graphemes, shows good performance. Among end-to-end models, the conformer performs best. However, end-to-end models focus only on the probability of which grapheme appears at each time step, and the decoding process uses greedy search or beam search, which is easily affected by the final probability output of the model. In addition, end-to-end models cannot use external pronunciation and language information due to structural constraints. Therefore, in this paper a conformer with a lexicon transducer is proposed. We compare a phoneme-based model with the lexicon transducer against a grapheme-based model with beam search, on a test set consisting of words that do not appear in the training data. The grapheme-based conformer with beam search shows a CER of 3.8 %; the phoneme-based conformer with the lexicon transducer shows a CER of 3.4 %.
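The decoding contrast above can be sketched with a toy example (illustrative only; the frames, symbols, and word list are invented, and the real lexicon transducer is a WFST, not a prefix set): greedy search takes the best symbol per frame, while a lexicon-constrained beam search only extends hypotheses that remain prefixes of dictionary words.

```python
import math

def greedy_decode(frames):
    """Pick the most probable symbol at each frame independently."""
    return "".join(max(p, key=p.get) for p in frames)

def beam_decode_with_lexicon(frames, lexicon, beam_width=3):
    """Beam search that keeps only prefixes of lexicon words,
    ranked by accumulated log-probability."""
    prefixes = {w[:i] for w in lexicon for i in range(len(w) + 1)}
    beams = [("", 0.0)]                      # (hypothesis, log-prob)
    for probs in frames:
        cand = []
        for prefix, lp in beams:
            for sym, p in probs.items():
                nxt = prefix + sym
                if nxt in prefixes:          # prune out-of-vocabulary paths
                    cand.append((nxt, lp + math.log(p)))
        cand.sort(key=lambda t: t[1], reverse=True)
        beams = cand[:beam_width]
    return beams[0][0] if beams else ""
```

With frames whose per-frame argmax spells a non-word, the greedy result is unpronounceable while the lexicon-constrained search recovers a dictionary word, which is the kind of correction the paper exploits on unseen vocabulary.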

A Study on Development of ECS for the Severely Handicapped (중증 장애인을 위한 생활환경 제어장치 개발에 관한 연구)

  • 임동철;이행세;홍석교;이일영
    • Journal of Biomedical Engineering Research / v.24 no.5 / pp.427-434 / 2003
  • In this paper, we present a speech-based Environmental Control System (ECS) and its application. Specifically, an ECS using speech recognition and a portable wheelchair lift control system with speech synthesis are developed through simulation and implementation. The developed system was applied to a quadriplegic man, and its physical and mental effects were evaluated. The speech recognition system is built from real-time modules using HMMs. For the clinical application of the device, we investigated a week-long trial with a 54-year-old quadriplegic man using the Beck Depression Inventory and Activity Pattern Indicator questionnaires. The motor drive control system of the portable wheelchair lift was also implemented, and its mechanical durability was tested by structural analysis. The speech recognition rate exceeded 95% in the experiment. The questionnaire results show higher satisfaction and lower nursing loads, and the subject's depressive tendency decreased. The portable wheelchair lift shows a good fatigue life-cycle in the material supporting the upper wheelchair and safe centroid mobility. This paper thus presents an example of an ECS consisting of a real-time speech recognition system and a portable wheelchair lift, and the experiments show the need for an ECS adapted to Korean environments. This study will serve as a basis for commercial use.

Lip-Synch System Optimization Using Class Dependent SCHMM (클래스 종속 반연속 HMM을 이용한 립싱크 시스템 최적화)

  • Lee, Sung-Hee;Park, Jun-Ho;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.25 no.7 / pp.312-318 / 2006
  • The conventional lip-synch system has a two-step process: speech segmentation and recognition. However, the difficulty of the speech segmentation procedure and the inaccuracy of the training data set caused by segmentation lead to significant performance degradation in the system. To cope with this, a connected vowel recognition method using the Head-Body-Tail (HBT) model is proposed. The HBT model, which is appropriate for relatively small vocabulary tasks, reflects the co-articulation effect efficiently. Moreover, the 7 vowels are merged into 3 classes with similar lip shapes, and the system is optimized by employing a class-dependent SCHMM structure. Additionally, at both ends of each word, which show large variation, an 8-component Gaussian mixture model is used directly to improve representational ability. Although the proposed method shows performance similar to a CHMM based on the HBT structure, the number of parameters is reduced by 33.92%. This reduction makes the method computationally efficient enough for real-time operation.
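The parameter saving from a semi-continuous structure can be sketched by counting parameters (a back-of-the-envelope illustration with invented sizes, not the paper's configuration or its 33.92% figure): a continuous HMM gives every state its own Gaussians, while an SCHMM shares one Gaussian codebook across states and keeps only per-state mixture weights.

```python
def chmm_params(states, mixtures, dim):
    """Continuous HMM: each state owns its Gaussians
    (1 weight + dim means + dim diagonal variances per component)."""
    return states * mixtures * (1 + 2 * dim)

def schmm_params(states, codebook, dim):
    """Semi-continuous HMM: one shared codebook of Gaussians
    (means + diagonal variances), plus per-state mixture weights only."""
    return codebook * 2 * dim + states * codebook
```

For, say, 30 states, 8 components, and 12-dimensional features, the shared-codebook count is an order of magnitude smaller, which is why class-dependent sharing buys real-time operation at similar accuracy.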

Recognizing Hand Digit Gestures Using Stochastic Models

  • Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.807-815 / 2008
  • A simple, efficient method of spotting and recognizing hand gestures in video is presented, using a network of hidden Markov models and a dynamic programming search algorithm. The description starts from designing a set of isolated trajectory models which are stochastic and robust enough to characterize highly variable patterns like human motion, handwriting, and speech. These models are interconnected to form a single large network, termed a spotting network or spotter, that models a continuous stream of gestures and non-gestures alike. Inference over the model is based on dynamic programming. The proposed model is highly efficient and can readily be extended to a variety of recurrent pattern recognition tasks. The test result, obtained without any task-specific engineering, has shown the potential for practical application. At the end of the paper we add a related experimental result obtained using a different stochastic model, the dynamic Bayesian network.
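The dynamic programming inference underlying such HMM spotters is the Viterbi algorithm; a minimal sketch follows (the two-state gesture/rest model and its probabilities are invented for illustration, not the paper's spotting network):

```python
import math

def viterbi(obs, states, start, trans, emit):
    """Most likely hidden-state path for an observation sequence,
    computed in the log domain with backpointers."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for s at time t
            prev, score = max(
                ((p, V[t - 1][p] + math.log(trans[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = score + math.log(emit[s][obs[t]])
            back[t][s] = prev
    # trace the best final state back to the start
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

In a spotter, the same recursion runs over the interconnected gesture and non-gesture models, so segmentation (where a gesture starts and ends) falls out of the decoded state path.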

Efficient context dependent process modeling using state tying and decision tree-based method (상태 공유와 결정트리 방법을 이용한 효율적인 문맥 종속 프로세스 모델링)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of Korea Multimedia Society / v.13 no.3 / pp.369-377 / 2010
  • In vocabulary recognition systems based on HMMs (Hidden Markov Models), models unseen during training cause a low recognition rate. When the recognition vocabulary is modified or extended, the database must be collected again and the models retrained, which incurs additional cost and time. This study suggests an efficient context-dependent process modeling method using decision tree-based state tying. The suggested method reduces model re-creation and offers robust and accurate context-dependent acoustic modeling. It also reduces the number of models and handles models unseen in training by substituting the most likely context-dependent phoneme model. As a result, the system shows a vocabulary-dependent recognition rate of 98.01% and a vocabulary-independent recognition rate of 97.38%.
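The tying idea can be sketched as follows (an illustrative toy: the single phonetic question and the triphone notation `left-base-right` are invented for clarity; real systems grow the question tree by likelihood gain): triphone states that answer the same phonetic questions land in the same tree leaf and share one set of parameters, so an unseen triphone still maps to a trained leaf.

```python
def tie_states(triphones, questions):
    """Group triphone states by their answers to phonetic questions.
    All triphones in one leaf share parameters; an unseen triphone
    is assigned to an existing leaf by answering the same questions."""
    leaves = {}
    for tri in triphones:
        left, base, right = tri.split("-")
        # leaf key: base phone plus the yes/no answers about its contexts
        key = (base,) + tuple(q(left, right) for q in questions)
        leaves.setdefault(key, []).append(tri)
    return leaves
```

For example, with the single question "is the left context an unvoiced stop?", `p-a-t` and `k-a-t` tie into one leaf while `b-a-t` gets its own, and a never-seen `t-a-t` would reuse the first leaf's model instead of requiring retraining.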

Speech Enhancement System Using a Model of Auditory Mechanism (청각기관의 모델을 이용한 음성강조 시스템)

  • 최재승
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.295-302 / 2004
  • In the field of speech processing, the treatment of noise remains an important problem. In particular, it has been noted that background noise causes a remarkable reduction in the speech recognition rate. Examples of background noise include the various non-stationary noises of real environments, such as the driving noise of automobiles on the road or the typing noise of a printer. These kinds of noises cannot simply be eliminated by a conventional Wiener filter and require more refined techniques. As one such attempt, this paper presents a speech enhancement algorithm that uses a model of mutual inhibition to reduce noise in speech contaminated by white noise or the background noises mentioned above. Judging from spectral distortion measurements, the proposed algorithm is confirmed to be effective for speech degraded not only by white noise but also by colored noise.
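The mutual (lateral) inhibition principle borrowed from auditory modeling can be sketched in one line per channel (a simplified caricature, not the paper's algorithm; the neighbour range and inhibition strength `alpha` are invented): each spectral channel is suppressed in proportion to its neighbours' energy, so sharp spectral peaks survive while a flat noise floor is pushed down.

```python
def mutual_inhibition(spectrum, alpha=0.3):
    """Suppress each channel by alpha times its neighbours' energy,
    clipping at zero. Peaks (e.g., formants) are preserved while
    flat regions (noise floor) are attenuated."""
    n = len(spectrum)
    out = []
    for i, x in enumerate(spectrum):
        left = spectrum[i - 1] if i > 0 else 0.0
        right = spectrum[i + 1] if i < n - 1 else 0.0
        out.append(max(0.0, x - alpha * (left + right)))
    return out
```

On a toy magnitude spectrum with one peak over a flat floor, the peak-to-floor contrast increases after inhibition, which is the enhancement effect the paper pursues with a more elaborate auditory model.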

Impostor Detection in Speaker Recognition Using Confusion-Based Confidence Measures

  • Kim, Kyu-Hong;Kim, Hoi-Rin;Hahn, Min-Soo
    • ETRI Journal / v.28 no.6 / pp.811-814 / 2006
  • In this letter, we introduce confusion-based confidence measures for detecting impostors in speaker recognition, which do not require an alternative hypothesis. Most traditional speaker verification methods are based on a hypothesis test, and their performance depends on the robustness of the alternative hypothesis. Compared with the conventional Gaussian mixture model-universal background model (GMM-UBM) scheme, our confusion-based measures show better performance on noise-corrupted speech. The additional computational requirements of our methods are negligible when used to detect or reject impostors.
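For context, the conventional GMM-UBM baseline the letter compares against is a log-likelihood-ratio hypothesis test; a one-dimensional sketch follows (illustrative only: the single-component models, scalar features, and zero threshold are invented, and this is the baseline, not the proposed confusion-based measure):

```python
import math

def gauss_logpdf(x, mean, var):
    """Log density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_loglik(xs, comps):
    """Total log-likelihood of frames xs under a GMM given as
    a list of (weight, mean, variance) components."""
    total = 0.0
    for x in xs:
        total += math.log(sum(w * math.exp(gauss_logpdf(x, m, v))
                              for w, m, v in comps))
    return total

def verify(xs, speaker_gmm, ubm, threshold=0.0):
    """Accept the claimed speaker if the log-likelihood ratio
    against the universal background model exceeds the threshold."""
    llr = gmm_loglik(xs, speaker_gmm) - gmm_loglik(xs, ubm)
    return llr > threshold
```

The letter's point is that this test stands or falls with the UBM (the alternative hypothesis); the confusion-based measures sidestep that dependence.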

A Minimum-Error-Rate Training Algorithm for Pattern Classifiers and Its Application to the Predictive Neural Network Models (패턴분류기를 위한 최소오차율 학습알고리즘과 예측신경회로망모델에의 적용)

  • 나경민;임재열;안수길
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.12 / pp.108-115 / 1994
  • Most pattern classifiers have been designed based on the ML (Maximum Likelihood) training algorithm, which is simple and relatively powerful. ML training is an efficient algorithm for individually estimating the model parameters of each class under the assumption that all class models in a classifier are statistically independent. That assumption, however, is not valid in many real situations, which degrades the performance of the classifier. In this paper, we propose a minimum-error-rate training algorithm based on the MAP (Maximum a Posteriori) approach. The algorithm regards the normalized outputs of the classifier as estimates of the a posteriori probabilities and tries to maximize them. According to Bayes decision theory, the proposed algorithm satisfies the condition of minimum-error-rate classification. We apply this algorithm to the NPM (Neural Prediction Model) for speech recognition and derive new discriminative training algorithms. Experimental results on ten Korean digits recognition show a 37.5% reduction in the number of recognition errors.
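The core idea of maximizing the normalized outputs as posterior estimates can be sketched with a softmax over class scores and one gradient step (an illustrative toy, not the paper's NPM derivation; the scores, learning rate, and update form are invented):

```python
import math

def softmax(scores):
    """Normalize class scores into posterior-probability estimates."""
    m = max(scores)                      # stabilize the exponentials
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def posterior_update(scores, target, lr=0.5):
    """One gradient step that raises the estimated posterior of the
    correct class: scores -= lr * (posterior - one-hot target),
    the cross-entropy gradient through the softmax."""
    p = softmax(scores)
    return [s - lr * (pi - (1.0 if k == target else 0.0))
            for k, (s, pi) in enumerate(zip(scores, p))]
```

Because the update pushes the correct class's posterior toward 1 at the expense of its competitors, training classes jointly rather than independently is exactly what distinguishes this discriminative criterion from per-class ML estimation.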
