• Title/Summary/Keyword: Speech recognition model


Speech Recognition based Smart Home System using 5W1H Programming Model (5W1H 프로그래밍 모델을 기반으로 한 음성인식 스마트 홈 시스템)

  • Baek, Yeong-Tae;Lee, Se-Hoon;Kim, Ji-Seong;Sin, Bo-Bae
    • Proceedings of the Korean Society of Computer Information Conference / 2017.01a / pp.43-44 / 2017
  • When a commercial speech-recognition device communicates with other embedded modules to act as the central processing server of a smart home, it offers only the limited modules and services developed by its manufacturer, if any. To solve this problem, this paper proposes a platform that allows users to develop modules with the functions they want through simple tasks and to freely add speech-recognition commands. The proposed platform concept is not tied to a specific OS and is designed to be deployable on various systems; the experimental platform was built on Windows, but the same concept can be applied to other systems.


ASR (Automatic Speech Recognition)-based welfare information search model to prevent digital alienation of the elderly (고령층의 디지털 소외 방지를 위한 ASR(Automatic Speech Recognition, 음성 인식 기술) 기반 복지 정보 검색 모델 연구)

  • Jang-Won Ha;Hwa-Rang Im;Dong-Gue Jung;Hye-won Lee;Youngjong Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.771-772 / 2023
  • To address the digital alienation of the elderly, who have a low understanding of welfare information and Internet use, we propose the development of an ASR-based welfare information search model that applies technologies such as senior-friendly UI/UX and speech recognition.

Performance comparison of wake-up-word detection on mobile devices using various convolutional neural networks (다양한 합성곱 신경망 방식을 이용한 모바일 기기를 위한 시작 단어 검출의 성능 비교)

  • Kim, Sanghong;Lee, Bowon
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.454-460 / 2020
  • Artificial intelligence assistants that provide speech recognition rely on highly accurate cloud-based speech recognition, in which Wake-Up-Word (WUW) detection plays an important role in activating devices on standby. In this paper, we compare the performance of Convolutional Neural Network (CNN)-based WUW detection models for mobile devices using Google's Speech Commands dataset, with spectrogram and mel-frequency cepstral coefficient features as inputs. The CNN models used in this paper are the multi-layer perceptron, a general convolutional neural network, VGG16, VGG19, ResNet50, ResNet101, ResNet152, and MobileNet. We also propose a network that reduces the model size to 1/25 that of MobileNet while maintaining its performance.
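The spectrogram and mel-frequency cepstral coefficient front-ends compared in this paper both rest on the mel frequency scale. A minimal pure-Python sketch of that mapping (the function names are illustrative assumptions, not code from the paper):

```python
import math

def hz_to_mel(hz):
    """Convert frequency in Hz to the mel scale (O'Shaughnessy/HTK formula)."""
    return 2595.0 * math.log10(1.0 + hz / 700.0)

def mel_to_hz(mel):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mel_band_edges(low_hz, high_hz, n_bands):
    """Band edges spaced uniformly on the mel scale, as used when
    building the triangular mel filterbank behind MFCC features."""
    low, high = hz_to_mel(low_hz), hz_to_mel(high_hz)
    step = (high - low) / (n_bands + 1)
    return [mel_to_hz(low + i * step) for i in range(n_bands + 2)]
```

Edges spaced this way are dense at low frequencies and sparse at high ones, mimicking the ear's resolution, which is why mel features are the usual input for small keyword-spotting CNNs.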

An Overview and Market Review of Speaker Recognition Technology (화자인식 기술 및 국내외시장 동향)

  • Yu, Ha-Jin
    • Proceedings of the KSPS conference / 2004.05a / pp.91-97 / 2004
  • We provide a brief overview of the area of speaker recognition, describing its underlying techniques and reviewing the current market. The techniques described are mainly based on the Gaussian mixture model (GMM), the most prevalent and effective approach. Following the technical overview, we outline domestic and international market trends in the area.


An automatic pronunciation evaluation system using non-native teacher's speech model (비원어민 교수자 음성모델을 이용한 자동발음평가 시스템)

  • Park, Hye-bin;Kim, Dong Heon;Joung, Jinoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.2 / pp.131-136 / 2016
  • Appropriate evaluation of a learner's pronunciation is an important part of foreign language education: learners should be evaluated and receive proper feedback to improve their pronunciation. Due to the cost and consistency problems of human evaluation, automatic pronunciation evaluation systems have been studied. Most current automatic evaluation systems build on underlying Automatic Speech Recognition (ASR) technology. In this work, we propose evaluating a learner's pronunciation accuracy and fluency at the word level using ASR and a non-native teacher's speech model. Through a performance evaluation of our system, we confirm that the overall evaluation of pronunciation accuracy and fluency represents the learner's English skill level quite accurately.

Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex;Leena Mary
    • ETRI Journal / v.45 no.4 / pp.678-689 / 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAE) in learning the speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOP) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. The initial comparative studies on VAEs and traditional autoencoders (AE) suggest that the former can efficiently learn speaker representations. Investigations on the impact of gender information in speaker recognition also point out that gender-dependent impostor banks lead to higher accuracies. Finally, the evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition.
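The two VAE ingredients this abstract relies on — the reparameterized sampling step and the KL regularizer toward a standard normal — can be sketched generically (pure-Python illustration of the standard VAE machinery, not the paper's implementation):

```python
import math
import random

def vae_sample(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    keeping the sampling step differentiable w.r.t. mu and log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), the VAE latent regularizer."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

Adapting a speaker-dependent VAE from a universal one, as described above, amounts to fine-tuning the networks that produce `mu` and `log_var` on that speaker's prosodic features.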

Speaker-Independent Korean Digit Recognition Using HCNN with Weighted Distance Measure (가중 거리 개념이 도입된 HCNN을 이용한 화자 독립 숫자음 인식에 관한 연구)

  • 김도석;이수영
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.10 / pp.1422-1432 / 1993
  • The nonlinear mapping function of the HCNN (Hidden Control Neural Network) can change over time to model the temporal variability of a speech signal by combining the nonlinear prediction of conventional neural networks with the segmentation capability of the HMM. This paper makes two contributions. First, we show that the HCNN outperforms the HMM. Second, we propose an HCNN whose prediction error is measured by a weighted distance better suited to the HCNN, and show the superiority of the proposed system for speaker-independent speech recognition tasks. The weighted distance accounts for the differences between the variances of each component of the feature vector extracted from the speech data. In a speaker-independent Korean digit recognition experiment, the HCNN with Euclidean distance achieved a recognition rate of 95%, 1.28% higher than the HMM, showing that the HCNN, which models the dynamics of the system, is superior to the HMM, which is based on statistical restrictions. The HCNN with weighted distance achieved 97.35%, 2.35% better than the HCNN with Euclidean distance. The weighted distance performs better because it reduces the variation of the error rate across speakers by raising the recognition rate for speakers with many misclassified utterances. We therefore conclude that the HCNN with weighted distance is better suited to speaker-independent speech recognition tasks.
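The weighted-distance idea in this abstract — scaling each feature dimension by the inverse of its variance so that naturally high-variance components do not dominate the prediction error — can be sketched as follows (illustrative Python under that reading; the names are assumptions, not code from the paper):

```python
def variances(vectors):
    """Per-dimension variance across a set of feature vectors."""
    n, dim = len(vectors), len(vectors[0])
    means = [sum(v[d] for v in vectors) / n for d in range(dim)]
    return [sum((v[d] - means[d]) ** 2 for v in vectors) / n
            for d in range(dim)]

def weighted_distance(x, y, var):
    """Squared Euclidean distance with each component scaled by 1/variance,
    so dimensions with large natural spread are down-weighted."""
    return sum((a - b) ** 2 / v for a, b, v in zip(x, y, var))
```

With unit variances this reduces to the plain squared Euclidean distance, which is the baseline the paper's 95% figure corresponds to.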


Character Recognition Algorithm in Low-Quality Legacy Contents Based on Alternative End-to-End Learning (대안적 통째학습 기반 저품질 레거시 콘텐츠에서의 문자 인식 알고리즘)

  • Lee, Sung-Jin;Yun, Jun-Seok;Park, Seon-hoo;Yoo, Seok Bong
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1486-1494 / 2021
  • Character recognition is required on various platforms, such as smart parking and text-to-speech, and many studies seek to improve its performance through new approaches. However, when low-quality images are used for character recognition, the resolution mismatch between training and test images degrades accuracy. To solve this problem, this paper designs an end-to-end neural network that combines image super-resolution and character recognition, so that the recognition model remains robust to data of varying quality, and implements an alternative end-to-end learning algorithm to train the whole network. Alternative end-to-end learning and recognition performance were tested on license plate images, chosen from among various text images, and the tests verified the effectiveness of the proposed algorithm.

L1-norm Regularization for State Vector Adaptation of Subspace Gaussian Mixture Model (L1-norm regularization을 통한 SGMM의 state vector 적응)

  • Goo, Jahyun;Kim, Younggwan;Kim, Hoirin
    • Phonetics and Speech Sciences / v.7 no.3 / pp.131-138 / 2015
  • In this paper, we propose L1-norm regularization for state vector adaptation of the subspace Gaussian mixture model (SGMM). When designing a speaker adaptation system with a GMM-HMM acoustic model, MAP is the most typical technique considered; in the MAP adaptation procedure, however, a large number of parameters must be updated simultaneously. Sparse adaptation, such as L1-norm regularization or sparse MAP, can cope with this, but its performance falls short of MAP adaptation. The SGMM, in contrast, does not suffer from sparse adaptation as much as the GMM-HMM, because each Gaussian mean vector in the SGMM is defined as a weighted sum of basis vectors, which is much more robust to parameter fluctuation. Since only a few adaptation techniques are appropriate for the SGMM, the proposed method can be powerful, especially when adaptation data are limited. Experimental results show that the error reduction rate of the proposed method is better than that of MAP adaptation of the SGMM, even with a small amount of adaptation data.
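L1-norm regularization of the kind described above is typically realized through the soft-thresholding (proximal) operator, which drives small parameter updates exactly to zero and so keeps the adapted state vector sparse. A generic sketch of that operator, not the paper's implementation:

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrinks x toward zero and
    zeroes any component whose magnitude is below lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def sparse_update(base, delta, lam):
    """Apply an L1-regularized update to a state vector: small
    components of the raw update `delta` are suppressed entirely,
    so only well-supported directions move when data are limited."""
    return [b + soft_threshold(d, lam) for b, d in zip(base, delta)]
```

With limited adaptation data, most components of `delta` stay below the threshold and the adapted vector remains close to the speaker-independent one, which matches the motivation given in the abstract.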

A Prior Model of Structural SVMs for Domain Adaptation

  • Lee, Chang-Ki;Jang, Myung-Gil
    • ETRI Journal / v.33 no.5 / pp.712-719 / 2011
  • In this paper, we study the problem of domain adaptation for structural support vector machines (SVMs). We consider a number of domain adaptation approaches for structural SVMs and evaluate them on named entity recognition, part-of-speech tagging, and sentiment classification problems. Finally, we show that a prior model for structural SVMs outperforms other domain adaptation approaches in most cases. Moreover, the training time for this prior model is reduced compared to other domain adaptation methods with improvements in performance.