• Title/Summary/Keyword: 음성인식률 (speech recognition rate)


A Study on the Speaker Adaptation in CDHMM (CDHMM의 화자적응에 관한 연구)

  • Kim, Gwang-Tae
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.2
    • /
    • pp.116-127
    • /
    • 2002
  • A new approach is proposed to improve the speaker adaptation algorithm for a CDHMM speech recognizer by varying the number of observation density functions. The proposed method uses observation density functions with more than one mixture per state to represent speech characteristics in detail. The number of mixtures in each state is determined by the number of frames and by the determinant of the variance, respectively, and the MAP parameters are then estimated for every mixture obtained by these two methods. In addition, the state segmentation required for speaker adaptation segments the adaptation speech more precisely by using a speaker-independent model, trained on a sufficient database, as a priori knowledge, and the state duration distribution is used to adapt to duration differences arising from the speaker's utterance habits and speaking rate. The recognition rates of the proposed methods are significantly higher than that of the conventional method using one mixture per state.
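
The MAP parameter estimation mentioned above is, for Gaussian mixture observation densities, commonly realized as an interpolation between prior means and data-driven means. The numpy sketch below shows that interpolation; the relevance factor `tau`, the function name, and the accumulator layout are assumptions for illustration, not the paper's notation.

```python
import numpy as np

def map_adapt_means(prior_means, frames, posteriors, tau=16.0):
    """MAP-adapt Gaussian mixture means from adaptation frames.

    prior_means : (M, D) speaker-independent mixture means
    frames      : (T, D) adaptation feature vectors (e.g. MFCCs)
    posteriors  : (T, M) occupation probabilities gamma_t(m)
    tau         : relevance factor controlling the prior weight (assumed value)
    """
    # Zeroth- and first-order sufficient statistics per mixture
    n_m = posteriors.sum(axis=0)                      # (M,)
    f_m = posteriors.T @ frames                       # (M, D)

    # ML estimate of each mean from the adaptation data
    ml_means = f_m / np.maximum(n_m, 1e-8)[:, None]

    # MAP interpolation between prior and data-driven estimates
    alpha = (n_m / (n_m + tau))[:, None]              # (M, 1)
    return alpha * ml_means + (1.0 - alpha) * prior_means
```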

Efficient Iris Recognition using Deep-Learning Convolution Neural Network (딥러닝 합성곱 신경망을 이용한 효율적인 홍채인식)

  • Choi, Gwang-Mi;Jeong, Yu-Jeong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.3
    • /
    • pp.521-526
    • /
    • 2020
  • This paper presents an improved HOLP neural network that adds 25 average values to a typical HOLP neural network, which takes as input 25 feature-vector values obtained with the higher-order local autocorrelation function, a function well suited to extracting invariant features of iris images. For comparison across different deep-learning structures, the iris recognition rate is evaluated using a back-propagation neural network, which performs well in the speech and image domains, and a convolutional neural network that integrates the feature extractor and the classifier.
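
As a rough illustration of the higher-order local autocorrelation idea, the sketch below accumulates products of neighboring pixels under a few hand-picked 3x3 displacement masks; the masks shown are only an illustrative subset and do not reproduce the paper's 25-feature configuration.

```python
import numpy as np

# A few illustrative 3x3 displacement masks (subset of the classic HLAC set);
# each mask lists pixel offsets whose product is accumulated over the image.
MASKS = [
    [(0, 0)],                       # 0th order: sum of intensities
    [(0, 0), (0, 1)],               # 1st order: horizontal pair
    [(0, 0), (1, 0)],               # 1st order: vertical pair
    [(0, 0), (0, 1), (1, 0)],       # 2nd order: L-shaped triple
    [(0, 0), (1, 1), (-1, -1)],     # 2nd order: diagonal triple
]

def hlac_features(image):
    """Return one local-autocorrelation feature per mask for a 2-D array."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    feats = []
    for mask in MASKS:
        acc = np.ones((h - 2, w - 2))
        for dy, dx in mask:
            acc = acc * img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        feats.append(acc.sum())
    return np.array(feats)
```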

Experiments on Extraction of Non-Parametric Warping Functions for Speaker Normalization (화자 정규화를 위한 비정형 워핑함수 도출에 관한 실험)

  • Shin, Ok-Keun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.5
    • /
    • pp.255-261
    • /
    • 2005
  • In this paper, experiments are conducted to extract a set of non-parametric warping functions and to examine the characteristics of the warping among speakers' utterances. For this purpose, MFCC and LP spectra of vowels are used to choose a reference spectrum for each vowel as well as representative spectra for each speaker. These spectra are compared by DTW to obtain the warping function of each speaker, and the set of warping functions is then defined by clustering the warping functions of all speakers. Noting that male and female warping functions have shapes similar to a piecewise linear function and a power function, respectively, a new hybrid set of warping functions is also defined. The effectiveness of the extracted warping functions is evaluated in phone-level recognition experiments, and improvements in accuracy are observed with both sets of warping functions.
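
The DTW comparison of spectra used above to derive a warping function can be sketched directly; the absolute-difference local cost and the unconstrained alignment below are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

def dtw_path(ref, test):
    """Align two 1-D spectra by dynamic time warping and return the warp path.

    ref, test : 1-D arrays of spectral magnitudes over frequency bins.
    Returns a list of (ref_bin, test_bin) index pairs, i.e. a non-parametric
    frequency-warping function sampled at the aligned bins.
    """
    n, m = len(ref), len(test)
    cost = np.abs(ref[:, None] - test[None, :])       # local distance matrix
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = cost[i, j] + prev

    # Backtrack from the end to recover the optimal alignment path
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while i > 0 or j > 0:
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        moves = [(a, b) for a, b in moves if a >= 0 and b >= 0]
        i, j = min(moves, key=lambda ab: acc[ab])
        path.append((i, j))
    return path[::-1]
```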

Lip-reading System based on Bayesian Classifier (베이지안 분류를 이용한 립 리딩 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • Pronunciation recognition systems that use only video information and ignore voice information can be applied to various customized services. In this paper, we develop a system that applies a Bayesian classifier to distinguish Korean vowels via lip shapes in images. We extract feature vectors from the lip shapes of facial images and apply them to the designed machine learning model. Our experiments show that the system's recognition rate is 94% for the pronunciation of 'A', and the system's average recognition rate is approximately 84%, which is higher than that of the CNN tested for comparison. Our results show that our Bayesian classification method with feature values from lip region landmarks is efficient on a small training set. Therefore, it can be used for application development on limited hardware such as mobile devices.
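
A Bayesian classifier over lip-landmark feature vectors, in the spirit of the system above, can be sketched as a Gaussian naive Bayes model; the feature layout and the vowel labels in the usage note are illustrative assumptions.

```python
import numpy as np

class GaussianNaiveBayes:
    """Per-class Gaussian model with independent features (naive Bayes)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.vars = {c: X[y == c].var(axis=0) + 1e-6 for c in self.classes}
        self.priors = {c: np.mean(y == c) for c in self.classes}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            # log p(x|c) + log p(c) under the feature-independence assumption
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.vars[c])
                               + (X - self.means[c]) ** 2 / self.vars[c], axis=1)
            scores.append(ll + np.log(self.priors[c]))
        return self.classes[np.argmax(np.stack(scores, axis=1), axis=1)]

# Hypothetical usage: X holds lip-landmark feature vectors, y holds vowel
# labels such as 'A', 'E', 'I', 'O', 'U'.
```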

Human Touching Behavior Recognition based on Neural Network in the Touch Detector using Force Sensors (힘 센서를 이용한 접촉감지부에서 신경망기반 인간의 접촉행동 인식)

  • Ryu, Joung-Woo;Park, Cheon-Shu;Sohn, Joo-Chan
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.10
    • /
    • pp.910-917
    • /
    • 2007
  • Of the possible interactions between human and robot, touch is an important means of providing human beings with emotional relief. However, most previous studies have focused on interactions based on voice and images. In this paper, a method of recognizing human touching behaviors is proposed for developing a robot that can interact naturally with humans through touch. In this method, the recognition process is divided into a pre-processing phase and a recognition phase. In the pre-processing phase, recognizable features are calculated from the data generated by the touch detector, which was fabricated using force sensors; the force sensor used is an FSR (force sensing resistor). The recognition phase classifies human touching behaviors using a multi-layer perceptron, a neural network model. Experimental data were generated by six men performing three types of touching behaviors: 'hitting', 'stroking' and 'tickling'. When a recognizer was trained for each user and evaluated by cross-validation, the average recognition rate was 82.9%, while a single recognizer for all users showed an average recognition rate of 74.5%.
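
A minimal numpy sketch of a multi-layer perceptron classifier of the kind used above; the single hidden layer, tanh activation, and plain gradient-descent training are assumptions, not the paper's configuration.

```python
import numpy as np

class SimpleMLP:
    """One-hidden-layer perceptron trained with plain gradient descent."""

    def __init__(self, n_in, n_hidden, n_classes, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def _forward(self, X):
        h = np.tanh(X @ self.W1 + self.b1)
        logits = h @ self.W2 + self.b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        return h, p / p.sum(axis=1, keepdims=True)

    def train_step(self, X, y_onehot):
        h, p = self._forward(X)
        # Softmax cross-entropy gradients, backpropagated through tanh
        d_logits = (p - y_onehot) / len(X)
        d_h = d_logits @ self.W2.T * (1 - h ** 2)
        self.W2 -= self.lr * h.T @ d_logits
        self.b2 -= self.lr * d_logits.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_h
        self.b1 -= self.lr * d_h.sum(axis=0)

    def predict(self, X):
        return self._forward(X)[1].argmax(axis=1)
```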

A Study on the Diphone Recognition of Korean Connected Words and Eojeol Reconstruction (한국어 연결단어의 이음소 인식과 어절 형성에 관한 연구)

  • ;Jeong, Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.46-63
    • /
    • 1995
  • This thesis describes an unlimited-vocabulary connected speech recognition system using a Time Delay Neural Network (TDNN). The recognition unit is the diphone, which includes the transition section between two phonemes, and the number of diphone units is 329. The recognition of Korean connected speech consists of three parts: feature extraction from the input speech signal, diphone recognition, and post-processing. In the feature extraction part, diphone intervals are extracted from the input speech signal, and feature vectors of 16 filter-bank coefficients are then calculated for each frame in the diphone interval. The diphone recognition part has a three-stage hierarchical structure and is carried out using 30 Time Delay Neural Networks; in particular, the structure of the TDNN is modified so as to increase the recognition rate. In the post-processing part, mis-recognized diphone strings are corrected using phoneme transition probabilities and phoneme confusion probabilities, and eojeols (Korean words or phrases) are then formed by combining the recognized diphones.
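
A time-delay layer of the kind used in a TDNN is essentially a 1-D convolution over successive frames; the sketch below applies one such layer to filter-bank frames, with the 3-frame delay window and the layer sizes chosen purely for illustration.

```python
import numpy as np

def time_delay_layer(frames, weights, bias):
    """Apply one TDNN (time-delay) layer to a sequence of feature frames.

    frames  : (T, D) filter-bank feature vectors, one row per frame
    weights : (K, D, H) kernel spanning K consecutive frames (the delays)
    bias    : (H,) bias per output unit
    Returns (T - K + 1, H) activations, one per valid window position.
    """
    T, D = frames.shape
    K, _, H = weights.shape
    out = np.empty((T - K + 1, H))
    for t in range(T - K + 1):
        window = frames[t:t + K]                      # (K, D) delayed inputs
        out[t] = np.tanh(np.einsum('kd,kdh->h', window, weights) + bias)
    return out

# Hypothetical usage: 16 filter-bank coefficients, a 3-frame delay window,
# and 8 hidden units.
rng = np.random.default_rng(0)
acts = time_delay_layer(rng.normal(size=(40, 16)),
                        rng.normal(scale=0.1, size=(3, 16, 8)),
                        np.zeros(8))
```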

Efficient context dependent process modeling using state tying and decision tree-based method (상태 공유와 결정트리 방법을 이용한 효율적인 문맥 종속 프로세스 모델링)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.3
    • /
    • pp.369-377
    • /
    • 2010
  • In HMM (Hidden Markov Model)-based vocabulary recognition systems, models unseen during training lead to a low recognition rate. When the recognition vocabulary is modified or extended, the models must be rebuilt from a newly collected database and retrained, which incurs additional cost and time. This study proposes an efficient context-dependent modeling method using decision tree-based state tying. The proposed method reduces the need to rebuild models and offers robustness and accuracy in context-dependent acoustic modeling. It also reduces the number of models and handles models unseen in training by substituting a likely context-dependent phoneme model for the unseen model. The resulting system shows a vocabulary-dependent recognition rate of 98.01% and a vocabulary-independent recognition rate of 97.38%.
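
Decision tree-based state tying splits context-dependent states with yes/no phonetic questions and ties the states that fall into the same leaf. The sketch below shows a single greedy split chosen by log-likelihood gain under a simplified shared-Gaussian model of the state statistics; the question set and the statistics format are assumptions, not the paper's procedure.

```python
import numpy as np

def gaussian_loglik(stats):
    """Approximate log-likelihood of pooled state occupancies under one Gaussian.

    stats : list of (count, mean_vector) per context-dependent state; for
    simplicity the shared diagonal variance is estimated from the pooled means.
    """
    counts = np.array([c for c, _ in stats], dtype=float)
    means = np.stack([m for _, m in stats])
    pooled = np.average(means, axis=0, weights=counts)
    var = np.average((means - pooled) ** 2, axis=0, weights=counts) + 1e-6
    n = counts.sum()
    return -0.5 * n * np.sum(np.log(2 * np.pi * var) + 1.0)

def best_split(states, questions):
    """Pick the phonetic question whose yes/no split maximizes likelihood gain.

    states    : dict mapping a context label (e.g. 'a-t+i') to (count, mean)
    questions : dict mapping a question name to the set of labels answering yes
    """
    base = gaussian_loglik(list(states.values()))
    best = (None, 0.0)
    for name, yes_set in questions.items():
        yes = [s for lbl, s in states.items() if lbl in yes_set]
        no = [s for lbl, s in states.items() if lbl not in yes_set]
        if not yes or not no:
            continue
        gain = gaussian_loglik(yes) + gaussian_loglik(no) - base
        if gain > best[1]:
            best = (name, gain)
    return best
```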

Effective Speaker Recognition Technology Using Noise (잡음을 활용한 효과적인 화자 인식 기술)

  • Ko, Suwan;Kang, Minji;Bang, Sehee;Jung, Wontae;Lee, Kyungroul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.259-262
    • /
    • 2022
  • In the information age, as smartphones have become widespread and real-time Internet access has become commonplace, user authentication for verifying one's identity is an essential requirement. The representative user authentication technique is password authentication using an ID and password, but such credentials, entered from a keyboard, are not only inconvenient for people such as the visually impaired, those with limited hand mobility, and the elderly, who must remember and type the IDs and passwords demanded by many services, but are also exposed to attacks such as keyloggers. To address these problems, biometric authentication based on one's own physical characteristics has been gaining attention, and among biometrics, authenticating users by voice can effectively overcome the limitations of password authentication. Such speaker recognition technology is already used in speech recognition services such as KT's GiGA Genie, but because a voice is comparatively easy to forge or alter, it has lower accuracy than authentication based on fingerprints or irises, and its recognition error rate is also high. To apply speaker recognition as a voice-based user authentication technology, we trained on users' voices and measured accuracy against test voices using the MFCC algorithm, which extracts the frequency characteristics of the voice. We also verified that authentication can be bypassed with high probability when a malicious attacker imitates the user's voice or obtains it by, for example, recording the user's voice with a microphone. Accordingly, to improve the accuracy of speaker recognition more effectively, this paper proposes a method that recognizes the speaker after mixing noise into the voice. Because the noise is reflected very sensitively in the accuracy, the proposed method is expected to neutralize the existing authentication-bypass methods and to provide a more effective voice-based speaker recognition technology.
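
A minimal sketch of the MFCC-based matching described above, using librosa for feature extraction and cosine similarity of averaged MFCC vectors as the score; the averaging scheme and the acceptance threshold in the usage note are assumptions made for illustration.

```python
import numpy as np
import librosa  # assumed available for MFCC extraction

def voice_embedding(wav_path, sr=16000, n_mfcc=13):
    """Summarize an utterance as the mean MFCC vector over all frames."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, T)
    return mfcc.mean(axis=1)

def speaker_score(enrolled, test):
    """Cosine similarity between the enrolled and test voice embeddings."""
    return float(np.dot(enrolled, test) /
                 (np.linalg.norm(enrolled) * np.linalg.norm(test) + 1e-12))

# Hypothetical usage with an assumed acceptance threshold of 0.8:
# enrolled = voice_embedding("enroll.wav")
# accepted = speaker_score(enrolled, voice_embedding("test.wav")) > 0.8
```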

On Implementing a Robust Speech Recognition System Based on a Signal Bias Removal Algorithm (신호편의제거 알고리듬에 기초한 강인한 음성 인식시스템의 구현)

  • 임계종;계영철;구명완
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.67-72
    • /
    • 2000
  • Based on the signal bias removal (SBR) algorithm for compensating corrupted speech, this paper presents a new algorithm that is independent of the environment, minimizes the amount of computation, and is readily applicable to conventional recognition systems. To this end, a multiple-bias algorithm and a partial codebook search algorithm are added to the conventional SBR algorithm. The simulation results show that combining the two proposed algorithms reduces the computation time to 1/8 and improves the recognition rate from 77.58% for the conventional system to 81.32%.
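
Signal bias removal estimates a cepstral bias as the average deviation of the observed frames from their nearest clean-codebook entries and subtracts it. The sketch below shows that single-bias iteration; the multiple-bias and partial codebook search extensions proposed in the paper are not reproduced.

```python
import numpy as np

def signal_bias_removal(frames, codebook, n_iter=3):
    """Estimate and remove a cepstral bias from the observed frames.

    frames   : (T, D) observed cepstral vectors from the corrupted utterance
    codebook : (K, D) clean-speech codebook centroids
    Returns the compensated frames and the final bias estimate.
    """
    bias = np.zeros(frames.shape[1])
    for _ in range(n_iter):
        compensated = frames - bias
        # Assign each compensated frame to its nearest codebook centroid
        d = np.linalg.norm(compensated[:, None, :] - codebook[None, :, :], axis=2)
        nearest = codebook[d.argmin(axis=1)]
        # The bias is the mean deviation of the observations from the codebook
        bias = (frames - nearest).mean(axis=0)
    return frames - bias, bias
```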

The Effects of Secondhand Smoking on Articulators Based on Phonetic Analysis (음성학적 분석 기반의 간접흡연이 조음기관에 미치는 영향)

  • Seo, Kyoung-Won;Kang, Deok-Hyun;Bae, Jung-Su;Jang, Yong-Jo;Yean, Yong-Hem;Lim, Soon-Yong;Min, Ji-Seon;Kim, Bong-Hyun;Ka, Min-Kyoung;Cho, Dong-Uk
    • Annual Conference of KIPS
    • /
    • 2010.11a
    • /
    • pp.648-651
    • /
    • 2010
  • Riding the well-being trend, more people now take care of their own health, and as negative perceptions of smoking spread, a strong wave of smoking cessation is under way. Even for those who quit, however, the cigarette smoke around them still harms the body, so it is very difficult to escape tobacco smoke. In fact, people with a smoking spouse have a 40% higher incidence of heart disease and a 30% higher incidence of lung cancer than those without. Therefore, in this paper, to analyze the effect of secondhand smoking on the human articulatory organs, experiments were conducted to measure, compare, and analyze changes in the voice caused by secondhand smoking. To this end, voices were collected before and after exposure to secondhand smoke; among phonetic-analysis techniques, vocal-fold vibration measures such as Pitch, Jitter, and Shimmer were applied, along with Formant analysis of the body's resonance organs, to study the effect of secondhand smoking on the voice.
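
As a rough illustration of the vocal-fold vibration measures named above, the sketch below computes mean pitch, local jitter, and local shimmer from pitch-period durations and per-period peak amplitudes; extracting those periods from a recording (e.g. with a tool such as Praat) is outside the sketch, and the 'local' formulas are the common definitions, assumed rather than taken from the paper.

```python
import numpy as np

def voice_measures(period_s, peak_amp):
    """Compute mean pitch, local jitter, and local shimmer.

    period_s : 1-D array of successive pitch-period durations in seconds
    peak_amp : 1-D array of the peak amplitude of each pitch period
    """
    period_s = np.asarray(period_s, dtype=float)
    peak_amp = np.asarray(peak_amp, dtype=float)

    mean_pitch_hz = 1.0 / period_s.mean()

    # Local jitter: mean absolute difference of consecutive periods,
    # divided by the mean period (expressed as a percentage).
    jitter = np.mean(np.abs(np.diff(period_s))) / period_s.mean() * 100.0

    # Local shimmer: mean absolute difference of consecutive peak amplitudes,
    # divided by the mean amplitude (expressed as a percentage).
    shimmer = np.mean(np.abs(np.diff(peak_amp))) / peak_amp.mean() * 100.0

    return mean_pitch_hz, jitter, shimmer
```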