• Title/Summary/Keyword: 화자 검증 (speaker verification)


Realization of a Text-Independent Speaker Identification System with Frame-Level Likelihood Normalization (프레임레벨유사도정규화를 적용한 문맥독립화자식별시스템의 구현)

  • 김민정;석수영;김광수;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.1
    • /
    • pp.8-14
    • /
    • 2002
  • In this paper, we implemented a real-time text-independent speaker recognition system based on Gaussian mixture models and applied a frame-level likelihood normalization method, which has proven effective in verification systems. The system consists of three parts: front-end, training, and recognition. In the front-end, cepstral mean normalization and silence removal are applied to cope with variations in the speaker's speaking style. In training, a Gaussian mixture model is used to model each speaker's acoustic features, and maximum likelihood estimation is used to optimize the GMM parameters. In recognition, likelihood scores are computed at the frame level from the speaker models and the test data. Text-independent sentences were used as test material. The ETRI 445 and KLE 452 databases were used for training and testing, with cepstral coefficients and their regression (delta) coefficients as feature parameters. The experimental results show that the frame-level likelihood method achieves higher recognition rates than the conventional method, independent of the number of registered speakers.

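The frame-level likelihood normalization described in the abstract above can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: scikit-learn's GaussianMixture, the helper names, and the per-frame max normalization are all assumptions.

```python
# Minimal sketch of frame-level likelihood normalization for
# GMM-based text-independent speaker identification (not the paper's code).
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_gmms(features_per_speaker, n_components=32):
    """Fit one GMM per enrolled speaker on MFCC-like feature frames."""
    gmms = {}
    for speaker, feats in features_per_speaker.items():  # feats: (n_frames, n_dims)
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmms[speaker] = gmm.fit(feats)
    return gmms

def identify(test_feats, gmms):
    """Score each speaker with frame-level normalization: every frame's
    log-likelihood is normalized by the best score any model achieves on
    that frame before summing over frames."""
    speakers = list(gmms)
    # per-frame log-likelihoods, shape (n_speakers, n_frames)
    ll = np.stack([gmms[s].score_samples(test_feats) for s in speakers])
    ll_norm = ll - ll.max(axis=0, keepdims=True)   # frame-level normalization
    return speakers[int(np.argmax(ll_norm.sum(axis=1)))]
```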

Driver Verification System Using Biometrical GMM Supervector Kernel (생체기반 GMM Supervector Kernel을 이용한 운전자검증 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.9 no.3
    • /
    • pp.67-72
    • /
    • 2010
  • This paper presents a biometric driver verification system for the in-car environment based on the analysis of speech and face information. Mel-scale Frequency Cepstral Coefficients (MFCCs) are used for speaker verification from the speech signal. For face verification, the face region is detected by the AdaBoost algorithm, and a dimension-reduced feature vector is extracted from the face region using principal component analysis. The extracted speech and face feature vectors are then applied to an SVM kernel with a Gaussian Mixture Model (GMM) supervector. The experimental results of the proposed approach show a clear improvement over a simple GMM or SVM approach.
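
A hedged sketch of how a GMM supervector could be built and passed to an SVM, roughly in the spirit of the abstract above; the crude MAP mean adaptation, the relevance factor, and the use of scikit-learn are assumptions rather than the paper's implementation.

```python
# Sketch: utterance -> GMM-supervector -> SVM verifier (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def gmm_supervector(frames, ubm):
    """Adapt only the UBM means toward the utterance (simplified MAP step)
    and stack them into a single fixed-length supervector."""
    post = ubm.predict_proba(frames)                 # (n_frames, n_components)
    n_k = post.sum(axis=0) + 1e-6                    # zeroth-order statistics
    f_k = post.T @ frames                            # first-order statistics
    r = 16.0                                         # relevance factor (assumed)
    alpha = (n_k / (n_k + r))[:, None]
    means = alpha * (f_k / n_k[:, None]) + (1 - alpha) * ubm.means_
    return means.ravel()

# ubm = GaussianMixture(64, covariance_type="diag").fit(background_frames)
# X = np.stack([gmm_supervector(f, ubm) for f in enrolment_frame_sets])
# clf = SVC(kernel="rbf").fit(X, labels)   # accept/reject per driver
```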

The Method of Resident Management Using the Voice Identification System (화자인증시스템을 이용한 입주자 관리 방안)

  • Yang, Dong-Suk;Kee, Ho-Young
    • Annual Conference of KIPS
    • /
    • 2015.10a
    • /
    • pp.1117-1118
    • /
    • 2015
  • Public rental housing is one of the important welfare policies for stabilizing housing for low-income households. However, verifying whether a tenant actually resides in the unit is costly and difficult, so contracts are often renewed even when the tenant no longer lives there. This study proposes a resident management scheme that uses a speaker verification system to confirm actual residence. A system implemented according to the proposed scheme is expected to verify residence at low cost and with high reliability.

Frame Selection, Hybrid, Modified Weighting Model Rank Method for Robust Text-independent Speaker Identification (강건한 문맥독립 화자식별을 위한 프레임 선택방법, 복합방법, 수정된 가중모델순위 방법)

  • 김민정;오세진;정호열;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.8
    • /
    • pp.735-743
    • /
    • 2002
  • In this paper, we propose three new text-independent speaker identification methods. First, to exclude frames that do not carry enough speaker-specific vocal information from the maximum-likelihood calculation, we propose the FS (Frame Selection) method. This approach selects important frames by evaluating the difference between the largest and second-largest likelihood in each frame, and uses only those frames when computing the likelihood score. The second proposed method, called Hybrid, combines FS with WMR (Weighting Model Rank): it determines the claimed speaker using exponential weighting functions, instead of the likelihood itself, only on the frames selected by FS. The last proposed method, called MWMR (Modified WMR), considers both the likelihood itself and its relative position when the claimed speaker is determined, unlike WMR, which takes into account only the relative position of the likelihood. Speaker identification experiments show that all of the proposed methods achieve higher identification rates than ML. In addition, Hybrid and MWMR yield identification rates about 2% and 3% higher than WMR, respectively.
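
The Frame Selection and Hybrid scoring ideas lend themselves to a compact sketch; the threshold value and the exact exponential weight form below are placeholders, not the values used in the paper.

```python
# Illustrative sketch of Frame Selection (FS) and Hybrid scoring.
import numpy as np

def frame_selection_score(ll, threshold=1.0):
    """ll: (n_models, n_frames) per-frame log-likelihoods.
    Keep only frames where the best model beats the runner-up by at least
    `threshold`, then sum the surviving frame scores per model."""
    top2 = np.sort(ll, axis=0)[-2:, :]            # runner-up and best per frame
    keep = (top2[1] - top2[0]) >= threshold       # discriminative frames only
    return ll[:, keep].sum(axis=1)                # per-model scores

def hybrid_score(ll, threshold=1.0):
    """Hybrid: on the FS-selected frames, score each model by an exponential
    weight of its per-frame rank instead of the raw likelihood."""
    top2 = np.sort(ll, axis=0)[-2:, :]
    keep = (top2[1] - top2[0]) >= threshold
    ranks = ll[:, keep].argsort(axis=0).argsort(axis=0)   # 0 = worst model per frame
    n_models = ll.shape[0]
    return np.exp(ranks - (n_models - 1)).sum(axis=1)     # assumed weight form
```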

Effective Speaker Recognition Technology Using Noise (잡음을 활용한 효과적인 화자 인식 기술)

  • Ko, Suwan;Kang, Minji;Bang, Sehee;Jung, Wontae;Lee, Kyungroul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.259-262
    • /
    • 2022
  • With the spread of smartphones and ubiquitous real-time Internet access, user authentication for identifying oneself has become essential. The most common approach is password authentication using an ID and password, but authentication information entered through a keyboard is inconvenient for visually impaired users, people with limited use of their hands, and the elderly, who must remember and type the IDs and passwords demanded by many services; it is also exposed to attacks such as keyloggers. To address these problems, biometric authentication based on a person's physical characteristics has attracted attention, and authenticating users by voice can effectively overcome the limitations of password authentication. Speaker recognition is already used in voice interfaces such as KT's GiGA Genie, but because a voice is relatively easy to forge or imitate, its accuracy is lower than that of fingerprint or iris authentication and its recognition error rate is higher. To apply speaker recognition as a voice-based authentication technology, we trained on users' voices and measured the accuracy against test voices using the MFCC algorithm, which extracts the spectral characteristics of the voice. We also verified that authentication can be bypassed with high probability when a malicious attacker imitates the user's voice or obtains it, for example, by recording it with a microphone. To improve the accuracy of speaker recognition more effectively, this paper therefore proposes a method that mixes noise into the voice before recognizing the speaker. Because the proposed method makes the accuracy highly sensitive to the embedded noise, it is expected to neutralize existing authentication-bypass techniques and provide more robust voice-based speaker recognition.

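A rough sketch of the kind of pipeline the abstract describes: extracting MFCC features for enrolment/test comparison and mixing a known noise signature into the voice. The file names, SNR value, cosine threshold, and utterance-level averaging are illustrative assumptions, not the authors' settings.

```python
# Sketch: MFCC embedding, noise mixing, and a simple similarity check.
import numpy as np
import librosa

def mfcc_embedding(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return mfcc.mean(axis=1)                                  # utterance-level vector

def add_noise(y, noise, snr_db=10.0):
    """Mix a known noise signature into the voice at a target SNR, so that a
    replayed or imitated voice lacking the exact noise no longer matches."""
    noise = np.resize(noise, y.shape)
    gain = np.sqrt((y ** 2).mean() / ((noise ** 2).mean() * 10 ** (snr_db / 10)))
    return y + gain * noise

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# enrol = mfcc_embedding("enrol.wav"); test = mfcc_embedding("test.wav")
# accept = cosine(enrol, test) > 0.9   # threshold is an assumption
```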

Robust Speaker Identification using Independent Component Analysis (독립성분 분석을 이용한 강인한 화자식별)

  • Jang, Gil-Jin;Oh, Yung-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.5
    • /
    • pp.583-592
    • /
    • 2000
  • This paper proposes a feature parameter transformation method using independent component analysis (ICA) for speaker identification. The proposed method assumes that cepstral vectors from speech recorded under various channel conditions are formed by a linear combination of characteristic functions with random channel noise added, and transforms them into new vectors using ICA. The resulting vector space emphasizes recurring speaker information and suppresses random channel distortions. Experimental results show that the transformation is effective in improving the speaker identification system.

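A minimal sketch of learning an ICA transform on pooled cepstral vectors and applying it before identification; scikit-learn's FastICA stands in here for the specific ICA algorithm used in the paper.

```python
# Sketch: ICA-based transformation of cepstral features prior to identification.
import numpy as np
from sklearn.decomposition import FastICA

def fit_ica_transform(cepstra, n_components=None):
    """cepstra: (n_frames, n_dims) cepstral vectors pooled over channel conditions.
    Learn an unmixing matrix; the transformed axes aim to separate speaker-related
    structure from channel-like distortions."""
    ica = FastICA(n_components=n_components, max_iter=1000)
    ica.fit(cepstra)
    return ica

# ica = fit_ica_transform(train_cepstra)
# train_feats = ica.transform(train_cepstra)   # use these for speaker model training
# test_feats  = ica.transform(test_cepstra)    # and for identification scoring
```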

The Study on Speaker Change Verification Using SNR based weighted KL distance (SNR 기반 가중 KL 거리를 활용한 화자 변화 검증에 관한 연구)

  • Cho, Joon-Beom;Lee, Ji-eun;Lee, Kyong-Rok
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.6
    • /
    • pp.159-166
    • /
    • 2017
  • In this paper, we conducted experiments to improve the verification performance of speaker change detection on broadcast news. The approach is to enhance the noisy input speech and to apply a KL distance $D_s$ that uses an SNR-based weighting function $w_m$. The baseline is a GMM-UBM based speaker change verification system using the KL distance D (Experiment 0). Experiment 1 adds enhancement of the noisy input speech using MMSE Log-STSA. Experiment 2 applies the new KL distance $D_s$ to the system of Experiment 1. Experiments were conducted under the condition of 0% MDR in order not to miss any speaker change points. The FAR of Experiment 0 was 71.5%. The FAR of Experiment 1 was 67.3%, an improvement of 4.2 percentage points over Experiment 0. The FAR of Experiment 2 was 60.7%, an improvement of 10.8 percentage points over Experiment 0.
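
The SNR-weighted distance can be sketched as a symmetrised Kullback-Leibler distance between diagonal-Gaussian models of the two windows around a candidate change point; the exact form of the paper's weighting function $w_m$ is not reproduced here, and a normalised, non-negative per-dimension SNR weight is assumed instead.

```python
# Sketch: SNR-weighted symmetric KL distance for speaker change verification.
import numpy as np

def gaussian_stats(frames):
    """Diagonal-Gaussian mean and variance of a feature window (n_frames, n_dims)."""
    return frames.mean(axis=0), frames.var(axis=0) + 1e-8

def weighted_kl_distance(win_a, win_b, snr_db):
    """win_a, win_b: features from the two windows around a candidate change point.
    snr_db: per-dimension SNR estimate used to weight the per-dimension KL terms."""
    mu1, v1 = gaussian_stats(win_a)
    mu2, v2 = gaussian_stats(win_b)
    kl12 = 0.5 * (np.log(v2 / v1) + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)
    kl21 = 0.5 * (np.log(v1 / v2) + (v2 + (mu2 - mu1) ** 2) / v1 - 1.0)
    w = np.maximum(snr_db, 0.0)
    w = w / (w.sum() + 1e-8)                  # assumed form of the weight w_m
    return float((w * (kl12 + kl21)).sum())   # large distance -> speaker change
```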

Text Independent Speaker Verification Using Dominant State Information of HMM-UBM (HMM-UBM의 주 상태 정보를 이용한 음성 기반 문맥 독립 화자 검증)

  • Shon, Suwon;Rho, Jinsang;Kim, Sung Soo;Lee, Jae-Won;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.2
    • /
    • pp.171-176
    • /
    • 2015
  • We present a speaker verification method that extracts i-vectors based on the dominant-state information of a Hidden Markov Model (HMM) - Universal Background Model (UBM). An ergodic HMM is used to estimate the UBM so that the various characteristics of individual speakers can be effectively captured. Unlike Gaussian Mixture Model (GMM)-UBM based speaker verification systems, the proposed system obtains an i-vector for each HMM state. Among these, the feature i-vector is the one extracted from the specific state that carries the dominant-state information. Experiments validating the performance of the proposed system were conducted on the National Institute of Standards and Technology (NIST) 2008 Speaker Recognition Evaluation (SRE) database. As a result, a 12% improvement in equal error rate was attained.
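
A small sketch of selecting the dominant HMM-UBM state for an utterance; hmmlearn's GaussianHMM stands in for the authors' ergodic HMM-UBM training, and the subsequent i-vector extraction itself is not shown.

```python
# Sketch: dominant-state selection from an HMM-UBM before i-vector extraction.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# ubm = GaussianHMM(n_components=8, covariance_type="diag", n_iter=20)
# ubm.fit(background_frames)     # ergodic HMM-UBM trained on pooled background data

def dominant_state_frames(ubm, utt_frames):
    """Return the frames occupying the utterance's dominant state, i.e. the state
    with the largest accumulated posterior occupancy."""
    post = ubm.predict_proba(utt_frames)          # (n_frames, n_states)
    dominant = int(post.sum(axis=0).argmax())     # dominant state index
    mask = post.argmax(axis=1) == dominant
    return utt_frames[mask], dominant             # feed these into i-vector extraction
```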

Authentication Performance Optimization for Smart-phone based Multimodal Biometrics (스마트폰 환경의 인증 성능 최적화를 위한 다중 생체인식 융합 기법 연구)

  • Moon, Hyeon-Joon;Lee, Min-Hyung;Jeong, Kang-Hun
    • Journal of Digital Convergence
    • /
    • v.13 no.6
    • /
    • pp.151-156
    • /
    • 2015
  • In this paper, we propose a personal multimodal biometric authentication system based on face detection, face recognition, and speaker verification for the smartphone environment. The proposed system detects the face with the Modified Census Transform algorithm and then finds the eye positions within the face using a Gabor filter and the k-means algorithm. Preprocessing is performed on the detected face and eye positions, and recognition is carried out with the Linear Discriminant Analysis algorithm. In the speaker verification stage, Mel Frequency Cepstral Coefficient features are extracted from the endpoint-detected speech data, and the speaker is verified with the Dynamic Time Warping algorithm because the speech features change in real time. The proposed multimodal biometric system fuses the face and speech features (with the internal operations optimized through integer representation) for smartphone-based real-time face detection, face recognition, and speaker verification. By fusing the two modalities, the system can achieve reliable authentication with reasonable performance.
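
The DTW comparison between enrolment and test MFCC sequences, together with a simple score-level fusion, can be sketched as follows; the face branch, the integer-arithmetic optimization, and the acceptance threshold are omitted or assumed here.

```python
# Sketch: DTW distance between MFCC sequences and weighted score fusion.
import numpy as np

def dtw_distance(a, b):
    """a, b: (n_frames, n_mfcc) MFCC sequences. Classic O(len(a)*len(b)) DTW
    with Euclidean frame distance and a length-normalised path cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def fuse_scores(face_score, voice_score, w_face=0.5):
    """Simple weighted score-level fusion of the face and speaker branches."""
    return w_face * face_score + (1.0 - w_face) * voice_score

# voice_score = 1.0 / (1.0 + dtw_distance(enrol_mfcc, test_mfcc))
# accept = fuse_scores(face_score, voice_score) > 0.7   # threshold is an assumption
```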

Development of Advanced Personal Identification System Using Iris Image and Speech Signal (홍채와 음성을 이용한 고도의 개인확인시스템)

  • Lee, Dae-Jong;Go, Hyoun-Joo;Kwak, Keun-Chang;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.3
    • /
    • pp.348-354
    • /
    • 2003
  • This paper proposes a new algorithm for an advanced personal identification system using iris patterns and speech signals. Since the proposed algorithm adopts a fusion scheme to take advantage of both iris recognition and speaker identification, it is robust in noisy environments. To evaluate its performance, we compare it with iris recognition and speaker identification used individually. In the experiments, the proposed method showed an improvement of more than 56.7% over the iris recognition method and more than 10% over the speaker identification method at the high-security level. In noisy environments, the proposed method showed an improvement of more than 30% over the iris recognition method and more than 60% over the speaker identification method at the high-security level.
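
As a closing illustration, here is a hedged sketch of score-level fusion of iris and speech matcher scores with a noise-dependent weight; the normalisation ranges and the SNR-to-weight mapping are assumptions, not the paper's fusion rule.

```python
# Sketch: normalised, SNR-weighted fusion of iris and speaker scores.
import numpy as np

def fuse(iris_score, speech_score, snr_db, iris_range, speech_range):
    """Normalise each matcher's score to [0, 1] using development-set ranges,
    then down-weight the speech branch as the estimated SNR degrades."""
    i = (iris_score - iris_range[0]) / (iris_range[1] - iris_range[0] + 1e-8)
    s = (speech_score - speech_range[0]) / (speech_range[1] - speech_range[0] + 1e-8)
    w_speech = float(np.clip(snr_db / 30.0, 0.1, 0.9))  # assumed SNR-to-weight mapping
    return (1.0 - w_speech) * i + w_speech * s

# decision = fuse(iris_s, speech_s, snr_db=15.0,
#                 iris_range=(0.0, 1.0), speech_range=(-50.0, 0.0)) > 0.6
```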