• Title/Summary/Keyword: speaker verification (화자 검증)

Search results: 63

Optimal Feature Parameters Extraction for Speech Recognition of Ship's Wheel Orders (조타명령의 음성인식을 위한 최적 특징파라미터 검출에 관한 연구)

  • Moon, Serng-Bae; Chae, Yang-Bum; Jun, Seung-Hwan
    • Journal of the Korean Society of Marine Environment & Safety, v.13 no.2 s.29, pp.161-167, 2007
  • The goal of this paper is to develop a speech recognition system that can control a ship's autopilot. Feature parameters predicting the speaker's intention were extracted from sample wheel orders written in the SMCP (IMO Standard Marine Communication Phrases), and a post-recognition procedure was designed that uses these parameters to make a final decision from the list of candidate words. To evaluate the effectiveness of the parameters and the procedure, a basic experiment was conducted with a total of 525 wheel orders. The experimental results show that the proposed pattern recognition procedure improves accuracy by about 42.3% over the pre-recognition procedure.

Performance Comparison of Deep Feature Based Speaker Verification Systems (깊은 신경망 특징 기반 화자 검증 시스템의 성능 비교)

  • Kim, Dae Hyun; Seong, Woo Kyeong; Kim, Hong Kook
    • Phonetics and Speech Sciences, v.7 no.4, pp.9-16, 2015
  • In this paper, several experiments are performed with deep neural network (DNN) based features to compare the performance of speaker verification (SV) systems. To this end, input features for a DNN, such as mel-frequency cepstral coefficients (MFCC), linear-frequency cepstral coefficients (LFCC), and perceptual linear prediction (PLP), are first compared in terms of SV performance. After that, the effect of the DNN training method and the structure of the hidden layers on SV performance is investigated for each type of feature. The performance of an SV system is then evaluated using either I-vector or probabilistic linear discriminant analysis (PLDA) scoring. The SV experiments show that a tandem feature combining a DNN bottleneck feature with an MFCC feature gives the best performance when the DNNs use rectangular hidden layers and are trained with a supervised training method.
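At its core, the tandem feature described above is a per-frame concatenation of a DNN bottleneck vector with an MFCC vector. A minimal sketch (the dimensions and names are illustrative assumptions, not from the paper):

```python
import numpy as np

def tandem_feature(mfcc, bottleneck):
    """Concatenate per-frame MFCCs with DNN bottleneck features.

    mfcc:       (num_frames, mfcc_dim) array
    bottleneck: (num_frames, bn_dim) array taken from a narrow DNN hidden layer
    returns:    (num_frames, mfcc_dim + bn_dim) tandem feature
    """
    assert mfcc.shape[0] == bottleneck.shape[0], "frame counts must match"
    return np.concatenate([mfcc, bottleneck], axis=1)

# toy example: 100 frames, 13-dim MFCC, 40-dim bottleneck
rng = np.random.default_rng(0)
mfcc = rng.standard_normal((100, 13))
bn = rng.standard_normal((100, 40))
feat = tandem_feature(mfcc, bn)
```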

Speaker Verification Using Hidden LMS Adaptive Filtering Algorithm and Competitive Learning Neural Network (Hidden LMS 적응 필터링 알고리즘을 이용한 경쟁학습 화자검증)

  • Cho, Seong-Won; Kim, Jae-Min
    • The Transactions of the Korean Institute of Electrical Engineers D, v.51 no.2, pp.69-77, 2002
  • Speaker verification can be classified into two categories: text-dependent and text-independent. In this paper, we discuss text-dependent speaker verification, in which the system determines whether the voice characteristics of the speaker match those of a specific enrolled person. We record speaker data with a sound card under various noisy conditions, apply a new Hidden LMS (Least Mean Square) adaptive algorithm to the signal, and extract LPC (Linear Predictive Coding) cepstrum coefficients as feature vectors. Finally, we use a competitive learning neural network for speaker verification. The proposed hidden LMS adaptive filter based on a neural network reduces noise and enhances the features under various noisy conditions. We construct a separate neural network for each speaker, so the whole network need not be retrained when a new speaker is added, which makes the system easy to expand. We show experimentally that the proposed method improves speaker verification performance.
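The LMS update underlying such an adaptive filter is only a few lines. The sketch below is the plain (non-hidden) LMS algorithm for reference, demonstrated on a system identification toy problem; the paper's hidden LMS variant embeds the adaptation inside a neural network:

```python
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Plain LMS adaptive filter: adapt FIR weights w so that w * x tracks d.

    x: reference input signal; d: desired signal.
    Returns the error signal e (d minus the filter output) and final weights w.
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(d))
    for n in range(num_taps - 1, len(d)):
        u = x[n - num_taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-M+1]
        y = w @ u                            # filter output
        e[n] = d[n] - y                      # estimation error
        w += 2 * mu * e[n] * u               # LMS weight update
    return e, w

# demo: recover a known 4-tap FIR filter from its input/output
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.array([h @ x[n - 3:n + 1][::-1] if n >= 3 else 0.0 for n in range(5000)])
e, w = lms_filter(x, d, num_taps=4, mu=0.01)
```

With white-noise input and no measurement noise, the weights converge to the true filter taps and the error decays toward zero.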

Short utterance speaker verification using PLDA model adaptation and data augmentation (PLDA 모델 적응과 데이터 증강을 이용한 짧은 발화 화자검증)

  • Yoon, Sung-Wook; Kwon, Oh-Wook
    • Phonetics and Speech Sciences, v.9 no.2, pp.85-94, 2017
  • Conventional speaker verification systems based on a time delay neural network, identity vectors, and probabilistic linear discriminant analysis (TDNN-Ivector-PLDA) are known to be very effective for verifying long-duration utterances. However, when test utterances are short, the duration mismatch between enrollment and test utterances significantly degrades the performance of TDNN-Ivector-PLDA systems. To compensate for the I-vector mismatch between long and short utterances, this paper proposes probabilistic linear discriminant analysis (PLDA) model adaptation with augmented data. A PLDA model is first trained on a vast amount of speech data, most of which is of long duration. The PLDA model is then adapted with I-vectors obtained from short-utterance data augmented by vocal tract length perturbation (VTLP). In computer experiments on the NIST SRE 2008 database, the proposed method achieves significantly better performance than conventional TDNN-Ivector-PLDA systems when there is a duration mismatch between enrollment and test utterances.
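VTLP augments data by slightly warping the frequency axis of each utterance. A minimal sketch of the commonly used piecewise-linear warping (the boundary and Nyquist frequencies are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def vtlp_warp(freqs, alpha, f_hi=4800.0, f_nyq=8000.0):
    """Piecewise-linear VTLP frequency warping.

    freqs: array of frequencies in Hz; alpha: warp factor (e.g. 0.9 to 1.1).
    Frequencies below a boundary are scaled by alpha; above it, the mapping
    is linear so that the Nyquist frequency maps to itself.
    """
    freqs = np.asarray(freqs, dtype=float)
    boundary = f_hi * min(alpha, 1.0) / alpha
    return np.where(
        freqs <= boundary,
        freqs * alpha,
        f_nyq - (f_nyq - boundary * alpha) * (f_nyq - freqs) / (f_nyq - boundary),
    )

# demo: warp a few frequencies with alpha = 1.1
warped = vtlp_warp(np.array([0.0, 1000.0, 8000.0]), alpha=1.1)
```

Applying this warp to the mel filterbank center frequencies (with a randomly drawn alpha per utterance) yields perturbed copies of each short utterance for PLDA adaptation.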

Recognize the Emotional state of the Speaker by using HMM (HMM을 이용한 화자의 감정 상태 인식)

  • Lee, Na-Ra; Han, Ki-Hong; Kim, Hyun-jung; Won, Il-Young
    • Annual Conference of KIPS, 2013.11a, pp.1517-1520, 2013
  • Automated emotion recognition from speech is an important research area for providing a variety of user-centered services. Previous work combined supervised and unsupervised learning but did not achieve satisfactory performance, because the learning method did not take the temporal nature of speech into account. In this study, we train a model using an HMM (Hidden Markov Model) and verify it experimentally. The experimental results show improved performance over existing methods.
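Classifying an utterance with HMMs typically means training one model per emotion and picking the class whose model scores the observation sequence highest. A minimal discrete-HMM forward algorithm (a generic sketch, not the paper's exact configuration):

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence | HMM).

    pi:  (N,) initial state probabilities
    A:   (N, N) transitions, A[i, j] = P(next state j | state i)
    B:   (N, M) emissions over M discrete symbols
    obs: list of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate states, absorb next observation
    return alpha.sum()

# toy 2-state, 2-symbol model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
likelihood = hmm_forward(pi, A, B, [0, 1, 0])
```

In practice the recursion is done in the log domain to avoid underflow on long sequences.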

VR Companion Animal Communion System for Pet Loss Syndrome (펫로스 증후군을 위한 VR 반려동물 교감 시스템)

  • Choi, Hyeong-Mun; Moon, Mikyeong; Lee, Gun-ho
    • Proceedings of the Korean Society of Computer Information Conference, 2021.07a, pp.563-564, 2021
  • As the number of households with companion animals grows, the number of owners suffering from pet loss syndrome after losing a companion animal is also increasing. To help heal pet loss syndrome, owners need a way to meet their pet, even virtually, speak and act with it as they used to, and gradually say goodbye. This paper describes a system that lets an owner interact directly with a 3D-modeled companion animal through VR. By helping users speak and act with their departed pet as they did in everyday life, the system allows a gradual catharsis of emotion.

α-feature map scaling for raw waveform speaker verification (α-특징 지도 스케일링을 이용한 원시파형 화자 인증)

  • Jung, Jee-weon; Shim, Hye-jin; Kim, Ju-ho; Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea, v.39 no.5, pp.441-446, 2020
  • In this paper, we propose the α-Feature Map Scaling (α-FMS) method, which extends the FMS method designed to enhance the discriminative power of feature maps in deep neural networks for Speaker Verification (SV). FMS derives a scale vector from a feature map and then adds it to or multiplies it with the features, or applies both operations sequentially. However, FMS not only uses an identical scale vector for both addition and multiplication, but is also limited to adding a value between zero and one in the addition case. To overcome these limitations, we propose α-FMS, which adds a trainable parameter α to the feature map element-wise and then multiplies by a scale vector. We compare two variants: one where α is a scalar and one where it is a vector. Both α-FMS methods are applied after each residual block of the deep neural network. The proposed systems are trained using RawNet2 and tested on the VoxCeleb1 evaluation set. The two α-FMS variants achieve equal error rates of 2.47% and 2.31%, respectively.
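The α-FMS operation itself is compact: add a trainable α element-wise, then multiply by a per-channel scale derived from the feature map. A numpy sketch (the pooling and fully-connected layer shapes are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def alpha_fms(feature_map, alpha, W, b):
    """α-FMS sketch: (x + alpha) * scale(x), applied after a residual block.

    feature_map: (channels, time) activation of a residual block
    alpha:       scalar or (channels, 1) trainable parameter
    W, b:        weights of the fully-connected layer producing the scale
    """
    pooled = feature_map.mean(axis=1)   # global average pool over time
    scale = sigmoid(W @ pooled + b)     # per-channel scale in (0, 1)
    return (feature_map + alpha) * scale[:, None]

# demo: 4-channel toy feature map with an untrained (zero) FC layer
x = np.ones((4, 10))
W = np.zeros((4, 4))
b = np.zeros(4)
out = alpha_fms(x, alpha=0.5, W=W, b=b)
```

With zero FC weights the sigmoid yields 0.5 everywhere, so each element becomes (1 + 0.5) × 0.5 = 0.75, which makes the operation easy to check by hand.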

CASA Based Approach to Estimate Acoustic Transfer Function Ratios (CASA 기반의 마이크간 전달함수 비 추정 알고리즘)

  • Shin, Minkyu; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea, v.33 no.1, pp.54-59, 2014
  • Identification of the RTF (Relative Transfer Function) between sensors is essential for multichannel speech enhancement systems. In this paper, we present an approach for estimating the relative transfer function of a speech signal that adapts a CASA (Computational Auditory Scene Analysis) technique to the conventional OM-LSA (Optimally-Modified Log-Spectral Amplitude) based approach. The proposed approach is evaluated under simulated stationary and nonstationary WGN (White Gaussian Noise), and the experimental results confirm its advantages.
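As background, a relative transfer function between two microphones is often estimated from cross- and auto-power spectra over time-frequency bins judged to be speech-dominated, and a CASA-style binary mask can supply that judgment. A naive sketch (not the paper's OM-LSA-based estimator):

```python
import numpy as np

def rtf_estimate(X1, X2, speech_mask):
    """Naive per-bin RTF estimate between two microphones.

    X1, X2:      (num_frames, num_bins) STFTs of the two microphones
    speech_mask: boolean array of the same shape marking speech-dominated bins
    Returns the per-bin ratio of cross-PSD to auto-PSD, averaged over the
    masked frames.
    """
    num = np.where(speech_mask, X2 * np.conj(X1), 0).sum(axis=0)
    den = np.where(speech_mask, np.abs(X1) ** 2, 0).sum(axis=0)
    return num / np.maximum(den, 1e-12)

# demo: synthesize mic 2 as a known per-bin filtering of mic 1
rng = np.random.default_rng(2)
X1 = rng.standard_normal((20, 5)) + 1j * rng.standard_normal((20, 5))
H = np.array([1 + 0j, 0.5 + 0.5j, -1j, 2.0, 0.3 - 0.2j])
X2 = X1 * H
mask = np.ones((20, 5), dtype=bool)
est = rtf_estimate(X1, X2, mask)
```

In the noiseless demo the estimate recovers the per-bin transfer ratio exactly; the point of mask-based (or OM-LSA-based) selection is to keep the estimate accurate when noise-dominated bins are present.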

Noise Robust Speaker Verification Using Subband-Based Reliable Feature Selection (신뢰성 높은 서브밴드 특징벡터 선택을 이용한 잡음에 강인한 화자검증)

  • Kim, Sung-Tak; Ji, Mi-Kyong; Kim, Hoi-Rin
    • MALSORI, no.63, pp.125-137, 2007
  • Recently, many techniques have been proposed to improve the noise robustness of speaker verification. In this paper, we consider the feature recombination technique within a multi-band approach. In conventional feature recombination for speaker verification, all feature components are used to compute the likelihoods of the speaker models or the universal background model, which is not effective from the viewpoint of a multi-band approach. To address this, we introduce subband likelihood computation and propose a modified feature recombination that uses subband likelihoods. In the decision step of a speaker verification system in noisy environments, a few very low likelihood scores of a speaker model or the universal background model can cause a wrong decision. To overcome this problem, we propose a reliable feature selection method: the low likelihood scores of unreliable features are replaced by likelihood scores of an adaptive noise model. This adaptive noise model is estimated by maximum a posteriori adaptation using noise features obtained directly from the noisy test speech. The proposed subband-based reliable feature selection outperforms the conventional feature recombination system, with an error reduction rate of more than 31% compared with the feature recombination-based speaker verification system.
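The reliable-feature-selection step can be pictured as a per-subband substitution before recombination: any subband whose log-likelihood falls below a reliability threshold is replaced by the adaptive noise model's score. A minimal sketch (the simple threshold rule here is an illustrative assumption):

```python
import numpy as np

def recombine_reliable(subband_ll, noise_ll, threshold):
    """Replace unreliable subband log-likelihoods with those of an adaptive
    noise model, then sum over subbands.

    subband_ll: (num_frames, num_bands) per-band log-likelihoods under the
                speaker model (or UBM)
    noise_ll:   same shape, log-likelihoods under the adaptive noise model
    threshold:  bands scoring below this are deemed unreliable
    """
    reliable = subband_ll >= threshold
    combined = np.where(reliable, subband_ll, noise_ll)
    return combined.sum(axis=1)   # frame-level recombined score

# demo: two frames, two subbands; one band of frame 0 is unreliable
sll = np.array([[-1.0, -10.0], [-2.0, -3.0]])
nll = np.array([[-1.5, -4.0], [-2.5, -3.5]])
scores = recombine_reliable(sll, nll, threshold=-5.0)
```

The same substitution is applied to both the speaker model and the UBM scores before forming the verification log-likelihood ratio.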

Development of Korean Consonant Perception Test (자음지각검사 (KCPT)의 개발)

  • Kim, Jin-Sook; Shin, Eun-Yeong; Shin, Hyun-Wook; Lee, Ki-Do
    • The Journal of the Acoustical Society of Korea, v.30 no.5, pp.295-302, 2011
  • The purpose of this study was to develop the Korean Consonant Perception Test (KCPT), a phoneme-level test providing baseline data for evaluating the speech and consonant perception ability of normal-hearing and hearing-impaired listeners both qualitatively and quantitatively. The KCPT was constructed from meaningful monosyllabic words selected from all possible Korean monosyllables, considering articulatory characteristics, difficulty, and the frequency of phoneme occurrence. A tentative set of initial- and final-consonant test items was assembled in a four-alternative multiple-choice format, applying the seven-final-consonant rule and controlling for the familiarity of the target words. After evaluation with 20 normal-hearing adults, 300 final items were selected: 200 initial-consonant and 100 final-consonant items. The final KCPT was composed according to colloquial frequency after confirming that there were no statistically significant speaker differences and eliminating excessively difficult items. Testing 30 hearing-impaired listeners with the KCPT showed that the two half-lists, A and B, did not differ statistically and that the initial- and final-consonant items were appropriate for evaluating initial and final consonants, respectively.