Title/Summary/Keyword: speaker

Search results: 1,676

A Study on the Context-dependent Speaker Recognition Adopting the Method of Weighting the Frame-based Likelihood Using SNR (SNR을 이용한 프레임별 유사도 가중방법을 적용한 문맥종속 화자인식에 관한 연구)

  • Choi, Hong-Sub
    • MALSORI / no.61 / pp.113-123 / 2007
  • Environmental differences between the training and testing conditions are generally considered the critical factor behind performance degradation in speaker recognition systems. In particular, speaker recognition systems try to obtain speech that is as clean as possible to train the speaker model, but this does not hold in the real testing phase because of environmental and channel noise. This paper therefore proposes a new method of weighting the frame-based likelihood according to the frame SNR, exploiting the strong correlation between speech SNR and speaker discrimination rate. To verify its usefulness, the proposed method is applied to a context-dependent speaker identification system. Experimental results on the cellular-phone speech DB designed by ETRI for Korean speaker recognition show that the proposed method is effective, increasing identification accuracy by up to 11%. A minimal sketch of such frame weighting follows this entry.

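The paper's exact weighting function is not given in the abstract, so the following is a minimal sketch of the general idea, assuming a hypothetical logistic weight on per-frame SNR; the `midpoint` and `slope` parameters are illustrative, not values from the paper.

```python
# Hedged sketch: weight per-frame log-likelihoods by an assumed logistic
# function of frame SNR, so low-SNR frames contribute less to the score.
import numpy as np

def frame_snr_db(frames, noise_power):
    """Per-frame SNR estimate in dB, given a noise-power estimate."""
    signal_power = np.mean(frames ** 2, axis=1)
    return 10.0 * np.log10(signal_power / noise_power + 1e-12)

def weighted_log_likelihood(frame_loglikes, snr_db, midpoint=10.0, slope=0.5):
    """Combine frame log-likelihoods with SNR-dependent weights."""
    weights = 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))
    weights /= weights.sum()  # normalize so utterances of any length compare
    return float(np.sum(weights * frame_loglikes))
```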

Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex; Leena Mary
    • ETRI Journal / v.45 no.4 / pp.678-689 / 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAE) in learning speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOP) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. Initial comparative studies on VAEs and traditional autoencoders (AE) suggest that the former can learn speaker representations more efficiently. Investigations into the impact of gender information in speaker recognition also indicate that gender-dependent impostor banks lead to higher accuracies. Finally, evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition. A minimal VAE sketch follows this entry.
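
As a rough illustration of the VAE component only, here is a minimal PyTorch sketch for syllable-level prosodic feature vectors; the feature dimensionality, layer sizes, and latent size are guesses for illustration, not the paper's configuration.

```python
# Hedged sketch of a VAE over prosodic feature vectors (illustrative sizes).
import torch
import torch.nn as nn

class ProsodyVAE(nn.Module):
    def __init__(self, feat_dim=9, hidden=64, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, feat_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    """Reconstruction error plus KL divergence to a standard normal prior."""
    recon_err = ((x - recon) ** 2).sum(dim=1)
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=1)
    return (recon_err + kl).mean()
```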

A Study on the User Experience of Smart Speaker in China - Focused on Tmall Genie and Mi AI Speaker - (중국 인공지능 스피커 사용자 경험에 관한 연구 - 티몰 지니와 샤오미 스마트 스피커를 중심으로 -)

  • Xiao, Xin-Ting; Kim, Seung-In
    • Journal of Digital Convergence / v.16 no.10 / pp.409-414 / 2018
  • In China, the usage of smart speakers is continuously increasing. This study investigates the user experience of Chinese smart speakers. We first reviewed the literature on the theoretical background of smart speakers and conducted a case study of globally popular smart speaker brands. On this basis, we conducted in-depth interviews with 8 users who had experience with the top-selling Chinese smart speaker products "Tmall Genie" and "Mi AI Speaker". The interviews were based on the seven principles of the Honeycomb model created by Peter Morville. As a result, users' discomfort was found in the functional and usability aspects of the smart speakers, and users were highly dissatisfied with the credibility aspect. Accordingly, Chinese smart speaker makers should address these user experience aspects by improving functionality and usability.

A Study on SVM-Based Speaker Classification Using GMM-supervector (GMM-supervector를 사용한 SVM 기반 화자분류에 대한 연구)

  • Lee, Kyong-Rok
    • Journal of IKEEE / v.24 no.4 / pp.1022-1027 / 2020
  • In this paper, SVM-based speaker classification is experimented with using GMM-supervectors. To create speaker clusters, conventional speaker change detection is performed with the KL distance using an SNR-based weighting function. SVM-based speaker classification consists of two steps. In the first step, SVM-based classification between the UBM and speaker models is performed, speaker information is indexed in each cluster, and the clusters are then grouped by speaker. In the second step, SVM-based classification between the UBM and speaker models is performed on the speaker cluster groups. Linear and RBF kernels are applied for the SVM-based classification. In the first step, the linear kernel outperformed the RBF kernel, with 148 speaker clusters, MDR 0, FAR 47.3, and ER 50.7. The second-step experiment likewise showed the best performance with the linear kernel: 109 speaker clusters, MDR 1.3, FAR 28.4, and ER 32.1. A sketch of GMM-supervector construction follows this entry.
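
The abstract does not detail the supervector construction, so the following is a hedged sketch of the standard GMM-supervector recipe with scikit-learn: MAP-adapt the UBM means to one utterance and stack them into a vector. The mixture size and relevance factor are placeholders, not the paper's values.

```python
# Hedged sketch: GMM-supervector via MAP mean adaptation of a UBM,
# then a linear-kernel SVM (standard recipe; parameters are placeholders).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def map_adapted_supervector(ubm, frames, relevance=16.0):
    """MAP-adapt the UBM means to one utterance and stack them into a vector."""
    post = ubm.predict_proba(frames)        # (T, M) responsibilities
    n_k = post.sum(axis=0)                  # soft counts per mixture
    ex_k = post.T @ frames                  # (M, D) first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]
    means = alpha * (ex_k / np.maximum(n_k[:, None], 1e-8)) \
        + (1.0 - alpha) * ubm.means_
    return means.ravel()

# Usage sketch (pooled, utterances, speaker_labels are assumed inputs):
# ubm = GaussianMixture(n_components=64, covariance_type="diag").fit(pooled)
# X = np.stack([map_adapted_supervector(ubm, u) for u in utterances])
# clf = SVC(kernel="linear").fit(X, speaker_labels)
```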

Korean Speaker Verification Using Speaker Adaptation Methods (화자 적응 기술을 이용한 한국어 화자 확인)

  • Choi Dong-Jin; Oh Yung-Hwan
    • Proceedings of the KSPS conference / 2006.05a / pp.139-142 / 2006
  • Speaker verification systems can be implemented using speaker adaptation methods when the amount of speech available for each target speaker is too small to train a full speaker model. This paper presents experimental results for two well-known adaptation methods, Maximum A Posteriori (MAP) and Maximum Likelihood Linear Regression (MLLR). Experiments on Korean speech show that MLLR is more effective than MAP for short enrollment utterances. A simplified MLLR sketch follows this entry.

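The paper compares MAP and MLLR; below is a much-simplified sketch of a single global MLLR mean transform, assuming identity covariances for readability (real MLLR weights each Gaussian by its covariance). All inputs are hypothetical.

```python
# Hedged sketch: one global MLLR-style mean transform W = [A | b], fitted by
# posterior-weighted least squares under an identity-covariance assumption.
import numpy as np

def estimate_mllr_transform(frames, means, post):
    """frames: (T, D) adaptation frames; means: (M, D) Gaussian means;
    post: (T, M) Gaussian occupation probabilities."""
    ext = np.hstack([means, np.ones((means.shape[0], 1))])  # extended means (M, D+1)
    G = (ext.T * post.sum(axis=0)) @ ext                    # (D+1, D+1)
    K = frames.T @ post @ ext                               # (D, D+1)
    return K @ np.linalg.inv(G)                             # W, shape (D, D+1)

# Adapted mean for Gaussian k: W @ np.append(means[k], 1.0)
```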

Double Compensation Framework Based on GMM For Speaker Recognition (화자 인식을 위한 GMM기반의 이중 보상 구조)

  • Kim Yu-Jin; Chung Jae-Ho
    • MALSORI / no.45 / pp.93-105 / 2003
  • In this paper, we present a single GMM-based framework for speaker recognition. The proposed framework can simultaneously minimize environmental variations under mismatched conditions and adapt the bias-free, speaker-dependent characteristics of claimant utterances to the background GMM to create a speaker model. We compare closed-set speaker identification with the conventional method and the proposed method on both TIMIT and NTIMIT. Across several sets of experiments, the recognition rate improves by 7.2% under a simulated channel condition and by 27.4% under a telephone channel condition. A minimal identification-scoring sketch follows this entry.

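The compensation steps themselves are not reproduced here; as context, this is a minimal sketch of the closed-set GMM identification scoring the paper evaluates, using scikit-learn.

```python
# Hedged sketch: closed-set identification picks the speaker GMM with the
# highest average log-likelihood on the test utterance (scoring only).
import numpy as np
from sklearn.mixture import GaussianMixture

def identify(frames, speaker_gmms):
    """Return the index of the best-scoring speaker model."""
    scores = [gmm.score(frames) for gmm in speaker_gmms]  # mean log-likelihood
    return int(np.argmax(scores))
```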

Speaker Adaptation using ICA-based Feature Transformation (ICA 기반의 특징변환을 이용한 화자적응)

  • Park ManSoo; Kim Hoi-Rin
    • MALSORI / no.43 / pp.127-136 / 2002
  • Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features fitted to linear regression-based speaker adaptation. These are obtained by feature transformation based on independent component analysis (ICA), where the transformation matrix is learned from speaker-independent training data. When the amount of data is small, however, the ICA-based transformation matrix estimated from a new speaker's utterances must be adjusted. To cope with this problem, we propose a smoothing method based on linear interpolation between the speaker-independent (SI) and speaker-dependent (SD) feature transformation matrices. We observed that the proposed technique improves adaptation performance. A sketch of the interpolation follows this entry.

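The interpolation itself is simple; here is a sketch assuming FastICA unmixing matrices as the transforms, with the weight `alpha` a tunable hyperparameter rather than a value from the paper.

```python
# Hedged sketch: blend speaker-independent (SI) and speaker-dependent (SD)
# ICA feature transformation matrices by linear interpolation.
import numpy as np
from sklearn.decomposition import FastICA

def smoothed_transform(si_matrix, sd_matrix, alpha=0.5):
    """Linear interpolation between SI and SD transformation matrices."""
    return alpha * sd_matrix + (1.0 - alpha) * si_matrix

# Usage sketch (train_feats and adapt_feats are assumed feature arrays):
# si = FastICA(n_components=13).fit(train_feats).components_
# sd = FastICA(n_components=13).fit(adapt_feats).components_
# W = smoothed_transform(si, sd, alpha=0.3)
# transformed = feats @ W.T
```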

Speaker Adaptation Using ICA-Based Feature Transformation

  • Jung, Ho-Young; Park, Man-Soo; Kim, Hoi-Rin; Hahn, Min-Soo
    • ETRI Journal / v.24 no.6 / pp.469-472 / 2002
  • Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features fitted to linear regression-based speaker adaptation. These are obtained by feature transformation based on independent component analysis (ICA), with the feature transformation matrices estimated from the training data and the adaptation data. Since the adaptation data is not sufficient to reliably estimate the ICA-based feature transformation matrix, the matrix estimated from a new speaker's utterances must be adjusted. To cope with this problem, we propose a smoothing method using linear interpolation between the speaker-independent (SI) and speaker-dependent (SD) feature transformation matrices. Our experiments show that the proposed method is most effective in the mismatched case, where the smoothed feature transformation matrix makes speaker adaptation using noisy speech more robust.

Speaker Separation Based on Directional Filter and Harmonic Filter (Directional Filter와 Harmonic Filter 기반 화자 분리)

  • Baek, Seung-Eun; Kim, Jin-Young; Na, Seung-You; Choi, Seung-Ho
    • Speech Sciences / v.12 no.3 / pp.125-136 / 2005
  • Automatic speech recognition is much more difficult in the real world. Recognition at low SIR (Signal-to-Interference Ratio) is difficult in situations where environmental noise and multiple speakers are present. Extracting the main speaker's voice from binaural sound is therefore a very important problem in speech signal processing. In this paper, we use a directional filter and a harmonic filter to extract the main speaker's information from binaural sound. The main speaker's voice is extracted with the directional filter, and the remaining speakers' components are removed with a harmonic filter driven by the main speaker's detected pitch. As a result, the voice of the main speaker is enhanced. A comb-filter sketch follows this entry.

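The directional-filter stage is not reproduced here; below is a hedged sketch of the harmonic-filter idea only: estimate the main speaker's pitch by autocorrelation, then apply a feedforward comb filter that boosts that pitch's harmonics. All parameters are illustrative.

```python
# Hedged sketch: autocorrelation pitch estimate plus a comb filter
# y[n] = x[n] + gain * x[n - T] that emphasizes harmonics of f0.
import numpy as np

def detect_pitch(frame, sr, fmin=60.0, fmax=400.0):
    """Crude autocorrelation pitch estimate in Hz."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def enhance_harmonics(frame, sr, f0, gain=0.9):
    """Boost components at f0 and its harmonics relative to everything else."""
    T = int(round(sr / f0))
    y = frame.astype(float).copy()
    y[T:] += gain * frame[:-T]
    return y
```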

F-ratio of Speaker Variability in Emotional Speech

  • Yi, So-Pae
    • Speech Sciences / v.15 no.1 / pp.63-72 / 2008
  • Various acoustic features were extracted and analyzed to estimate the inter- and intra-speaker variability of emotional speech. Tokens of the vowel /a/ from sentences spoken in different emotional modes (sadness, neutral, happiness, fear, and anger) were analyzed. All of the acoustic features (fundamental frequency, spectral slope, HNR, H1-A1, and formant frequency) contributed more to inter- than to intra-speaker variability across all emotions. Each acoustic feature showed a different degree of contribution to speaker discrimination in different emotional modes. Sadness and neutral yielded greater speaker discrimination than the other modes (happiness, fear, and anger, in descending order of F-ratio). In other words, speaker specificity was better captured in sadness and neutral than in happiness, fear, and anger for all of the acoustic features. An F-ratio sketch follows this entry.

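One common formulation of the F-ratio is the variance of per-speaker means divided by the mean within-speaker variance; the sketch below uses that formulation, which may differ in detail from the paper's.

```python
# Hedged sketch: F-ratio of one acoustic feature, given per-speaker samples.
import numpy as np

def f_ratio(groups):
    """groups: list of 1-D arrays, one array of feature values per speaker."""
    grand_mean = np.mean(np.concatenate(groups))
    between = np.mean([(g.mean() - grand_mean) ** 2 for g in groups])
    within = np.mean([g.var() for g in groups])
    return between / within

# Usage sketch: a higher F-ratio means the feature separates speakers better.
# f_ratio([np.array([180., 175., 182.]), np.array([120., 125., 118.])])
```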