• Title/Summary/Keyword: speaker characteristics (화자 특징)


Histogram Equalization Using Centroids of Fuzzy C-Means of Background Speakers' Utterances for Majority Voting Based Speaker Identification (다수 투표 기반의 화자 식별을 위한 배경 화자 데이터의 퍼지 C-Means 중심을 이용한 히스토그램 등화기법)

  • Kim, Myung-Jae;Yang, Il-Ho;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.33 no.1 / pp.68-74 / 2014
  • In previous work, we proposed a histogram equalization approach that uses a supplement set composed of the centroids of Fuzzy C-Means clusters of background speakers' utterances. The performance of that method depends on the size of the supplement set, but the best size is difficult to determine at recognition time. In this paper, we propose histogram equalization with supplement sets for majority voting based speaker identification. The proposed method identifies test utterances by a majority vote over histogram equalization variants with supplement sets of various sizes. It is compared with conventional feature normalization methods such as CMN (Cepstral Mean Normalization), MVN (Mean and Variance Normalization), and HEQ (Histogram Equalization), as well as with histogram equalization using a single supplement set.
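
As a rough illustration (not the paper's exact formulation, which equalizes toward a distribution built from the Fuzzy C-Means centroids of background utterances), order-statistic histogram equalization toward a standard-normal reference, followed by a majority vote over variants, can be sketched as:

```python
import numpy as np
from collections import Counter
from statistics import NormalDist

def histogram_equalize(feats):
    """Order-statistic HEQ: map each feature dimension of an utterance
    (T frames x D dims) onto a standard-normal reference distribution."""
    T, D = feats.shape
    nd = NormalDist()
    out = np.empty((T, D))
    for d in range(D):
        ranks = np.argsort(np.argsort(feats[:, d]))   # rank 0..T-1 per frame
        cdf = (ranks + 0.5) / T                       # mid-rank CDF in (0, 1)
        out[:, d] = [nd.inv_cdf(p) for p in cdf]
    return out

def majority_vote(decisions):
    """Final speaker = the label returned most often by the HEQ
    variants (one decision per supplement-set size)."""
    return Counter(decisions).most_common(1)[0][0]
```

After equalization, each dimension of the utterance is distributed approximately as the chosen reference, regardless of the channel or noise that distorted the original histogram.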

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.137-142 / 2020
  • Many studies based on the I-vector have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short-Term Memory (LSTM) network with an attention mechanism. The LSTM model's Equal Error Rate (EER) is 15.52 % and the attention-LSTM model's is 8.46 %, an improvement of 7.06 percentage points; this shows that the proposed method addresses the problem of the existing extraction process, in which the embedding is defined heuristically. The I-vector/PLDA system alone achieves the best single-system EER, 6.18 %. Combined with the attention-LSTM based embedding, the EER falls to 2.57 %, 3.61 percentage points below the baseline system, a relative improvement of 58.41 %.
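
For reference, EER (the operating point where false-accept and false-reject rates are equal) and a simple linear fusion of two subsystem scores can be sketched as follows; the fusion weight and the absence of score calibration are assumptions here, not details from the paper:

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    """EER: sweep thresholds and return the point where the
    false-accept rate (FAR) and false-reject rate (FRR) cross."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(target_scores < t)      # targets wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

def fuse(s_plda, s_embedding, alpha=0.5):
    """Linear fusion of the I-vector/PLDA score with the
    attention-LSTM embedding score (alpha is a tuning weight)."""
    return alpha * s_plda + (1.0 - alpha) * s_embedding
```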

Performance Improvement of Fast Speaker Adaptation Based on Dimensional Eigenvoice and Adaptation Mode Selection (차원별 Eigenvoice와 화자적응 모드 선택에 기반한 고속화자적응 성능 향상)

  • 송화전;이윤근;김형순
    • The Journal of the Acoustical Society of Korea / v.22 no.1 / pp.48-53 / 2003
  • The eigenvoice method is known to be suitable for fast speaker adaptation, but it shows little additional improvement as the amount of adaptation data increases. In this paper, to deal with this problem, we propose a modified method that estimates the weights of the eigenvoices separately in each feature-vector dimension. We also propose an adaptation mode selection scheme in which the method with the highest performance among several adaptation methods is selected according to the amount of adaptation data. We used the POW DB to construct the speaker-independent model and the eigenvoices; from the PBW 452 DB, between 1 and 50 utterances were used for adaptation and the remaining 400 utterances for evaluation. As the amount of adaptation data increased, the proposed dimensional eigenvoice method showed higher performance than both the conventional eigenvoice method and MLLR. Adaptation mode selection between the eigenvoice and dimensional eigenvoice methods reduced the word error rate by up to 26 % compared with the conventional eigenvoice method.
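
The core idea, a separate eigenvoice weight vector per feature dimension, can be sketched as a per-dimension least-squares fit over the mixture components of the model; this is a simplified stand-in for the maximum-likelihood weight estimation used in practice:

```python
import numpy as np

def dimensional_eigenvoice_weights(eigenvoices, target_means):
    """eigenvoices: (K, M, D) array - K eigenvoices over M mixture
    components with D feature dimensions.
    target_means: (M, D) mixture means estimated from adaptation data.
    Returns W: (K, D), one weight vector per feature dimension, fitted by
    least squares over the M mixtures.  Conventional eigenvoice adaptation
    would instead use a single (K,) weight vector shared by all dimensions."""
    K, M, D = eigenvoices.shape
    W = np.zeros((K, D))
    for d in range(D):
        A = eigenvoices[:, :, d].T          # (M, K) design matrix
        b = target_means[:, d]              # (M,) per-mixture targets
        W[:, d], *_ = np.linalg.lstsq(A, b, rcond=None)
    return W
```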

Speaker Recognition Performance Improvement by Voiced/Unvoiced Classification and Heterogeneous Feature Combination (유/무성음 구분 및 이종적 특징 파라미터 결합을 이용한 화자인식 성능 개선)

  • Kang, Jihoon;Jeong, Sangbae
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.6 / pp.1294-1301 / 2014
  • In this paper, separate probabilistic distribution models for voiced and unvoiced speech are estimated and used to improve speaker recognition performance. In addition to the conventional mel-frequency cepstral coefficients, skewness, kurtosis, and the harmonic-to-noise ratio are extracted and used for voiced speech intervals. The two kinds of scores, for voiced and unvoiced speech, are linearly fused with the optimal weight found by exhaustive search. The performance of the proposed speaker recognizer is compared with that of a conventional recognizer using mel-frequency cepstral coefficients and a unified probability distribution based on the Gaussian mixture model. Experimental results show that the lower the number of Gaussian mixtures, the greater the performance improvement achieved by the proposed algorithm.
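
The exhaustive search for the voiced/unvoiced fusion weight can be sketched as a grid search over alpha; the step size and accuracy criterion below are assumptions, not values from the paper:

```python
import numpy as np

def best_fusion_weight(voiced, unvoiced, labels, step=0.05):
    """voiced, unvoiced: (n_trials, n_speakers) score matrices from the two
    models; labels: (n_trials,) true speaker indices.
    Grid-search alpha in [0, 1] for the linear fusion
    a * voiced + (1 - a) * unvoiced that maximizes identification accuracy."""
    best_a, best_acc = 0.0, -1.0
    for a in np.arange(0.0, 1.0 + 1e-9, step):
        fused = a * voiced + (1.0 - a) * unvoiced
        acc = float(np.mean(np.argmax(fused, axis=1) == labels))
        if acc > best_acc:
            best_a, best_acc = a, acc
    return best_a, best_acc
```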

A Study on the Speaker Adaptation in CDHMM (CDHMM의 화자적응에 관한 연구)

  • Kim, Gwang-Tae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.2 / pp.116-127 / 2002
  • A new approach to improving the speaker adaptation algorithm by means of a variable number of observation density functions for a CDHMM speech recognizer is proposed. The proposed method uses an observation density function with more than one mixture per state to represent speech characteristics in detail. The number of mixtures in each state is determined by the number of frames and by the determinant of the variance, respectively, and the MAP parameters are extracted for every mixture determined by these two methods. In addition, the state segmentation required for speaker adaptation can segment the adaptation speech more precisely by using a speaker-independent model trained on a sufficient database as a priori knowledge. The state duration distribution is used for adapting to the speech duration information arising from the speaker's utterance habits and speed. The recognition rate of the proposed methods is significantly higher than that of the conventional method using one mixture in each state.
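
The MAP update at the heart of such adaptation interpolates each prior (speaker-independent) mixture mean with the adaptation-data mean, weighted by the amount of data. A minimal sketch, where the relevance factor tau is an assumed hyperparameter:

```python
import numpy as np

def map_adapt_mean(prior_mean, adaptation_frames, tau=10.0):
    """MAP estimate of a mixture mean: with n adaptation frames, the
    update moves from the prior mean toward the data mean as n grows.
    With n == 0 the prior is kept; with n >> tau the data dominates."""
    n = len(adaptation_frames)
    data_mean = np.mean(adaptation_frames, axis=0)
    return (tau * prior_mean + n * data_mean) / (tau + n)
```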

A Study on the Honorific System in Different Versions of Cheonuisogameonhae (≪천의소감언해(闡義昭鑑諺解)≫ 이본(異本)에 나타나는 높임 표현 양상)

  • Ju, Kyoung Mi
    • (The)Study of the Eastern Classic / no.68 / pp.473-508 / 2017
  • This article compares the honorific expressions in different versions of Cheonuisogam. Cheonuisogam, written in classical Chinese, has two Korean translations, Cheonuisogameonhae: a hand-written version, produced in 1755, and a woodblock-printed version, published in 1756. The article examines two forms across the versions: the honorific subject particles '-gyeoosyeo(-겨오셔)' and '-gyeoʌosyeo', and the prefinal ending '-sʌp-'. The particle '-gyeoosyeo(-겨오셔)' appears more often in the hand-written version than in the woodblock-printed version, and '-gyeoʌosyeo' is not found in the woodblock-printed version at all. The prefinal ending '-sʌp-' has three functions: the agent's modesty toward the object, the speaker's modesty toward the agent, and the speaker's modesty toward the listener. In Cheonuisogameonhae, '-sʌp-' functions mainly as a marker of speaker modesty, and it too appears more often in the hand-written version than in the woodblock-printed version.

Driver Verification System Using Biometrical GMM Supervector Kernel (생체기반 GMM Supervector Kernel을 이용한 운전자검증 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.9 no.3 / pp.67-72 / 2010
  • This paper presents a biometric driver verification system, evaluated in an in-car experiment, based on analysis of speech and face information. We use Mel-scale Frequency Cepstral Coefficients (MFCCs) for speaker verification from the speech signal. For face verification, the face region is detected by the AdaBoost algorithm, and a dimension-reduced feature vector is extracted from that region using principal component analysis. The extracted speech and face feature vectors are applied to an SVM kernel built on Gaussian Mixture Model (GMM) supervectors. The experimental results of the proposed approach show a clear improvement over a simple GMM or SVM approach.
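
A GMM supervector for the SVM kernel stacks MAP-adapted mixture means into one long vector, normalized by mixture weight and covariance so that a plain dot product approximates the KL-divergence kernel. A sketch under our own assumptions (diagonal covariances, relevance factor tau):

```python
import numpy as np

def gmm_supervector(means, covs, weights, feats, tau=16.0):
    """means, covs: (M, D) diagonal-covariance UBM parameters;
    weights: (M,) mixture weights; feats: (T, D) utterance features.
    Returns the (M*D,) normalized supervector."""
    M, D = means.shape
    # log-likelihood of each frame under each mixture
    log_p = np.empty((len(feats), M))
    for m in range(M):
        diff = feats - means[m]
        log_p[:, m] = (np.log(weights[m])
                       - 0.5 * np.sum(np.log(2.0 * np.pi * covs[m]))
                       - 0.5 * np.sum(diff ** 2 / covs[m], axis=1))
    # posterior responsibilities (softmax over mixtures)
    post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # MAP-adapted means from soft counts and first moments
    n = post.sum(axis=0)                      # (M,)
    fx = post.T @ feats                       # (M, D)
    adapted = (tau * means + fx) / (tau + n)[:, None]
    # weight/covariance normalization, then stack into one vector
    return (np.sqrt(weights)[:, None] * adapted / np.sqrt(covs)).ravel()
```

A linear SVM trained on such supervectors (one per utterance) then separates the target driver from impostors.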

A Study on the Features for Building Korean Digit Recognition System Based on Multilayer Perceptron (다층 퍼셉트론에 기반한 한국어 숫자음 인식시스템 구현을 위한 특징 연구)

  • 김인철;김대영
    • Journal of Korea Society of Industrial Information Systems / v.6 no.4 / pp.81-88 / 2001
  • In this paper, a Korean digit recognition system based on a multilayer perceptron is implemented. We also investigate the performance of widely used speech features, such as Mel-scale filterbank, MFCC, LPCC, and PLP coefficients, by applying them as input to the proposed recognition system. To build a robust speech system, experiments measure recognition performance on clean as well as corrupted data. In experiments recognizing the 20 Korean digits, we found that the Mel-scale filterbank coefficients perform best in terms of recognition accuracy on both the speaker-dependent and speaker-independent databases, even when considerable noise is added.
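
The Mel-scale filterbank that came out on top can be sketched as the standard triangular filterbank construction; the filter count, FFT size, and sample rate below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    """Triangular mel-scale filterbank: n_filters rows, each a triangle
    over the n_fft//2 + 1 FFT magnitude bins, spaced evenly on the mel scale."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb
```

Multiplying a frame's FFT magnitude spectrum by this matrix (and taking logs) yields the filterbank coefficients fed to the perceptron.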


Speaker verification with ECAPA-TDNN trained on new dataset combined with Voxceleb and Korean (Voxceleb과 한국어를 결합한 새로운 데이터셋으로 학습된 ECAPA-TDNN을 활용한 화자 검증)

  • Keumjae Yoon;Soyoung Park
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.209-224 / 2024
  • Speaker verification is becoming popular as a method of non-face-to-face identity authentication. It involves determining whether two voice recordings belong to the same speaker. In cases where a criminal's voice is captured at a crime scene, it is vital to have a speaker verification system that can accurately compare the two pieces of voice evidence. In this study, a new speaker verification system was built using a deep learning model for the Korean language. The high dimensionality and variability of voice data, including background noise, make deep learning-based methods necessary for speaker matching. For the matching algorithm, the ECAPA-TDNN model, one of the best-known deep learning architectures for speaker verification, was selected. Voxceleb, a large voice dataset collected from people of various nationalities, contains no Korean. To study the form of dataset needed for learning Korean, experiments were carried out to determine how Korean voice data affects matching performance. The results show that, comparing models trained only on Voxceleb with models trained on datasets combining Voxceleb and Korean data to maximize language and speaker diversity, training data including Korean improves performance on all test sets.

Geometric Correction of Lips Using Lip Information (입술정보를 이용한 입술모양의 기하학적 보정)

  • 황동국;박희정;전병민
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.6C / pp.834-841 / 2004
  • Lips in an image may be geometrically transformed according to the location and pose of the camera and the speaker, and this transformation distorts the geometric information of the original lip shape. In this paper, we therefore propose a method that geometrically corrects lip shapes, using partial lip information to restore global lip information. The method consists of two steps: feature decision and correction. In the first step, we extract key points and features of the source image according to its lip model and generate those of the target image according to its lip model. In the second step, we partition the source and target images into four regions each, based on the information extracted in the previous step, decide the mapping relation between regions, and, after mapping, merge the corrected sub-images into the result image. As experimental images, we use frames containing pronunciations of the short vowels of Korean, and we use lip symmetry to evaluate the proposed algorithm. In the experiments, the correction rate was markedly higher for the lower lip than for the upper lip, and for lips moving largely than for lips moving little.
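
The region-wise mapping between source and target lip regions can be illustrated with a least-squares affine transform fitted from corresponding key points; this is a generic stand-in, since the paper does not fix a particular transform model here:

```python
import numpy as np

def affine_from_points(src, dst):
    """Fit a 2-D affine transform (2x3 matrix) mapping src key points
    to dst key points by least squares; one such transform would be
    fitted per lip sub-region."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    return M.T                                     # (2, 3)

def apply_affine(M, pts):
    """Apply the fitted transform to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]
```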