• Title/Summary/Keyword: Speaker Features (화자 특징)

A Hybrid Neural Network model for Enhancement of Speaker Recognition in Video Stream (비디오 화자 인식 성능 향상을 위한 복합 신경망 모델)

  • Lee, Beom-Jin;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06b
    • /
    • pp.396-398
    • /
    • 2012
  • Most real-world data has a temporal nature, so machine learning methods that can analyze temporal data are very important. From this perspective, video data is a representative form of temporal data in which multiple modalities are combined, so machine learning methods targeting video data are of great significance. In this paper, as a preliminary study of audio-channel-based video data analysis, we introduce a simple method for recognizing the speakers appearing in video data. The proposed method is characterized by a hybrid neural network model that analyzes the distribution of human voice characteristics using MFCC (Mel-frequency cepstral coefficients) and then feeds the analysis results into a neural network to recognize the target speaker. Comparing the speaker recognition performance of a Gaussian mixture model, a Gaussian mixture neural network model, and the proposed method on real TV drama data, we confirmed that the proposed method shows the best recognition performance.
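
As a rough illustration of the kind of pipeline this abstract describes (MFCC distribution analysis feeding a neural classifier), the sketch below summarizes each utterance by MFCC statistics and trains a small neural network on the result. It is a simplified stand-in, not the authors' hybrid model; `mfcc_stats`, `wav_paths`, and `speaker_ids` are illustrative assumptions.

```python
# Minimal sketch: summarize the MFCC distribution of each utterance and
# classify the speaker with a small neural network (not the paper's model).
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_stats(wav_path, sr=16000, n_mfcc=13):
    """Summarize one utterance's MFCC distribution by its mean and std."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical inputs: wav_paths (list of audio files), speaker_ids (labels).
# X = np.stack([mfcc_stats(p) for p in wav_paths])
# y = np.asarray(speaker_ids)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# clf.fit(X, y)                                   # train the speaker classifier
# speaker = clf.predict([mfcc_stats("query.wav")])[0]
```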

A Semi-Noniterative VQ Design Algorithm for Text Dependent Speaker Recognition (문맥종속 화자인식을 위한 준비반복 벡터 양자기 설계 알고리즘)

  • Lim, Dong-Chul;Lee, Haing-Sei
    • The KIPS Transactions: Part B
    • /
    • v.10B no.1
    • /
    • pp.67-72
    • /
    • 2003
  • In this paper, we study the enhancement of VQ (Vector Quantization) design for text-dependent speaker recognition. Specifically, we present a non-iterative method for building a vector quantization codebook; because it avoids iterative learning, the computational complexity is drastically reduced. The proposed semi-noniterative VQ design method contrasts with the existing design method, which uses an iterative learning algorithm for every training speaker. The characteristics of the semi-noniterative VQ design are as follows. First, the proposed method performs iterative learning only for the reference speaker, whereas the existing method performs iterative learning for every speaker. Second, the quantization regions of a non-reference speaker are identical to those of the reference speaker, and the quantization points of the non-reference speaker are the optimal points for that speaker's statistical distribution. In the numerical experiments, we use 12th-order mel-cepstrum feature vectors of 20 speakers and compare the method with the existing one while changing the codebook size from 2 to 32. The recognition rate of the proposed method is 100% for a suitable codebook size and adequate training data, equal to that of the existing method. Therefore, the proposed semi-noniterative VQ design method is a new alternative that reduces computational complexity while maintaining the recognition rate.
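
A minimal sketch of the semi-noniterative idea under simplifying assumptions: k-means stands in for LBG-style iterative codebook training, which is run only for the reference speaker; every other speaker reuses the reference quantization regions and replaces each codeword with its own region mean. All function names and sizes are illustrative.

```python
# Sketch: iterative codebook training once (reference speaker only), then
# region-wise mean codewords for each non-reference speaker.
import numpy as np
from sklearn.cluster import KMeans

def reference_codebook(ref_features, codebook_size=16, seed=0):
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed)
    km.fit(ref_features)                      # iterative learning, done once
    return km

def speaker_codebook(km_ref, spk_features):
    regions = km_ref.predict(spk_features)    # reuse the reference regions
    codebook = km_ref.cluster_centers_.copy()
    for r in range(codebook.shape[0]):
        in_r = spk_features[regions == r]
        if len(in_r):                         # region mean = speaker's codeword
            codebook[r] = in_r.mean(axis=0)
    return codebook

# Example with random 12-dimensional "cepstral" vectors:
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 12))
spk = rng.normal(0.3, 1.0, size=(400, 12))
cb = speaker_codebook(reference_codebook(ref), spk)
```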

A Study on SVM-Based Speaker Classification Using GMM-supervector (GMM-supervector를 사용한 SVM 기반 화자분류에 대한 연구)

  • Lee, Kyong-Rok
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.1022-1027
    • /
    • 2020
  • In this paper, SVM-based speaker classification using GMM-supervectors is investigated experimentally. To create speaker clusters, conventional speaker change detection is performed with the KL distance using an SNR-based weighting function. The SVM-based speaker classification consists of two steps. In the first step, SVM-based classification between the UBM and the speaker models is performed, speaker information is indexed in each cluster, and the clusters are then grouped by speaker. In the second step, SVM-based classification between the UBM and the speaker models is performed again, taking the speaker cluster groups as input. Linear and RBF kernels are applied for the SVM-based classification. As a result, in the first step the linear kernel showed better performance than RBF, with 148 speaker clusters, MDR 0, FAR 47.3, and ER 50.7. The second-step experiment likewise showed the best performance, with 109 speaker clusters, MDR 1.3, FAR 28.4, and ER 32.1, when the linear kernel was applied.
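
The sketch below illustrates the generic GMM-supervector construction assumed here (relevance-MAP adaptation of UBM means, stacked into one vector) followed by a linear-kernel SVM. The adaptation formula, `relevance` value, and variable names are textbook-style assumptions rather than the paper's exact recipe.

```python
# Sketch: GMM-supervector (MAP-adapted UBM means) + linear-kernel SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def map_adapt_means(ubm, frames, relevance=16.0):
    """Relevance-MAP adaptation of UBM means; returns the stacked supervector."""
    post = ubm.predict_proba(frames)              # (T, M) responsibilities
    n_k = post.sum(axis=0)                        # soft counts per mixture
    ex_k = post.T @ frames                        # (M, D) weighted sums
    alpha = (n_k / (n_k + relevance))[:, None]
    new_means = alpha * (ex_k / np.maximum(n_k[:, None], 1e-8)) \
                + (1.0 - alpha) * ubm.means_
    return new_means.ravel()                      # supervector of length M * D

# Hypothetical data: all_frames for the UBM, cluster_frame_list and labels.
# ubm = GaussianMixture(n_components=64, covariance_type='diag').fit(all_frames)
# X = np.stack([map_adapt_means(ubm, f) for f in cluster_frame_list])
svm = SVC(kernel='linear')                        # linear kernel, as in the paper
# svm.fit(X, labels)
```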

Improvement in Supervector Linear Kernel SVM for Speaker Identification Using Feature Enhancement and Training Length Adjustment (특징 강화 기법과 학습 데이터 길이 조절에 의한 Supervector Linear Kernel SVM 화자식별 개선)

  • So, Byung-Min;Kim, Kyung-Wha;Kim, Min-Seok;Yang, Il-Ho;Kim, Myung-Jae;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.6
    • /
    • pp.330-336
    • /
    • 2011
  • In this paper, we propose a new method to improve the performance of supervector linear kernel SVM (Support Vector Machine) for speaker identification. The method is based on splitting one training datum into several shorter utterances. We use four different databases for evaluating performance and use PCA (Principal Component Analysis), GKPCA (Greedy Kernel PCA), and KMDA (Kernel Multimodal Discriminant Analysis) for feature enhancement. As a result, the proposed method shows improved speaker identification performance with the supervector linear kernel SVM.
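
A minimal sketch of the training-length adjustment described above: one long training utterance is split into several shorter pieces so that each piece can yield its own supervector, giving the linear-kernel SVM more training examples. Splitting into equal frame chunks is an assumption for illustration.

```python
# Sketch: split one (T, D) frame matrix into several training segments.
import numpy as np

def split_utterance(frames, n_pieces=4):
    """Split a (T, D) frame matrix into n_pieces roughly equal segments."""
    return np.array_split(frames, n_pieces, axis=0)

# Example: a 1000-frame utterance of 13-dim features becomes 4 training items.
frames = np.random.randn(1000, 13)
pieces = split_utterance(frames, n_pieces=4)
print([p.shape for p in pieces])   # [(250, 13), (250, 13), (250, 13), (250, 13)]
```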

A Study on Speaker Adaptation of Large Continuous Spoken Language Using back-off bigram (Back-off bigram을 이용한 대용량 연속어의 화자적응에 관한 연구)

  • 최학윤
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.9C
    • /
    • pp.884-890
    • /
    • 2003
  • In this paper, we studied speaker adaptation methods that improve a speaker-independent recognition system. For independent speakers, we compared the results of bigram versus back-off bigram language models and of MAP versus MLLR adaptation. Because the back-off bigram applies the unigram probability and a back-off weight in place of an unseen bigram probability, it has the effect of adding a small weight to the bigram probability values. We experimented with 39-dimensional feature vectors consisting of 12 MFCCs, log energy, and their delta and delta-delta parameters. For the recognition experiments, we constructed a system based on CHMMs with triphone recognition units and with bigram and back-off bigram language models.
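
To make the back-off mechanism concrete, the sketch below estimates a tiny back-off bigram model: seen bigrams use a discounted relative frequency, and unseen bigrams fall back to the unigram probability scaled by a back-off weight. The fixed absolute-discount value and the simplified weight are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a back-off bigram estimate (simplified; a full Katz back-off
# also renormalizes the unigram mass over unseen successors).
from collections import Counter

def backoff_bigram(tokens, discount=0.5):
    unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    total = sum(unigrams.values())

    def prob(w1, w2):
        if (w1, w2) in bigrams:
            # discounted relative frequency for a seen bigram
            return max(bigrams[(w1, w2)] - discount, 0) / unigrams[w1]
        # mass freed by discounting, redistributed via the unigram model
        seen = [b for b in bigrams if b[0] == w1]
        backoff_weight = discount * len(seen) / unigrams[w1] if unigrams[w1] else 1.0
        return backoff_weight * unigrams[w2] / total
    return prob

p = backoff_bigram("the cat sat on the mat".split())
print(p("the", "cat"), p("the", "sat"))   # seen bigram vs. backed-off estimate
```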

Efficient Speaker Identification based on Robust VQ-PCA (강인한 VQ-PCA에 기반한 효율적인 화자 식별)

  • Lee Ki-Yong
    • Journal of Internet Computing and Services
    • /
    • v.5 no.3
    • /
    • pp.57-62
    • /
    • 2004
  • In this paper, an efficient speaker identification method based on robust vector quantization-principal component analysis (VQ-PCA) is proposed to solve the problems caused by outliers and the high dimensionality of training feature vectors in speaker identification. First, the proposed method partitions the data space into several disjoint regions by robust VQ based on M-estimation. Second, robust PCA is obtained from the covariance matrix in each region. Finally, our method obtains a Gaussian mixture model (GMM) for each speaker from the feature vectors transformed and reduced in dimension by the robust PCA in each region. Compared to the conventional GMM with diagonal covariance matrices, at the same performance level the proposed method gives faster results with less storage and, moreover, shows robust performance against outliers.

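A sketch of the VQ-PCA pipeline just described, with plain k-means and PCA standing in for the paper's M-estimation-based robust variants: partition the feature space, reduce dimensionality per region, then fit a GMM to the projected vectors. All sizes and names are illustrative assumptions.

```python
# Sketch: region-wise VQ + PCA + GMM (non-robust stand-ins for the paper's
# robust VQ and robust PCA).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def vq_pca_gmm(features, n_regions=4, n_dims=8, n_mix=4):
    vq = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(features)
    models = []
    for r in range(n_regions):
        region = features[vq.labels_ == r]
        pca = PCA(n_components=n_dims).fit(region)        # per-region projection
        gmm = GaussianMixture(n_components=n_mix, covariance_type='diag')
        gmm.fit(pca.transform(region))                     # GMM on reduced vectors
        models.append((pca, gmm))
    return vq, models

vq, models = vq_pca_gmm(np.random.randn(2000, 24))
```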

Feature Extraction from the Strange Attractor for Speaker Recognition (화자인식을 위한 어트랙터로 부터의 음성특징추출)

  • Kim, Tae-Sik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.2E
    • /
    • pp.26-31
    • /
    • 1994
  • A new feature extraction technique utilizing the strange attractor and an artificial neural network for speaker recognition is presented. Since many signals change their characteristics over long periods of time, simple time-domain processing techniques should be capable of providing useful information about signal features. In many cases, a normal time series can be viewed as a dynamical system with a low-dimensional attractor that can be reconstructed from the time series using time delays. The reconstruction of the strange attractor is described. In this technique, the raw signal is reconstructed as a geometric three-dimensional attractor. The classification decision for speaker recognition is based upon the processing of sets of feature vectors derived from the attractor. Three different methods for feature extraction are discussed: box-counting dimension, natural measure with a regular hexahedron, and a plank-type box. An artificial neural network is designed for training on the feature data generated by these methods. The recognition rates are about 82%-96%, depending on the extraction method.

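The sketch below illustrates the two steps this abstract describes: time-delay embedding of a scalar signal into a three-dimensional attractor, followed by a box-counting dimension estimate over the occupied boxes. The delay, embedding dimension, and box sizes are illustrative assumptions.

```python
# Sketch: time-delay embedding + box-counting dimension estimate.
import numpy as np

def delay_embed(x, delay=8, dim=3):
    """Map a 1-D signal to points (x[t], x[t+delay], x[t+2*delay], ...)."""
    n = len(x) - (dim - 1) * delay
    return np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)

def box_counting_dimension(points, box_sizes=(0.5, 0.25, 0.125, 0.0625)):
    span = points.max(0) - points.min(0) + 1e-12
    pts = (points - points.min(0)) / span            # normalize to the unit cube
    counts = [len({tuple(np.floor(p / s).astype(int)) for p in pts})
              for s in box_sizes]
    # slope of log(count) vs. log(1/size) approximates the dimension
    return np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)[0]

signal = np.sin(np.linspace(0, 60, 4000)) + 0.05 * np.random.randn(4000)
attractor = delay_embed(signal)                      # (N, 3) reconstructed attractor
print(box_counting_dimension(attractor))
```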

Speech emotion recognition for affective human robot interaction (감성적 인간 로봇 상호작용을 위한 음성감정 인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.555-558
    • /
    • 2006
  • Speech that carries emotion is one of the cues that allow a listener to infer the speaker's psychological state. To enable smooth affective interaction between humans and robots, we present a method that extracts features from the emotion contained in speech signals and classifies the emotion. Basic acoustic and prosodic features are extracted from the speech signal, and a feature vector of statistics computed from them is fed into an SVM (support vector machine) based pattern classifier that distinguishes six emotions: angry, bored, happy, neutral, sad, and surprised. Recognition experiments with the SVM showed an accuracy of 51.4%, while human judgment achieved 60.4%. In addition, when the emotion labels of the speaker-annotated database were replaced by the emotion states judged by multiple listeners, SVM classification still achieved 51.2% accuracy, indicating that the basic features used for emotion recognition are effective.

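A minimal sketch of the pipeline summarized above: utterance-level statistics of frame-level acoustic/prosodic features are classified into six emotions by an SVM. The concrete feature set, statistics, and names are assumptions, not the authors' exact configuration.

```python
# Sketch: statistics of frame-level features -> SVM emotion classifier.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad", "surprised"]

def utterance_features(frame_feats):
    """frame_feats: (T, D) per-frame features (e.g. energy, pitch, MFCCs)."""
    return np.concatenate([frame_feats.mean(0), frame_feats.std(0),
                           frame_feats.min(0), frame_feats.max(0)])

# Hypothetical inputs: frame_feature_list (one (T, D) array per utterance)
# and emotion_labels (indices into EMOTIONS).
# X = np.stack([utterance_features(f) for f in frame_feature_list])
# y = np.asarray(emotion_labels)
clf = SVC(kernel='rbf')                   # SVM-based pattern classifier
# clf.fit(X, y)
# print(EMOTIONS[clf.predict([utterance_features(test_frames)])[0]])
```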

A Study on Isolated Words Speech Recognition in a Running Automobile (주행중인 자동차 환경에서의 고립단어 음성인식 연구)

  • 유봉근
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.381-384
    • /
    • 1998
  • This paper enables hands-free speech input and output at all times, without auxiliary switch operation, to ensure both driver safety and convenience in a running automobile. To obtain thresholds that are robust to noise, the reference energy and zero-crossing rate are updated at regular intervals, and endpoint detection is performed automatically and accurately in real time in two stages using a bandpass filter. DMS (Dynamic Multi-Section) is used for the reference patterns, and the use of two models is proposed to improve speaker discriminability. In addition, to be robust to the noise environment of a running vehicle, driving conditions are divided into normal driving (under 80 km/h) and high-speed driving (over 80 km/h), and the appropriate model is selected automatically according to the varying vehicle noise level. 13th-order PLP coefficients are used as speech feature vectors and One-Stage Dynamic Programming (OSDP) is used as the recognition algorithm. As a result, for 33 frequently used vehicle convenience control commands, recognition rates of 89.75% (speaker-independent) and 90.08% (speaker-dependent) were obtained on the Jungbu and Yeongdong expressways (over 80 km/h), and 92.29% (speaker-independent) and 92.42% (speaker-dependent) on the Gyeongbu expressway. In a low-speed driving environment (under 80 km/h, on cement and asphalt roads in and around Seoul), recognition rates of 92.89% (speaker-independent) and 94.44% (speaker-dependent) were obtained.

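The sketch below illustrates the adaptive endpoint-detection idea described above: frame energy and zero-crossing rate are compared against thresholds re-estimated from recent frames, so the detector can follow a changing in-car noise level. Frame sizes, window lengths, the margin factor, and the use of a simple sliding background window are illustrative assumptions.

```python
# Sketch: energy/ZCR endpoint detection with periodically updated thresholds.
import numpy as np

def frame_energy_zcr(x, frame_len=400, hop=160):
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
    energy = np.array([np.sum(f ** 2) for f in frames])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    return energy, zcr

def detect_speech(energy, zcr, noise_frames=25, margin=3.0):
    speech = np.zeros(len(energy), dtype=bool)
    for t in range(noise_frames, len(energy)):
        # thresholds re-estimated from a sliding window of recent frames
        bg = slice(t - noise_frames, t)
        e_thr = energy[bg].mean() * margin
        z_thr = zcr[bg].mean() * margin
        speech[t] = energy[t] > e_thr or zcr[t] > z_thr
    return speech

sig = np.concatenate([0.01 * np.random.randn(8000),        # background noise
                      np.sin(np.linspace(0, 400, 8000))])   # "speech"
e, z = frame_energy_zcr(sig)
print(np.flatnonzero(detect_speech(e, z))[:5])              # first speech frames
```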

The Speaker Identification Using Incremental Learning (Incremental Learning을 이용한 화자 인식)

  • Sim, Kwee-Bo;Heo, Kwang-Seung;Park, Chang-Hyun;Lee, Dong-Wook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.5
    • /
    • pp.576-581
    • /
    • 2003
  • Speech signals carry the characteristics of their speakers. In this paper, we propose a speaker identification system that uses incremental learning based on a neural network. The speech signal recorded through a microphone is passed through endpoint detection and divided into voiced and unvoiced segments. The extracted 12th-order cepstral coefficients are used as the input data for the neural network. Incremental learning is a learning algorithm in which the already-learned weights are retained and only the new weights, created when a new speaker is added, are trained. The architecture of the neural network is extended with the number of speakers, so the system can learn without restricting the number of speakers.
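
A small sketch of the incremental idea summarized above: keep one output unit per enrolled speaker, leave previously trained units untouched, and train only the weights of a newly added speaker. Using logistic-regression heads on fixed cepstral-style features is a simplification of the paper's growing neural network, and all class and method names are assumptions.

```python
# Sketch: per-speaker output heads; adding a speaker trains only its own weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

class IncrementalSpeakerID:
    def __init__(self):
        self.heads = []                      # one trained head per speaker

    def add_speaker(self, new_feats, other_feats):
        """Train only the new speaker's weights; existing heads are kept."""
        X = np.vstack([new_feats, other_feats])
        y = np.concatenate([np.ones(len(new_feats)), np.zeros(len(other_feats))])
        self.heads.append(LogisticRegression(max_iter=1000).fit(X, y))

    def identify(self, feats):
        scores = [h.predict_proba(feats).mean(axis=0)[1] for h in self.heads]
        return int(np.argmax(scores))        # index of the most likely speaker

sid = IncrementalSpeakerID()
a, b = np.random.randn(100, 12), np.random.randn(100, 12) + 1.0
sid.add_speaker(a, b)
sid.add_speaker(b, a)                        # the model grows with new speakers
print(sid.identify(a))
```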