• Title/Summary/Keyword: 화자 패턴 (speaker pattern)

Search results: 111

The Speaker Recognition System using the Pitch Alteration (피치변경을 이용한 화자인식 시스템)

  • Jung JongSoon;Bae MyungJin
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.115-118
    • /
    • 2002
  • Parameters used in a speaker recognition system should fully express the speaker's characteristics present in speech. That is, a feature whose inter-speaker variance is larger than its intra-speaker variance is useful for distinguishing between speakers. To minimize errors between speakers, improved recognition technology is also required in addition to such distinguishing characteristics. Recent simulation results show that more accurate performance is obtained by using dynamic characteristics together with the constant characteristics of a speaking habit. We therefore propose the following approach: prosodic information is used as the characteristic vector of speech. The characteristic vector generally used in speaker recognition systems models spectral information and performs well in noise-free conditions; however, it is distorted in noisy conditions, which reduces the recognition rate. In this paper, we divide the pitch contour into segments from which a dynamic characteristic can be estimated, and use this as a recognition feature. Simulations confirmed that the dynamic characteristic is very robust in noisy conditions. Acceptance or rejection is decided by comparing the test pattern with the reference pattern, and the recognition rate of the proposed algorithm is higher than that obtained using spectral and prosodic information. In particular, a stable recognition rate is obtained in noisy conditions.
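The segment-wise dynamic characteristic of the pitch contour described in the abstract can be sketched as follows; the segment length and the use of a least-squares slope per segment are illustrative assumptions, not the paper's exact parameterization:

```python
# Hypothetical sketch: split a pitch contour into fixed-length segments and
# use the per-segment slope as a dynamic recognition feature.
def segment_slopes(pitch, seg_len=5):
    """Least-squares slope of each fixed-length segment of a pitch contour."""
    slopes = []
    for i in range(0, len(pitch) - seg_len + 1, seg_len):
        seg = pitch[i:i + seg_len]
        n = len(seg)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(seg) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, seg))
        den = sum((x - mean_x) ** 2 for x in xs)
        slopes.append(num / den)
    return slopes
```

Because slopes describe the shape of the contour rather than its absolute level, a feature of this kind is plausibly less sensitive to additive noise than raw spectral features, which matches the robustness claim above.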


Development of Advanced Personal Identification System Using Iris Image and Speech Signal (홍채와 음성을 이용한 고도의 개인확인시스템)

  • Lee, Dae-Jong;Go, Hyoun-Joo;Kwak, Keun-Chang;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.3
    • /
    • pp.348-354
    • /
    • 2003
  • This paper proposes a new algorithm for an advanced personal identification system using iris patterns and speech signals. Since the proposed algorithm adopts a fusion scheme that takes advantage of both iris recognition and speaker identification, it is robust in noisy environments. To evaluate the performance of the proposed scheme, we compare it with iris recognition and speaker identification individually. In the experiments, at a high-security level the proposed method showed a 56.7% improvement over the iris recognition method and a 10% improvement over the speaker identification method. In noisy environments, it showed a 30% improvement over the iris recognition method and a 60% improvement over the speaker identification method at the same security level.
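The abstract does not give the fusion rule itself; a minimal sketch of score-level fusion of this kind, assuming normalized match scores in [0, 1] and an illustrative weight and threshold, could look like:

```python
# Weighted score-level fusion of two biometric matchers (illustrative values).
def fused_score(iris_score, speech_score, w=0.5):
    """Weighted sum of two match scores, each assumed to lie in [0, 1]."""
    return w * iris_score + (1.0 - w) * speech_score

def accept(iris_score, speech_score, threshold=0.7, w=0.5):
    """Accept the identity claim if the fused score clears the threshold."""
    return fused_score(iris_score, speech_score, w) >= threshold
```

The design intuition behind such fusion is that noise degrades the two modalities independently, so a degraded speech score can be compensated by a clean iris score and vice versa.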

Selective Attentive Learning for Fast Speaker Adaptation in Multilayer Perceptron (다층 퍼셉트론에서의 빠른 화자 적응을 위한 선택적 주의 학습)

  • 김인철;진성일
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.48-53
    • /
    • 2001
  • In this paper, a selective attentive learning method is proposed to improve the learning speed of the multilayer perceptron based on the error backpropagation algorithm. Three attention criteria are introduced to determine effectively which set of input patterns, or which portion of the network, should be attended to for effective learning. These criteria are based on the mean squared error function of the output layer and the class-selective relevance of the hidden nodes. Learning is accelerated by lowering the computational cost per iteration. The effectiveness of the proposed method is demonstrated on a speaker adaptation task for an isolated-word recognition system. The experimental results show that the proposed selective attention technique can reduce the learning time by more than 60% on average.
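One of the attention criteria above, selecting input patterns by output-layer error, can be sketched as follows; the threshold and the stand-in `predict` function are assumptions for illustration, not the paper's MLP:

```python
# Sketch of error-based pattern selection: only patterns whose mean squared
# error exceeds a threshold are kept for the next weight update, which lowers
# the cost per training iteration.
def select_patterns(patterns, targets, predict, threshold=0.01):
    """Return indices of training patterns whose MSE is above the threshold."""
    selected = []
    for i, (x, t) in enumerate(zip(patterns, targets)):
        y = predict(x)
        mse = sum((yi - ti) ** 2 for yi, ti in zip(y, t)) / len(t)
        if mse > threshold:
            selected.append(i)
    return selected
```

Patterns the network already classifies well contribute little gradient, so skipping them trades a small amount of accuracy per step for a large reduction in computation, consistent with the reported speedup.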


The Application of an HMM-based Clustering Method to Speaker Independent Word Recognition (HMM을 기본으로한 집단화 방법의 불특정화자 단어 인식에 응용)

  • Lim, H.;Park, S.-Y.;Park, M.-W.
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.5
    • /
    • pp.5-10
    • /
    • 1995
  • In this paper we present a clustering procedure based on HMMs in order to obtain multiple statistical models that can absorb the variations among speakers with different ways of saying words. The HMM-clustered models obtained from this technique are applied to speaker-independent isolated-word recognition. The HMM clustering method splits off from the training set all observation sequences whose likelihood scores fall below a threshold and creates a new model from the observation sequences in the new cluster. Clustering is iterated by assigning each observation sequence to the cluster whose model yields the maximum likelihood score. If any cluster has changed from the previous iteration, the model in that cluster is re-estimated using the Baum-Welch re-estimation procedure. This method is therefore more efficient than the conventional template-based clustering technique, since it integrates the clustering procedure with parameter estimation. Experimental data show that the HMM-based clustering procedure yields a 1.43% performance improvement over the conventional template-based clustering method and a 2.08% improvement over the single-HMM method for recognition of isolated Korean digits.
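The split step of the clustering loop can be sketched with a deliberately simplified stand-in model: here each cluster "model" is a one-dimensional Gaussian over a scalar feature, whereas the paper re-estimates full HMMs with Baum-Welch. Everything below the abstraction swap is an assumption:

```python
import math

def loglik(x, mean, var):
    """Log-likelihood of a scalar under a Gaussian (stand-in for HMM scoring)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def split_cluster(values, threshold):
    """Split off the values that score poorly under the cluster's model;
    they would seed a new cluster, as in the HMM clustering procedure."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values) or 1e-6
    keep, new = [], []
    for v in values:
        (keep if loglik(v, mean, var) >= threshold else new).append(v)
    return keep, new
```

In the full procedure this split, reassignment by maximum likelihood, and model re-estimation are iterated until cluster membership stabilizes.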


Development of a lipsync algorithm based on A/V corpus (코퍼스 기반의 립싱크 알고리즘 개발)

  • 하영민;김진영;정수경
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.145-148
    • /
    • 2000
  • This paper proposes an audio-visual synchronization algorithm for synthesizing two-dimensional facial coordinate data. Visual parameters were obtained by tracking markers attached to the speaker's face, and their correlation with prosodic information as well as phonemic information was analyzed. A viseme-based corpus was chosen as the synthesis unit, and coarticulation was modeled by also considering the surrounding phonemic context. For the patterns in a lookup table corresponding to the input corpus, the phonemic distance to the reference pattern is computed over the neighboring phonemes, prosodic information is extracted from the speech file to compute a prosodic distance, and the two distances are weighted to obtain the distance to each pattern. A Viterbi search is then performed over the junctions of the five closest patterns to select the optimal path, the principal-component-analyzed visual information is reconstructed, and the timing information is adjusted.
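The candidate-selection step above is a standard unit-selection Viterbi search; a minimal sketch, with toy candidate and join costs standing in for the weighted phonemic/prosodic distances, might look like:

```python
# Viterbi search over per-unit candidate patterns: minimize the sum of unit
# costs plus join costs between consecutive candidates. Costs are toy values.
def viterbi(unit_costs, join_cost):
    """unit_costs: list of {candidate: cost} dicts, one per synthesis unit;
    join_cost(a, b): concatenation cost between consecutive candidates."""
    prev = dict(unit_costs[0])
    back = []
    for costs in unit_costs[1:]:
        cur, bp = {}, {}
        for c, cost in costs.items():
            best = min(prev, key=lambda p: prev[p] + join_cost(p, c))
            cur[c] = prev[best] + join_cost(best, c) + cost
            bp[c] = best
        prev, back = cur, back + [bp]
    last = min(prev, key=prev.get)          # best final candidate
    path = [last]
    for bp in reversed(back):               # backtrack to recover the path
        path.append(bp[path[-1]])
    return list(reversed(path))
```

In the paper's setting each unit would offer its five nearest lookup-table patterns as candidates, and the join cost would penalize discontinuities at pattern boundaries.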


A Speech Representation and Recognition Method using Sign Patterns (부호패턴에 의한 음성표현과 인식방법)

  • Kim Young Hwa;Kim Un Il;Lee Hee Jeong;Park Byung Chul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.8 no.5
    • /
    • pp.86-94
    • /
    • 1989
  • In this paper, a method using the sign pattern (+, -) of mel-cepstrum coefficients as a new speech representation is proposed. Relatively stable patterns can be obtained for speech signals with strong stationarity, such as vowels and nasals, and phonemic differences due to speaker individuality can be absorbed without affecting the characteristics of the phoneme. We show that the recognition procedure for phonemes and the training procedure for phoneme models can both be simplified by representing Korean phonemes with such sign patterns.
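The representation itself is simple enough to sketch directly; the treatment of exact zeros as '+' below is an assumption, since the abstract only mentions the two signs:

```python
# Sign-pattern representation: keep only the sign of each mel-cepstrum
# coefficient, discarding its magnitude.
def sign_pattern(cepstrum):
    """Map each coefficient to '+' or '-' (zero treated as '+')."""
    return ''.join('+' if c >= 0 else '-' for c in cepstrum)
```

Discarding magnitude is what absorbs speaker-dependent variation: two speakers' vowels with different coefficient amplitudes but the same signs map to the same pattern.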


Analysis and Use of Intonation Features for Emotional States (감정 상태에 따른 발화문의 억양 특성 분석 및 활용)

  • Lee, Ho-Joon;Park, Jong C.
    • Annual Conference on Human and Language Technology
    • /
    • 2008.10a
    • /
    • pp.145-150
    • /
    • 2008
  • In this paper, we use an emotional speech corpus of 240 utterances in total, in which six speakers uttered eight sentences in five emotional states, to analyze the intonation patterns characteristic of each emotional state, and we discuss how to apply these intonation patterns to a speech synthesis system. We analyze the characteristic intonation patterns of each emotional state with a focus on the length of the intonation phrase, the phrase-final boundary tone, and declination, and show how intonation features that can distinguish joy, sadness, anger, and fear are applied to a speech synthesis system. Through this study we confirmed the rising intonation that appears in the angry state and identified the characteristic intonation pattern of each emotion.


RPCA-GMM for Speaker Identification (화자식별을 위한 강인한 주성분 분석 가우시안 혼합 모델)

  • 이윤정;서창우;강상기;이기용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.7
    • /
    • pp.519-527
    • /
    • 2003
  • Speech is strongly influenced by outliers introduced by unexpected events such as additive background noise, changes in the speaker's utterance pattern, and voice detection errors. Such outliers can severely degrade speaker recognition performance. In this paper, we propose a GMM based on robust principal component analysis (RPCA-GMM) using M-estimation to solve the problems of both outliers and the high dimensionality of the training feature vectors in speaker identification. First, a new feature vector of reduced dimension is obtained by robust PCA based on M-estimation. The robust PCA transforms the original feature vector onto a reduced-dimensional linear subspace spanned by the leading eigenvectors of the covariance matrix of the feature vectors. Second, a GMM with diagonal covariance matrices is estimated from these transformed feature vectors. We performed speaker identification experiments to show the effectiveness of the proposed method, comparing the proposed RPCA-GMM with conventional PCA and with a conventional diagonal-covariance GMM. As the proportion of outliers increases in steps of 2%, the proposed method maintains almost the same speaker identification rate, with only 0.03% degradation, while the conventional GMM and PCA degrade by 0.65% and 0.55%, respectively. This means that our method is more robust to the presence of outliers.
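The second stage, scoring an already dimension-reduced feature vector under a diagonal-covariance GMM, can be sketched as follows; the mixture parameters are placeholders, and the robust-PCA projection and EM training steps are assumed to have happened beforehand:

```python
import math

def diag_gauss_loglik(x, mean, var):
    """Log-likelihood of vector x under one diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood under a mixture, via log-sum-exp for stability."""
    logs = [math.log(w) + diag_gauss_loglik(x, m, v)
            for w, m, v in zip(weights, means, variances)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(l - mx) for l in logs))
```

In identification, each enrolled speaker has one such GMM and the test utterance is assigned to the speaker whose model gives the highest total log-likelihood over its frames.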

Speech emotion recognition for affective human robot interaction (감성적 인간 로봇 상호작용을 위한 음성감정 인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the Korea HCI Society Conference (한국HCI학회 학술대회논문집)
    • /
    • 2006.02a
    • /
    • pp.555-558
    • /
    • 2006
  • Emotion-bearing speech is one of the cues that allows a listener to infer the speaker's psychological state. For smooth affective interaction between humans and robots, we present a method for extracting features from the emotion contained in a speech signal and classifying the emotion. Basic acoustic and prosodic features are extracted from the speech signal, and a feature vector of statistics computed from them is fed to a support vector machine (SVM) based pattern classifier to classify six emotions: angry, bored, happy, neutral, sad, and surprised. Recognition experiments with the SVM gave a 51.4% recognition rate, while human judgment gave 60.4%. Furthermore, when the emotion labels of the speaker-judged database were replaced by the emotional states judged by multiple listeners, the SVM classified the emotions with 51.2% accuracy, showing that the basic features used for emotion recognition are effective.
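The feature construction described above, per-utterance statistics over frame-level acoustic and prosodic tracks stacked into one vector for the SVM, can be sketched as follows; the particular statistics and the two tracks shown are assumptions, since the abstract does not list them:

```python
# Per-utterance statistics of frame-level tracks, concatenated into one
# fixed-length feature vector for a pattern classifier such as an SVM.
def stats(track):
    """Mean, standard deviation, min, and max of one frame-level track."""
    n = len(track)
    mean = sum(track) / n
    var = sum((v - mean) ** 2 for v in track) / n
    return [mean, var ** 0.5, min(track), max(track)]

def feature_vector(pitch, energy):
    """Concatenate statistics of (hypothetical) pitch and energy tracks."""
    return stats(pitch) + stats(energy)
```

Summarizing variable-length utterances into fixed-length statistic vectors is what makes a standard SVM applicable, since the classifier needs inputs of constant dimension.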


A Study on the Recognition of Connected Digits Using Corrective Training with HMM and Post-Processing (HMM의 교정 학습과 후처리를 이용한 연결 숫자음 인식에 관한 연구)

  • 우인봉
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.161-165
    • /
    • 1994
  • The HMM is an algorithm that shows good results and is widely used in speech recognition. However, its training method, maximum likelihood estimation, has the drawback that it does not produce the model parameter values that maximize the recognition rate. To compensate for this, corrective training is introduced into the training process of the Segmental K-means connected-word recognition algorithm to readjust the model parameter values. Unlike English connected digits, Korean connected digits are strongly affected by liaison (coarticulation across word boundaries). To reduce liaison errors in the level-building process, words that can arise through liaison were added as separate models. Several rules for these added word models are then applied to the recognition result to readjust the output. The system was implemented on a DSP board with an onboard TMS320C30 processor and an IBM PC, and the reference patterns were built from three male speakers in a laboratory noise environment. Recognition experiments on 252 utterances of 21 telephone numbers gave a speaker-dependent recognition rate of 92.1%.
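The post-processing step, rewriting recognition output that contains the extra liaison word models back into canonical digit sequences, can be sketched as a rule table; the romanized digit labels and the single rule below are made-up placeholders, not the rules used in the paper:

```python
# Illustrative post-processing: map hypothetical liaison-variant word pairs in
# the recognizer output back to the canonical digit sequence they stand for.
LIAISON_RULES = {
    ('il', 'yi'): ('il', 'i'),   # placeholder rule: liaison variant -> digits
}

def postprocess(words):
    """Scan the output left to right, applying liaison rewrite rules."""
    out, i = [], 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if pair in LIAISON_RULES:
            out.extend(LIAISON_RULES[pair])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out
```

Keeping the liaison variants as separate models during level building and mapping them back afterwards lets the search match what was actually said without changing the final digit vocabulary.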
