• Title/Summary/Keyword: Speaker recognition systems

Automatic Speech Style Recognition Through Sentence Sequencing for Speaker Recognition in Bilateral Dialogue Situations (양자 간 대화 상황에서의 화자인식을 위한 문장 시퀀싱 방법을 통한 자동 말투 인식)

  • Kang, Garam; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems, v.27 no.2, pp.17-32, 2021
  • Speaker recognition is generally divided into speaker identification and speaker verification. It plays an important role in automatic voice systems, and its importance is growing as portable devices, voice technology, and audio content continue to develop. Previous speaker recognition studies have aimed to determine automatically who the speaker is from voice files and to improve accuracy. Speech style is an important sociolinguistic subject: it carries useful information about the speaker's attitude, conversational intention, and personality, which can serve as important clues for speaker recognition. The sentence-final ending used in an utterance determines the sentence type and conveys the speaker's intention, psychological attitude, or relationship to the listener. Because the choice of sentence-final ending varies with the characteristics of the speaker, the type and distribution of endings used by an unidentified speaker can help to recognize that speaker. However, few existing text-based speaker recognition studies have considered speech style, and adding speech-style information to signal-based speaker recognition techniques could further improve accuracy. Hence, the purpose of this paper is to propose a novel method that uses speech style, expressed through sentence-final endings, to improve the accuracy of Korean speaker recognition. To this end, a method called sentence sequencing is proposed, which generates vector values from the type and frequency of the sentence-final endings appearing in a specific person's utterances. To evaluate the proposed method, training and performance evaluation were conducted with an actual drama script. The method proposed in this study can be used to improve the performance of Korean speaker recognition services.
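
The core idea of the abstract above can be sketched as a normalized frequency vector over sentence-final endings. The romanized ending inventory below is an illustrative assumption, not the paper's actual feature set:

```python
from collections import Counter

# Illustrative inventory of (romanized) Korean sentence-final endings.
# The paper's actual inventory is not given in the abstract.
ENDINGS = ["-da", "-yo", "-ni", "-ra", "-ne", "-ji"]

def ending_vector(utterances):
    """Normalized frequency of each sentence-final ending in a
    speaker's utterances -- a 'sentence sequencing' style vector."""
    counts = Counter()
    for sentence in utterances:
        for ending in ENDINGS:
            if sentence.endswith(ending):
                counts[ending] += 1
                break
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts[e] / total for e in ENDINGS]
```

Two speakers could then be compared by, for example, cosine similarity between their vectors.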

Speaker recognition technique for offline conference recording system (오프라인 회의 기록 지원시스템을 위한 화자 인식 기법)

  • Park, Han-Mu; Son, Yun-Sik; Jeong, Jin-U
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2007.11a, pp.29-32, 2007
  • With recent advances in image processing, there have been attempts to apply image processing technology to various application systems. In particular, with the development of recognition techniques that treat faces in images as objects, technologies based on facial information are being used in various fields such as games and cameras. This paper presents a technique for distinguishing the speaker in an offline meeting support system. The proposed technique extracts feature values for speaker discrimination from facial object information, and determines whether a person is speaking from the variance of the pixels forming the edges around the mouth.
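
The decision rule described, speaking versus not speaking from the variance of edge pixels around the mouth, might be sketched as follows. The threshold value and the use of per-frame edge-pixel counts are assumptions, since the abstract does not specify them:

```python
import numpy as np

def is_speaking(mouth_edge_frames, threshold=50.0):
    """Decide whether a person is speaking from the variance of the
    number of edge pixels in the mouth region across video frames.

    mouth_edge_frames: sequence of 2-D binary edge maps of the mouth
    region, e.g. produced by an edge detector on cropped frames."""
    edge_counts = np.array([frame.sum() for frame in mouth_edge_frames])
    return float(edge_counts.var()) > threshold
```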

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • Kim, Weon-Goo
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2004.04a, pp.470-473, 2004
  • In this paper, speech feature parameters are compared for emotion recognition from the speech signal. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the mean, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with and without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters served as prosodic information, and MFCC parameters as phonetic information. In the experiments, a vector quantization based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
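
The statistical feature vector described above (mean, standard deviation, and maximum of the pitch and energy contours) is straightforward to sketch:

```python
import numpy as np

def prosodic_features(pitch, energy):
    """Six-dimensional prosodic feature vector: mean, standard
    deviation, and maximum of the pitch and energy contours."""
    feats = []
    for contour in (np.asarray(pitch, float), np.asarray(energy, float)):
        feats.extend([contour.mean(), contour.std(), contour.max()])
    return np.array(feats)
```

A VQ-based recognizer would then cluster such vectors per emotion class and classify a test utterance by its nearest codebook.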

Improving Speaker Enrolling Speed for Speaker Verification Systems Based on Multilayer Perceptrons by Using a Qualitative Background Speaker Selection (정질적 기준을 이용한 다층신경망 기반 화자증명 시스템의 등록속도 단축방법)

  • 이태승;황병원
    • The Journal of the Acoustical Society of Korea, v.22 no.5, pp.360-366, 2003
  • Although multilayer perceptrons (MLPs) offer several advantages over other pattern recognition methods, MLP-based speaker verification systems suffer from slow enrollment caused by the many background speakers needed to achieve a low verification error. To address this problem, the quantitative discriminative cohort speakers (QnDCS) method, which introduced the cohort speakers method into such systems, reduced the number of background speakers required to enroll speakers. Although QnDCS achieved this goal to some extent, the improvement in enrollment speed was still unsatisfactory. To further improve enrollment speed, this paper proposes the qualitative DCS (QlDCS) method, which introduces a qualitative criterion for selecting fewer background speakers. An experiment on both methods was conducted using an MLP- and continuant-based speaker verification system and a speech database. The results show that the proposed QlDCS method enrolls speakers in half the time required by QnDCS under the online error backpropagation (EBP) method.

Development of Advanced Personal Identification System Using Iris Image and Speech Signal (홍채와 음성을 이용한 고도의 개인확인시스템)

  • Lee, Dae-Jong; Go, Hyoun-Joo; Kwak, Keun-Chang; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.13 no.3, pp.348-354, 2003
  • This paper proposes a new algorithm for an advanced personal identification system using iris patterns and speech signals. Since the proposed algorithm adopts a fusion scheme to take advantage of both iris recognition and speaker identification, it is robust in noisy environments. To evaluate the performance of the proposed scheme, we compared it with iris recognition and speaker identification individually. In the experiments, at a high-security level the proposed method showed a 56.7% improvement over the iris recognition method and a 10% improvement over the speaker identification method. In noisy environments, it showed a 30% improvement over iris recognition and a 60% improvement over speaker identification at the same security level.
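
The abstract does not give the fusion rule, so the sketch below assumes a simple weighted-sum fusion of normalized match scores, one common choice for combining biometric modalities; the weight and threshold values are illustrative, not from the paper:

```python
def fused_score(iris_score, speech_score, w=0.6):
    """Weighted-sum fusion of two normalized (0..1) match scores.
    The weight w is a tunable assumption, not a value from the paper."""
    return w * iris_score + (1.0 - w) * speech_score

def accept(iris_score, speech_score, threshold=0.5):
    """Accept the identity claim when the fused score clears a threshold."""
    return fused_score(iris_score, speech_score) >= threshold
```

Because the two modalities fail under different conditions (noise degrades speech but not iris images), the fused score degrades more gracefully than either modality alone.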

A Method on the Improvement of Speaker Enrolling Speed for a Multilayer Perceptron Based Speaker Verification System through Reducing Learning Data (다층신경망 기반 화자증명 시스템에서 학습 데이터 감축을 통한 화자등록속도 향상방법)

  • 이백영;황병원;이태승
    • The Journal of the Acoustical Society of Korea, v.21 no.6, pp.585-591, 2002
  • While the multilayer perceptron (MLP) offers several advantages over existing pattern recognition methods, it requires a relatively long learning time. This prolongs speaker enrollment in a speaker verification system that uses an MLP as its classifier. This paper proposes a method that shortens enrollment time by adopting the cohort speakers method used in existing parametric systems and reducing the number of background speakers required to train the MLP, and confirms the effect of the method with an experiment applying it to a continuant- and MLP-based speaker verification system.

Voice Dialing system using Stochastic Matching (확률적 매칭을 사용한 음성 다이얼링 시스템)

  • Kim, Weon-Goo
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2004.04a, pp.515-518, 2004
  • This paper presents a method that improves the performance of a personal voice dialing system using speaker-independent phoneme HMMs. Since such a system uses only the phone transcription of the input sentence, its storage space can be greatly reduced. However, its performance is worse than that of a system using speaker-dependent models, due to the phone recognition errors generated when speaker-independent models are used. To solve this problem, a new method is presented that jointly estimates transformation vectors for speaker adaptation and transcriptions from the training utterances. The biases and transcriptions are estimated iteratively from each user's training data with a maximum-likelihood approach to stochastic matching, using speaker-independent phone models. Experimental results show that the proposed method is superior to the conventional method, which uses transcriptions only.
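
The iterative bias estimation described above can be sketched as alternating between aligning bias-shifted features to speaker-independent model means and re-estimating the bias as the average residual. The nearest-mean alignment below is a simplification standing in for the HMM state alignment the paper implies:

```python
import numpy as np

def estimate_bias(features, model_means, iters=5):
    """Iteratively estimate a spectral bias vector b so that the
    shifted features (features - b) best match the SI model means.

    features:    (n_frames, dim) array of observed feature vectors
    model_means: (n_states, dim) array of SI model mean vectors
    Alignment is nearest-mean, a stand-in for HMM alignment."""
    b = np.zeros(features.shape[1])
    for _ in range(iters):
        shifted = features - b
        # assign each frame to its nearest model mean
        d = ((shifted[:, None, :] - model_means[None, :, :]) ** 2).sum(-1)
        aligned = model_means[d.argmin(axis=1)]
        # re-estimate the bias as the average residual
        b = (features - aligned).mean(axis=0)
    return b
```

In the paper's setting, the transcription would also be re-decoded each iteration using the shifted features; that step is omitted here.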

Speaker Verification System with Hybrid Model Improved by Adapted Continuous Wavelet Transform

  • Kim, Hyoungsoo; Yang, Sung-il; Younghun Kwon; Kyungjoon Cha
    • The Journal of the Acoustical Society of Korea, v.18 no.3E, pp.30-36, 1999
  • In this paper, we develop a hybrid speaker recognition system [1] enhanced by a pre-recognizer and a post-recognizer. The pre-recognizer consists of general speech recognition systems, and the post-recognizer is a pitch detection system using an adapted continuous wavelet transform (ACWT) to improve the performance of the hybrid system. Two schemes for designing the ACWT are considered: one searches a basis library covering the whole band of the speech fundamental frequency (pitch), and the other determines the best basis, using an information cost functional as the criterion. The ACWT is robust enough to classify the pitch of speech well even when the signal is badly corrupted by environmental noise.
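
The ACWT detector itself cannot be reproduced from the abstract; as a stand-in, a plain autocorrelation pitch estimator illustrates what the post-recognizer computes, namely the fundamental frequency of a speech frame:

```python
import numpy as np

def pitch_autocorr(x, fs, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) by locating the strongest
    autocorrelation peak within the plausible pitch-lag range.
    Not the paper's ACWT method; a simple baseline estimator."""
    x = np.asarray(x, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)            # pitch-lag window
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```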

Japanese Vowel Sound Classification Using Fuzzy Inference System

  • Phitakwinai, Suwannee; Sawada, Hideyuki; Auephanwiriyakul, Sansanee; Theera-Umpon, Nipon
    • Journal of the Korea Convergence Society, v.5 no.1, pp.35-41, 2014
  • Automatic speech recognition is a popular research problem, and many groups are working on it for different languages, including Japanese. Japanese vowel recognition is an important part of a Japanese speech recognition system. In this research, a vowel classification system based on the Mamdani fuzzy inference system was developed. We tested the system on a blind test data set collected from one male native Japanese speaker and four male non-native Japanese speakers; no subject in the blind test set appeared in the training set. The classification rate on the training data set was 95.0%. In the speaker-independent experiments, the classification rate was around 70.0% for the native speaker and around 80.5% for the non-native speakers.
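
A Mamdani-style vowel classifier over formant inputs can be sketched with triangular membership functions and min (fuzzy AND) rule firing. The formant centers and widths are rough illustrative values, not the paper's tuned system, and the arg-max decision replaces full defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Rough (F1, F2) formant centers in Hz for three vowels -- illustrative only.
VOWELS = {"a": (800.0, 1200.0), "i": (300.0, 2500.0), "u": (350.0, 800.0)}

def classify_vowel(f1, f2, width=300.0):
    """Fire one rule per vowel (min of the two memberships, the Mamdani
    AND) and pick the vowel whose rule fires most strongly."""
    scores = {
        v: min(tri(f1, c1 - width, c1, c1 + width),
               tri(f2, c2 - width, c2, c2 + width))
        for v, (c1, c2) in VOWELS.items()
    }
    return max(scores, key=scores.get)
```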

Phonetic Transcription based Speech Recognition using Stochastic Matching Method (확률적 매칭 방법을 사용한 음소열 기반 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems, v.17 no.5, pp.696-700, 2007
  • A new method is presented that improves the performance of a phonetic transcription based speech recognition system with a speaker-independent (SI) phonetic recognizer. Since an SI phoneme HMM based speech recognition system uses only the phoneme transcription of the input sentence, its storage space can be greatly reduced. However, its performance is worse than that of a speaker-dependent system, due to the phoneme recognition errors generated by using SI models. A new training method is presented that iteratively estimates the phonetic transcription and transformation vectors to reduce the mismatch between the training utterances and the set of SI models using speaker adaptation techniques. For speaker adaptation, stochastic matching methods are used to estimate the transformation vectors. Experiments performed over actual telephone lines show that an error rate reduction of about 45% can be achieved compared with the conventional method.