• Title/Summary/Keyword: 김화자

184 search results

The design of Multi-modal system for the realization of DARC system controller (DARC 시스템 제어기 구현을 위한 멀티모달 시스템 설계)

  • 최광국;곽상훈;하얀돌이;김유진;김철;최승호
    • Proceedings of the IEEK Conference / 2000.09a / pp.179-182 / 2000
  • In this paper, a multimodal system combining a speech recognizer and a lip recognizer was designed to implement a DARC system controller. The 22 words used in the DARC system were collected into a database, and the recognizers were built using HMMs. To combine the recognition probabilities of the two modalities, weights at an 8:2 ratio were applied under the assumption that the speech recognizer achieves a higher recognition rate than the lip recognizer, and the probabilities were fused after recognition (late fusion). For the interface between the systems, a communication module was designed and implemented over TCP/IP sockets, and recognition performance was evaluated both on a test database and in real-time experiments with five speakers.

  • PDF
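The entry above fuses the speech and lip recognizers by weighting their post-recognition probabilities 8:2. A minimal sketch of that late-fusion step (the word set, scores, and function name are illustrative assumptions, not the paper's code):

```python
def fuse_scores(audio_logprobs, lip_logprobs, w_audio=0.8, w_lip=0.2):
    """Late fusion: weighted sum of per-word log-probabilities from
    the audio and lip recognizers, 8:2 by default."""
    assert audio_logprobs.keys() == lip_logprobs.keys()
    fused = {word: w_audio * audio_logprobs[word] + w_lip * lip_logprobs[word]
             for word in audio_logprobs}
    # The fused hypothesis is the word with the highest combined score.
    return max(fused, key=fused.get)

audio = {"start": -1.0, "stop": -2.5}  # illustrative scores
lip = {"start": -3.0, "stop": -1.2}
print(fuse_scores(audio, lip))  # the 8:2 weighting lets audio dominate
```

With log-probabilities, the weighted sum amounts to a geometric weighting of the two modality likelihoods.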

Development of a lipsync algorithm based on A/V corpus (코퍼스 기반의 립싱크 알고리즘 개발)

  • 하영민;김진영;정수경
    • Proceedings of the IEEK Conference / 2000.09a / pp.145-148 / 2000
  • This paper proposes an audio-visual synchronization algorithm for synthesizing two-dimensional facial coordinate data. The visual parameters were obtained by tracking markers attached to the speaker's face, and their correlation with not only phoneme information but also prosodic information was analyzed. A viseme-based corpus was chosen as the synthesis unit, and coarticulation was modeled by also considering the surrounding phonetic context. For the lookup-table patterns corresponding to the input, the phonetic distance to the reference pattern is computed over the neighboring phonemes, prosodic features are extracted from the speech file to compute a prosodic distance, and the two are weighted to obtain the distance to each pattern. A Viterbi search is then performed over the junctions of the five closest patterns to select the optimal path, after which the principal-component-analyzed visual information is reconstructed and the timing is adjusted.

  • PDF

Speaker Separation Based on Directional Filter and Harmonic Filter (Directional Filter와 Harmonic Filter 기반 화자 분리)

  • Baek, Seung-Eun;Kim, Jin-Young;Na, Seung-You;Choi, Seung-Ho
    • Speech Sciences / v.12 no.3 / pp.125-136 / 2005
  • Automatic speech recognition is much more difficult in the real world. Recognition degrades with the SIR (signal-to-interference ratio) when environmental noise and multiple speakers are present. Extracting the main speaker's voice from binaural sound is therefore an important area of speech signal processing. In this paper, we use a directional filter and a harmonic filter to extract the main speaker's information from binaural sound. The main speaker's voice is extracted with the directional filter, and the remaining speakers' information is removed with a harmonic filter driven by the main speaker's detected pitch. As a result, the main speaker's voice is enhanced.

  • PDF

Confidence Measure of Forensic Speaker Identification System According to Pitch Variances (과학수사용 화자 식별 시스템의 피치 차이에 따른 신뢰성 척도)

  • Kim, Min-Seok;Kim, Kyung-Wha;Yang, IL-Ho;Yu, Ha-Jin
    • Phonetics and Speech Sciences / v.2 no.3 / pp.135-139 / 2010
  • Forensic speaker identification demands high accuracy and reliability, but the current state of speaker identification does not yet meet this demand. Evaluating the confidence of a result is therefore one of the key issues in forensic speaker identification. In this paper, we propose a new confidence measure for a forensic speaker identification system, based on the pitch difference between the registered utterances of the identified speaker and the test utterance. In our experiments, we evaluate this confidence measure on speaker identification tasks in various environments. The results show that the proposed measure is a good indicator of whether a result is reliable.

  • PDF
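The confidence measure above is built on the pitch gap between registered and test utterances. A toy sketch of such a measure, assuming a simple reciprocal mapping from pitch gap to confidence (the `scale` parameter and functional form are illustrative, not the paper's):

```python
def pitch_confidence(ref_pitches_hz, test_pitch_hz, scale=20.0):
    """Map the gap between the identified speaker's registered mean
    pitch and the test utterance's pitch to a (0, 1] confidence:
    a small gap suggests the identification result is reliable."""
    mean_ref = sum(ref_pitches_hz) / len(ref_pitches_hz)
    diff = abs(mean_ref - test_pitch_hz)
    return 1.0 / (1.0 + diff / scale)

# Registered utterances of the identified speaker average ~120 Hz.
print(pitch_confidence([118.0, 122.0], 121.0))  # small gap, high confidence
print(pitch_confidence([118.0, 122.0], 220.0))  # large gap, low confidence
```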

A Robust Speaker Identification Using Optimized Confidence and Modified HMM Decoder (최적화된 관측 신뢰도와 변형된 HMM 디코더를 이용한 잡음에 강인한 화자식별 시스템)

  • Tariquzzaman, Md.;Kim, Jin-Young;Na, Seung-Yu
    • MALSORI / no.64 / pp.121-135 / 2007
  • Speech signals are distorted by channel characteristics or additive noise, which severely degrades speaker and speech recognition performance. To cope with the noise problem, we propose a modified HMM decoder algorithm using an SNR-based observation confidence, which was previously applied successfully to GMMs in a speaker identification task. The modification weights the observation probabilities with reliability values obtained from the SNR. We also apply PSO (particle swarm optimization) to the confidence function to maximize speaker identification performance. To evaluate the proposed method, we used the ETRI speaker recognition database. The experimental results show that performance is clearly improved by the modified HMM decoder algorithm.

  • PDF
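The modified decoder above weights observation probabilities with SNR-derived reliabilities. A hedged sketch of that weighting (the sigmoid form and `alpha` are placeholder assumptions; the paper tunes the confidence function with PSO):

```python
import math

def snr_confidence(snr_db, alpha=0.2):
    """Sigmoid mapping of frame SNR (dB) to a reliability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-alpha * snr_db))

def weighted_obs_logprob(logprob, snr_db):
    """Scale the observation log-probability by the frame's reliability,
    so noisy frames contribute less to the decoder's path score."""
    return snr_confidence(snr_db) * logprob

clean = weighted_obs_logprob(-4.0, 30.0)   # reliable high-SNR frame
noisy = weighted_obs_logprob(-4.0, -10.0)  # unreliable low-SNR frame
print(clean < noisy)  # the noisy frame's penalty is attenuated -> True
```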

A Speech Representation and Recognition Method using Sign Patterns (부호패턴에 의한 음성표현과 인식방법)

  • Kim Young Hwa;Kim Un Il;Lee Hee Jeong;Park Byung Chul
    • The Journal of the Acoustical Society of Korea / v.8 no.5 / pp.86-94 / 1989
  • In this paper, a method using the sign pattern (+, -) of mel-cepstrum coefficients as a new speech representation is proposed. Relatively stable patterns can be obtained for speech signals with strong stationarity, such as vowels and nasals, and phonemic differences due to speaker individuality can be absorbed without affecting the characteristics of the phoneme. We show that representing Korean phonemes with such sign patterns simplifies both the phoneme recognition procedure and the training of phoneme models.

  • PDF
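The sign-pattern representation above can be sketched directly: quantize each mel-cepstrum coefficient to its sign and compare patterns; the Hamming-distance comparison here is an illustrative choice, not necessarily the paper's:

```python
def sign_pattern(cepstrum):
    """Reduce a cepstral vector to its sign pattern (+1 / -1)."""
    return tuple(1 if c >= 0.0 else -1 for c in cepstrum)

def hamming(p, q):
    """Number of positions where two sign patterns disagree."""
    return sum(a != b for a, b in zip(p, q))

# Two utterances of the same phoneme by different speakers: the
# coefficient magnitudes differ, but the sign pattern is stable.
a = sign_pattern([1.2, -0.3, 0.8, -1.1])
b = sign_pattern([0.9, -0.1, 0.5, -0.7])
print(hamming(a, b))  # identical sign patterns -> 0
```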

Implementation of the Auditory Sense for the Smart Robot: Speaker/Speech Recognition (로봇 시스템에의 적용을 위한 음성 및 화자인식 알고리즘)

  • Jo, Hyun;Kim, Gyeong-Ho;Park, Young-Jin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2007.05a / pp.1074-1079 / 2007
  • We introduce a speech/speaker recognition algorithm for isolated words. For speaker verification, a Gaussian mixture model (GMM) is commonly used to model the feature vectors of reference speech signals, while dynamic time warping (DTW)-based template matching was proposed earlier for isolated-word recognition. We combine these two concepts in a single method and implement it in a real-time speaker/speech recognition system. With the proposed method, a small number of reference utterances (5 or 6 training repetitions) is enough to build a reference model that achieves 90% recognition performance.

  • PDF
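The entry above combines GMM scoring with DTW-based template matching. The DTW part can be sketched as follows (scalar frames for brevity; a real recognizer would compare per-frame cepstral vectors):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences:
    the minimum accumulated frame cost over all monotonic alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed predecessor cells.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-stretched copy of the reference still matches perfectly.
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```

DTW's tolerance to tempo variation is what makes a handful of training repetitions per word sufficient for template matching.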

Real-Time Recognition of the Korean Single Vowels Using Speech Spectrum Analysis (음성 스펙트럼 분석에 의한 한국어 단모음 실시간 인식)

  • 김엄준;성미영
    • Proceedings of the Korea Multimedia Society Conference / 1998.10a / pp.226-231 / 1998
  • In this study, the zero crossing rate, short-term energy, and formants are used as parameters that characterize speech and can be computed quickly. Speech from a specific speaker is taken as input and these three parameters are measured to recognize the single vowels 'ㅏ, ㅐ, ㅓ, ㅔ, ㅗ, ㅜ, ㅡ, ㅣ'. The zero crossing rate and short-term energy parameters are used to distinguish voiced from unvoiced sounds and to decide whether the input is speech at all. The formant parameters, obtained from a 10th-order cepstrum, are used to discriminate the individual vowels. Processing a single input vowel to text output takes 0.065 seconds on average, with a recognition rate of 93% for individual vowels and 72% on ten test sentences.

  • PDF
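The zero crossing rate and short-term energy used above are simple frame-level measures. A sketch of both, with toy frames standing in for real speech samples:

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ; high for
    unvoiced (noise-like) speech, low for voiced speech."""
    crossings = sum(
        1 for x, y in zip(frame, frame[1:]) if (x >= 0) != (y >= 0)
    )
    return crossings / (len(frame) - 1)

def short_term_energy(frame):
    """Mean squared amplitude of the frame; separates speech from silence."""
    return sum(x * x for x in frame) / len(frame)

voiced = [0.8, 0.9, 0.7, -0.6, -0.8, -0.7, 0.5, 0.9]     # slow oscillation
unvoiced = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1]  # rapid sign flips
print(zero_crossing_rate(voiced) < zero_crossing_rate(unvoiced))  # True
print(short_term_energy(voiced) > short_term_energy(unvoiced))    # True
```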

Performance comparison of Text-Independent Speaker Recognizer Using VQ and GMM (VQ와 GMM을 이용한 문맥독립 화자인식기의 성능 비교)

  • Kim, Seong-Jong;Chung, Hoon;Chung, Ik-Joo
    • Speech Sciences / v.7 no.2 / pp.235-244 / 2000
  • This paper focuses on implementing a text-independent speaker recognizer using the VQ and GMM algorithms and studying the characteristics of speaker recognizers adopting these two algorithms. Because it is difficult to ascertain theoretically the effect the two algorithms have on the speaker recognizer, we performed recognition experiments using various parameters. The results show that the GMM algorithm outperforms the VQ algorithm: the GMM performed better with little training data, and its recognition rate varied only slightly with the type of feature vectors and the length of the input data. Overall, the GMM showed better recognition performance than the VQ.

  • PDF
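The VQ side of the comparison above scores a test utterance by its average distortion against each speaker's codebook. A minimal sketch (the toy 2-D codebooks stand in for real cepstral codebooks trained by, e.g., the LBG algorithm):

```python
def vq_distortion(codebook, frames):
    """Average Euclidean distance from each feature frame to its nearest
    codeword; the claimed speaker with the lowest distortion wins."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return sum(min(dist(f, c) for c in codebook) for f in frames) / len(frames)

spk_a = [(0.0, 0.0), (1.0, 1.0)]   # toy 2-codeword codebooks
spk_b = [(5.0, 5.0), (6.0, 6.0)]
test = [(0.1, 0.0), (0.9, 1.1)]    # utterance frames near speaker A
print(vq_distortion(spk_a, test) < vq_distortion(spk_b, test))  # True
```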

Performance Comparison of Deep Feature Based Speaker Verification Systems (깊은 신경망 특징 기반 화자 검증 시스템의 성능 비교)

  • Kim, Dae Hyun;Seong, Woo Kyeong;Kim, Hong Kook
    • Phonetics and Speech Sciences / v.7 no.4 / pp.9-16 / 2015
  • In this paper, several experiments are performed on deep neural network (DNN)-based features to compare the performance of speaker verification (SV) systems. To this end, input features for a DNN, such as the mel-frequency cepstral coefficient (MFCC), linear-frequency cepstral coefficient (LFCC), and perceptual linear prediction (PLP), are first compared in terms of SV performance. After that, the effects of the DNN training method and the structure of the DNN's hidden layers on SV performance are investigated for each type of feature. The performance of an SV system is then evaluated using an i-vector or probabilistic linear discriminant analysis (PLDA) scoring method. The SV experiments show that a tandem feature combining the DNN bottleneck feature and the MFCC gives the best performance when the DNNs use a rectangular arrangement of hidden layers and are trained with a supervised training method.