• Title/Summary/Keyword: Speaker Features (화자 특징)

Speech Feature based Double-talk Detector for Acoustic Echo Cancellation (반향제거를 위한 음성특징 기반의 동시통화 검출 기법)

  • Park, Jun-Eun; Lee, Yoon-Jae; Kim, Ki-Hyeon; Ko, Han-Seok
    • Journal of IKEEE / v.13 no.2 / pp.132-139 / 2009
  • In this paper, a speech-feature-based double-talk detection method is proposed for acoustic echo cancellation in hands-free communication systems. The double-talk detector is an important element because it controls the update of the adaptive filter used for echo cancellation. Previous research treated double-talk detection purely as a signal-processing problem, without taking speech characteristics into account. In the proposed method, speech features of the kind used in speech recognition serve as discriminative features between the far-end and near-end speech. We obtained a substantial improvement over previous double-talk detectors that use only the time-domain signal.
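
A minimal sketch of the idea in this abstract, under assumptions the paper does not spell out here: compare recognition-style cepstral features of the microphone signal against the far-end reference, and freeze adaptation of the echo-cancelling filter whenever the divergence suggests near-end speech. The MFCC features (via librosa) and the threshold are illustrative choices, not the authors' exact detector.

```python
# Hedged sketch: feature-based double-talk detection gating an adaptive echo canceller.
# librosa and the threshold value are assumptions, not the paper's settings.
import numpy as np
import librosa

def frame_mfcc(x, sr, n_mfcc=13):
    """Per-frame MFCCs, shape (n_frames, n_mfcc)."""
    return librosa.feature.mfcc(y=x, sr=sr, n_mfcc=n_mfcc).T

def double_talk_flags(mic, far, sr, threshold=40.0):
    """Flag frames where the mic features diverge from the far-end features,
    which suggests near-end speech (double talk)."""
    m, f = frame_mfcc(mic, sr), frame_mfcc(far, sr)
    n = min(len(m), len(f))
    dist = np.linalg.norm(m[:n] - f[:n], axis=1)   # cepstral distance per frame
    return dist > threshold                        # True -> freeze filter update
```

Frames flagged as double talk would hold the adaptive filter coefficients fixed so that near-end speech does not corrupt the echo-path estimate.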

Vector Quantizer Based Speaker Normalization for Continuous Speech Recognition (연속음성 인식기를 위한 벡터양자화기 기반의 화자정규화)

  • Shin Ok-keun
    • The Journal of the Acoustical Society of Korea / v.23 no.8 / pp.583-589 / 2004
  • Proposed is a vector-quantizer-based speaker normalization method for a continuous speech recognition (CSR) system, in which no additional acoustic information is used. The proposed method, an improvement of a previously reported speaker normalization scheme for a simple digit recognizer, builds a canonical codebook by iteratively training it while increasing its size after each iteration from a relatively small initial size. Once the codebook is established, the warp factor of each speaker is estimated by exhaustively comparing warped versions of the speaker's utterances with the codebook. Two sets of phones are used to estimate the warp factors: one consisting of vowels only, and the other composed of all the phonemes. A piecewise-linear warping function corresponding to the estimated warp factor is applied to warp the power spectrum of the utterance, and the warped feature vectors are then extracted to train and test the speech recognizer. The effectiveness of the proposed method is investigated in a set of recognition experiments using the TIMIT corpus and the HTK speech recognition toolkit. The experimental results show recognition rate improvements comparable to those of the formant-based warping method.
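
The exhaustive warp-factor search described above can be sketched as follows; the piecewise-linear warp, the warp grid, and the codebook shape are assumptions for illustration, not the paper's exact settings.

```python
# Hedged sketch: exhaustive warp-factor search against a canonical codebook.
import numpy as np

def warp_spectrum(frame_spec, alpha):
    """Linearly warp one magnitude spectrum along frequency by factor alpha."""
    n = len(frame_spec)
    src = np.clip(np.arange(n) * alpha, 0, n - 1)      # warped sample positions
    return np.interp(src, np.arange(n), frame_spec)

def vq_distortion(feats, codebook):
    """Mean distance from each feature vector to its nearest codeword."""
    d = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def estimate_warp_factor(frames, codebook, alphas=np.linspace(0.88, 1.12, 13)):
    """Pick the warp factor minimizing VQ distortion of the warped spectra."""
    scores = []
    for a in alphas:
        warped = np.stack([warp_spectrum(f, a) for f in frames])
        scores.append(vq_distortion(warped, codebook))
    return alphas[int(np.argmin(scores))]
```

The codebook itself would be trained beforehand (e.g., by iterative k-means with a growing codebook size, as the abstract describes).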

Enhancement of Authentication Performance based on Multimodal Biometrics for Android Platform (안드로이드 환경의 다중생체인식 기술을 응용한 인증 성능 개선 연구)

  • Choi, Sungpil; Jeong, Kanghun; Moon, Hyeonjoon
    • Journal of Korea Multimedia Society / v.16 no.3 / pp.302-308 / 2013
  • In this research, we explore a personal authentication system using multimodal biometrics for the mobile computing environment. We selected face and speaker recognition for the implementation of the multimodal biometrics system. For the face recognition part, we detect the face with the Modified Census Transform (MCT). The detected face is pre-processed by an eye detection module based on the k-means algorithm and then recognized with the Principal Component Analysis (PCA) algorithm. For the speaker recognition part, we extract features using voice end-point detection and Mel-Frequency Cepstral Coefficients (MFCC), then verify the speaker with the Dynamic Time Warping (DTW) algorithm. The proposed multimodal biometrics system shows an improved verification rate by combining the two biometrics described above. We implemented the proposed system on the Android environment using a Galaxy S Hoppin. The proposed system achieves a reduced false acceptance rate (FAR) of 1.8%, an improvement over the single-biometric systems using the face and the voice alone (4.6% and 6.7%, respectively).
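
A hedged sketch of the speaker-verification half and a simple score fusion (the abstract does not state the fusion rule, so the weighted sum below is an assumption): MFCC sequences compared with DTW, then combined with a face-recognition score.

```python
# Hedged sketch: MFCC + DTW speaker verification fused with a face score.
# librosa provides both MFCC extraction and a DTW implementation.
import numpy as np
import librosa

def dtw_voice_distance(y_enroll, y_test, sr):
    """DTW alignment cost between MFCC sequences of two utterances."""
    a = librosa.feature.mfcc(y=y_enroll, sr=sr, n_mfcc=13)
    b = librosa.feature.mfcc(y=y_test, sr=sr, n_mfcc=13)
    D, _ = librosa.sequence.dtw(X=a, Y=b, metric='euclidean')
    return D[-1, -1] / (a.shape[1] + b.shape[1])   # length-normalized cost

def fused_decision(face_score, voice_dist, w=0.5, threshold=0.5):
    """Illustrative weighted fusion: a higher face score and a lower voice
    distance both push toward acceptance."""
    voice_score = 1.0 / (1.0 + voice_dist)          # map distance to similarity
    return w * face_score + (1 - w) * voice_score >= threshold
```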

An Improved Digit Recognition using Normalized mel-cepstrum (정규화된 Mel-cepstrum을 이용한 숫자음 인식성능 향상에 관한 연구)

  • 이기철
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.403-406 / 1994
  • The characteristics of speech vary widely with the speaker's state and the surrounding environment. In this paper, we improve recognition performance for the mel-cepstrum, a widely used speech feature parameter, by normalizing its variation within a word. The normalized mel-cepstrum is obtained by normalizing with the mean mel-cepstrum computed over the entire word. Recognition experiments on Korean digits showed that the proposed normalized mel-cepstrum outperforms the unnormalized mel-cepstrum, and comparative experiments under noisy conditions also showed relatively higher recognition rates.
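
What this abstract describes is, in effect, cepstral mean normalization computed over the whole word. A minimal sketch, assuming librosa for the mel-cepstral features:

```python
# Hedged sketch: word-level mel-cepstrum normalization (subtract the per-word mean).
import librosa

def normalized_mel_cepstrum(y, sr, n_mfcc=13):
    """MFCCs with the mean over the whole word removed, per coefficient."""
    c = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return c - c.mean(axis=1, keepdims=True)              # word-level mean removal
```

Removing the per-word mean cancels slowly varying speaker and channel offsets, which is consistent with the improved robustness reported under noise.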

A Study on the Visual Speech Recognition based on the Variations of Lip Shapes (입모양 변화에 의한 영상음성 인식에 관한 연구)

  • 이철우; 계영철
    • Proceedings of the KAIS Fall Conference / 2001.05a / pp.188-191 / 2001
  • In this paper, we study a method of recognizing spoken utterances by analyzing changes in the speaker's lip shapes. We compare how different choices of feature vectors representing lip-shape variation affect recognition performance. Specifically, ASM (Active Shape Model) parameters and articulatory parameters were selected as feature vectors, and their recognition performance was compared. Simulation results confirmed that using the articulatory parameters yields both better recognition performance and a lower computational load.
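
The abstract does not list the articulatory parameters used, so the sketch below uses simple geometric lip measurements (width, height, aspect ratio) as illustrative stand-ins:

```python
# Hedged sketch: geometric (articulatory-style) lip parameters from contour points.
import numpy as np

def lip_parameters(points):
    """points: (N, 2) array of lip-contour coordinates.
    Returns mouth width, height, and aspect ratio as a feature vector."""
    xs, ys = points[:, 0], points[:, 1]
    width = xs.max() - xs.min()
    height = ys.max() - ys.min()
    return np.array([width, height, height / max(width, 1e-6)])
```

Low-dimensional parameters like these are cheaper to compute and classify than full ASM shape coefficients, which matches the computational advantage reported above.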

Spectral Normalization for Speaker-Invariant Feature Extraction (화자 불변 특징추출을 위한 스펙트럼 정규화)

  • 오광철
    • Proceedings of the Acoustical Society of Korea Conference / 1993.06a / pp.238-241 / 1993
  • We present a new method for normalizing the spectral variations of different speakers, based on physiological studies of hearing. The proposed method uses the cochlear frequency map to warp the input speech spectra by interpolation or decimation. With this normalization, we obtain much improved results for speaker-independent speech recognition.
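
A minimal sketch of the warping step; `cochlear_map` is a placeholder for the physiological place-frequency map the paper relies on (e.g., a Greenwood-style function), which is not reproduced in this abstract:

```python
# Hedged sketch: warping a spectrum along a (placeholder) cochlear frequency map.
import numpy as np

def warp_to_cochlear_axis(spectrum, freqs, cochlear_map):
    """Resample a magnitude spectrum onto positions given by cochlear_map(freqs).
    cochlear_map is assumed to be a monotonically increasing function
    approximating the basilar-membrane place-frequency map."""
    target = cochlear_map(freqs)                       # warped frequency positions
    target = target / target.max() * freqs.max()       # rescale to original range
    return np.interp(freqs, target, spectrum)          # interpolate/decimate
```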

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook; Cheun, Seung-Pyo; Kim, Youn-Tae; Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.494-500 / 2008
  • In the field of human interface technology, the interactions between humans and machines are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts speech-signal characteristics for emotion recognition using PLP (perceptual linear prediction) analysis. The PLP analysis technique was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its efficiency for speaker recognition tasks. This paper therefore proposes an algorithm that can evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, with an average recognition rate of 75%. The proposed system has a simple structure but is efficient enough to be used in real time.
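
A hedged sketch of the classification stage, assuming one Gaussian mixture model per emotion and a maximum-likelihood decision (scikit-learn; the paper's own personalized pattern matching is not detailed here):

```python
# Hedged sketch: per-emotion GMMs over frame features, maximum-likelihood decision.
from sklearn.mixture import GaussianMixture

def train_emotion_models(features_by_emotion, n_components=8):
    """features_by_emotion: {label: (n_frames, n_dims) array of training features}."""
    return {label: GaussianMixture(n_components=n_components).fit(feats)
            for label, feats in features_by_emotion.items()}

def classify_emotion(models, feats):
    """Average frame log-likelihood under each emotion model; return the best label."""
    return max(models, key=lambda label: models[label].score(feats))
```

The same structure works whether the frame features come from PLP analysis, as in this paper, or from another front end.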

Dialect classification based on the speed and the pause of speech utterances (발화 속도와 휴지 구간 길이를 사용한 방언 분류)

  • Jonghwan Na; Bowon Lee
    • Phonetics and Speech Sciences / v.15 no.2 / pp.43-51 / 2023
  • In this paper, we propose an approach to dialect classification based on the speed and pauses of speech utterances, together with the age and gender of the speakers. Dialect classification is an important technique for speech analysis; for example, an accurate dialect classification model can potentially improve the performance of speaker or speech recognition. According to previous studies, deep learning based on Mel-Frequency Cepstral Coefficient (MFCC) features has been the dominant approach. We focus on the acoustic differences between regions and perform dialect classification with features derived from those differences. Specifically, we extract underexplored additional features, the speed and the pauses of speech utterances, along with metadata including the age and gender of the speakers. Experimental results show that the proposed approach achieves higher accuracy than the MFCC-only method, especially with the speech-rate feature: incorporating all the proposed features improves accuracy from 91.02% to 97.02%.
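
A hedged sketch of extracting the two proposed features with a crude energy-based voice-activity detector; the frame sizes and threshold are illustrative, not the paper's:

```python
# Hedged sketch: speech-rate and pause-length features from an energy-based VAD.
import numpy as np

def rate_and_pause_features(y, sr, frame=0.025, hop=0.010, thresh_db=-35.0):
    """y: 1-D waveform array. Returns (voiced_ratio, mean_pause_sec): crude
    stand-ins for speech rate and pause length; thresholds are illustrative."""
    n, h = int(frame * sr), int(hop * sr)
    frames = [y[i:i + n] for i in range(0, len(y) - n, h)]
    energy_db = 10 * np.log10([np.mean(f ** 2) + 1e-12 for f in frames])
    voiced = energy_db > thresh_db
    pauses, run = [], 0                      # run-lengths of unvoiced frames
    for v in voiced:
        if v:
            if run:
                pauses.append(run * hop)
            run = 0
        else:
            run += 1
    if run:
        pauses.append(run * hop)
    mean_pause = float(np.mean(pauses)) if pauses else 0.0
    return float(voiced.mean()), mean_pause
```

These two scalars would then be concatenated with the MFCC-derived features and the age/gender metadata before classification.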

Implementation of the Timbre-based Emotion Recognition Algorithm for a Healthcare Robot Application (헬스케어 로봇으로의 응용을 위한 음색기반의 감정인식 알고리즘 구현)

  • Kong, Jung-Shik; Kwon, Oh-Sang; Lee, Eung-Hyuk
    • Journal of IKEEE / v.13 no.4 / pp.43-46 / 2009
  • This paper deals with recognizing emotion from the human voice by finding suitable feature vectors. Voice signals carry not only speaker-specific information but also the speaker's emotions and fatigue, so much research is under way on extracting emotions from the voice. In this paper, we analyze the Selectable Mode Vocoder (SMV), one of the standard 3GPP2 speech codecs, and from this analysis we propose voice features for recognizing emotions. An emotion recognition algorithm based on a Gaussian mixture model (GMM) using the proposed feature vectors is then presented. We verify the performance of the algorithm while varying the number of mixture components.
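
The final verification step, varying the number of mixture components, can be sketched as a small sweep over a component grid (scikit-learn assumed; held-out accuracy as the metric):

```python
# Hedged sketch: GMM emotion-recognition accuracy as the number of mixture
# components varies (the evaluation the abstract alludes to).
from sklearn.mixture import GaussianMixture

def accuracy_vs_components(train_by_label, test_items, component_grid=(2, 4, 8, 16)):
    """train_by_label: {label: (n_frames, n_dims)}; test_items: [(label, feats), ...]."""
    results = {}
    for k in component_grid:
        models = {lab: GaussianMixture(n_components=k).fit(x)
                  for lab, x in train_by_label.items()}
        correct = sum(max(models, key=lambda m: models[m].score(f)) == lab
                      for lab, f in test_items)
        results[k] = correct / len(test_items)
    return results
```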

Design and Implementation of the Voice Feature Elimination Technique to Protect Speaker's Privacy (사용자 프라이버시 보호를 위한 음성 특징 제거 기법 설계 및 구현)

  • Yu, Byung-Seok; Lim, SuHyun; Park, Mi-so; Lee, Yoo-Jin; Yun, Sung-Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.672-675 / 2012
  • Speech is the most familiar and convenient means of communication, making it well suited as an input interface for small mobile devices such as smartphones. Server-based speech recognition builds speech models from the many users who visit the server, so it can achieve high recognition rates and is commercially viable. Google speech recognition and the iPhone's Siri are representative examples, and demand has surged recently with the growth in smartphone users. In server-based speech recognition, recognition is performed on a remote server connected to the smartphone over the Internet, so the user must transmit the speech data stored on the smartphone to the recognition server [1, 2]. Since speech data contains user-specific information, it can be used for personal authentication and identification, and the speaker's emotions can even be inferred from the tone, pitch, and speed of the speech signal [3]. User speech data transmitted over the network is easily exposed to third parties, revealing the speaker's identity and emotional state and thus violating privacy. In this paper, to protect the speaker's privacy, we design and implement a technique that removes from the user's speech data the personal characteristics and the emotion-revealing information about the speaker's current state.
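
The abstract does not detail the feature-elimination transform, so the sketch below uses pitch shifting (via librosa) purely as an illustrative stand-in for removing speaker-identifying characteristics before upload:

```python
# Hedged sketch: masking speaker identity before sending audio to a recognition
# server. Pitch shifting is an illustrative stand-in for the paper's
# feature-elimination step, not its actual method.
import librosa

def anonymize_voice(y, sr, n_steps=-2.0):
    """Shift pitch to obscure speaker-specific pitch cues while keeping the
    utterance intelligible for server-side recognition."""
    return librosa.effects.pitch_shift(y=y, sr=sr, n_steps=n_steps)
```

The transformed waveform, rather than the raw recording, would then be what travels over the network to the speech recognition server.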