• Title/Summary/Keyword: Speaker Characteristics (화자 특징)

A Speaker Detection System based on Stereo Vision and Audio (스테레오 시청각 기반의 화자 검출 시스템)

  • An, Jun-Ho; Hong, Kwang-Seok
    • Journal of Internet Computing and Services / v.11 no.6 / pp.21-29 / 2010
  • In this paper, we propose a system that detects which of a number of users is currently speaking. The proposed speaker detection system based on stereo vision and audio is mainly composed of the following: position estimation of speaker candidates using a stereo camera and microphones, detection of the current speaker, and acquisition of the speaker's information on a mobile device. We use Haar-like features and the AdaBoost algorithm to detect the faces of speaker candidates with the stereo camera, and the positions of the speaker candidates are estimated by a triangulation method. Next, the Time Delay of Arrival (TDOA) is estimated by Cross Power Spectrum Phase (CPSP) analysis to find the direction of the source with two microphones. Finally, we acquire the speaker's information, including position, voice, and face, by comparing the stereo camera information with that of the two microphones. Furthermore, the proposed system includes a TCP client/server connection method for the mobile service.
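
The CPSP step in the abstract above amounts to GCC-PHAT style time-delay estimation between the two microphone channels. Below is a minimal sketch, assuming two synchronized single-channel signals; the function name and the synthetic test signal are illustrative and not taken from the paper.

```python
import numpy as np

def cpsp_tdoa(x1, x2, fs):
    """Estimate the time delay of arrival (TDOA) between two microphone
    signals via the cross power spectrum phase (GCC-PHAT)."""
    n = len(x1) + len(x2)                       # FFT length avoids circular wrap-around
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    cross = X1 * np.conj(X2)                    # cross power spectrum
    cross /= np.abs(cross) + 1e-12              # phase transform (magnitude whitening)
    cc = np.fft.irfft(cross, n=n)               # generalized cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    delay_samples = np.argmax(np.abs(cc)) - max_shift
    return delay_samples / fs                   # delay in seconds

# Synthetic check: a noise-like burst arriving 2 ms later at the second microphone.
fs = 16000
rng = np.random.default_rng(0)
sig = rng.standard_normal(1600)                 # 0.1 s stand-in for a speech burst
delay = int(0.002 * fs)
mic1 = np.pad(sig, (0, delay))
mic2 = np.pad(sig, (delay, 0))
print(round(cpsp_tdoa(mic2, mic1, fs) * 1000, 3))   # prints roughly 2.0 (ms)
```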

Authentication Performance Optimization for Smart-phone based Multimodal Biometrics (스마트폰 환경의 인증 성능 최적화를 위한 다중 생체인식 융합 기법 연구)

  • Moon, Hyeon-Joon; Lee, Min-Hyung; Jeong, Kang-Hun
    • Journal of Digital Convergence / v.13 no.6 / pp.151-156 / 2015
  • In this paper, we propose a personal multimodal biometric authentication system based on face detection, face recognition, and speaker verification for the smart-phone environment. The proposed system detects the face with the Modified Census Transform algorithm and then finds the eye positions in the face using a Gabor filter and the k-means algorithm. After preprocessing the detected face and eye positions, the face is recognized with the Linear Discriminant Analysis algorithm. In the subsequent speaker verification process, features are extracted from the end points of the speech data and the Mel Frequency Cepstral Coefficients. The speaker is verified with the Dynamic Time Warping algorithm because the speech features change in real time. The proposed multimodal biometric system fuses the face and speech features (with the internal operations optimized by integer representation) for smart-phone based real-time face detection, face recognition, and speaker verification. By fusing the two modalities, the system achieves reliable authentication at a reasonable level of performance.
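
The DTW comparison of speech features described above can be sketched as follows, assuming MFCC matrices (frames x coefficients) have already been extracted by some front end; the random arrays stand in for real templates, and the accept/reject threshold mentioned in the comment is a tuning choice, not the authors' value.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences,
    each of shape [frames, coefficients]."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])        # local Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j],           # insertion
                                 cost[i, j - 1],           # deletion
                                 cost[i - 1, j - 1])       # match
    return cost[n, m] / (n + m)                            # length-normalized path cost

enrolled = np.random.randn(80, 13)     # stand-in for the enrolled MFCC template
test = np.random.randn(95, 13)         # stand-in for the test utterance's MFCCs
score = dtw_distance(enrolled, test)
print(score)    # accept the claimed speaker if the score falls below a tuned threshold
```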

Analysis of the Time Delayed Effect for Speech Feature (음성 특징에 대한 시간 지연 효과 분석)

  • Ahn, Young-Mok
    • The Journal of the Acoustical Society of Korea / v.16 no.1 / pp.100-103 / 1997
  • In this paper, we analyze the time-delayed effect of speech features. Here, the time-delayed effect means that the current feature vector of the speech is influenced by the previous feature vectors. We use a set of LPC-derived cepstral coefficients and evaluate the time-delayed effect of the cepstrum through the performance of a speech recognition system. For the experiments, we used a speech database consisting of 22 words uttered by 50 male speakers. The utterances of 25 male speakers were used for training, and the remaining set was used for testing. The experimental results show that the time-delayed effect is large for the lower orders of the feature vector but small for the higher orders.
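
The LPC-derived cepstral coefficients examined in this study can be computed from LPC coefficients with the standard recursion, sketched below under the all-pole convention H(z) = 1 / (1 - sum_k a_k z^-k); the example coefficient values are illustrative only.

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Convert LPC coefficients a[1..p] of the model 1 / (1 - sum a_k z^-k)
    into LPC-derived cepstral coefficients c[1..n_ceps]."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(max(1, n - p), n):            # recursion over earlier cepstra
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

# Illustrative 10th-order LPC coefficients (not from real speech)
a = np.array([1.3, -0.8, 0.4, -0.2, 0.1, -0.05, 0.03, -0.02, 0.01, -0.005])
print(lpc_to_cepstrum(a, 12))
```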

A Study on the Channel Normalized Pitch Synchronous Cepstrum for Speaker Recognition (채널에 강인한 화자 인식을 위한 채널 정규화 피치 동기 켑스트럼에 관한 연구)

  • 김유진; 정재호
    • The Journal of the Acoustical Society of Korea / v.23 no.1 / pp.61-74 / 2004
  • In this paper, a content- and speaker-dependent cepstrum extraction method and a channel normalization method that minimizes the loss of speaker characteristics in the cepstrum are proposed for a channel-robust speaker recognition system. The proposed extraction method creates a cepstrum based on pitch synchronous analysis using the speaker's inherent pitch. Therefore, the cepstrum, called the "pitch synchronous cepstrum" (PSC), represents the impulse response of the vocal tract more accurately in voiced speech. The PSC can also compensate for channel distortion because the pitch is more robust in a channel environment than the spectrum of speech. The proposed channel normalization method, the "formant-broadened pitch synchronous CMS" (FBPSCMS), applies the formant-broadened CMS to the PSC and improves the accuracy of the intra-frame processing. We compared text-independent closed-set speaker identification on 56 female and 112 male speakers using the TIMIT and NTIMIT databases, respectively. The results show that the pitch synchronous cepstrum improves the error reduction rate by up to 7.7% in comparison with the conventional short-time cepstrum, and the error rates of the FBPSCMS are more stable and lower than those of pole-filtered CMS.
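
For reference, the baseline that the FBPSCMS extends is plain cepstral mean subtraction, sketched below; the paper's formant-broadened, pitch-synchronous variant is not reproduced here, and the offset added to the random frames simply mimics a fixed channel.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Utterance-level CMS: subtract the per-coefficient mean so that a fixed
    convolutive channel, which appears as a constant cepstral offset, is removed."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

cepstra = np.random.randn(200, 16) + 0.7      # [frames, coefficients]; +0.7 mimics a channel
normalized = cepstral_mean_subtraction(cepstra)
print(normalized.mean(axis=0).round(6))       # ~0 for every coefficient
```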

On Codebook Design to Improve Speaker Adaptation (음성 인식 시스템의 화자 적응 성능 향상을 위한 코드북 설계)

  • Yang, Tae-Young; Shin, Won-Ho; Kim, Weon-Goo; Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea / v.15 no.2 / pp.5-11 / 1996
  • The purpose of this paper is to propose a method that improves the performance of a semi-continuous hidden Markov model (SCHMM) speaker adaptation system based on a Bayesian parameter reestimation approach. The performance of Bayesian speaker adaptation can be degraded when the features of a new speaker differ severely from those of the reference codebook. Superfluous codewords of the reference codebook remain after the adaptation process and cause confusion in the recognition process. To solve this problem, the proposed method uses formant information extracted from the cepstral coefficients of the reference codebook and the adaptation data. The reference codebook is adapted to represent the formant distribution of the new speaker and is then used as the initial codebook for Bayesian speaker adaptation. The proposed method provides an accurate correspondence between the reference codebook and the adaptation data. It was observed that the superfluous codewords were not selected during the recognition process. The experimental results showed that the proposed method improved the recognition performance.
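
A plain VQ codebook of the kind being adapted here can be trained with a k-means style procedure, as in the sketch below; the paper's formant-based adjustment of the codebook and the Bayesian reestimation step are not shown, and the random feature frames are stand-ins for real cepstral data.

```python
import numpy as np

def train_codebook(features, size, iters=20, seed=0):
    """Simple k-means (LBG-style) vector-quantization codebook training."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), size, replace=False)]
    for _ in range(iters):
        # assign every frame to its nearest codeword (Euclidean distance)
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the centroid of its assigned frames
        for c in range(size):
            if np.any(labels == c):
                codebook[c] = features[labels == c].mean(axis=0)
    return codebook

ref_frames = np.random.randn(2000, 12)     # stand-in for reference cepstral frames
codebook = train_codebook(ref_frames, size=64)
print(codebook.shape)                       # (64, 12)
```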

Speaker Recognition Using Dynamic Time Variation of Orthogonal Parameters (직교인자의 동적 특성을 이용한 화자인식)

  • 배철수
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.9 / pp.993-1000 / 1992
  • Recently, many researchers have found that the speaker recognition rate is high when speaker recognition is performed by statistical processing of orthogonal parameters, which are derived from the analysis of the speech signal and contain much of the speaker's identity. This method, however, has problems caused by vocalization speed and the time-varying characteristics of speech. To solve these problems, this paper proposes two speaker recognition methods that combine the DTW algorithm with orthogonal parameters extracted by the Karhunen-Loève Transform: one applies the orthogonal parameters as feature vectors to the DTW algorithm, and the other applies the orthogonal parameters along the optimal path. In addition, we compare the speaker recognition rates obtained by the two proposed methods with that of the conventional statistical processing of orthogonal parameters. The orthogonal parameters used in this paper are derived from both the linear prediction coefficients and the partial correlation coefficients of the speech signal.
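
The Karhunen-Loève Transform used to derive the orthogonal parameters is essentially a PCA projection of the feature frames; a minimal sketch follows, assuming per-frame LPC or PARCOR coefficients are already available (the random data is a placeholder). The resulting per-frame vectors would then be compared with DTW, as in the earlier DTW sketch.

```python
import numpy as np

def klt(features, n_components):
    """Karhunen-Loève Transform: project zero-mean feature vectors onto the
    eigenvectors of their covariance matrix, largest eigenvalues first."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the top components
    return centered @ eigvecs[:, order]               # per-frame orthogonal parameters

frames = np.random.randn(300, 14)        # stand-in for LPC/PARCOR feature frames
orthogonal = klt(frames, n_components=8)
print(orthogonal.shape)                   # (300, 8)
```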

VoIP-Based Voice Secure Telecommunication Using Speaker Authentication in Telematics Environments (텔레매틱스 환경에서 화자인증을 이용한 VoIP기반 음성 보안통신)

  • Kim, Hyoung-Gook; Shin, Dong
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.1 / pp.84-90 / 2011
  • In this paper, a VoIP-based voice secure telecommunication technology using text-independent speaker authentication in telematics environments is proposed. For secure telecommunication, the sender's voice packets are encrypted with a public key generated from the speaker's voice information and transmitted to the receiver. The scheme is constructed to resist man-in-the-middle attacks. At the receiver side, voice features extracted from the received voice packets are compared with the reference voice key received from the sender for speaker authentication. To improve the accuracy of text-independent speaker authentication, Gaussian Mixture Model (GMM) supervectors are applied to a Support Vector Machine (SVM) kernel using the Bayesian information criterion (BIC) and the Mahalanobis distance (MD).
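
A heavily simplified sketch of the GMM-supervector/SVM idea follows. Real systems MAP-adapt a universal background model so that supervector dimensions stay aligned across utterances, and the paper additionally uses BIC and the Mahalanobis distance; here each utterance simply gets its own small GMM, and every array is a synthetic stand-in.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def gmm_supervector(frames, n_components=4, seed=0):
    """Fit a small diagonal-covariance GMM to one utterance's feature frames
    and stack its component means into a fixed-length supervector."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          random_state=seed).fit(frames)
    return gmm.means_.ravel()

rng = np.random.default_rng(0)
# Stand-in enrollment data: utterances of the target speaker (label 1) and impostors (label 0).
X = np.array([gmm_supervector(rng.standard_normal((200, 13)) + (1.0 if label else 0.0))
              for label in (1, 1, 1, 0, 0, 0)])
y = np.array([1, 1, 1, 0, 0, 0])
svm = SVC(kernel="linear").fit(X, y)                 # linear-kernel SVM over supervectors
test = gmm_supervector(rng.standard_normal((200, 13)) + 1.0)
print(svm.predict(test.reshape(1, -1)))              # expected: [1] (target speaker)
```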

Improvement of MLLR Speaker Adaptation Algorithm to Reduce Over-adaptation Using ICA and PCA (과적응 감소를 위한 주성분 분석 및 독립성분 분석을 이용한 MLLR 화자적응 알고리즘 개선)

  • 김지운; 정재호
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.539-544 / 2003
  • This paper describes how to reduce the effect of the occupation threshold that controls, within a hierarchical tree structure, which mixture components of the HMM parameters are transformed, so as to prevent over-adaptation. To reduce correlations between data elements and to remove elements with little variance, we employ PCA (principal component analysis) and ICA (independent component analysis), which give as good a representation as possible and lessen the effect of over-adaptation. When the occupation threshold is lowered and the number of transformation functions is increased, the ordinary MLLR adaptation algorithm yields a lower recognition rate than the speaker-independent (SI) models, whereas the proposed MLLR adaptation algorithm improves the word recognition rate by more than 2% compared with the SI models.
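
For context, the underlying MLLR mean transform can be estimated in closed form once one assumes identity covariances and a single global regression class, as sketched below; the paper's actual contribution, the PCA/ICA projection and the hierarchical regression-class tree that curb over-adaptation, is not reproduced, and all data are toy values.

```python
import numpy as np

def global_mllr_mean_transform(means, obs_means, counts):
    """Estimate one global MLLR mean transform W (d x (d+1)) so the adapted
    mean is W @ [1, mu].  With identity covariances and a single regression
    class, the ML estimate reduces to weighted least squares."""
    xi = np.hstack([np.ones((len(means), 1)), means])     # extended means [1, mu]
    G = (xi * counts[:, None]).T @ xi                     # sum_s gamma_s xi xi^T
    K = (obs_means * counts[:, None]).T @ xi              # sum_s gamma_s o_s xi^T
    return K @ np.linalg.inv(G)

rng = np.random.default_rng(0)
means = rng.standard_normal((50, 13))                     # toy HMM mixture means
xi = np.hstack([np.ones((50, 1)), means])
true_W = np.hstack([0.3 * np.ones((13, 1)), 1.1 * np.eye(13)])
obs_means = (true_W @ xi.T).T                             # noise-free adaptation statistics
counts = np.full(50, 10.0)                                # occupation counts
W = global_mllr_mean_transform(means, obs_means, counts)
adapted_means = (W @ xi.T).T                              # adapted HMM means
print(np.allclose(W, true_W))                             # True on this noise-free toy data
```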

Recognizing Five Emotional States Using Speech Signals (음성 신호를 이용한 화자의 5가지 감성 인식)

  • Kang Bong-Seok; Han Chul-Hee; Woo Kyoung-Ho; Yang Tae-Young; Lee Chungyong; Youn Dae-Hee
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.101-104 / 1999
  • In this paper, we built three systems for recognizing a speaker's emotion from speech signals and compared their performance. The target emotions are joy, sadness, anger, fear, boredom, and the neutral state, and we constructed an emotional speech database for each emotion. Pitch and energy information were used as features for emotion recognition, and the recognition algorithms were an MLB (Maximum-Likelihood Bayes) classifier, an NN (Nearest Neighbor) classifier, and an HMM (Hidden Markov Model) classifier. The MLB and NN classifiers used statistical information such as the mean, standard deviation, and maximum of pitch and energy as feature vectors, while the HMM classifier used temporal information such as the delta pitch, delta-delta pitch, delta energy, and delta-delta energy of each frame. The experiments were conducted in a speaker-dependent, sentence-independent manner, and the recognition rates were 68.9% with the MLB classifier, 66.7% with the NN classifier, and 89.3% with the HMM classifier.
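
The pitch and energy statistics fed to the MLB and NN classifiers can be sketched as below, using log frame energy and a crude autocorrelation pitch estimate; the frame length, hop, pitch search range, and the random stand-in signals are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def prosodic_features(signal, fs, frame_len=400, hop=160):
    """Per-frame log energy and autocorrelation pitch, summarized by the
    mean / standard deviation / maximum of each track."""
    energies, pitches = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        energies.append(np.log(np.sum(frame ** 2) + 1e-10))
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag_lo, lag_hi = fs // 400, fs // 60          # search pitch between 60 and 400 Hz
        lag = lag_lo + np.argmax(ac[lag_lo:lag_hi])
        pitches.append(fs / lag)
    feats = []
    for track in (np.asarray(energies), np.asarray(pitches)):
        feats += [track.mean(), track.std(), track.max()]
    return np.asarray(feats)

def nearest_neighbor_emotion(train_feats, train_labels, test_feat):
    """1-NN decision by Euclidean distance in the six-value statistics space."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(np.argmin(d))]

fs = 16000
rng = np.random.default_rng(0)
train = np.stack([prosodic_features(rng.standard_normal(fs), fs) for _ in range(2)])
labels = np.array(["joy", "sadness"])                 # toy two-emotion training set
test = prosodic_features(rng.standard_normal(fs), fs)
print(nearest_neighbor_emotion(train, labels, test))  # prints one of the toy labels
```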

A study on creating Reference Pattern of speech by using the cluster (집단화를 이용한 음성의 표준 패턴설정에 관한 연구)

  • 김계국
    • Proceedings of the Acoustical Society of Korea Conference / 1985.10a / pp.59-63 / 1985
  • This paper describes the construction of 10 reference patterns from 150 digit utterances for speaker-independent speech recognition. The reference patterns for the digits were established by clustering 150 utterances in which three male speakers each pronounced the digits (0-9) five times. Formant frequencies were used as feature parameters, and the Euclidean distance measure was used for similarity comparison. The experiments yielded a recognition rate of 85.3%.
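
A simplified version of the reference-pattern construction is sketched below: one formant-frequency pattern per digit obtained by averaging the digit's utterances (a degenerate form of clustering), with Euclidean-distance matching for recognition; the synthetic formant values are placeholders, not the paper's measurements.

```python
import numpy as np

def build_reference_patterns(formant_vectors, labels):
    """One reference pattern per digit: the mean formant-frequency vector of
    all utterances carrying that digit label."""
    return {digit: formant_vectors[labels == digit].mean(axis=0)
            for digit in np.unique(labels)}

def recognize(formant_vector, patterns):
    """Pick the digit whose reference pattern is nearest in Euclidean distance."""
    return min(patterns, key=lambda d: np.linalg.norm(formant_vector - patterns[d]))

rng = np.random.default_rng(0)
# Stand-in data: 150 utterances (3 speakers x 10 digits x 5 repetitions), each
# reduced to a vector of the first three formant frequencies in Hz.
labels = np.repeat(np.arange(10), 15)
formants = np.array([500.0, 1500.0, 2500.0]) + 80.0 * labels[:, None] \
           + rng.normal(0.0, 40.0, (150, 3))
patterns = build_reference_patterns(formants, labels)
test = np.array([500.0, 1500.0, 2500.0]) + 80.0 * 7 + rng.normal(0.0, 40.0, 3)
print(recognize(test, patterns))                      # most likely 7 on this toy data
```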
