• Title/Summary/Keyword: 화자 검증 (speaker verification)

RPCA-GMM for Speaker Identification (화자식별을 위한 강인한 주성분 분석 가우시안 혼합 모델)

  • 이윤정;서창우;강상기;이기용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.7
    • /
    • pp.519-527
    • /
    • 2003
  • Speech is strongly affected by outliers introduced by unexpected events such as additive background noise, changes in a speaker's utterance pattern, and voice detection errors, and such outliers can severely degrade speaker recognition performance. In this paper, we propose a GMM based on robust principal component analysis (RPCA-GMM) using M-estimation to address both the outliers and the high dimensionality of the training feature vectors in speaker identification. First, a feature vector of reduced dimension is obtained by robust PCA derived from M-estimation; the robust PCA projects the original feature vector onto the lower-dimensional linear subspace spanned by the leading eigenvectors of the covariance matrix of the feature vectors. Second, a GMM with diagonal covariance matrices is trained on the transformed feature vectors. Speaker identification experiments show the effectiveness of the proposed method: as the portion of outliers increases in 2% steps, the proposed RPCA-GMM maintains almost the same identification rate, degrading by only 0.03%, while the conventional diagonal-covariance GMM and PCA degrade by 0.65% and 0.55%, respectively. This shows that our method is more robust to the presence of outliers.
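
The abstract does not give implementation details, but the pipeline it describes (an M-estimated covariance, projection onto its leading eigenvectors, then a diagonal-covariance GMM) can be sketched as follows. This is a minimal illustration assuming Huber-type weights for the M-estimation and scikit-learn's GaussianMixture; it is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def robust_pca_projection(X, n_components, n_iter=20, c=1.345):
    """Estimate a projection from an M-estimated (Huber-weighted) covariance.

    X: (n_frames, dim) feature matrix. Frames far from the robust center are
    iteratively downweighted, so outlier frames contribute less to the
    covariance eigenvectors.
    """
    w = np.ones(len(X))
    for _ in range(n_iter):
        mu = np.average(X, axis=0, weights=w)
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / w.sum()
        # Mahalanobis-like distance of each frame from the current center
        d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.pinv(cov), Xc))
        d = np.maximum(d, 1e-12)
        w = np.minimum(1.0, c / d)            # Huber-type weights
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return mu, eigvecs[:, order]

def train_speaker_model(X, n_components=16, n_mix=32):
    """Project features robustly, then fit a diagonal-covariance GMM."""
    mu, W = robust_pca_projection(X, n_components)
    Z = (X - mu) @ W
    gmm = GaussianMixture(n_components=n_mix, covariance_type='diag').fit(Z)
    return mu, W, gmm
```

At test time an utterance would be projected with the stored (mu, W) and scored against each speaker's GMM, with the highest-likelihood speaker selected.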

Human-Robot Interaction for a Software Robot (소프트웨어 로봇을 위한 인간-로봇 상호작용)

  • Gwak Geun-Chang;Ji Su-Yeong;Jo Yeong-Jo
    • The Magazine of the IEIE
    • /
    • v.33 no.3 s.262
    • /
    • pp.49-55
    • /
    • 2006
  • For natural interaction between humans and robots, we introduce human-robot interaction (HRI) technologies based on vision and speech. We review face recognition and verification, user identification using semi-biometrics, gesture recognition, speaker recognition and verification, and conversational speech recognition, all of which can run on a software robot with the server/client architecture of the URC concept. These HRI technologies serve as core technologies for URC (Ubiquitous Robotic Companion)-based intelligent service robots that exploit IT infrastructure such as high-speed Internet.


A comparison study of the characteristics of pauses and breath groups during paragraph reading for normal female adults with and without voice disorders (정상성인 여성 화자와 음성장애 성인 여성 화자의 문단 낭독 시 휴지 및 호흡단락 특성의 비교)

  • Pyo, Hwa Young
    • Phonetics and Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.109-116
    • /
    • 2019
  • This study was conducted to identify the characteristics of the pauses and breath groups produced by normal adults and by patients with voice disorders while reading a paragraph. Forty normal female adults and forty female patients with a functional voice disorder (18-45 yrs.) read the "Gaeul" paragraph using the "Running Speech" protocol of the Phonatory Aerodynamic System (PAS), which was used to analyze breath groups and pauses, the latter classified as with or without inspiration and as between or within syntactic words. The number of pauses with inspiration was higher in the patient group, whereas the number of pauses without inspiration was higher in the normal group. The rate of syntactic word boundaries with pauses with inspiration was higher in the patient group, while the number of syllables per breath group was higher in the normal group. As these results can be explained by the patients' poor breath support due to glottal insufficiency, whether patients with voice disorders use pauses and breath groups properly should be considered carefully in evaluation and intervention.

Word-balloon effects on Video (비디오에 대한 말풍선 효과 합성)

  • Lee, Sun-Young;Lee, In-Kwon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06c
    • /
    • pp.332-334
    • /
    • 2012
  • With the recent explosive growth of media data such as films and television dramas, subtitle data translated into various languages has grown as well. Most subtitles are displayed at a fixed position at the bottom or on the right side of the screen, an approach with some limitations: when a subtitle is far from the speaking character's face, the viewer's gaze is divided and it becomes hard to concentrate on the video, and viewers with hearing impairments can be confused about which character is speaking. This paper proposes a new subtitle system that displays video subtitles in word balloons, the device comics use to deliver dialogue. A word balloon points to the speaker with its tail and draws the viewer's gaze near the speaker's face, alleviating the limitations of conventional subtitles. A user study conducted to validate the results showed that the method outperforms conventional subtitles in gaze stability, interest, and accuracy.
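
As an illustration of the placement idea (balloon near the speaker's face, tail pointing at the speaker), here is a hypothetical geometry helper; the function name, box sizes, margins, and above/below fallback rule are all assumptions, not details from the paper.

```python
def place_balloon(face_box, frame_w, frame_h,
                  balloon_w=320, balloon_h=120, margin=16):
    """Place a word balloon near a detected face, clamped inside the frame."""
    x, y, w, h = face_box                     # face bounding box (pixels)
    cx = x + w // 2                           # tail target: top of the head
    bx = min(max(cx - balloon_w // 2, margin), frame_w - balloon_w - margin)
    by = y - balloon_h - margin               # prefer above the face
    if by < margin:                           # fall back to below the face
        by = min(y + h + margin, frame_h - balloon_h - margin)
    tail = (cx, y)                            # tail apex points at the speaker
    return (bx, by, balloon_w, balloon_h), tail
```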

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.71-80
    • /
    • 2023
  • This study aimed to develop an automated stuttering identification and classification method using artificial intelligence, specifically a deep-learning identification model based on convolutional neural networks (CNNs) for Korean speakers who stutter. Speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrase level using Google Cloud speech-to-text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used to detect and classify each type of stuttered disfluency. Only five instances of prolongation were found, however, so that type was excluded from the classifier model. The accuracy of the CNN classifier was 0.96, and the F1-scores were 1.00 for 'fluent', 0.67 for 'blockage', and 0.74 for 'repetition'. Although the automatic classifier was validated for detecting stuttered disfluencies using CNNs, its performance was inadequate, especially for the blockage and prolongation types. Establishing a large speech database that collects data by type of stuttered disfluency was therefore identified as a necessary foundation for improving classification performance.
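
The described pipeline (phrase-level clips, MFCC features, a CNN over the three retained classes) might look roughly like the PyTorch sketch below; the layer sizes and the fixed 128-frame patch length are assumptions, not the authors' architecture.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_patch(wav_path, sr=16000, n_mfcc=40, frames=128):
    """Load a phrase-level clip and return a fixed-size MFCC patch."""
    y, _ = librosa.load(wav_path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)     # (n_mfcc, T)
    m = librosa.util.fix_length(m, size=frames, axis=1)     # pad/trim time axis
    return torch.tensor(m, dtype=torch.float32).unsqueeze(0)  # (1, n_mfcc, T)

class StutterCNN(nn.Module):
    """Small CNN over the MFCC "image"; classes: fluent/blockage/repetition."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# usage: logits = StutterCNN()(mfcc_patch("clip.wav").unsqueeze(0))
```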

Implementation of the Timbre-based Emotion Recognition Algorithm for a Healthcare Robot Application (헬스케어 로봇으로의 응용을 위한 음색기반의 감정인식 알고리즘 구현)

  • Kong, Jung-Shik;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Journal of IKEEE
    • /
    • v.13 no.4
    • /
    • pp.43-46
    • /
    • 2009
  • This paper deals with recognizing emotion from the human voice by finding suitable feature vectors. Voice signals carry not only a speaker's own information but also emotion and fatigue, and much research has sought to recognize emotion from the voice. In this paper, we analyze the Selectable Mode Vocoder (SMV), one of the standard 3GPP2 codecs, and from this analysis we propose voice features for recognizing emotion. We then propose an emotion recognition algorithm based on a Gaussian mixture model (GMM) that uses the suggested feature vectors, and we verify its performance while varying the number of mixture components.
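
A common realization of GMM-based emotion recognition, which the classifier described here resembles at a high level, trains one GMM per emotion and picks the class with the highest likelihood. The sketch below uses scikit-learn and assumes per-frame features have already been extracted; it is illustrative and does not reproduce the paper's SMV-based front end.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_emotion_gmms(features_by_emotion, n_mix=8):
    """Fit one diagonal-covariance GMM per emotion.

    features_by_emotion: dict mapping emotion label -> list of
    (n_frames, dim) feature arrays, one array per training clip.
    """
    models = {}
    for emotion, clips in features_by_emotion.items():
        X = np.vstack(clips)                  # pool all frames of this class
        models[emotion] = GaussianMixture(
            n_components=n_mix, covariance_type='diag').fit(X)
    return models

def classify(models, clip_features):
    """Pick the emotion whose GMM gives the highest mean log-likelihood."""
    scores = {e: g.score(clip_features) for e, g in models.items()}
    return max(scores, key=scores.get)
```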


Real-Time Implementation of Speaker Dependent Speech Recognition Hardware Module Using the TMS320C32 DSP : VR32 (TMS320C32 DSP를 이용한 실시간 화자종속 음성인식 하드웨어 모듈(VR32) 구현)

  • Chung, Ik-Joo;Chung, Hoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.17 no.4
    • /
    • pp.14-22
    • /
    • 1998
  • In this study, we developed a real-time speaker-dependent speech recognition hardware module (VR32) using Texas Instruments' low-cost floating-point digital signal processor (DSP), the TMS320C32. The hardware module consists of a 40 MHz TMS320C32 DSP, a TLC32044 14-bit codec (or an 8-bit μ-law PCM codec), EPROM and SRAM memory, and logic circuitry for the host interface. We also developed a PC interface board and software to evaluate the module on a PC. The recognition algorithm performs endpoint detection based on energy and the zero-crossing rate (ZCR) and 10th-order weighted LPC cepstrum analysis in real time; dynamic time warping (DTW) then selects the best-matching word, and a verification step produces the final recognition result. Endpoint detection uses an adaptive threshold and is therefore robust to noise, and the DTW computation was greatly accelerated through optimization in C and assembly. The recognition rate exceeds 95% for a 30-word vocabulary suitable for speed-dialing applications in an ordinary office environment, and the module also works well in noisy environments such as background music or car noise.
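
The matching stage the module performs, DTW between the input utterance's feature sequence and each stored word template, can be sketched in Python as below; the real module runs optimized C and assembly on the DSP, so this is only a readable reference version.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two feature sequences a (n, d) and b (m, d).

    Returns the accumulated distance of the optimal alignment path;
    the word template with the smallest distance wins recognition.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```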


A Study on Out-of-Vocabulary Rejection Algorithms using Variable Confidence Thresholds (가변 신뢰도 문턱치를 사용한 미등록어 거절 알고리즘에 대한 연구)

  • Bhang, Ki-Duck;Kang, Chul-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.11
    • /
    • pp.1471-1479
    • /
    • 2008
  • In this paper, we propose a technique to improve out-of-vocabulary (OOV) rejection algorithms in variable-vocabulary recognition systems, which are widely used in automatic speech recognition (ASR). Rejection systems fall into two categories by implementation method: keyword spotting and utterance verification. Utterance verification decides OOV using the likelihood ratio of each phoneme's Viterbi score relative to an anti-phoneme score. In this paper, we add a speaker verification stage before utterance verification and compute a speaker verification probability, which is then used to determine the proposed variable confidence threshold. With the proposed method we achieve significant performance improvements: 94.23% CA (correct acceptance of keywords) and 95.11% CR (correct rejection of out-of-vocabulary words) in an office environment, and 91.14% CA and 92.74% CR in a noisy environment.
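
A schematic version of the decision rule, a phoneme-level log-likelihood ratio compared against a threshold shifted by the speaker verification probability, is sketched below. The linear shift and its scale `alpha` are illustrative assumptions, since the abstract does not specify how the threshold varies.

```python
import numpy as np

def accept_keyword(phone_logliks, anti_logliks, spk_prob,
                   base_threshold=0.0, alpha=1.0):
    """Utterance verification with a speaker-dependent threshold.

    phone_logliks / anti_logliks: per-phoneme Viterbi log-likelihoods from
    the phoneme models and the anti-phoneme models.
    spk_prob: speaker verification probability in [0, 1].
    """
    llr = np.mean(np.asarray(phone_logliks) - np.asarray(anti_logliks))
    # Shift the confidence threshold by the speaker verification result:
    # a well-verified speaker gets a more permissive threshold (assumed form).
    threshold = base_threshold - alpha * (spk_prob - 0.5)
    return llr >= threshold
```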


Impact of face masks on spectral and cepstral measures of speech: A case study of two Korean voice actors (한국어 스펙트럼과 캡스트럼 측정시 안면마스크의 영향: 남녀 성우 2인 사례 연구)

  • Wonyoung Yang;Miji Kwon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.4
    • /
    • pp.422-435
    • /
    • 2024
  • This study verified the effects of face masks on Korean speech in terms of acoustic, aerodynamic, and formant parameters. We chose all types of face masks available in Korea, classified by filter performance and folding type. Two professional voice actors (one male and one female), native speakers of standard Korean with more than 20 years of experience, provided the voice data. Face masks attenuated the high-frequency range, resulting in decreased Vowel Space Area (VSA) and Vowel Articulation Index (VAI) scores and an increased Low-to-High spectral ratio (L/H ratio) in all voice samples, which can lower speech intelligibility. However, the degree of change depended on the voice characteristics: for the female speaker, the Speech Level (SL) and Cepstral Peak Prominence (CPP) increased with face mask thickness. In this study, the presence and filter performance of a face mask were found to affect speech acoustic parameters according to the speech characteristics, and face masks provoked vocal effort when vocal intensity was not sufficiently strong or the environment was less reverberant. Further research is needed on the vocal effort induced by face masks to overcome acoustic modifications when wearing masks.
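
Of the reported measures, the Low-to-High spectral ratio is straightforward to reproduce: it compares energy below and above a cutoff frequency. The sketch below assumes a 4 kHz cutoff and a whole-file power spectrum; the paper's exact cutoff and analysis settings are not given in the abstract.

```python
import numpy as np
import soundfile as sf

def lh_ratio_db(wav_path, cutoff_hz=4000.0):
    """Low-to-High spectral ratio in dB: low-band vs. high-band energy."""
    y, sr = sf.read(wav_path)
    if y.ndim > 1:
        y = y.mean(axis=1)                      # mix to mono
    spec = np.abs(np.fft.rfft(y)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    low = spec[freqs < cutoff_hz].sum()
    high = spec[freqs >= cutoff_hz].sum()
    return 10.0 * np.log10(low / max(high, 1e-12))
```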