• Title/Abstract/Keyword: Speaker Classification


음성신호기반의 감정인식의 특징 벡터 비교 (A Comparison of Effective Feature Vectors for Speech Emotion Recognition)

  • 신보라;이석필
    • 전기학회논문지 / Vol. 67, No. 10 / pp.1364-1369 / 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, and which combinations of features, are valuable and meaningful for emotion classification. The main aim of the paper is to discuss and compare approaches to feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
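
As an illustration only (not taken from the paper), the sketch below extracts a few of the feature types commonly compared in SER work, namely MFCCs, frame energy, and zero-crossing rate, using librosa; the file name, sampling rate, and summary statistics are assumptions.

```python
# Illustrative sketch (not the paper's implementation): extract common
# emotion-related features from one utterance. "utterance.wav" is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)

# Spectral shape: 13 MFCCs per frame, summarized by mean and std over time.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Prosody-related cues: frame energy and zero-crossing rate.
rms = librosa.feature.rms(y=y)
zcr = librosa.feature.zero_crossing_rate(y)

# A simple fixed-length feature vector built from per-feature statistics.
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [rms.mean(), rms.std(), zcr.mean(), zcr.std()],
])
print(features.shape)  # (30,) = 13 + 13 + 4
```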

연속음성중 키워드(Keyword) 인식을 위한 Binary Clustering Network (Binary clustering network for recognition of keywords in continuous speech)

  • 최관선;한민홍
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1993년도 한국자동제어학술회의논문집(국내학술편); Seoul National University, Seoul; 20-22 Oct. 1993 / pp.870-876 / 1993
  • This paper presents a binary clustering network (BCN) and a heuristic pitch-detection algorithm for recognizing keywords in continuous speech. To classify nonlinear patterns, BCN separates patterns hierarchically into binary clusters and links identical patterns at the root level, using both supervised and unsupervised learning. BCN has many desirable properties, such as a flexible dynamic structure, high classification accuracy, short learning time, and short recall time. The pitch-detection algorithm is a heuristic model that handles difficulties such as scaling invariance, time warping, time-shift invariance, and redundancy. The recognition algorithm has shown recognition rates as high as 95% in speaker-dependent as well as multi-speaker-dependent tests.
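
The BCN itself is not specified in enough detail here to reproduce; purely as a loose illustration of the hierarchical binary-clustering idea, the sketch below recursively splits feature vectors into two clusters with plain 2-means down to a fixed depth. Function names, the depth, and the synthetic data are assumptions.

```python
# Loose illustration of hierarchical binary clustering (not the BCN itself):
# recursively split the data into two clusters until a depth limit is reached.
import numpy as np

def two_means(x, iters=20, seed=0):
    """Plain 2-means; returns a boolean membership mask for cluster 0."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=2, replace=False)]
    for _ in range(iters):
        mask = (np.linalg.norm(x - centers[0], axis=1)
                < np.linalg.norm(x - centers[1], axis=1))
        if mask.all() or not mask.any():
            break
        centers = np.array([x[mask].mean(axis=0), x[~mask].mean(axis=0)])
    return mask

def binary_cluster_tree(x, depth=3, prefix=""):
    """Assign each leaf cluster a binary code built up along the splits."""
    if depth == 0 or len(x) < 2:
        return {prefix: x}
    mask = two_means(x)
    leaves = {}
    leaves.update(binary_cluster_tree(x[mask], depth - 1, prefix + "0"))
    leaves.update(binary_cluster_tree(x[~mask], depth - 1, prefix + "1"))
    return leaves

data = np.random.default_rng(1).normal(size=(200, 12))   # placeholder patterns
leaves = binary_cluster_tree(data, depth=3)
print({code: len(members) for code, members in leaves.items()})
```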


Discriminative Training of Stochastic Segment Model Based on HMM Segmentation for Continuous Speech Recognition

  • Chung, Yong-Joo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / Vol. 15, No. 4E / pp.21-27 / 1996
  • In this paper, we propose a discriminative training algorithm for the stochastic segment model (SSM) in continuous speech recognition. As the SSM is usually trained by maximum likelihood estimation (MLE), a discriminative training algorithm is required to improve recognition performance. Since the SSM does not assume the conditional independence of the observation sequence, as hidden Markov models (HMMs) do, the search space for decoding an unknown input utterance grows considerably. To reduce the computational complexity and the search space of an iterative discriminative training algorithm for SSMs, a hybrid architecture of SSMs and HMMs is used, in which the segment boundaries are obtained from an HMM-based segmentation. Given the segment boundaries, the parameters of the SSM are discriminatively trained under the minimum classification error criterion using a generalized probabilistic descent (GPD) method. With the discriminative training of the SSM, the word error rate is reduced by 17% compared with the MLE-trained SSM in speaker-independent continuous speech recognition.
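
To make the minimum classification error / GPD idea concrete, here is a toy numpy sketch, not the paper's SSM training: one linear score per class is updated by gradient descent on a sigmoid of the misclassification measure. The smoothing constant, step size, and synthetic data are all assumptions.

```python
# Toy MCE/GPD sketch (not the paper's SSM training). Class scores are linear,
# g_k(x) = W[k]·x + b[k]; the misclassification measure for true class k is
# d_k(x) = -g_k(x) + (1/eta) * log(mean over j != k of exp(eta * g_j(x))),
# and the loss is sigmoid(gamma * d_k(x)).
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 2
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])    # synthetic classes
X = np.vstack([rng.normal(m, 1.0, size=(100, D)) for m in means])
y = np.repeat(np.arange(K), 100)

W, b = np.zeros((K, D)), np.zeros(K)
eta, gamma, lr = 2.0, 1.0, 0.05

for epoch in range(50):
    for x, k in zip(X, y):
        g = W @ x + b
        others = np.delete(np.arange(K), k)
        m = np.max(eta * g[others])                        # for numerical stability
        lme = (m + np.log(np.mean(np.exp(eta * g[others] - m)))) / eta
        d = -g[k] + lme                                    # misclassification measure
        s = 1.0 / (1.0 + np.exp(-gamma * d))               # sigmoid loss
        dloss_dd = gamma * s * (1.0 - s)
        p = np.exp(eta * g[others] - m); p /= p.sum()      # competitor weights
        grad_g = np.zeros(K); grad_g[k] = -1.0; grad_g[others] = p
        W -= lr * dloss_dd * np.outer(grad_g, x)           # GPD-style update
        b -= lr * dloss_dd * grad_g
print("training accuracy:", np.mean(np.argmax(X @ W.T + b, axis=1) == y))
```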


A Study on the Optimal Mahalanobis Distance for Speech Recognition

  • Lee, Chang-Young
    • 음성과학 / Vol. 13, No. 4 / pp.177-186 / 2006
  • In an effort to enhance the quality of feature-vector classification and thereby reduce the recognition error rate of speaker-independent speech recognition, we employ the Mahalanobis distance as the similarity measure between feature vectors. The metric matrix of the Mahalanobis distance is assumed to be diagonal to reduce memory and computation costs. We propose that the diagonal elements be given in terms of the variances of the feature-vector components. Geometrically, this prescription tends to redistribute the data set into the shape of a hypersphere in the feature-vector space. The idea is applied to speech recognition with a hidden Markov model and fuzzy vector quantization. The results show that recognition improves with an appropriate choice of the relevant adjustable parameter. The Viterbi score difference between the two best candidates in the recognition test behaves in general accord with the recognition error rate.
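
A minimal sketch of the diagonal Mahalanobis idea described above (variable names and data are assumptions): the per-dimension variances of the training vectors define the diagonal metric, which is equivalent to rescaling each dimension by its standard deviation.

```python
# Minimal sketch: Mahalanobis distance with a diagonal metric whose entries
# are the inverse variances of the feature-vector components.
import numpy as np

def diagonal_mahalanobis_sq(x, y, inv_var):
    """Squared distance: sum_i (x_i - y_i)^2 / var_i."""
    d = x - y
    return float(np.sum(d * d * inv_var))

rng = np.random.default_rng(0)
train = rng.normal(scale=[1.0, 5.0, 0.2], size=(1000, 3))   # unequal spreads
inv_var = 1.0 / train.var(axis=0)

a, b = np.zeros(3), np.array([1.0, 5.0, 0.2])               # one sigma apart per axis
print("squared Euclidean:  ", float(np.sum((a - b) ** 2)))
print("squared Mahalanobis:", diagonal_mahalanobis_sq(a, b, inv_var))
# With the diagonal metric, a one-standard-deviation displacement contributes
# roughly the same amount in every dimension (the "hypersphere" effect).
```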


음성망을 이용한 한국어 연속 숫자음 인식에 관한 연구 (Study on the Recognition of Spoken Korean Continuous Digits Using Phone Network)

  • 이강성;이형준;변용규;김순협
    • 대한전기학회:학술대회논문집 / 대한전기학회 1988년도 전기.전자공학 학술대회 논문집 / pp.624-627 / 1988
  • This paper describes the implementation of speaker-dependent recognition of spoken Korean continuous digits. The recognition system consists of two parts: an acoustic-phonetic processor and a lexical decoder. The acoustic-phonetic processor calculates feature vectors from the input speech signal and then performs frame labelling and phone labelling. Frame labelling uses a Bayesian classification method, and phone labelling uses the labelled frames and a posteriori probabilities. The lexical decoder accepts segments (phones) from the acoustic-phonetic processor and decodes their lexical structure through a phone network constructed from the phonetic representations of the ten digits. The experiment was carried out with two sets of four-digit continuous utterances, each set composed of 35 patterns. Evaluation of the system yielded a pattern accuracy of about 80 percent, resulting from a word accuracy of about 95 percent.
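
Purely as a hypothetical illustration of the Bayesian frame-labelling step described above (not the paper's system), the sketch below assigns each frame the phone class with the highest posterior under diagonal Gaussian class-conditional models; the phone set, priors, and features are invented.

```python
# Hypothetical sketch of Bayesian frame labelling: choose the phone class with
# the largest posterior p(c|x), proportional to p(x|c) * p(c), under diagonal
# Gaussian class models.
import numpy as np

phones = ["i", "l", "s", "a", "m"]                 # invented phone set
rng = np.random.default_rng(0)
D = 8                                              # assumed feature dimension
means = {p: rng.normal(size=D) for p in phones}
variances = {p: rng.uniform(0.5, 1.5, size=D) for p in phones}
priors = {p: 1.0 / len(phones) for p in phones}

def log_gaussian(x, mean, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def label_frame(x):
    scores = {p: log_gaussian(x, means[p], variances[p]) + np.log(priors[p])
              for p in phones}
    return max(scores, key=scores.get)

frames = rng.normal(size=(20, D))                  # placeholder feature frames
print([label_frame(f) for f in frames])
```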


Japanese and Korean speakers' production of Japanese fricative /s/ and affricate /ts/

  • Yamakawa, Kimiko;Amano, Shigeaki
    • 말소리와 음성과학 / Vol. 14, No. 1 / pp.13-19 / 2022
  • This study analyzed the pronunciations of Japanese fricative /s/ and affricate /ts/ by 24 Japanese and 40 Korean speakers using the rise and steady+decay durations of their frication part in order to clarify the characteristics of their pronunciations. Discriminant analysis revealed that Japanese speakers' /s/ and /ts/ were well classified by the acoustic boundaries defined by a discriminant function. Using this boundary, Korean speakers' production of /s/ and /ts/ was analyzed. It was found that, in Korean speakers' pronunciation, misclassification of /s/ as /ts/ was more frequent than that of /ts/ as /s/, indicating that both the /s/ and /ts/ distributions shift toward short rise and steady+decay durations. Moreover, their distributions were very similar to those of Korean fricatives and affricates. These results suggest that Korean speakers' classification error might be because of their use of Korean lax and tense fricatives to pronounce Japanese /s/, and Korean lax and tense affricates to pronounce Japanese /ts/.
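
Purely to illustrate the kind of two-feature discriminant analysis described above (the duration values below are invented, not the study's measurements), a linear discriminant can be fit on rise and steady+decay durations for one speaker group and its boundary then applied to tokens from another group.

```python
# Illustrative two-feature discriminant analysis (invented data): learn a
# boundary between /s/ and /ts/ from rise and steady+decay durations, then
# apply it to another set of tokens.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Columns: [rise duration, steady+decay duration] in ms (placeholder values).
s_tokens = rng.normal([60.0, 120.0], [10.0, 20.0], size=(50, 2))   # fricative-like
ts_tokens = rng.normal([15.0, 60.0], [5.0, 15.0], size=(50, 2))    # affricate-like
X = np.vstack([s_tokens, ts_tokens])
y = np.array([0] * 50 + [1] * 50)                  # 0 = /s/, 1 = /ts/

lda = LinearDiscriminantAnalysis().fit(X, y)

# Classify other speakers' tokens (also invented) with the learned boundary.
other = rng.normal([40.0, 90.0], [15.0, 25.0], size=(10, 2))
print(lda.predict(other))                          # 0/1 labels from the boundary
```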

발화행태 특징을 활용한 응급상황 신고자 연령분류 (Age classification of emergency callers based on behavioral speech utterance characteristics)

  • 손귀영;권순일;백성욱
    • 한국차세대컴퓨팅학회논문지 / Vol. 13, No. 6 / pp.96-105 / 2017
  • This paper classifies callers' age through speech analysis of calls actually received at an emergency call center. Using two behavioral utterance features, silent pause and turn-taking latency, classification criteria for separating adults from the elderly were selected, and classification accuracy was verified with a support vector machine (SVM) classifier. First, auditory analysis of the real emergency calls based on these behavioral features confirmed that their durations differ between adults and the elderly with statistical significance (p<0.05). Machine learning was then run on 200 speech samples (100 adults and 100 elderly) with 5-fold cross-validation; the combined criterion using both behavioral features (silent pause + turn-taking latency) gave the highest classification accuracy, 70%. These results suggest a new approach to speech-based age classification that combines conventional acoustic information with new behavioral utterance features. They can also serve as basic data for the future development of speech-based situation-assessment technology, contributing to rapid age classification and appropriate situational response.
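
A minimal sketch of the classification setup the abstract describes, with synthetic numbers standing in for the real call-center measurements (the feature values, kernel, and scaling are assumptions): two features per call, silent-pause duration and turn-taking latency, classified with an SVM under 5-fold cross-validation.

```python
# Minimal sketch (synthetic data, not the study's recordings): SVM with 5-fold
# cross-validation on two behavioral features per call:
# [silent-pause duration, turn-taking latency].
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
adult = rng.normal([0.4, 0.6], [0.2, 0.3], size=(100, 2))     # seconds (assumed)
elderly = rng.normal([0.9, 1.3], [0.4, 0.5], size=(100, 2))
X = np.vstack([adult, elderly])
y = np.array([0] * 100 + [1] * 100)                # 0 = adult, 1 = elderly

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracies:", np.round(scores, 2), "mean:", round(scores.mean(), 2))
```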

Harmonics(배음)와 Formant Bandwidth(포먼트 폭)를 이용한 음성특성(音聲特性)과 사상체질간(四象體質間)의 상관성(相關性) 연구(硏究) (A Study on the Correlation Between Sasang Constitution and Sound Characteristics Used Harmonics and Formant Bandwidth)

  • 박성진;김달래
    • 사상체질의학회지 / Vol. 16, No. 1 / pp.61-73 / 2004
  • This study investigated the correlation between Sasang constitutional groups and voice characteristics using a voice analysis system (CSL), focusing on harmonics, formant frequency, and formant bandwidth. The subjects were 71 males, classified into three groups: Soeumin, Soyangin, and Taeumin. Constitution was determined in two ways, by QSCCII (Questionnaire for the Sasang Constitution Classification II) and by interview with a specialist in Sasang constitution; the 71 subjects were categorized into 31 Soeumin, 18 Soyangin, and 22 Taeumin. Pitch corresponds approximately to the fundamental frequency (F0) of the voice, and shimmer in dB evaluates the period-to-period variability of the peak-to-peak amplitude within the analyzed voice sample. The FFT (Fast Fourier Transform) method in CSL displays the sampled voice as harmonics; h1 is the first peak and h2 the second peak of the harmonics. The amplitude difference between h1 and h2 (h1-h2) reflects the speaker's phonation type, and formant frequency and bandwidth reflect the speaker's vocal tract, so harmonics, formant frequency, and formant bandwidth were chosen as the voice parameters. The vowel /e/ was recorded from every subject with a microphone and analyzed with CSL, whose Power Spectrum and Formant History menus display harmonics, formant frequency, and formant bandwidth. The results on the correlation between the Sasang constitutional groups and the voice parameters are as follows. 1. There is no significant difference in the harmonic amplitude difference (h1-h2) among the three groups. 2. There is a significant difference between the Soeumin and Soyangin groups in formant frequency 1 and formant bandwidth 1 (p<0.05); no other parameter shows a significant difference. Judging from the formant bandwidth difference, the Soyangin group appears to have a clearer and brighter voice than the Soeumin group, which agrees with the descriptions in "Dongyi Suse Bowon" and "Sasangimhejinam".
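
To show how an h1-h2 measure can be read off an FFT power spectrum (a generic sketch, not the CSL procedure used in the paper; the signal and F0 are synthetic), the code below locates the first two harmonic peaks of a vowel-like test signal and reports their level difference in dB.

```python
# Generic sketch (not CSL): measure h1-h2, the dB difference between the first
# and second harmonic peaks, from an FFT magnitude spectrum.
import numpy as np

sr = 16000
f0 = 120.0                                         # synthetic fundamental (Hz)
t = np.arange(0, 0.5, 1 / sr)
# Vowel-like test signal with the first harmonic stronger than the second.
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)
x *= np.hanning(len(x))

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / sr)

def harmonic_peak(center_hz, half_width_hz=20.0):
    """Largest magnitude within +/- half_width_hz of an expected harmonic."""
    band = (freqs > center_hz - half_width_hz) & (freqs < center_hz + half_width_hz)
    return spec[band].max()

h1, h2 = harmonic_peak(f0), harmonic_peak(2 * f0)
print("h1-h2 =", round(20 * np.log10(h1 / h2), 2), "dB")   # about 20*log10(1/0.4)
```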


SOFM 신경회로망을 이용한 한국어 음소 인식 (Korean Phoneme Recognition Using Self-Organizing Feature Map)

  • 전용구;양진우;김순협
    • 한국음향학회지 / Vol. 14, No. 2 / pp.101-112 / 1995
  • In this paper, a phoneme-based recognition system whose recognition unit is the phoneme is constructed on the basis of pattern matching. The chosen network architecture is T. Kohonen's Self-Organizing Feature Map (SOFM), a biologically inspired neural network, used as the clusterer in the pattern-matching stage. The SOFM carries out a self-organization process that forms a near-optimal local topographical mapping of the signal space and, as a result, achieves considerably high accuracy on recognition problems, so it can be applied effectively to phoneme recognition. To further improve the performance of the phoneme recognition system, a learning algorithm combined with K-means clustering is proposed. To evaluate the proposed system, 43 phonemes were used, grouped into 17 vowels, 9 plosives, 3 fricatives, 3 affricates, 4 liquids and nasals, and 7 syllable-final consonants with distinct phonetic properties; a feature map was constructed for each phoneme group and served as a labeler. Speaker-dependent recognition experiments showed a recognition rate of 87.2%, confirming the fast convergence and improved recognition rate of the proposed learning method.
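
As a hedged illustration of combining a self-organizing feature map with K-means (not the paper's algorithm), the sketch below initializes a small one-dimensional SOM codebook with K-means centroids and then refines it with the standard Kohonen update under a shrinking neighborhood. The map size, learning schedule, and data are assumptions.

```python
# Illustrative sketch (not the paper's system): a 1-D SOM whose codebook is
# initialized by K-means and refined with the Kohonen update rule.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 12))                  # placeholder feature vectors
map_size = 16                                      # number of SOM nodes (assumed)

# K-means initialization of the codebook, standing in for the combined
# SOFM + K-means learning mentioned in the abstract.
codebook = KMeans(n_clusters=map_size, n_init=10,
                  random_state=0).fit(data).cluster_centers_.copy()

epochs = 20
for e in range(epochs):
    lr = 0.5 * (1 - e / epochs)                            # decaying learning rate
    radius = max(1.0, (map_size / 2) * (1 - e / epochs))   # shrinking neighborhood
    for x in data:
        bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))   # best-matching unit
        grid_dist = np.abs(np.arange(map_size) - bmu)           # 1-D map distance
        h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))       # neighborhood weights
        codebook += lr * h[:, None] * (x - codebook)

# Each input is labelled by the index of its best-matching unit.
labels = np.argmin(np.linalg.norm(data[:, None, :] - codebook[None], axis=2), axis=1)
print(np.bincount(labels, minlength=map_size))
```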


I-벡터 기반 오픈세트 언어 인식을 위한 다중 판별 DNN (Multiple Discriminative DNNs for I-Vector Based Open-Set Language Recognition)

  • 강우현;조원익;강태균;김남수
    • 한국통신학회논문지 / Vol. 41, No. 8 / pp.958-964 / 2016
  • In this paper, we propose an i-vector based language recognition system that uses multiple discriminative deep neural network (DNN) models, analogous to a multi-class SVM that classifies three or more classes with several binary SVMs. The proposed system was trained and tested on the i-vectors included in the NIST 2015 i-vector Machine Learning Challenge database, and in the open-set condition it outperformed conventional language recognition systems based on cosine distance, multi-class SVM, and a single neural network (NN).
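
A hedged sketch of the one-versus-rest scheme the abstract describes, with small MLPs standing in for the paper's discriminative DNNs and random vectors standing in for i-vectors: each binary model scores "its" language against the rest, and an input whose best score falls below a threshold is treated as out-of-set. The dimensions, threshold, and network size are assumptions.

```python
# Hedged sketch (not the paper's models): one binary MLP per in-set language
# applied to i-vector-like inputs; a low maximum score means out-of-set.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
dim, n_per_class = 50, 200                         # assumed i-vector dimension
languages = ["lang_A", "lang_B", "lang_C"]
centers = {lang: rng.normal(size=dim) for lang in languages}

X = np.vstack([rng.normal(centers[l], 1.0, size=(n_per_class, dim)) for l in languages])
y = np.repeat(languages, n_per_class)

# One binary discriminator per language (target language vs. all others).
models = {lang: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
                .fit(X, (y == lang).astype(int))
          for lang in languages}

def recognize(x, threshold=0.5):
    scores = {lang: m.predict_proba(x.reshape(1, -1))[0, 1] for lang, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "out_of_set"

test_in = rng.normal(centers["lang_B"], 1.0, size=dim)    # near an in-set language
test_far = rng.normal(5.0, 1.0, size=dim)                 # far from the training data
print(recognize(test_in), recognize(test_far))
```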