• Title/Summary/Keyword: LDA mixture model

Face Recognition using LDA Mixture Model (LDA 혼합 모형을 이용한 얼굴 인식)

  • Kim Hyun-Chul;Kim Daijin;Bang Sung-Yang
    • Journal of KIISE:Software and Applications / v.32 no.8 / pp.789-794 / 2005
  • LDA (Linear Discriminant Analysis) provides a projection that discriminates the data well and shows very good performance for face recognition. However, since LDA provides only one transformation matrix over the whole data, it is not sufficient to discriminate complex data consisting of many classes, such as human faces. To overcome this weakness, we propose a new face recognition method, called the LDA mixture model, in which the set of all classes is partitioned into several clusters and a transformation matrix is obtained for each cluster. This more detailed representation greatly improves classification performance. In face recognition simulations, the LDA mixture model outperforms PCA, LDA, and the PCA mixture model in terms of classification performance.
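
As a concrete illustration of the idea, the following is a minimal sketch (not the authors' code) of clustering the classes and fitting one LDA transform per cluster with scikit-learn; the k-means clustering of class means and the way cluster-specific posteriors are combined at test time are assumptions.

```python
# Sketch of an LDA mixture model: partition classes into clusters, then learn
# one LDA transformation per cluster. Assumes each cluster gets at least two classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_lda_mixture(X, y, n_clusters=2):
    classes = np.unique(y)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    # Partition the set of classes into clusters of similar classes.
    cluster_of_class = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(class_means)
    models = []
    for k in range(n_clusters):
        cls_k = classes[cluster_of_class == k]
        mask = np.isin(y, cls_k)
        # One transformation matrix per cluster instead of one for all classes.
        models.append(LinearDiscriminantAnalysis().fit(X[mask], y[mask]))
    return models

def predict_lda_mixture(models, X):
    # Each cluster-specific LDA scores only its own classes; pick the class
    # with the highest posterior across all clusters (an assumed decision rule).
    preds, scores = [], []
    for lda in models:
        proba = lda.predict_proba(X)
        idx = proba.argmax(axis=1)
        preds.append(lda.classes_[idx])
        scores.append(proba.max(axis=1))
    preds, scores = np.array(preds), np.array(scores)
    best = scores.argmax(axis=0)
    return preds[best, np.arange(X.shape[0])]
```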

Extensions of LDA by PCA Mixture Model and Class-wise Features (PCA 혼합 모형과 클래스 기반 특징에 의한 LDA의 확장)

  • Kim Hyun-Chul;Kim Daijin;Bang Sung-Yang
    • Journal of KIISE:Software and Applications / v.32 no.8 / pp.781-788 / 2005
  • LDA (Linear Discriminant Analysis) is a data discrimination technique that seeks a transformation maximizing the ratio of the between-class scatter to the within-class scatter. While it has been successfully applied to several applications, it has two limitations, both related to underfitting. First, it fails to discriminate data with complex distributions, since all data in each class are assumed to follow a Gaussian distribution; second, it can lose class-wise information, since it produces only one transformation over the entire set of classes. We propose three extensions of LDA to overcome these problems. The first extension addresses the first problem by modeling the within-class scatter with a PCA mixture model, which can represent more complex distributions. The second extension addresses the second problem by taking a different transformation for each class in order to provide class-wise features. The third extension combines these two modifications by representing each class with a PCA mixture model and taking a different transformation for each mixture component. All proposed extensions of LDA are shown to outperform LDA in terms of classification error for handwritten digit recognition and alphabet recognition.
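
A minimal sketch of the class-wise feature idea, interpreted here as one one-vs-rest LDA transformation per class; this is an illustrative reading of the second extension, not the paper's exact formulation, and the PCA mixture modeling of the within-class scatter is not shown.

```python
# Sketch of class-wise discriminant features: one transformation per class
# (here a one-vs-rest LDA), used as a class-specific score.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_classwise_lda(X, y):
    models = {}
    for c in np.unique(y):
        binary = (y == c).astype(int)            # class c vs. the rest
        models[c] = LinearDiscriminantAnalysis().fit(X, binary)
    return models

def predict_classwise_lda(models, X):
    # Score every sample under each class-wise transform and pick the class
    # whose discriminant assigns it the highest posterior.
    classes = sorted(models)
    scores = np.column_stack([models[c].predict_proba(X)[:, 1] for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]
```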

Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae;Lee, Soo-Young
    • ETRI Journal / v.38 no.3 / pp.487-493 / 2016
  • Two new methods are proposed for unsupervised adaptation of a language model (LM) with a single sentence for automatic transcription tasks. In the training phase, training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, an adapted LM is formed as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; therefore, the clustering and weight-estimation algorithms of the trained LDA model are reliable. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
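
The adaptation step can be sketched as a linear interpolation of domain-specific LMs weighted by LDA topic posteriors; the helper names and the unigram toy LMs below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of LM adaptation by linear mixture: weight each domain-specific LM by
# the topic posterior of the test sentence estimated with a trained LDA model.
def adapted_prob(word, history, topic_weights, domain_lms):
    """topic_weights[k]: P(topic k | test sentence), summing to 1.
    domain_lms[k]: a function (word, history) -> probability under LM k."""
    return sum(w * lm(word, history) for w, lm in zip(topic_weights, domain_lms))

# Toy example with two unigram "LMs" and assumed topic weights:
lm_sports = lambda w, h: {"goal": 0.02, "model": 0.001}.get(w, 1e-4)
lm_science = lambda w, h: {"goal": 0.002, "model": 0.01}.get(w, 1e-4)
weights = [0.3, 0.7]  # e.g., posterior topic probabilities for the sentence
print(adapted_prob("model", (), weights, [lm_sports, lm_science]))  # 0.0073
```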

A Study on Face Expression Recognition using LDA Mixture Model and Nearest Neighbor Pattern Classification (LDA 융합모델과 최소거리패턴분류법을 이용한 얼굴 표정 인식 연구)

  • No, Jong-Heun;Baek, Yeong-Hyeon;Mun, Seong-Ryong;Gang, Yeong-Jin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.11a / pp.167-170 / 2006
  • This paper presents a study on a facial expression recognition algorithm that uses an LDA mixture model, a linear classifier, together with minimum-distance (nearest neighbor) pattern classification. To recognize facial expressions, the proposed algorithm goes through a two-stage feature extraction process followed by a recognition stage. In the feature extraction stage, facial expression images are first transformed from a high-dimensional into a low-dimensional space using PCA, and the resulting feature vectors are then separated by class using LDA. In the recognition stage, facial expressions are recognized by applying minimum-distance pattern classification to the feature vectors computed through the LDA mixture model. Experiments on a database of six basic emotions (happiness, anger, surprise, fear, sadness, disgust) confirm that the proposed algorithm achieves a higher recognition rate than existing algorithms, and that the recognition rate is uniform regardless of the particular expression.
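
A minimal sketch of the described pipeline, assuming PCA for dimensionality reduction, LDA for class separation, and a nearest-class-mean (minimum distance) decision; the component counts and the Euclidean metric are assumptions.

```python
# Sketch of the two-stage feature extraction (PCA then LDA) followed by a
# minimum-distance decision against per-class mean templates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train(X, y, n_pca=50):
    pca = PCA(n_components=n_pca).fit(X)            # high- to low-dimensional space
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
    Z = lda.transform(pca.transform(X))
    means = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}  # class templates
    return pca, lda, means

def classify(pca, lda, means, X):
    Z = lda.transform(pca.transform(X))
    classes = list(means)
    d = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)]      # minimum-distance decision
```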

A study on user defined spoken wake-up word recognition system using deep neural network-hidden Markov model hybrid model (Deep neural network-hidden Markov model 하이브리드 구조의 모델을 사용한 사용자 정의 기동어 인식 시스템에 관한 연구)

  • Yoon, Ki-mu;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.131-136 / 2020
  • A Wake-Up Word (WUW) is a short utterance used to switch a speech recognizer into recognition mode. A WUW defined by the user who actually uses the speech recognizer is called a user-defined WUW. In this paper, to recognize user-defined WUWs, we construct systems based on the traditional Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), Linear Discriminant Analysis (LDA)-GMM-HMM, and LDA-Deep Neural Network (DNN)-HMM, and compare their performance. To improve the recognition accuracy of the WUW system, a threshold method is also applied to each model, which simultaneously and significantly reduces the WUW recognition error rate and the rejection failure rate for non-WUW utterances. For the LDA-DNN-HMM system, when the WUW error rate is 9.84 %, the rejection failure rate for non-WUW utterances is 0.0058 %, which is about 4.82 times lower than that of the LDA-GMM-HMM system. These results demonstrate that the LDA-DNN-HMM model developed in this paper is highly effective for constructing a user-defined WUW recognition system.
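
One common way to realize such a threshold is a likelihood-ratio test between the WUW model and a filler/background model; the sketch below is an illustrative assumption, not the paper's exact decision rule.

```python
# Sketch of a likelihood-ratio threshold for wake-up word detection: accept
# only when the WUW model outscores the filler model by a sufficient margin.
def detect_wuw(loglik_wuw, loglik_filler, threshold=2.0):
    """Accept the utterance as the wake-up word only when the WUW model
    outscores the filler/background model by more than `threshold`."""
    return (loglik_wuw - loglik_filler) > threshold

# Raising the threshold rejects more non-WUW speech (fewer false acceptances)
# at the cost of missing some genuine wake-up words.
print(detect_wuw(-120.5, -131.0))   # True: margin 10.5 > 2.0
print(detect_wuw(-120.5, -121.0))   # False: margin 0.5 <= 2.0
```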

Performance Improvement of Korean Connected Digit Recognition Using Various Discriminant Analyses (다양한 변별분석을 통한 한국어 연결숫자 인식 성능향상에 관한 연구)

  • Song Hwa Jeon;Kim Hyung Soon
    • MALSORI / no.44 / pp.105-113 / 2002
  • In Korean, each digit is a monosyllable and some pairs are known to be highly confusable, causing performance degradation in connected digit recognition systems. To improve performance, this paper employs various discriminant analyses (DA), including Linear DA (LDA), Weighted Pairwise Scatter LDA (WPS-LDA), Heteroscedastic Discriminant Analysis (HDA), and Maximum Likelihood Linear Transformation (MLLT). Several combinations of these DA methods are also examined for additional performance improvement. Experimental results show that applying any of the DA methods mentioned above improves string accuracy, but the amount of improvement varies with the model complexity, or the number of mixtures per state. In particular, more than 20% string error reduction over the baseline system is achieved by applying MLLT after WPS-LDA when the class level of DA is defined as a tied state and one mixture per state is used.
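
For reference, the basic LDA transform that these methods start from can be computed by solving the generalized eigenproblem S_b v = λ S_w v; the sketch below shows only plain LDA, with WPS-LDA, HDA, and MLLT replacing the scatter definitions or the objective (not shown).

```python
# Sketch of computing an LDA feature transform from class-labeled feature
# vectors via the generalized eigenproblem of between- and within-class scatter.
import numpy as np
from scipy.linalg import eigh

def lda_transform(X, y, n_dims):
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                      # within-class scatter
    Sb = np.zeros((d, d))                      # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # Generalized eigenvectors with the largest eigenvalues span the LDA space;
    # a small ridge keeps Sw positive definite.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]
```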

Detection of Pathological Voice Using Linear Discriminant Analysis

  • Lee, Ji-Yeoun;Jeong, Sang-Bae;Choi, Hong-Shik;Hahn, Min-Soo
    • MALSORI / no.64 / pp.77-88 / 2007
  • Nowadays, mel-frequency cepstral coefficients (MFCCs) and Gaussian mixture models (GMMs) are widely used for pathological voice detection. This paper suggests a method to improve the performance of pathological/normal voice classification based on the MFCC-based GMM. We analyze the characteristics of the mel frequency-based filterbank energies using the Fisher discriminant ratio (FDR), and implement feature vectors obtained through linear discriminant analysis (LDA) transformation of the filterbank energies (FBE) and the MFCCs. Accuracy is measured with a GMM classifier. The FBE LDA-based GMM is shown to be a sufficiently discriminative method for pathological/normal voice classification, with a 96.6% classification rate. The proposed method outperforms the MFCC-based GMM, with a noticeable improvement of 54.05% in terms of error reduction.
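
A minimal sketch of the LDA + GMM classification stage, assuming one GMM per class fitted on LDA-transformed features and a maximum-likelihood decision; the filterbank-energy extraction and all hyperparameters are omitted or assumed.

```python
# Sketch of LDA-transformed features scored under per-class GMMs
# (normal vs. pathological); the higher-likelihood class wins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def train(X, y, n_components=8):
    lda = LinearDiscriminantAnalysis().fit(X, y)    # y: 0 = normal, 1 = pathological
    Z = lda.transform(X)
    gmms = {c: GaussianMixture(n_components=n_components).fit(Z[y == c])
            for c in np.unique(y)}
    return lda, gmms

def classify(lda, gmms, X):
    Z = lda.transform(X)
    ll = np.column_stack([gmms[c].score_samples(Z) for c in sorted(gmms)])
    return np.array(sorted(gmms))[ll.argmax(axis=1)]
```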

Performance Improvement of Classification Between Pathological and Normal Voice Using HOS Parameter (HOS 특징 벡터를 이용한 장애 음성 분류 성능의 향상)

  • Lee, Ji-Yeoun;Jeong, Sang-Bae;Choi, Hong-Shik;Hahn, Min-Soo
    • MALSORI / no.66 / pp.61-72 / 2008
  • This paper proposes a method to improve pathological/normal voice classification performance by combining multiple features, such as auditory-based and higher-order features. Their performance is measured by Gaussian mixture models (GMMs) and linear discriminant analysis (LDA). The proposed frame-based LDA combination of multiple features is shown to be effective for pathological/normal voice classification, with an 87.0% classification rate. This is a noticeable improvement of 17.72% over the MFCC-based GMM algorithm in terms of error reduction.
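
A minimal sketch of one plausible reading of the frame-based combination: feature streams are concatenated per frame, projected by LDA, classified frame by frame, and aggregated by majority vote; the combination and voting scheme are assumptions for illustration.

```python
# Sketch of feature-level combination of two per-frame feature streams,
# a frame-based LDA classifier, and an utterance-level majority vote.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train(frame_feats_a, frame_feats_b, frame_labels):
    X = np.hstack([frame_feats_a, frame_feats_b])    # feature-level combination
    return LinearDiscriminantAnalysis().fit(X, frame_labels)

def classify_utterance(lda, feats_a, feats_b):
    X = np.hstack([feats_a, feats_b])
    frame_pred = lda.predict(X)                      # per-frame decision
    values, counts = np.unique(frame_pred, return_counts=True)
    return values[counts.argmax()]                   # majority vote over frames
```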

A Gaussian Mixture Model Based Surface Electromyogram Pattern Classification Algorithm for Estimation of Wrist Motions (손목 움직임 추정을 위한 Gaussian Mixture Model 기반 표면 근전도 패턴 분류 알고리즘)

  • Jeong, Eui-Chul;Yu, Song-Hyun;Lee, Sang-Min;Song, Young-Rok
    • Journal of Biomedical Engineering Research / v.33 no.2 / pp.65-71 / 2012
  • In this paper, the Gaussian Mixture Model (GMM), which provides very robust modeling for pattern classification, is proposed to classify wrist motions using surface electromyograms (EMG). EMG is widely used to recognize wrist motions such as up, down, left, right, and rest; signals were obtained from two electrodes placed on the flexor carpi ulnaris and extensor carpi ulnaris of 15 subjects under a no-strain condition during wrist motions. EMG-based features are derived from the extracted EMG signals in the time domain for fast processing. The estimated features, based on the difference absolute mean value (DAMV), are used for motion classification through the GMM. The performance of the approach is evaluated by recognition rate, and the proposed GMM-based method is found to yield better results than conventional schemes, including k-Nearest Neighbor (k-NN), Quadratic Discriminant Analysis (QDA), and Linear Discriminant Analysis (LDA).
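
A minimal sketch of the pipeline as described: a difference absolute mean value (DAMV) feature per channel, one GMM per wrist motion, and a maximum-likelihood decision; the DAMV definition, window handling, and GMM size are assumptions.

```python
# Sketch of DAMV feature extraction from windowed EMG plus per-motion GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

def damv(window):
    """window: (n_samples, n_channels) raw EMG; returns one DAMV per channel."""
    return np.mean(np.abs(np.diff(window, axis=0)), axis=0)

def train(windows_by_motion, n_components=3):
    # windows_by_motion: dict motion -> list of EMG windows for that motion.
    return {m: GaussianMixture(n_components=n_components).fit(
                np.vstack([damv(w) for w in ws]))
            for m, ws in windows_by_motion.items()}

def classify(gmms, window):
    x = damv(window).reshape(1, -1)
    return max(gmms, key=lambda m: gmms[m].score(x))  # highest log-likelihood wins
```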

Semantic Dependency Link Topic Model for Biomedical Acronym Disambiguation (의미적 의존 링크 토픽 모델을 이용한 생물학 약어 중의성 해소)

  • Kim, Seonho;Yoon, Juntae;Seo, Jungyun
    • Journal of KIISE / v.41 no.9 / pp.652-665 / 2014
  • Many important terminologies in biomedical text are expressed as abbreviations or acronyms. We propose a semantic dependency link topic model, based on the concepts of topic and dependency link, to disambiguate biomedical abbreviations and to cluster long-form variants of abbreviations that refer to the same sense. The model is a generative model inspired by the latent Dirichlet allocation (LDA) topic model, in which each document is viewed as a mixture of topics, with each topic characterized by a distribution over words. Thus, the words of a document are generated from a hidden topic structure of the document, and the topic structure is inferred from the observable word sequences of the document collection. In this study, we allow two distinct word-generation processes to incorporate semantic dependencies between words, particularly between expansions (long forms) of abbreviations and their sentential co-occurring words. Besides topic information, the semantic dependency between words is defined as a link, and a new random parameter for link presence is assigned to each word. As a result, the most probable expansions for the abbreviations in a given abstract are decided by the word-topic distribution, the document-topic distribution, and the word-link distribution estimated from the document collection through the semantic dependency link topic model. Abstracts retrieved from the MEDLINE Entrez interface by queries relating to 22 abbreviations and their 186 expansions were used as the data set. The link topic model correctly predicted expansions of abbreviations with an accuracy of 98.30%.
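
A heavily simplified sketch of the disambiguation decision, scoring each candidate expansion by combining only the document-topic and topic-word distributions; the link variable and the full generative model are omitted, and all names and values are toy assumptions.

```python
# Simplified sketch: score a candidate expansion by the product over its
# content words of sum_k P(topic k | document) * P(word | topic k).
import numpy as np

def score_expansion(expansion_words, doc_topic, topic_word):
    """doc_topic: (K,) P(topic|document); topic_word: dict word -> (K,) P(word|topic)."""
    score = 1.0
    for w in expansion_words:
        score *= float(np.dot(doc_topic, topic_word.get(w, np.full(len(doc_topic), 1e-6))))
    return score

def disambiguate(candidates, doc_topic, topic_word):
    # candidates: list of expansions, each a list of content words.
    return max(candidates, key=lambda words: score_expansion(words, doc_topic, topic_word))
```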