Title/Summary/Keyword: extensions of LDA

Extensions of LDA by PCA Mixture Model and Class-wise Features (PCA 혼합 모형과 클래스 기반 특징에 의한 LDA의 확장)

  • Kim Hyun-Chul; Kim Daijin; Bang Sung-Yang
    • Journal of KIISE: Software and Applications, v.32 no.8, pp.781-788, 2005
  • LDA (Linear Discriminant Analysis) is a data discrimination technique that seeks a transformation maximizing the ratio of the between-class scatter to the within-class scatter. While it has been successfully applied to several applications, it has two limitations, both concerning the underfitting problem. First, it fails to discriminate data with complex distributions, since all data in each class are assumed to follow a Gaussian distribution; second, it can lose class-wise information, since it produces only one transformation over the entire range of classes. We propose three extensions of LDA to overcome these problems. The first extension addresses the first problem by modeling the within-class scatter with a PCA mixture model, which can represent more complex distributions. The second extension addresses the second problem by applying a different transformation to each class, providing class-wise features. The third extension combines the two modifications by representing each class with a PCA mixture model and applying a different transformation to each mixture component. All the proposed extensions are shown to outperform LDA in classification error on handwritten digit recognition and alphabet recognition.
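The objective described in this abstract is standard Fisher LDA; below is a minimal numpy sketch of how the projection matrix is obtained from the two scatter matrices (function and variable names are illustrative, not from the paper, and the pseudo-inverse is just a common workaround for a singular within-class scatter). The paper's extensions would replace the single within-class Gaussian estimate with a PCA mixture model, or fit one such transformation per class or per mixture component.

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Fisher LDA: maximize between-class vs. within-class scatter."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Columns of W solve the generalized eigenproblem Sb w = lambda Sw w;
    # pinv(Sw) keeps the sketch usable when Sw is singular.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real  # projection matrix W
```

Note that at most (#classes - 1) eigenvalues of this problem are nonzero, so the number of classes caps the reduced dimensionality; this is exactly the limit that the pre-clustering approach in the next abstract works around.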

On Optimizing LDA-extensions Using Pre-Clustering (사전 클러스터링을 이용한 LDA-확장법들의 최적화)

  • Kim, Sang-Woon; Koo, Byum-Yong; Choi, Woo-Young
    • Journal of the Institute of Electronics Engineers of Korea CI, v.44 no.3, pp.98-107, 2007
  • In high-dimensional pattern recognition tasks such as face classification, the Small Sample Size (SSS) problem arises when the number of training samples is smaller than the dimensionality. Various LDA extensions, including PCA+LDA and Direct-LDA, have been developed to address this problem. This paper proposes a method of improving classification efficiency by increasing the number of (sub-)classes through pre-clustering the training set before running Direct-LDA. In LDA (or Direct-LDA), the number of classes in the training set limits the dimensionality of the reduced space, so it is increased to the number of sub-classes obtained through clustering, improving the classification performance of the LDA extensions. In other words, the eigenspace of the training set consists of the range space and the null space, and the dimensionality of the range space grows with the number of classes. Therefore, minimizing the null space when constructing the transformation matrix minimizes the loss of discriminative information caused by this space. Experimental results on artificial X-OR data as well as the benchmark AT&T and Yale face databases demonstrate that the proposed method improves classification efficiency.
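A hedged sketch of the pre-clustering step described above, assuming scikit-learn is available (KMeans and plain LinearDiscriminantAnalysis stand in here; the paper applies Direct-LDA, whose implementation is not shown): each class is split into k sub-classes so that the sub-class count, and with it the usable LDA subspace dimensionality, increases.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pre_cluster_labels(X, y, k_per_class=2, seed=0):
    """Relabel each class into k sub-classes via k-means."""
    sub = np.empty(len(y), dtype=int)
    next_id = 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=seed)
        sub[idx] = next_id + km.fit_predict(X[idx])  # unique sub-class ids
        next_id += k_per_class
    return sub

# Usage: fit the projection on sub-class labels; classification afterwards
# still uses the original labels y_train.
# sub_y = pre_cluster_labels(X_train, y_train, k_per_class=2)
# Z = LinearDiscriminantAnalysis().fit_transform(X_train, sub_y)
```

Only the fitting of the transformation sees the finer sub-class labels; the original class labels are kept for the final classification step.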