• Title/Abstract/Keyword: kernel principal component analysis

61 search results (processing time: 0.027 s)

Arrow Diagrams for Kernel Principal Component Analysis

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / Vol. 20, No. 3 / pp.175-184 / 2013
  • Kernel principal component analysis (KPCA) maps observations in a nonlinear feature space to a reduced-dimensional plane of principal components. The feature space need not be specified explicitly because the procedure uses the kernel trick. In this paper, we propose a graphical scheme to represent variables in kernel principal component analysis. In addition, we propose an index for individual variables that measures their importance in the principal component plane.
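The kernel trick the abstract relies on can be made concrete in a few lines: double-center the kernel matrix in feature space, eigendecompose it, and read off component scores, all without ever constructing the feature map. A minimal NumPy sketch (the RBF kernel, its bandwidth, and the toy data are my own choices, not from the paper):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Double-center K, which centers the (implicit) mapped data in feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    # Scale eigenvectors so the feature-space axes have unit norm, then project
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                        # component scores of observations

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2)
```

The scores `Z` play the role of the "reduced-dimensional plane" in the abstract; the nonlinear feature space itself is never materialized.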

Speaker Identification on Various Environments Using an Ensemble of Kernel Principal Component Analysis

  • Yang, Il-Ho;Kim, Min-Seok;So, Byung-Min;Kim, Myung-Jae;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / Vol. 31, No. 3 / pp.188-196 / 2012
  • This paper proposes a speaker identification method that trains multiple classifiers on speaker features enhanced by kernel principal component analysis (KPCA) and combines them in an ensemble. To reduce computation and memory requirements, a random subset of all speaker feature vectors is selected for estimating the KPCA basis. Experimental results show that the proposed method achieves a higher speaker identification rate than greedy kernel principal component analysis (GKPCA).
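The random-subset idea — estimating the KPCA basis from m ≪ N sampled vectors so that only an m×m kernel matrix is ever formed — can be sketched as follows (toy data; the subset size, kernel, and the centering of out-of-sample rows are my assumptions, not the paper's exact procedure):

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))            # all feature vectors (toy stand-in)
m = 60                                    # random subset size for the basis
sub = rng.choice(len(X), size=m, replace=False)
B = X[sub]

K = rbf(B, B)                             # only m x m instead of 500 x 500
one = np.ones((m, m)) / m
Kc = K - one @ K - K @ one + one @ K @ one
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:5]
alphas = vecs[:, idx] / np.sqrt(vals[idx])

# Project every observation using only kernel values against the subset
Kt = rbf(X, B)
one_t = np.ones((len(X), m)) / m
Kt_c = Kt - one_t @ K - Kt @ one + one_t @ K @ one
Z = Kt_c @ alphas                         # enhanced features for all 500 vectors
```

The enhanced features `Z` would then feed the multiple classifiers that are combined in the ensemble.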

Incremental Eigenspace Model Applied To Kernel Principal Component Analysis

  • Kim, Byung-Joo
    • Journal of the Korean Data and Information Science Society / Vol. 14, No. 2 / pp.345-354 / 2003
  • An incremental kernel principal component analysis (IKPCA) is proposed for nonlinear feature extraction from data. One problem of batch kernel principal component analysis (KPCA) is that the computation becomes prohibitive when the data set is large. Another is that all eigenvectors must be recomputed in order to incorporate new data. IKPCA overcomes these problems by incrementally updating the eigenspace model. IKPCA requires less memory than batch KPCA and can easily be improved by re-learning the data. Our experiments show that IKPCA is comparable in performance to batch KPCA on classification problems with nonlinear data sets.
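The incremental eigenspace update can be illustrated in ordinary input space (the paper performs the analogous update in kernel space): each new sample is folded into a rank-k eigenspace model by solving a small (k+1)×(k+1) eigenproblem, so old data never need to be revisited. A hedged sketch following the classic incremental-eigenspace recipe, with invented toy data:

```python
import numpy as np

def update_eigenspace(mean, V, L, n, x, k=5):
    """Fold one new sample x into a rank-k eigenspace model (mean, V, L)
    built from n samples, without revisiting any old data."""
    dx = x - mean
    proj = V.T @ dx                      # coordinates inside the subspace
    resid = dx - V @ proj                # component outside the subspace
    rnorm = np.linalg.norm(resid)
    r = resid / rnorm if rnorm > 1e-12 else np.zeros_like(resid)
    k0 = V.shape[1]
    # Small (k0+1)x(k0+1) problem: old spectrum plus the new sample
    S = np.zeros((k0 + 1, k0 + 1))
    S[:k0, :k0] = np.diag(L) * n / (n + 1)
    g = np.concatenate([proj, [rnorm]])
    S += np.outer(g, g) * n / (n + 1) ** 2
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:k]
    V_new = np.column_stack([V, r]) @ vecs[:, idx]
    return mean + dx / (n + 1), V_new, vals[idx], n + 1

rng = np.random.default_rng(2)
data = rng.normal(size=(200, 10))
X0 = data[:20]                           # small batch initialization
mean = X0.mean(axis=0)
vals0, vecs0 = np.linalg.eigh(np.cov(X0.T, bias=True))
order = np.argsort(vals0)[::-1][:5]
V, L, n = vecs0[:, order], vals0[order], 20
for x in data[20:]:                      # one pass, one sample at a time
    mean, V, L, n = update_eigenspace(mean, V, L, n, x, k=5)
```

Only the rank-k model is stored, which is what makes the kernel-space version of this update memory-efficient compared with keeping the full N×N kernel matrix.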


Nonlinear Feature Extraction using Class-augmented Kernel PCA

  • Park, Myoung-Soo;Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC / Vol. 48, No. 5 / pp.7-12 / 2011
  • This paper proposes class-augmented kernel principal component analysis, a new method for extracting features suited to classifying data patterns. Among the subspace techniques widely used for feature extraction, the recently proposed class-augmented principal component analysis extracts accurate features for pattern classification without the computational problems of methods such as linear discriminant analysis. However, its features are restricted to linear combinations of the inputs, so for some data sets it cannot extract suitable features. To address this, we apply the kernel trick to class-augmented principal component analysis, extending it to a new subspace technique that can extract nonlinear features, and evaluate its performance experimentally.
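The augmentation step can be sketched directly: each observation is extended with a scaled one-hot encoding of its class, and KPCA is then run on the augmented vectors. A toy NumPy illustration (the class-weight factor, kernel, and data are invented, and the paper's exact augmentation may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))             # toy data patterns
y = rng.integers(0, 3, size=60)          # three classes

c = 2.0                                   # class-weight factor (my choice)
A = np.hstack([X, c * np.eye(3)[y]])      # class-augmented observations

# KPCA on the augmented vectors (RBF kernel) to get nonlinear features
gamma = 0.5
sq = np.sum(A**2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * A @ A.T))
n = len(A)
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:2]
Z = Kc @ (vecs[:, idx] / np.sqrt(vals[idx]))   # 2-D nonlinear features
```

The factor `c` controls how strongly class membership shapes the extracted components; with `c = 0` the sketch reduces to plain KPCA.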

The Kernel Trick for Content-Based Media Retrieval in Online Social Networks

  • Cha, Guang-Ho
    • Journal of Information Processing Systems / Vol. 17, No. 5 / pp.1020-1033 / 2021
  • Nowadays, online and mobile social network services (SNS) are very popular and widespread in our society and daily lives, used to instantly share, disseminate, and search information. In particular, SNS such as YouTube, Flickr, Facebook, and Amazon allow users to upload billions of images or videos and also provide a wealth of multimedia information to users. Information retrieval in multimedia-rich SNS is a useful but challenging task. Content-based media retrieval (CBMR) is the process of obtaining the image or video objects relevant to a given query from a collection of information sources. However, CBMR suffers from the curse of dimensionality due to the inherently high-dimensional features of media data. This paper investigates the effectiveness of the kernel trick in CBMR, specifically kernel principal component analysis (KPCA) for feature extraction and dimensionality reduction. KPCA is a nonlinear extension of linear principal component analysis (LPCA) that discovers nonlinear embeddings using the kernel trick. The fundamental idea of KPCA is to map the input data into a high-dimensional feature space through a nonlinear kernel function and then compute the principal components in that mapped space. Using the Gaussian kernel in our experiments, we compute the principal components of an image dataset in the transformed space and then use them as new feature dimensions for the image dataset. Moreover, KPCA can be applied to many domains beyond CBMR in which LPCA has been used to extract features and where a nonlinear extension would be effective. Our results from extensive experiments demonstrate that the potential of KPCA is very encouraging compared with LPCA in CBMR.
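In a retrieval setting, the retained components act as a compact index: database items are projected once, and a query is projected with the out-of-sample centering formula and matched by nearest neighbor. A hedged sketch with toy feature vectors (only the Gaussian kernel is taken from the paper; dimensions, bandwidth, and the nearest-neighbor step are my assumptions):

```python
import numpy as np

def gauss(A, B, gamma=0.1):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
db = rng.normal(size=(300, 64))          # image feature vectors (toy stand-in)
n = len(db)

K = gauss(db, db)
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:8]         # keep 8 kernel PCs as new feature dims
alphas = vecs[:, idx] / np.sqrt(vals[idx])
index = Kc @ alphas                      # reduced-dimension index of the database

def retrieve(q, topk=5):
    # Center the query's kernel row with the out-of-sample formula, then rank
    kq = gauss(q[None, :], db)
    u = np.ones((1, n)) / n
    kq_c = kq - u @ K - kq @ one + u @ K @ one
    zq = kq_c @ alphas
    return np.argsort(np.linalg.norm(index - zq, axis=1))[:topk]

hits = retrieve(db[7])                   # query with a known database item
```

Searching in 8 dimensions rather than 64 is what mitigates the dimensionality curse the abstract describes.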

An improved kernel principal component analysis based on sparse representation for face recognition

  • Huang, Wei;Wang, Xiaohui;Zhu, Yinghui;Zheng, Gengzhong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 6 / pp.2709-2729 / 2016
  • Representation-based classification, kernel methods, and sparse representation have received much attention in the field of face recognition. In this paper, we propose an improved kernel principal component analysis method based on sparse representation to improve the accuracy and robustness of face recognition. First, the distances between the test sample and all training samples in kernel space are estimated based on collaborative representation. Second, the S training samples with the smallest distances are selected, and kernel principal component analysis (KPCA) is used to extract the features exploited for classification. The proposed method implements sparse representation under ℓ2 regularization and performs feature extraction twice to improve robustness. We also investigate the relationship between accuracy and the sparseness coefficient, and between accuracy and dimensionality. Comparative experiments are conducted on the ORL, GT, and UMIST face databases. The experimental results show that the proposed method is more effective and robust than several state-of-the-art methods, including Sparse Representation based Classification (SRC), Collaborative Representation based Classification (CRC), KCRC, and Two Phase Test sample Sparse Representation (TPTSR).
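The two-phase structure can be sketched as follows. For simplicity, the first phase below uses plain kernel-space distances, ||φ(x)−φ(y)||² = k(x,x) − 2k(x,y) + k(y,y), instead of the paper's collaborative-representation estimate; the data and parameters are toy stand-ins:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(5)
train = rng.normal(size=(100, 16))       # toy stand-ins for face feature vectors
test = rng.normal(size=(16,))
S = 20                                    # neighborhood size

# Phase 1: kernel-space distances from the test sample to all training samples
k_xy = rbf(test[None, :], train)[0]
d2 = 1.0 - 2.0 * k_xy + 1.0               # RBF kernel: k(x, x) = 1
nearest = np.argsort(d2)[:S]              # the S closest training samples

# Phase 2: KPCA on the selected neighborhood to extract features
B = train[nearest]
K = rbf(B, B)
one = np.ones((S, S)) / S
Kc = K - one @ K - K @ one + one @ K @ one
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:5]
feats = Kc @ (vecs[:, idx] / np.sqrt(vals[idx]))
```

The features `feats` of the S selected neighbors would then be passed to the classifier.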

SVM-Guided Biplot of Observations and Variables

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / Vol. 20, No. 6 / pp.491-498 / 2013
  • We consider support vector machines (SVM) to predict Y from p numerical variables $X_1$, ${\ldots}$, $X_p$. This paper aims to build a biplot of the p explanatory variables in which the first dimension indicates the direction of the SVM classification and/or regression fit. We use the geometric scheme of kernel principal component analysis, adapted to map n observations onto a two-dimensional projection plane of which one axis is determined a priori by an SVM model.
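The construction can be sketched for the linear-kernel special case: one biplot axis is the (given) SVM weight direction, and the second is the leading principal component of the data after that direction is projected out. The weight vector below is an invented stand-in for a fitted SVM model, and the data are toy values:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 5))             # n observations, p = 5 variables (toy)
X = X - X.mean(axis=0)

w = np.array([1.0, -0.5, 0.2, 0.0, 0.3]) # stand-in for a fitted SVM weight vector
u1 = w / np.linalg.norm(w)               # axis 1: the SVM direction

R = X - np.outer(X @ u1, u1)             # remove the axis-1 component
_, _, Vt = np.linalg.svd(R, full_matrices=False)
u2 = Vt[0]                               # axis 2: leading PC of the remainder

scores = np.column_stack([X @ u1, X @ u2])   # observation coordinates
arrows = np.column_stack([u1, u2])           # variable arrows for the biplot
```

Plotting `scores` as points and `arrows` as vectors yields a biplot whose horizontal axis is exactly the SVM fit direction.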

Analysis of Kernel Hardness of Korean Wheat Cultivars

  • Hong, Byung-Hee;Park, Chul-Soo
    • Korean Journal of Crop Science / Vol. 44, No. 1 / pp.78-85 / 1999
  • To investigate kernel hardness in 79 Korean wheat varieties and experimental lines, a compression test, a physical method widely used to measure the hardness of individual kernels, was performed together with measurement of friabilin (15 kDa), a protein strongly associated with kernel hardness that was recently developed as a biochemical marker for evaluating it. Based on a scatter diagram of principal components computed from the compression-test parameters, the 79 varieties were classified into three groups. Because conventional methods require large amounts of flour to analyze friabilin, owing to its relatively small quantity in wheat kernels, they are of limited use for quality prediction in wheat breeding programs. In this experiment, friabilin was successfully extracted from the starch of a single kernel by cesium chloride gradient centrifugation. Of the 79 varieties and lines, 50 (63.3%) exhibited a friabilin band and 29 (36.7%) did not. Lines with high maximum force and a lower ratio of minimum to maximum force lacked the friabilin band. Identification of friabilin, the product of a major gene, could be applied in screening procedures for kernel hardness. The single-kernel friabilin analysis system was found to be an easy, simple, and effective screening method for early-generation materials in a wheat breeding program for quality improvement.


Fault Detection of a Proposed Three-Level Inverter Based on a Weighted Kernel Principal Component Analysis

  • Lin, Mao;Li, Ying-Hui;Qu, Liang;Wu, Chen;Yuan, Guo-Qiang
    • Journal of Power Electronics / Vol. 16, No. 1 / pp.182-189 / 2016
  • This study focuses on fault detection to ensure the high reliability of a proposed three-level inverter. Kernel principal component analysis (KPCA) has been widely used for feature extraction because of its simplicity. However, highlighting useful information that may be hidden in the retained kernel principal components (KPCs) remains a problem. A weighted KPCA is proposed to overcome this shortcoming. Variable contribution plots are constructed to evaluate the importance of each KPC on the basis of sensitivity analysis theory; different weights are then assigned to the KPCs to highlight the useful information, and the weighted statistics are evaluated comprehensively using the improved feature eigenvectors. The effectiveness of the proposed method is validated: the diagnosis results for the inverter indicate that it is superior to conventional KPCA.
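The weighting idea can be illustrated on a T²-style monitoring statistic: conventional KPCA weights each retained KPC only by the inverse of its eigenvalue, whereas the weighted variant additionally scales each KPC by an importance weight. All numbers below are invented stand-ins (the paper derives its weights from sensitivity analysis and contribution plots):

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy scores on 6 retained kernel principal components (stand-in for KPCA output)
Z = rng.normal(size=(200, 6)) * np.array([3.0, 2.0, 1.5, 1.0, 0.7, 0.5])

lam = Z.var(axis=0)                       # eigenvalue estimate per KPC
wgt = np.array([1.0, 1.0, 2.0, 2.0, 0.5, 0.5])  # illustrative importance weights

# Weighted T^2 statistic: the conventional version uses wgt = 1 everywhere
t2 = np.sum((Z**2 / lam) * wgt, axis=1)
limit = np.quantile(t2, 0.99)             # empirical 99% control limit
alarms = np.nonzero(t2 > limit)[0]        # samples flagged as potential faults
```

Raising a component's weight makes the statistic more sensitive to faults that manifest along that component, which is the effect the abstract attributes to the weighting.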

On-line Nonlinear Principal Component Analysis for Nonlinear Feature Extraction

  • Kim, Byung-Joo;Shim, Joo-Yong;Hwang, Chang-Ha;Kim, Il-Gon
    • Journal of KIISE: Software and Applications / Vol. 31, No. 3 / pp.361-368 / 2004
  • This paper proposes OL-NPCA (On-line Nonlinear Principal Component Analysis), a new technique for extracting nonlinear features from on-line training data. Kernel PCA is a representative method for nonlinear feature extraction, but it has two drawbacks. First, applying kernel PCA to N training samples requires storing an N${\times}$N kernel matrix and computing its eigenvectors, which becomes impractical when N is large. Second, the eigenspace must be recomputed whenever new training data are added. OL-NPCA resolves these problems with an incremental eigenspace update technique and a feature mapping function. Experiments on toy data and large data sets show two advantages of OL-NPCA: it is considerably more memory-efficient than standard kernel PCA, and its performance is comparable. Moreover, OL-NPCA is easily improved by re-learning the data.