• Title/Summary/Keyword: Nonlinear Feature Extraction

Study of Nonlinear Feature Extraction for Faults Diagnosis of Rotating Machinery (회전기계의 결함진단을 위한 비선형 특징 추출 방법의 연구)

  • Widodo, Achmad;Yang, Bo-Suk
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2005.11a
    • /
    • pp.127-130
    • /
    • 2005
  • Many feature extraction methods have been developed. Recently, principal component analysis (PCA) and independent component analysis (ICA) have been introduced for feature extraction. PCA and ICA linearly transform the original input into new uncorrelated and independent feature spaces, respectively. In this paper, the feasibility of using nonlinear feature extraction is studied. This method employs the PCA and ICA procedures and adopts the kernel trick to nonlinearly map the data into a feature space. The goal of this study is to find features that are effective for fault classification.
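
The kernel trick this abstract refers to can be sketched directly: build a kernel (similarity) matrix, center it in the implicit feature space, and keep the leading eigenvectors. A minimal NumPy sketch of kernel PCA (the RBF kernel and the `gamma` value are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: ordinary PCA carried out in the
    implicit feature space induced by the kernel."""
    # RBF (Gaussian) kernel matrix from pairwise squared distances
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix (equivalent to centering in feature space)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenvectors give the nonlinear principal components
    vals, vecs = np.linalg.eigh(Kc)             # ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 1e-12))

# Two concentric rings: linearly inseparable, a classic KPCA test case
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)]
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (200, 2)
```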

On-line Nonlinear Principal Component Analysis for Nonlinear Feature Extraction (비선형 특징 추출을 위한 온라인 비선형 주성분분석 기법)

  • 김병주;심주용;황창하;김일곤
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.3
    • /
    • pp.361-368
    • /
    • 2004
  • The purpose of this study is to propose a new on-line nonlinear PCA (OL-NPCA) method for nonlinear feature extraction from incremental data. Kernel PCA (KPCA) is widely used for nonlinear feature extraction; however, KPCA has the following problems. First, applying KPCA to N patterns requires storing and finding the eigenvectors of an N×N kernel matrix, which is infeasible for a large number of data points N. Second, in order to update the eigenvectors with additional data, the whole eigenspace must be recomputed. OL-NPCA overcomes these problems by an incremental eigenspace update method with a feature mapping function. According to the experimental results, obtained by applying OL-NPCA to a toy problem and a large-scale data problem, OL-NPCA shows the following advantages. First, OL-NPCA is more efficient in memory requirements than KPCA. Second, OL-NPCA is comparable in performance to KPCA. Furthermore, the performance of OL-NPCA can be easily improved by re-learning the data.
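
The scale problem the abstract describes is easy to quantify, and the incremental idea can be illustrated with a linear analogue. The sketch below is not the paper's OL-NPCA (which also uses a feature mapping function); it only shows how an eigenspace can be maintained from running batch statistics instead of a stored N×N matrix:

```python
import numpy as np

# An N x N kernel matrix is the bottleneck the abstract describes:
n = 100_000
print(f"{n * n * 8 / 1e9:.0f} GB for a float64 kernel matrix")  # 80 GB

def incremental_eigenspace(batches, n_components=2):
    """Maintain running sums over batches and diagonalize only the small
    d x d covariance, so no batch needs to be kept after processing."""
    s, ss, count = None, None, 0
    for B in batches:
        B = np.asarray(B, float)
        if s is None:
            s, ss = B.sum(0), B.T @ B
        else:
            s, ss = s + B.sum(0), ss + B.T @ B
        count += len(B)
    mean = s / count
    cov = ss / count - np.outer(mean, mean)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    return vecs[:, ::-1][:, :n_components]    # top principal directions

rng = np.random.default_rng(0)
W = incremental_eigenspace(rng.normal(size=(100, 5)) for _ in range(10))
print(W.shape)  # (5, 2)
```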

Modified Kernel PCA Applied To Classification Problem (수정된 커널 주성분 분석 기법의 분류 문제에의 적용)

  • Kim, Byung-Joo;Sim, Joo-Yong;Hwang, Chang-Ha;Kim, Il-Kon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.243-248
    • /
    • 2003
  • An incremental kernel principal component analysis (IKPCA) is proposed for nonlinear feature extraction from data. The problem with batch kernel principal component analysis (KPCA) is that the computation becomes prohibitive when the data set is large. Another problem is that, in order to update the eigenvectors with additional data, the whole eigenspace must be recomputed. IKPCA overcomes these problems by incrementally computing the eigenspace model and the empirical kernel map. IKPCA is more efficient in memory requirements than batch KPCA and can be easily improved by re-learning the data. In our experiments we show that IKPCA is comparable in performance to batch KPCA for feature extraction and classification problems on nonlinear data sets.

Nonlinear Feature Extraction using Class-augmented Kernel PCA (클래스가 부가된 커널 주성분분석을 이용한 비선형 특징추출)

  • Park, Myoung-Soo;Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.5
    • /
    • pp.7-12
    • /
    • 2011
  • In this paper, we propose a new feature extraction method, named Class-augmented Kernel Principal Component Analysis (CA-KPCA), which can extract nonlinear features for classification. Among the subspace methods widely used for feature extraction, Class-augmented Principal Component Analysis (CA-PCA) is a recent one that can extract features for accurate classification without the computational difficulties of other methods such as Linear Discriminant Analysis (LDA). However, the features extracted by CA-PCA are still restricted to a linear subspace of the original data space, which limits the use of this method for various problems requiring nonlinear features. To resolve this limitation, we apply the kernel trick to develop a new version of CA-PCA that extracts nonlinear features, and evaluate its performance through experiments using data sets from the UCI Machine Learning Repository.
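
The class-augmentation idea can be sketched by appending a scaled one-hot label block to each training sample before the kernelized projection. The weighting below is an illustrative assumption, not the paper's exact formulation, and scikit-learn's `KernelPCA` stands in for the kernelized step:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def class_augment(X, y, weight=1.0):
    """Append a scaled one-hot encoding of the class label to each
    sample; the weight controlling the label block's influence is an
    illustrative free parameter here."""
    classes = np.unique(y)
    onehot = (y[:, None] == classes[None, :]).astype(float)
    return np.hstack([X, weight * onehot])

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = rng.integers(0, 3, size=60)

Xa = class_augment(X, y, weight=2.0)   # label-augmented training data
Z = KernelPCA(n_components=2, kernel="rbf").fit_transform(Xa)
print(Z.shape)  # (60, 2)
```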

Nonlinear feature extraction for regression problems (회귀문제를 위한 비선형 특징 추출 방법)

  • Kim, Seongmin;Kwak, Nojun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.11a
    • /
    • pp.86-88
    • /
    • 2010
  • In this paper, we propose a nonlinear feature extraction method for regression problems and apply it to classification. The method extends Linear Discriminant Analysis for regression (LDAr), which applies the previously proposed linear discriminant analysis to regression problems, to the nonlinear case. We accomplish this extension by means of a kernel function. The basic idea is to map the input feature space into a new high-dimensional feature space using the kernel function, and then maximize the ratio of the distances between far-apart sample pairs to those between close sample pairs. In applications such as face recognition, factors such as face size and rotation are generally regarded as nonlinear and complex regression problems. In this paper, we performed simple experiments on regression problems and obtained improved results compared with those of LDAr.
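
The LDAr objective described above, maximizing the ratio of scatter between far-apart target pairs to scatter between close pairs, can be sketched in its linear form; the paper's contribution is kernelizing it. The threshold `tau` and the regularization term below are illustrative assumptions:

```python
import numpy as np

def ldar_direction(X, y, tau):
    """One projection direction for LDA-for-regression (LDAr), a sketch.

    Pairs whose targets differ by more than tau act like 'between-class'
    pairs; the rest act like 'within-class' pairs. The direction
    maximizing the ratio of the two pairwise scatter matrices is the
    leading eigenvector of Sw^{-1} Sb."""
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            outer = np.outer(X[i] - X[j], X[i] - X[j])
            if abs(y[i] - y[j]) > tau:
                Sb += outer        # far-apart targets: spread these out
            else:
                Sw += outer        # close targets: keep these together
    # Small ridge term regularizes Sw before inversion
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    return np.real(vecs[:, np.argmax(np.real(vals))])

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)
w = ldar_direction(X, y, tau=2.0)
print(w.shape)  # (3,)
```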

Incremental Eigenspace Model Applied To Kernel Principal Component Analysis

  • Kim, Byung-Joo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.2
    • /
    • pp.345-354
    • /
    • 2003
  • An incremental kernel principal component analysis (IKPCA) is proposed for nonlinear feature extraction from data. The problem with batch kernel principal component analysis (KPCA) is that the computation becomes prohibitive when the data set is large. Another problem is that, in order to update the eigenvectors with additional data, all the eigenvectors must be recomputed. IKPCA overcomes this problem by incrementally updating the eigenspace model. IKPCA is more efficient in memory requirements than batch KPCA and can be easily improved by re-learning the data. In our experiments we show that IKPCA is comparable in performance to batch KPCA for classification problems on nonlinear data sets.

An efficient learning algorithm of nonlinear PCA neural networks using momentum (모멘트를 이용한 비선형 주요성분분석 신경망의 효율적인 학습알고리즘)

  • Cho, Yong-Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.3 no.4
    • /
    • pp.361-367
    • /
    • 2000
  • This paper proposes efficient feature extraction of image data using nonlinear principal component analysis (PCA) neural networks with a new learning algorithm. The proposed method is a learning algorithm with momentum for reflecting past trends. It achieves better performance by restraining oscillation so that the learning converges to the global optimum. The proposed algorithm has been applied to a cancer image of 256×256 pixels and a coin image of 128×128 pixels, respectively. The simulation results show that the proposed algorithm performs better in terms of convergence and nonlinear feature extraction than backpropagation and conventional nonlinear PCA neural networks.
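
The momentum idea in this abstract is the standard one: a velocity term accumulates past gradients so that oscillation across a narrow valley is damped. A minimal sketch on a poorly conditioned quadratic (the learning rate and momentum coefficient are illustrative, not the paper's values):

```python
import numpy as np

def sgd_momentum(grad, w0, lr, mu, steps):
    """Gradient descent with momentum: the velocity term carries past
    gradient trends, damping oscillation in narrow valleys."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w)   # accumulate past gradients
        w = w + v                   # momentum update
    return w

# Poorly conditioned quadratic f(w) = 10*w0^2 + 0.1*w1^2, where plain
# gradient descent oscillates along the steep first axis
grad = lambda w: np.array([20.0 * w[0], 0.2 * w[1]])
w = sgd_momentum(grad, [1.0, 1.0], lr=0.05, mu=0.8, steps=200)
print(np.allclose(w, 0.0, atol=1e-2))  # True
```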

UFKLDA: An unsupervised feature extraction algorithm for anomaly detection under cloud environment

  • Wang, GuiPing;Yang, JianXi;Li, Ren
    • ETRI Journal
    • /
    • v.41 no.5
    • /
    • pp.684-695
    • /
    • 2019
  • In a cloud environment, performance degradation, or even downtime, of virtual machines (VMs) usually appears gradually along with anomalous states of VMs. To better characterize the state of a VM, all possible performance metrics are collected. For such high-dimensional datasets, this article proposes a feature extraction algorithm based on unsupervised fuzzy linear discriminant analysis with kernel (UFKLDA). By introducing the kernel method, UFKLDA can not only effectively deal with non-Gaussian datasets but also implement nonlinear feature extraction. Two sets of experiments were undertaken. In discriminability experiments, this article introduces quantitative criteria to measure discriminability among all classes of samples. The results show that UFKLDA improves discriminability compared with other popular feature extraction algorithms. In detection accuracy experiments, this article computes accuracy measures of an anomaly detection algorithm (i.e., C-SVM) on the original performance metrics and extracted features. The results show that anomaly detection with features extracted by UFKLDA improves the accuracy of detection in terms of sensitivity and specificity.
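
The two accuracy measures used in the detection experiments, sensitivity and specificity, are computed from the confusion matrix as follows (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate), the two accuracy measures used to compare detectors."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # anomalies caught
    fn = np.sum((y_true == 1) & (y_pred == 0))  # anomalies missed
    tn = np.sum((y_true == 0) & (y_pred == 0))  # normals passed
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(f"{sens:.3f} {spec:.3f}")  # 0.667 0.500
```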

Low Dimensional Modeling and Synthesis of Head-Related Transfer Function (HRTF) Using Nonlinear Feature Extraction Methods (비선형 특징추출 기법에 의한 머리전달함수(HRTF)의 저차원 모델링 및 합성)

  • Seo, Sang-Won;Kim, Gi-Hong;Kim, Hyeon-Seok;Kim, Hyeon-Bin;Lee, Ui-Taek
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.5
    • /
    • pp.1361-1369
    • /
    • 2000
  • For the implementation of a 3D sound localization system, binaural filtering by HRTFs is generally employed. But the HRTF filter is of high order and its coefficients for all directions have to be stored, which imposes a rather large memory requirement. To cope with this, research has centered on obtaining low-dimensional HRTF representations without significant loss of information and synthesizing the original HRTFs efficiently, by means of feature extraction methods for multivariate data, including PCA. In this prior research, conventional linear PCA was applied to the frequency-domain HRTF data, and using a relatively small number of principal components the original HRTFs could be approximately synthesized. In this paper we apply a neural-network-based nonlinear PCA model (NLPCA) and a nonlinear PLS regression model (NLPLS) to this low-dimensional HRTF modeling and analyze the results in comparison with PCA. The NLPCA, which projects data onto nonlinear surfaces, showed more efficient HRTF feature extraction than linear PCA, and the NLPLS regression model, which incorporates direction information in feature extraction, yielded more stable results in synthesizing general HRTFs not included in the model training.

Maximum Simplex Volume based Landmark Selection for Isomap (최대 부피 Simplex 기반의 Isomap을 위한 랜드마크 추출)

  • Chi, Junhwa
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.5
    • /
    • pp.509-516
    • /
    • 2013
  • Since traditional linear feature extraction methods are unable to handle the nonlinear characteristics often exhibited in hyperspectral imagery, nonlinear feature extraction, also known as manifold learning, is receiving increased attention in the hyperspectral remote sensing community as well as others. Isomap, one of the most widely used manifold learning methods, generally promises good results in classification and spectral unmixing tasks, but its significantly high computational overhead is problematic, especially for large-scale remotely sensed data. Using a small subset of distinguishing points, referred to as landmarks, has been proposed as a solution. This study proposes a new robust and controllable landmark selection method based on maximizing the volume of the simplex spanned by the landmarks. Experiments are conducted to compare classification accuracies, with standard deviations, according to sampling method, number of landmarks, and processing time. The proposed method achieves both good classification accuracy and computational efficiency.
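
A greedy version of maximum-simplex-volume landmark selection can be sketched as follows: since the volume added by a new landmark is proportional to its distance from the affine span of the current landmarks, each step picks the point with the largest such residual. This is an illustrative reading of the criterion, not the paper's exact algorithm:

```python
import numpy as np

def max_volume_landmarks(X, k):
    """Greedily grow a landmark set so that each new point maximizes the
    volume of the simplex spanned by the landmarks: the volume increment
    is proportional to the point's residual distance from the affine
    span of the landmarks chosen so far."""
    # Seed with the point farthest from the data centroid
    landmarks = [int(np.argmax(np.linalg.norm(X - X.mean(0), axis=1)))]
    for _ in range(k - 1):
        base = X[landmarks[0]]
        if len(landmarks) == 1:
            resid = X - base
        else:
            E = (X[landmarks[1:]] - base).T   # edge vectors of the simplex
            Q, _ = np.linalg.qr(E)            # orthonormal basis of the span
            V = X - base
            resid = V - (V @ Q) @ Q.T         # component off the affine span
        landmarks.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
    return landmarks

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
print(max_volume_landmarks(X, 4))
```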