• Title/Summary/Keyword: Data Principal

Incremental Eigenspace Model Applied To Kernel Principal Component Analysis

  • Kim, Byung-Joo
    • Journal of the Korean Data and Information Science Society / v.14 no.2 / pp.345-354 / 2003
  • An incremental kernel principal component analysis (IKPCA) is proposed for nonlinear feature extraction from data. One problem of batch kernel principal component analysis (KPCA) is that the computation becomes prohibitive when the data set is large. Another problem is that, in order to update the eigenvectors with new data, all of the eigenvectors must be recomputed. IKPCA overcomes these problems by incrementally updating the eigenspace model. IKPCA requires less memory than batch KPCA and can easily be improved by re-learning the data. Our experiments show that IKPCA is comparable in performance to batch KPCA on classification problems with nonlinear data sets.
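For context, the cost that motivates the incremental approach comes from the batch formulation, in which an n-by-n kernel matrix must be built, centered, and eigendecomposed. The sketch below shows only that batch step; the RBF kernel, its parameter, and the toy data are assumptions, and the authors' incremental eigenspace update is not reproduced here.

```python
# Minimal batch KPCA sketch (illustrative; not the authors' IKPCA algorithm).
# The O(n^2) kernel matrix and its eigendecomposition are what become
# prohibitive for large n, which motivates the incremental update.
import numpy as np

def batch_kpca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    # RBF (Gaussian) kernel matrix
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)
    # Center the kernel matrix in feature space
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the centered Gram matrix: O(n^3) work, O(n^2) memory
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    # Projections of the training data onto the kernel principal components
    return Kc @ alphas

X = np.random.default_rng(0).normal(size=(200, 5))
Z = batch_kpca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (200, 2)
```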

Sensitivity Analysis in Principal Component Regression with Quadratic Approximation

  • Shin, Jae-Kyoung;Chang, Duk-Joon
    • Journal of the Korean Data and Information Science Society / v.14 no.3 / pp.623-630 / 2003
  • Recently, Tanaka (1988) derived two influence functions related to the eigenvalue problem $(A-\lambda_s I)v_s=0$ of a real symmetric matrix A and used them for sensitivity analysis in principal component analysis. In this paper, we deal with the perturbation expansions of the same functions up to quadratic terms and discuss their application to sensitivity analysis in principal component regression analysis (PCRA). A numerical example is given to show how the approximation improves with the quadratic term.
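For background, the standard perturbation expansion of a simple eigenvalue of a symmetric matrix up to the quadratic term (a textbook result, not a formula taken from this paper) is:

```latex
% Second-order perturbation of a simple eigenvalue \lambda_s of a symmetric
% matrix A with orthonormal eigenvectors v_1,\dots,v_p, under
% A(\varepsilon) = A + \varepsilon B (standard result, for context only):
\lambda_s(\varepsilon)
  = \lambda_s
  + \varepsilon\, v_s^\top B\, v_s
  + \varepsilon^2 \sum_{t \neq s} \frac{(v_t^\top B\, v_s)^2}{\lambda_s - \lambda_t}
  + O(\varepsilon^3)
```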

Utilizing Principal Component Analysis in Unsupervised Classification Based on Remote Sensing Data

  • Lee, Byung-Gul;Kang, In-Joan
    • Proceedings of the Korean Environmental Sciences Society Conference / 2003.11a / pp.33-36 / 2003
  • Principal component analysis (PCA) was used to improve image classification by an unsupervised classification technique, K-means. To do this, we selected a Landsat TM scene of Jeju Island, Korea, and applied two forms of PCA: unstandardized PCA (UPCA) and standardized PCA (SPCA). The accuracy of the image classification of the Jeju area was estimated with error matrices derived from three unsupervised classification methods. The error matrices indicated that classifications performed on the first three principal components from UPCA and SPCA of the scene were more accurate than those performed on the seven bands of TM data, and that the results of UPCA and SPCA were better than those of the raw Landsat TM data. The classification of TM data by the K-means algorithm was particularly poor at distinguishing different land covers on the island. From the classification results, we also found that the principal-component-based classifications had characteristics independent of the unsupervised techniques (numerical algorithms), while the TM-data-based classifications were strongly dependent on the techniques. This means that PCA data have uniform characteristics for image classification that are less affected by the choice of classification scheme. We also found that the UPCA results were better than the SPCA results, since UPCA preserves a wider range of digital numbers in an image.
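A minimal sketch of this kind of workflow using scikit-learn, with synthetic data standing in for the Landsat TM scene; the band count and component count follow the abstract, while the number of clusters and everything else are assumptions.

```python
# Sketch: unsupervised classification on the first three principal components
# of a multiband image, versus on the raw bands (illustrative; the actual
# Landsat TM scene, class count, and accuracy assessment are not reproduced).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.normal(size=(10_000, 7))          # stand-in for 7 TM bands (rows = pixels)

# Unstandardized PCA (covariance-based); standardize the bands first for SPCA.
pca = PCA(n_components=3)
scores = pca.fit_transform(pixels)             # first three principal components

labels_pca = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
labels_raw = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
print(labels_pca.shape, labels_raw.shape)
```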

Principal Component Analysis Based Two-Dimensional (PCA-2D) Correlation Spectroscopy: PCA Denoising for 2D Correlation Spectroscopy

  • Jung, Young-Mee
    • Bulletin of the Korean Chemical Society / v.24 no.9 / pp.1345-1350 / 2003
  • Principal component analysis based two-dimensional (PCA-2D) correlation analysis is applied to FTIR spectra of a polystyrene/methyl ethyl ketone/toluene solution mixture during solvent evaporation. A substantial amount of artificial noise was added to the experimental data to demonstrate the practical noise-suppressing benefit of the PCA-2D technique. 2D correlation analysis of the data matrix reconstructed from PCA loading vectors and scores successfully extracted only the most important features of synchronicity and asynchronicity, without interference from noise or insignificant minor components. 2D correlation spectra constructed with only one principal component yield a strictly synchronous response with no discernible asynchronous features, while those involving two or more principal components generate meaningful asynchronous 2D correlation spectra. Deliberate manipulation of the rank of the reconstructed data matrix, by choosing the appropriate number and type of PCs, yields potentially more refined 2D correlation spectra.
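A minimal sketch of the two ingredients described here, PCA reconstruction of the data matrix from a few components and generalized 2D correlation (synchronous/asynchronous) spectra, assuming toy data and an arbitrary number of retained components.

```python
# Sketch of PCA denoising followed by generalized 2D correlation (Noda's
# synchronous/asynchronous spectra). Illustrative only: synthetic spectra
# stand in for the FTIR data, and the number of retained PCs is an assumption.
import numpy as np

def pca_reconstruct(Y, n_pc):
    """Reconstruct the data matrix from its first n_pc principal components."""
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    return (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc, :] + Y.mean(axis=0)

def twod_correlation(Y):
    """Synchronous and asynchronous 2D correlation spectra of dynamic spectra Y
    (rows = perturbation points, columns = spectral variables)."""
    m = Y.shape[0]
    Yd = Y - Y.mean(axis=0)                      # dynamic spectra
    sync = Yd.T @ Yd / (m - 1)
    # Hilbert-Noda transformation matrix
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = Yd.T @ N @ Yd / (m - 1)
    return sync, asyn

rng = np.random.default_rng(0)
Y = rng.normal(size=(15, 200))                   # 15 spectra x 200 wavenumbers (toy data)
sync, asyn = twod_correlation(pca_reconstruct(Y, n_pc=2))
print(sync.shape, asyn.shape)                    # (200, 200) (200, 200)
```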

Regional Geological Mapping by Principal Component Analysis of the Landsat TM Data in a Heavily Vegetated Area (식생이 무성한 지역에서의 Principal Component Analysis 에 의한 Landsat TM 자료의 광역지질도 작성)

  • 朴鍾南;徐延熙
    • Korean Journal of Remote Sensing / v.4 no.1 / pp.49-60 / 1988
  • Principal Component Analysis (PCA) was applied for regional geological mapping to a multivariate data set of Landsat TM data over the heavily vegetated and topographically rugged Chungju area. The multivariate data set was selected by statistical analysis based on the magnitude of the regression sum of squares in multiple regression, and it includes R1/2/R3/4, R2/3, R5/7/R4/3, R1/2, R3/4, R4/3, and R4/5. As a result of applying PCA, some of the later principal components (in this study, PC 3 and PC 5) are geologically more significant than the earlier major components, here PC 1 and PC 2. The first two major components, which comprise 96% of the total information in the data set, mainly represent vegetation reflectance and topographic effects. Although the remaining components carry only 3% of the total information, which statistically suggests that this information is unstable, the geological significance of PC 3 and PC 5 in this study implies that applying the technique in more favorable areas should lead to much better results.
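As an illustration of the kind of analysis described, the sketch below builds a few band-ratio features, runs PCA, and inspects the variance captured by the leading components and the loadings of a later component; the data are synthetic and the ratio list is abridged.

```python
# Sketch: PCA on Landsat band-ratio features, checking how much variance the
# leading components absorb and what a later component loads on. Illustrative
# only: random data stands in for the TM bands.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
bands = {i: rng.uniform(1, 255, size=50_000) for i in range(1, 8)}   # toy TM bands 1-7

features = np.column_stack([
    bands[1] / bands[2],        # R1/2
    bands[2] / bands[3],        # R2/3
    bands[3] / bands[4],        # R3/4
    bands[4] / bands[3],        # R4/3
    bands[4] / bands[5],        # R4/5
    bands[5] / bands[7],        # R5/7
])

pca = PCA().fit(features)
print(pca.explained_variance_ratio_)   # earlier PCs dominate (vegetation/topography)
print(pca.components_[2])              # loadings of PC 3, a candidate geological signal
```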

Algorithm for Finding the Best Principal Component Regression Models for Quantitative Analysis using NIR Spectra (근적외 스펙트럼을 이용한 정량분석용 최적 주성분회귀모델을 얻기 위한 알고리듬)

  • Cho, Jung-Hwan
    • Journal of Pharmaceutical Investigation / v.37 no.6 / pp.377-395 / 2007
  • Near infrared (NIR) spectral data have been used for the noninvasive analysis of various biological samples. However, absorption bands in the NIR region overlap extensively, so it is very difficult to select the proper wavelengths of spectral data that give the best PCR (principal component regression) models for the analysis of constituents of biological samples. The NIR data were used after polynomial smoothing and first-order differentiation with Savitzky-Golay filters. To find the best PCR models, all possible combinations of the available principal components from the given NIR spectral data were generated by in-house programs written in MATLAB. All of the generated PCR models were compared in terms of SEC (standard error of calibration), $R^2$, SEP (standard error of prediction), and SECP (standard error of calibration and prediction) to find the best combination of principal components among the initial PCR models. The initial PCR models were found by SEC or Malinowski's indicator function, and a priori selection of spectral points was examined in terms of the correlation coefficients between the NIR data at each wavelength and the corresponding concentrations. To test the developed program, aqueous solutions of BSA (bovine serum albumin) and glucose were prepared and analyzed. As a result, the best PCR models were found using a priori selection of spectral points and final model selection by SEP or SECP.
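A compressed sketch of the workflow described here, with synthetic spectra in place of the measured NIR data and plain RMSE in place of the SEC/SEP/SECP bookkeeping; the original MATLAB programs are not reproduced.

```python
# Sketch: Savitzky-Golay smoothing/1st-derivative preprocessing, then an
# exhaustive search over subsets of principal components for the PCR model
# with the lowest calibration error (illustrative only).
from itertools import combinations
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 300))                  # toy NIR spectra (40 samples x 300 points)
y = rng.normal(size=40)                         # toy concentrations

# Savitzky-Golay smoothing and first derivative along the wavelength axis
Xd = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

pca = PCA(n_components=6).fit(Xd)
T = pca.transform(Xd)                           # PC scores

best = None
for k in range(1, 7):
    for combo in combinations(range(6), k):     # all possible subsets of PCs
        cols = list(combo)
        model = LinearRegression().fit(T[:, cols], y)
        rmse = np.sqrt(np.mean((y - model.predict(T[:, cols])) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, combo)
print(best)                                     # lowest calibration error and its PC subset
```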

Improvement on Fuzzy C-Means Using Principal Component Analysis

  • Choi, Hang-Suk;Cha, Kyung-Joon
    • Journal of the Korean Data and Information Science Society / v.17 no.2 / pp.301-309 / 2006
  • In this paper, we present an improved fuzzy c-means clustering method. For the improvement, we apply a second clustering step based on principal component analysis to the objects located in the common region of two or more clusters. In addition, we make use of the degree of membership (probability), which is an advantage of fuzzy c-means. Simulation results show some improvement in accuracy for data inside and outside the overlapped region at a membership probability of 0.7.
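One way to read "objects located in the common region" is objects whose largest fuzzy membership is not dominant. The sketch below shows a generic fuzzy c-means and such a flagging rule; it is not the authors' procedure, and the 0.7 threshold is an assumption.

```python
# Minimal fuzzy c-means sketch, used only to illustrate flagging objects whose
# memberships are spread across clusters for a second, PCA-based clustering pass.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                   # memberships sum to 1 per object
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
U, centers = fuzzy_c_means(X, c=2)

# Objects whose highest membership is below 0.7 sit in the overlapped region
# and would be handed to the PCA-based second clustering step (assumed rule).
overlap = U.max(axis=1) < 0.7
print(overlap.sum(), "objects flagged for re-clustering")
```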

Evaluation of Water Quality Using Multivariate Statistical Analysis in Busan Coastal Area

  • Kim, Sang-Soo;Cho, Jang-Sik
    • Journal of the Korean Data and Information Science Society / v.15 no.3 / pp.531-542 / 2004
  • Principal component analysis and cluster analysis were conducted to comprehensively evaluate the water quality of the Busan coastal area, using data collected seasonally from the analysis of surface water at 10 stations from 1997 to 2003. The first principal component was interpreted as a factor related to the input of nutrient-rich fresh water, and the second principal component as reflecting meteorological characteristics. We also found that the water quality at stations 4 and 9 differed from that at the other stations in the Busan coastal area.

Data anomaly detection and Data fusion based on Incremental Principal Component Analysis in Fog Computing

  • Yu, Xue-Yong;Guo, Xin-Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.10 / pp.3989-4006 / 2020
  • Intelligent agriculture monitoring is based on the perception and analysis of environmental data, which enables monitoring of the production environment and control of environmental regulation equipment. As the scale of such applications continues to expand, a large amount of data is generated at the perception layer and uploaded to the cloud service, which brings challenges of insufficient bandwidth and processing capacity. A fog-based offline and real-time hybrid data analysis architecture is proposed in this paper, which combines offline and real-time analysis to enable real-time data processing on resource-constrained IoT devices. Furthermore, we propose a data processing algorithm based on incremental principal component analysis, which can achieve data dimensionality reduction and updating of the principal components. We also introduce the Squared Prediction Error (SPE) value and realize data anomaly detection by combining the SPE value with a data fusion algorithm. To ensure the accuracy and effectiveness of the algorithm, we design a regular-SPE hybrid model update strategy, which enables the principal components to be updated on demand when data anomalies are found. In addition, this strategy can significantly reduce the growth in resource consumption caused by the data analysis architecture. Simulations based on practical datasets have confirmed that the proposed algorithm can perform data fusion and anomaly processing in real time on resource-constrained devices, and that our model update strategy can reduce overall system resource consumption while ensuring the accuracy of the algorithm.
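A minimal sketch of the incremental-PCA-plus-SPE idea, using scikit-learn's IncrementalPCA as a stand-in for the paper's algorithm; the threshold and the buffered on-demand update below are assumptions, not the regular-SPE hybrid strategy itself.

```python
# Sketch: incremental PCA with a Squared Prediction Error (SPE, or Q-statistic)
# check on each new sample; samples that exceed the threshold are queued and
# used to update the principal components on demand (illustrative only).
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
stream = rng.normal(size=(500, 8))              # toy sensor stream (500 samples x 8 features)

ipca = IncrementalPCA(n_components=3)
ipca.partial_fit(stream[:100])                  # initial model from the first batch

threshold = 15.0                                # assumed SPE threshold
buffer = []
for x in stream[100:]:
    x_hat = ipca.inverse_transform(ipca.transform(x.reshape(1, -1)))
    spe = float(np.sum((x - x_hat) ** 2))       # squared prediction error
    if spe > threshold:
        buffer.append(x)                        # anomalous sample: queue a model update
    if len(buffer) >= 10:                       # update the PCs on demand with the buffer
        ipca.partial_fit(np.vstack(buffer))
        buffer.clear()
```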

A study on principal component analysis using penalty method (페널티 방법을 이용한 주성분분석 연구)

  • Park, Cheolyong
    • Journal of the Korean Data and Information Science Society / v.28 no.4 / pp.721-731 / 2017
  • In this study, principal component analysis methods using the Lasso penalty are introduced. There are two popular methods that apply the Lasso penalty to principal component analysis. The first finds an optimal vector of linear combination as the coefficient vector of a regression of each principal component on the original data matrix with a Lasso penalty (an elastic net penalty in general). The second finds an optimal vector of linear combination by minimizing the residual matrix obtained from approximating the original matrix by the singular value decomposition, again with a Lasso penalty. We review these two penalized principal component methods in detail and show that they are especially advantageous when applied to data sets that have more variables than cases. The methods are also compared in an application to a real data set using R; specifically, they are applied to the crime data in Ahamad (1967), which has more variables than cases.
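A minimal sketch of the second approach: a sparse loading vector obtained by soft-thresholding within a rank-1 SVD approximation. This is an illustration in Python rather than the paper's R code, and the penalty level and toy data are assumptions.

```python
# Sketch of sparse PCA via a penalized rank-1 SVD approximation: alternate
# between soft-thresholding the loading vector and renormalizing the score
# direction (illustrative only; not the authors' implementation).
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_pc(X, lam=0.1, n_iter=100):
    """Rank-1 sparse PCA: approximately minimize ||X - u v^T||_F^2 + penalty on v."""
    Xc = X - X.mean(axis=0)
    u = np.linalg.svd(Xc, full_matrices=False)[0][:, 0]   # warm start from the SVD
    v = np.zeros(Xc.shape[1])
    for _ in range(n_iter):
        v = soft_threshold(Xc.T @ u, lam)                  # sparse loading update
        norm = np.linalg.norm(Xc @ v)
        if norm == 0:
            break
        u = Xc @ v / norm                                  # unit-norm score direction
    return v / (np.linalg.norm(v) or 1.0)                  # sparse loading vector

rng = np.random.default_rng(0)
X = rng.normal(size=(14, 18))                  # more variables (18) than cases (14)
print(np.round(sparse_pc(X, lam=0.5), 3))      # many loadings shrunk exactly to zero
```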