• Title/Summary/Keyword: Data Principal


On principal component analysis for interval-valued data (구간형 자료의 주성분 분석에 관한 연구)

  • Choi, Soojin;Kang, Kee-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.1
    • /
    • pp.61-74
    • /
    • 2020
  • Interval-valued data, one type of symbolic data, are observed in the form of intervals rather than single values. Each interval-valued observation has an internal variation. Principal component analysis reduces the dimension of data by maximizing the variance of the data. Therefore, principal component analysis of interval-valued data should account for the variance between observations as well as the variation within the observed intervals. In this paper, three principal component analysis methods for interval-valued data are summarized. In addition, a new method using a truncated normal distribution is proposed in place of the uniform distribution in the conventional quantile method, because we believe there is more information near the center point of the interval. The methods are compared using simulations and a relevant data set from the OECD. In the case of the quantile method, we draw a scatter plot of the principal components and then identify the position and distribution of the quantiles by the arrow-line representation method.
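A minimal sketch of the quantile idea described in this abstract: each interval is replaced by a few quantile points drawn from a truncated normal centered at the interval midpoint (rather than a uniform), and ordinary PCA is run on the expanded point cloud. The function name, the choice of within-interval scale, and the toy data are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import truncnorm
from sklearn.decomposition import PCA

def interval_quantile_pca(lower, upper, probs=(0.25, 0.5, 0.75), n_components=2):
    """lower, upper: (n_obs, n_vars) arrays of interval endpoints."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mid = (lower + upper) / 2.0
    scale = (upper - lower) / 4.0 + 1e-12       # assumed spread inside each interval
    a = (lower - mid) / scale                   # standardized truncation bounds
    b = (upper - mid) / scale
    # One quantile point per probability level, per observation and variable.
    points = np.vstack([truncnorm.ppf(p, a, b, loc=mid, scale=scale) for p in probs])
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(points)
    return pca, scores

# Toy example: 5 observations, 3 interval-valued variables.
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 3))
half_widths = rng.uniform(0.1, 0.5, size=(5, 3))
pca, scores = interval_quantile_pca(centers - half_widths, centers + half_widths)
print(pca.explained_variance_ratio_)
```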

On Robust Principal Component Analysis using Neural Networks (신경망을 이용한 로버스트 주성분 분석에 관한 연구)

  • Kim, Sang-Min;Oh, Kwang-Sik;Park, Hee-Joo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.7 no.1
    • /
    • pp.113-118
    • /
    • 1996
  • Principal component analysis (PCA) is an essential technique for data compression and feature extraction, and has been widely used in statistical data analysis, communication theory, pattern recognition, and image processing. Oja (1992) found that a linear neuron with a constrained Hebbian learning rule can extract the principal component by using a stochastic gradient ascent method. In practice, real data often contain outliers, and these outliers significantly deteriorate the performance of PCA algorithms. In order to make PCA robust, Xu & Yuille (1995) applied statistical physics to the problem of robust principal component analysis (RPCA). Devlin et al. (1981) obtained principal components by using techniques such as M-estimation. The purpose of this paper is to investigate, from the statistical point of view, how Xu & Yuille's (1995) RPCA works under the same simulation conditions as in Devlin et al. (1981).

  • PDF
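A minimal sketch of Oja's constrained Hebbian rule for extracting the first principal component, the building block discussed in the abstract above. The soft weight that down-weights large-residual samples is only a rough stand-in for the statistical-physics formulation of Xu & Yuille (1995), not their algorithm; all parameter values are illustrative.

```python
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50, robust_c=None, seed=0):
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                      # center the data
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in rng.permutation(X):            # one stochastic update per sample
            y = w @ x                           # neuron output
            r = np.linalg.norm(x - y * w)       # reconstruction residual
            g = 1.0 if robust_c is None else np.exp(-(r / robust_c) ** 2)
            w += lr * g * y * (x - y * w)       # (robustified) Oja update
            w /= np.linalg.norm(w)              # keep the weight vector normalized
    return w

# Data with a dominant direction plus a few gross outliers.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.diag([3.0, 0.5])
X[:5] += 20.0
print("plain  :", oja_first_pc(X))
print("robust :", oja_first_pc(X, robust_c=5.0))
```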

Data Visualization using Linear and Non-linear Dimensionality Reduction Methods

  • Kim, Junsuk;Youn, Joosang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.12
    • /
    • pp.21-26
    • /
    • 2018
  • As large amounts of data can now be stored efficiently, methods for extracting meaningful features from big data have become important. In particular, techniques that convert high-dimensional data into low-dimensional representations are crucial for data visualization. In this study, principal component analysis (PCA, a linear dimensionality reduction technique) and Isomap (a non-linear dimensionality reduction technique) are introduced and applied to neural big data obtained by functional magnetic resonance imaging (fMRI). First, we investigate how well the physical properties of the stimuli are maintained after the dimensionality reduction. We then compare the residual variance of the two methods to quantify the amount of information that is not explained. As a result, dimensionality reduction using Isomap retains more information than principal component analysis. Our results demonstrate that it is necessary to consider not only linear but also nonlinear characteristics in big data analysis.
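A minimal sketch of the comparison described in this abstract: reduce the same data with PCA (linear) and Isomap (non-linear) and compare the residual variance, computed here as 1 - R² between pairwise distances before and after the embedding. The S-curve data set stands in for the fMRI data, which is not public; the neighborhood size is an assumption.

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from scipy.spatial.distance import pdist

X, _ = make_s_curve(n_samples=800, random_state=0)

def residual_variance(X_high, X_low):
    # 1 - squared correlation between original and embedded pairwise distances
    d_high, d_low = pdist(X_high), pdist(X_low)
    r = np.corrcoef(d_high, d_low)[0, 1]
    return 1.0 - r ** 2

for name, model in [("PCA", PCA(n_components=2)),
                    ("Isomap", Isomap(n_neighbors=10, n_components=2))]:
    emb = model.fit_transform(X)
    print(f"{name:6s} residual variance: {residual_variance(X, emb):.3f}")
```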

Principal Component Analysis of Compositional Data using Box-Cox Contrast Transformation (Box-Cox 대비변환을 이용한 구성비율자료의 주성분분석)

  • 최병진;김기영
    • The Korean Journal of Applied Statistics
    • /
    • v.14 no.1
    • /
    • pp.137-148
    • /
    • 2001
  • Compositional data, found in many practical applications, consist of non-negative vectors of proportions with the constraint that the sum of the elements of each vector is unity. It is well known that the statistical analysis of compositional data suffers from the unit-sum constraint. Moreover, the non-linear patterns frequently displayed by such data do not facilitate the application of linear multivariate techniques such as principal component analysis. In this paper we develop a new type of principal component analysis for compositional data using the Box-Cox contrast transformation. Numerical illustrations are provided for comparative purposes.

  • PDF
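A minimal sketch of PCA after a Box-Cox-type contrast transformation of compositional data. The specific contrast used here, Box-Cox power transforms contrasted against the within-sample geometric mean (which reduces to the centered log-ratio as lambda approaches 0), is an assumption about the paper's construction and is shown only to illustrate the workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

def boxcox_contrast(X, lam=0.3):
    """X: (n, p) compositions, rows summing to 1, strictly positive."""
    X = np.asarray(X, float)
    gm = np.exp(np.log(X).mean(axis=1, keepdims=True))    # geometric mean per row
    if lam == 0:
        return np.log(X / gm)                              # centered log-ratio limit
    return (X ** lam - gm ** lam) / lam                    # Box-Cox contrast

rng = np.random.default_rng(0)
comp = rng.dirichlet(alpha=[4, 2, 1, 1], size=100)         # toy compositional data
Z = boxcox_contrast(comp, lam=0.3)
pca = PCA(n_components=2).fit(Z)
print(pca.explained_variance_ratio_)
```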

Study on Principal Sentiment Analysis of Social Data (소셜 데이터의 주된 감성분석에 대한 연구)

  • Jang, Phil-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.12
    • /
    • pp.49-56
    • /
    • 2014
  • In this paper, we propose a method for identifying hidden principal sentiments in large-scale texts from documents, social data, the internet, and blogs by analyzing standard language, slang, argot, abbreviations, and emoticons. The IRLBA (Implicitly Restarted Lanczos Bidiagonalization Algorithm) is used for principal component analysis of the resulting large-scale sparse matrix. The proposed system consists of data acquisition, message analysis, sentiment evaluation, sentiment analysis and integration, and result visualization modules. The suggested approach would help improve the accuracy and expand the application scope of sentiment analysis for social data.
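A minimal sketch of the core numerical step described in this abstract: build a sparse term-document matrix from short social-media style messages and extract a few principal directions with a truncated SVD. SciPy's Lanczos-based svds stands in here for the IRLBA routine named in the abstract; the tiny message list and the token handling are purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.feature_extraction.text import CountVectorizer

messages = [
    "great phone love it :)",
    "worst service ever :(",
    "lol this update is awesome",
    "ugh battery dies so fast",
    "love the new camera, awesome pics",
    "terrible, never buying again",
]

# token_pattern kept loose so emoticon-like tokens survive vectorization
vec = CountVectorizer(token_pattern=r"[^\s]+")
X = vec.fit_transform(messages).astype(float)       # sparse (n_messages, n_terms)

# A full PCA would center the columns first; raw counts are used here for brevity.
U, s, Vt = svds(X, k=2)
order = np.argsort(s)[::-1]                         # svds returns ascending order
scores = U[:, order] * s[order]                     # message scores on the components
print(scores.round(2))
```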

Robust Design for Multiple Quality Characteristics using Principal Component Analysis

  • Kwon, Yong-Man;Hong, Yeon-Woong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.3
    • /
    • pp.545-551
    • /
    • 2003
  • Robust design aims to identify settings of the control factors that make a system's performance robust to changes in the noise factors, which represent the sources of variation. In this paper we propose a way to simultaneously optimize multiple quality characteristics using principal component analysis, a multivariate statistical technique. An example is presented to compare the proposed approach with an existing method.

  • PDF
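A minimal sketch of one common way to combine multiple quality characteristics with PCA in robust design: normalize the per-run signal-to-noise ratios of each characteristic and use the first principal component score as a single performance index. The array of SN ratios is made up, and the exact weighting scheme in the paper may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = experimental runs (e.g. an L8 array), columns = SN ratios of three
# quality characteristics (larger-the-better).
sn = np.array([
    [24.1, 18.3, 30.2],
    [25.6, 17.9, 29.8],
    [23.4, 19.5, 31.0],
    [26.2, 18.8, 30.5],
    [24.9, 20.1, 29.4],
    [25.1, 19.2, 30.9],
    [23.8, 18.1, 29.1],
    [26.5, 20.4, 31.3],
])

z = (sn - sn.mean(axis=0)) / sn.std(axis=0, ddof=1)    # normalize each characteristic
pca = PCA().fit(z)
index = z @ pca.components_[0]                          # first-PC performance index
best_run = int(np.argmax(index))
print("explained variance:", pca.explained_variance_ratio_.round(3))
print("best run by composite index:", best_run)
```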

Motion Recognition using Principal Component Analysis

  • Kwon, Yong-Man;Kim, Jong-Min
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.4
    • /
    • pp.817-823
    • /
    • 2004
  • This paper describes a three-dimensional motion recognition algorithm and a system that adopts the algorithm for non-contact human-computer interaction. From a sequence of stereo images, five feature regions are extracted with a simple color segmentation algorithm and then used in a three-dimensional locus calculation process. However, the result is unstable and noisy, so we introduce a principal component analysis method to obtain more robust motion recognition results. This method can overcome the weakness of conventional algorithms since it directly uses three-dimensional information for motion recognition.

  • PDF
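A minimal sketch in the spirit of the abstract above: the noisy tracked three-dimensional locus of one feature region is projected onto its dominant principal directions and back, discarding the noise-dominated component. The synthetic trajectory and the choice of two retained components are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 120)
locus = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])  # planar gesture trajectory
noisy = locus + rng.normal(scale=0.15, size=locus.shape)           # tracking noise

pca = PCA(n_components=2).fit(noisy)
denoised = pca.inverse_transform(pca.transform(noisy))             # back-projection in 3-D

err_before = np.mean(np.linalg.norm(noisy - locus, axis=1))
err_after = np.mean(np.linalg.norm(denoised - locus, axis=1))
print(f"mean error before {err_before:.3f}, after {err_after:.3f}")
```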

On Sensitivity Analysis in Principal Component Regression

  • Kim, Soon-Kwi;Park, Sung H.
    • Journal of the Korean Statistical Society
    • /
    • v.20 no.2
    • /
    • pp.177-190
    • /
    • 1991
  • In this paper, we discuss and review various measures that have been presented for studying outliers, high-leverage points, and influential observations when principal component regression is adopted. We suggest several diagnostic measures for use with principal component regression. A numerical example is presented in which some individual data points are flagged as outliers, high-leverage points, or influential points.

  • PDF
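A minimal sketch of simple diagnostics for principal component regression: fit the regression on the leading principal component scores, then flag high-leverage points via the hat matrix and outliers via internally studentized residuals. The thresholds and toy data are illustrative, not the specific measures derived in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(scale=0.3, size=60)
X[0] += 6.0                                   # plant a high-leverage point
y[1] += 4.0                                   # plant an outlier in the response

k = 2
Z = PCA(n_components=k).fit_transform(X)      # PC scores used as regressors
reg = LinearRegression().fit(Z, y)
resid = y - reg.predict(Z)

Z1 = np.column_stack([np.ones(len(Z)), Z])    # design matrix with intercept
H = Z1 @ np.linalg.inv(Z1.T @ Z1) @ Z1.T      # hat matrix
h = np.diag(H)
s2 = resid @ resid / (len(y) - k - 1)
t = resid / np.sqrt(s2 * (1 - h))             # internally studentized residuals

print("high leverage:", np.where(h > 2 * (k + 1) / len(y))[0])
print("outliers     :", np.where(np.abs(t) > 2.5)[0])
```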

Abnormality Detection to Non-linear Multivariate Process Using Supervised Learning Methods (지도학습기법을 이용한 비선형 다변량 공정의 비정상 상태 탐지)

  • Son, Young-Tae;Yun, Deok-Kyun
    • IE interfaces
    • /
    • v.24 no.1
    • /
    • pp.8-14
    • /
    • 2011
  • Principal component analysis (PCA) reduces the dimensionality of a process by creating a new set of variables, the principal components (PCs), which attempt to reflect the true underlying process dimension. However, for highly nonlinear processes this form of monitoring may not be efficient, since the process dimensionality cannot be represented by a small number of PCs. Examples include semiconductor, pharmaceutical, and chemical processes. Nonlinearly correlated process variables can be reduced to a set of nonlinear principal components through the application of kernel principal component analysis (KPCA). Support vector data description (SVDD), which has its roots in supervised learning theory, is a training algorithm based on structural risk minimization; its control limit does not depend on a distributional assumption but adapts to the real data. This paper therefore proposes a nonlinear process monitoring technique based on supervised learning methods and KPCA. Through simulated examples, it is shown that the proposed monitoring chart is more effective than the $T^2$ chart for nonlinear processes.
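A minimal sketch of the monitoring idea described in this abstract: map the normal operating data with kernel PCA, train a one-class boundary on the kernel-PC scores, and score new samples against it. scikit-learn's OneClassSVM is used here as a readily available stand-in for SVDD (the two are closely related with an RBF kernel); kernel parameters and the simulated process are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
# In-control data lie on a nonlinear (ring-shaped) manifold; faults collapse off it.
normal = np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(scale=0.05, size=(300, 2))
faulty = rng.normal(scale=0.2, size=(30, 2))

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0).fit(normal)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(kpca.transform(normal))

pred_normal = clf.predict(kpca.transform(normal))        # +1 = in control
pred_faulty = clf.predict(kpca.transform(faulty))        # -1 = out of control
print("false alarm rate:", np.mean(pred_normal == -1).round(3))
print("detection rate  :", np.mean(pred_faulty == -1).round(3))
```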

Optimal Thoracic Sound Data Extraction Using Principal Component Analysis (주성분 분석을 이용한 최적 흉부음 데이터 검출)

  • 임선희;박기영;최규훈;박강서;김종교
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2156-2159
    • /
    • 2003
  • Thoracic sounds are widely recognized as a useful means of examining thoracic disease. However, it is difficult to obtain consistent diagnostic data because recordings vary with the thoracic position, even for the same patient with thoracic disease. Therefore, it is necessary to normalize lung sound data objectively. In this paper, we detect data useful for medical examination by applying principal component analysis (PCA) to thoracic sound data and then present objective data on lung and heart sounds for thoracic disease.

  • PDF
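A minimal sketch in the spirit of the abstract above: frame a recorded sound signal, describe each frame by simple spectral features, run PCA on those features, and keep the frames closest to the center of the score space as the most representative segments. The synthetic signal, the feature choice, and the selection rule are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

fs = 4000                                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 150 * t) + 0.3 * rng.normal(size=t.size)  # stand-in lung sound

frame_len = fs // 2                                        # 0.5 s frames
frames = signal[: t.size - t.size % frame_len].reshape(-1, frame_len)
spectra = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-9)   # log-magnitude spectra

scores = PCA(n_components=3).fit_transform(spectra)
dist = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
representative = np.argsort(dist)[:3]                      # frames nearest the centroid
print("most representative frames:", representative)
```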