• Title/Summary/Keywords: Principal dimension estimation

16 search results (processed in 0.023 s)

Comprehensive studies of Grassmann manifold optimization and sequential candidate set algorithm in a principal fitted component model

  • Lee, Chaeyoung; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / Vol. 29, No. 6 / pp.721-733 / 2022
  • In this paper, we compare parameter estimation by Grassmann manifold optimization and the sequential candidate set algorithm in a structured principal fitted component (PFC) model. The structured PFC model generalizes the covariance matrix of the random error to relieve the limitations caused by an overly simple covariance form. However, unlike other PFC models, the structured PFC model has no closed-form solution for parameter estimation in dimension reduction, so numerical computation is required; it can be carried out by Grassmann manifold optimization or the sequential candidate set algorithm. We conducted numerical studies comparing the two methods through sequential dimension tests and trace correlation values, which measure performance in determining the dimension and in estimating the basis, respectively. We conclude that Grassmann manifold optimization outperforms the sequential candidate set algorithm in dimension determination, while the sequential candidate set algorithm is better at basis estimation. Application to real data led to the same conclusion.
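Because the structured PFC fit has no closed form, the basis must be found numerically. The following is a minimal numpy sketch of the Grassmann-manifold idea: gradient ascent with a tangent-space projection and a QR retraction, here maximizing tr(G'MG) for a generic symmetric matrix M. The objective, step size, and iteration count are illustrative assumptions, not the paper's actual likelihood or algorithm.

```python
import numpy as np

def grassmann_ascent(M, d, n_iter=500, step=0.05, seed=0):
    """Toy Grassmann optimization: maximize tr(G' M G) over
    d-dimensional subspaces by projected gradient ascent with a
    QR retraction back onto the manifold."""
    rng = np.random.default_rng(seed)
    p = M.shape[0]
    G, _ = np.linalg.qr(rng.standard_normal((p, d)))  # random start
    for _ in range(n_iter):
        grad = 2.0 * M @ G                    # Euclidean gradient
        grad -= G @ (G.T @ grad)              # project onto tangent space
        G, _ = np.linalg.qr(G + step * grad)  # retract to orthonormal frame
    return G
```

For a symmetric M the maximizer is the span of the top-d eigenvectors, which gives a simple way to check convergence of the iteration.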

Principal Component Regression by Principal Component Selection

  • Lee, Hosung; Park, Yun Mi; Lee, Seokho
    • Communications for Statistical Applications and Methods / Vol. 22, No. 2 / pp.173-180 / 2015
  • We propose a selection procedure for principal components in principal component regression. Instead of retaining a small number of leading principal components, our method selects principal components with variable selection procedures. The procedure consists of two steps to improve estimation and prediction. First, we reduce the number of principal components using conventional principal component regression to form a set of candidate principal components, and then select principal components from that candidate set using sparse regression techniques. The performance of our proposal is demonstrated numerically and compared with typical dimension reduction approaches (including principal component regression and partial least squares regression) on synthetic and real datasets.
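The two-step procedure above can be sketched in numpy under one simplifying observation: because principal component scores are mutually orthogonal, a lasso fit on them reduces to soft-thresholding the per-component least-squares coefficients. The function name and the fixed penalty are illustrative assumptions; the paper's actual procedure would tune both steps from the data.

```python
import numpy as np

def pcr_select(X, y, n_candidates, lam):
    """Step 1: keep the leading n_candidates PCs (ordinary PCR).
    Step 2: select among them with a lasso, which on orthogonal
    score columns is closed-form soft-thresholding."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = U[:, :n_candidates] * s[:n_candidates]   # candidate PC scores
    norms = (Z ** 2).sum(axis=0)                 # squared column norms
    beta_ols = Z.T @ yc / norms                  # per-PC least squares
    beta = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam / norms, 0.0)
    selected = np.flatnonzero(beta != 0)
    return beta, selected
```

Components whose thresholded coefficient is exactly zero are dropped, so the final model can skip a leading PC in favor of a later one that predicts y better.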

A concise overview of principal support vector machines and its generalization

  • Shin, Jungmin; Shin, Seung Jun
    • Communications for Statistical Applications and Methods / Vol. 31, No. 2 / pp.235-246 / 2024
  • In high-dimensional data analysis, sufficient dimension reduction (SDR) has been considered an attractive tool for reducing the dimensionality of predictors while preserving regression information. The principal support vector machine (PSVM) (Li et al., 2011) offers a unified approach for both linear and nonlinear SDR. This article comprehensively explores a variety of SDR methods based on the PSVM, which we call principal machines (PM) for SDR. The PM achieves SDR by solving a sequence of convex optimizations akin to popular supervised learning methods such as the support vector machine, logistic regression, and quantile regression, to name a few. This makes the PM straightforward to handle and extend in both theoretical and computational aspects, as we will see throughout this article.
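Since logistic regression is named as one instance of the principal machine family, a minimal numpy sketch of that instance is given below: dichotomize the response at several quantiles, fit an l2-regularized logistic regression for each split by gradient descent, and eigen-decompose the stacked normal vectors to estimate a basis of the central subspace. The cut points, penalty, and step size are assumptions for illustration, not values from the article.

```python
import numpy as np

def principal_logistic(X, y, cuts=(0.25, 0.5, 0.75),
                       n_iter=500, step=0.5, lam=0.01):
    """Principal-machine sketch with a logistic loss: each dichotomized
    response gives one separating direction; their outer-product sum is
    eigen-decomposed to order the SDR directions."""
    n, p = X.shape
    Z = (X - X.mean(0)) / X.std(0)                 # standardize predictors
    M = np.zeros((p, p))
    for q in cuts:
        t = 2.0 * (y > np.quantile(y, q)) - 1.0    # +/-1 labels at this cut
        w = np.zeros(p)
        for _ in range(n_iter):
            m = t * (Z @ w)                        # margins
            grad = lam * w - Z.T @ (t / (1.0 + np.exp(m))) / n
            w -= step * grad                       # gradient descent step
        M += np.outer(w, w)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, ::-1]                           # columns by importance
```

In a single-index model the leading returned column should align with the true index direction, which is the usual sanity check for linear SDR estimators.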

Intensive comparison of semi-parametric and non-parametric dimension reduction methods in forward regression

  • Shin, Minju; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / Vol. 29, No. 5 / pp.615-627 / 2022
  • The principal fitted component (PFC) model is a semi-parametric sufficient dimension reduction (SDR) method originally proposed in Cook (2007). According to Cook (2007), the PFC has a connection with other common non-parametric SDR methods, but the connection is limited to sliced inverse regression (Li, 1991) and ordinary least squares. Since there has been no direct comparison between the two approaches across various forward regressions to date, practical guidance is necessary for statistical practitioners. To fill this need, in this paper we newly derive a connection of the PFC to the covariance method (Yin and Cook, 2002), one of the most popular SDR methods. Intensive numerical studies have also been done to closely examine and compare the estimation performances of the semi- and non-parametric SDR methods for various forward regressions. The findings from the numerical studies are confirmed in a real data example.

Development of Preliminary Design Model for Ultra-Large Container Ships by Genetic Algorithm

  • Han, Song-I; Jung, Ho-Seok; Cho, Yong-Jin
    • International Journal of Ocean System Engineering / Vol. 2, No. 4 / pp.233-238 / 2012
  • In this study, we carried out a preliminary investigation of an ultra-large container ship, which is expected to be a high value-added vessel, and studied an optimized preliminary design technique for estimating its principal dimensions. Above all, we developed optimized dimension estimation models to reduce building costs and weight, using data on previous container ships from shipbuilding yards, and applied a generalized estimation model for shipping service costs. A genetic algorithm, which used the RFR (required freight rate) of a container ship as its fitness value, drove the optimization, and uncertainties in the shipping service environment were handled with a Monte-Carlo simulation. We used several processes to verify the estimated dimensions of an ultra-large container ship: we roughly determined the general arrangement of an ultra-large container ship up to 1500 TEU, checked the container loading capacity, estimated the weight, and so on. Through these processes, we evaluated the practical applicability of the preliminary design model.
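The optimization loop described above can be sketched as a tiny real-coded genetic algorithm in numpy: tournament selection, blend crossover, Gaussian mutation, and elitism, minimizing a fitness over bounded design variables such as (length, breadth, depth). The quadratic RFR surrogate used below is made up for illustration, and the Monte-Carlo uncertainty layer of the paper is omitted.

```python
import numpy as np

def ga_minimize(fitness, bounds, pop=60, gens=80, mut=0.02, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation scaled to the design bounds, and elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = rng.uniform(lo, hi, size=(pop, len(lo)))     # random population
    for _ in range(gens):
        f = np.apply_along_axis(fitness, 1, P)
        best = P[np.argmin(f)].copy()                # remember the elite
        i, j = rng.integers(0, pop, (2, pop))        # pairwise tournaments
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        a = rng.uniform(size=P.shape)                # blend crossover
        children = a * parents + (1.0 - a) * np.roll(parents, 1, axis=0)
        children += rng.normal(0.0, mut * (hi - lo), P.shape)  # mutation
        P = np.clip(children, lo, hi)                # respect design bounds
        P[0] = best                                  # carry the elite over
    f = np.apply_along_axis(fitness, 1, P)
    return P[np.argmin(f)]
```

With a real cost model, the fitness would evaluate the RFR for a candidate set of principal dimensions; here any callable mapping a design vector to a scalar works.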

Tutorial: Methodologies for sufficient dimension reduction in regression

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / Vol. 23, No. 2 / pp.105-117 / 2016
  • In this paper, as a sequel to the first tutorial, we discuss sufficient dimension reduction methodologies used to estimate the central subspace (sliced inverse regression, sliced average variance estimation), the central mean subspace (ordinary least squares, principal Hessian directions, iterative Hessian transformation), and the central $k^{th}$-moment subspace (covariance method). Large-sample tests to determine the structural dimensions of the three target subspaces are well established for most of the methodologies; in addition, a permutation test, which does not require large-sample distributions, is introduced and can be applied to the methodologies discussed in the paper. Theoretical relationships among the sufficient dimension reduction methodologies are also investigated, and real data analysis is presented for illustration purposes. A seeded dimension reduction approach is then introduced so that the methodologies can be applied to large p, small n regressions.
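Sliced inverse regression, the first central-subspace estimator named above, is compact enough to sketch in full. The numpy implementation below follows the standard recipe (standardize X, average it within slices of the ordered response, eigen-decompose the weighted covariance of the slice means); slice count and output dimension are left as user choices.

```python
import numpy as np

def sir(X, y, n_slices=10, d=2):
    """Sliced inverse regression (Li, 1991): the span of the leading
    eigenvectors of the between-slice covariance of E[Z | y] estimates
    the central subspace (up to the covariance standardization)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - X.mean(0)) @ S_inv_sqrt            # standardized predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for sl in np.array_split(order, n_slices):  # slice y into bins
        m = Z[sl].mean(0)                       # within-slice mean of Z
        M += len(sl) / n * np.outer(m, m)
    w, v = np.linalg.eigh(M)
    return S_inv_sqrt @ v[:, ::-1][:, :d]       # back to the X scale
```

For a forward linear model with roughly elliptical predictors, the first returned direction should align with the true coefficient vector, which makes the estimator easy to sanity-check.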

Model-based inverse regression for mixture data

  • Choi, Changhwan; Park, Chongsun
    • Communications for Statistical Applications and Methods / Vol. 24, No. 1 / pp.97-113 / 2017
  • This paper proposes a method for sufficient dimension reduction (SDR) of mixture data. We consider mixture data containing more than one component, each with a distinct central subspace, and adopt the approach of model-based sliced inverse regression (MSIR) for such data in a simple and intuitive manner, employing mixture probabilistic principal component analysis (MPPCA) to estimate each central subspace and to cluster the data points. Results from simulation studies and a real data set show that our method satisfactorily captures the appropriate central subspaces and is robust to the number of slices chosen. Discussions of root selection, estimation accuracy, and classification, together with the initial-value issues of MPPCA and related simulation results, are also provided.

A Fuzzy Neural Network Combining Wavelet Denoising and PCA for Sensor Signal Estimation

  • Na, Man-Gyun
    • Nuclear Engineering and Technology / Vol. 32, No. 5 / pp.485-494 / 2000
  • In this work, a fuzzy neural network is used to estimate a relevant sensor signal from other sensor signals. Noise components in the input signals to the fuzzy neural network are removed with a wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information; a lower-dimensional input space also usually reduces the time necessary to train the fuzzy neural network, and PCA simplifies the selection of input signals. The fuzzy neural network parameters are optimized by two learning methods: a genetic algorithm optimizes the antecedent parameters, and a least-squares algorithm solves for the consequent parameters. The proposed algorithm was verified through application to the pressurizer water level and hot-leg flowrate measurements in pressurized water reactors.
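The two preprocessing stages named above are simple enough to sketch without specialized libraries. Below is a one-level Haar wavelet shrinkage (a deliberately minimal stand-in for the paper's wavelet denoising; real applications would use deeper decompositions) followed by an SVD-based PCA projection of multichannel signals onto a lower-dimensional input space.

```python
import numpy as np

def haar_denoise(x, thr):
    """One-level Haar wavelet shrinkage: transform an even-length
    signal, soft-threshold the (noise-dominated) detail coefficients,
    then invert the transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

def pca_reduce(signals, d):
    """Project rows of a (samples x channels) matrix onto their
    leading d principal components to shrink the network input."""
    Z = signals - signals.mean(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:d].T
```

With the universal threshold sigma * sqrt(2 log n) the shrinkage suppresses most of the noise in the detail band while barely touching a smooth signal, so the denoised mean squared error drops below that of the raw noisy input.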


Bayesian inference of the cumulative logistic principal component regression models

  • Kyung, Minjung
    • Communications for Statistical Applications and Methods / Vol. 29, No. 2 / pp.203-223 / 2022
  • We propose a Bayesian approach to the cumulative logistic regression model for ordinal responses, based on orthogonal principal components obtained via singular value decomposition to address multicollinearity among predictors. The advantage of the suggested method is that dimension reduction and parameter estimation are considered simultaneously. To evaluate the performance of the proposed model, we conduct a simulation study with a high-dimensional, highly correlated explanatory matrix. We also fit the suggested method to real data concerning sprout- and scab-damaged kernels of wheat and compare it to an EM-based proportional-odds logistic regression model. Compared to the EM-based method, we argue that the proposed model works better for highly correlated, high-dimensional data, providing parameter estimates and good predictions.
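The likelihood at the heart of the model can be sketched directly: with principal component scores Z (columns from an SVD of the predictor matrix), the cumulative logistic model sets P(Y <= k | z) = sigmoid(theta_k - z'beta) with ordered cutpoints theta. The function below only evaluates this log-likelihood; the paper's Bayesian machinery (priors and posterior sampling) is not reproduced here.

```python
import numpy as np

def cumulative_logit_loglik(theta, beta, Z, y):
    """Log-likelihood of the cumulative logistic model on PC scores Z.
    y takes integer values 0..K; theta holds the K ordered cutpoints."""
    eta = Z @ beta
    # pad with -inf/+inf so that P(Y <= -1) = 0 and P(Y <= K) = 1
    cuts = np.concatenate(([-np.inf], theta, [np.inf]))
    with np.errstate(over="ignore"):          # exp(+inf) is handled below
        F = 1.0 / (1.0 + np.exp(-(cuts[None, :] - eta[:, None])))
    idx = np.arange(len(y))
    probs = F[idx, y + 1] - F[idx, y]         # category probabilities
    return np.log(probs).sum()
```

With beta = 0 the category probabilities depend only on the cutpoints, which gives a hand-checkable value for testing.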

Principal selected response reduction in multivariate regression

  • Yoo, Jae Keun
    • The Korean Journal of Applied Statistics / Vol. 34, No. 4 / pp.659-669 / 2021
  • Multivariate regression is a statistical methodology frequently used in diverse fields such as longitudinal and functional data analysis. Because of the dimensions of both the predictors and the responses, it suffers from the curse of dimensionality more severely than univariate regression. To overcome this problem, three model-based response dimension reduction methods were recently proposed in Yoo (2018) and Yoo (2019a). However, simulation studies show that, although the basic method of Yoo (2019a) is the least affected by the model, it does not estimate better than the better of the other two methods. To overcome this shortcoming, this paper proposes a selection algorithm that compares the result of the basic method with those of the other two methods and chooses the best method for the data at hand; we call this principal selected response reduction. Various simulation studies confirm that principal selected response reduction reduces the dimension more accurately than the basic method of Yoo (2019a) and selects the more desirable method in every case, demonstrating the practical usefulness of the proposed method.