• Title/Summary/Keyword: Principal dimension estimation

Comprehensive studies of Grassmann manifold optimization and sequential candidate set algorithm in a principal fitted component model

  • Lee, Chaeyoung; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.29 no.6, pp.721-733, 2022
  • In this paper, we compare parameter estimation by Grassmann manifold optimization and by the sequential candidate set algorithm in a structured principal fitted component (PFC) model. The structured PFC model extends the form of the covariance matrix of the random error to relieve the limits caused by an overly simple form of the matrix. However, unlike other PFC models, the structured PFC model has no closed form for parameter estimation in dimension reduction, which signals the need for numerical computation. The numerical computation can be done through Grassmann manifold optimization or the sequential candidate set algorithm. We conducted numerical studies to compare the two methods by computing the results of sequential dimension testing and trace correlation values, through which we can compare their performance in determining the dimension and estimating the basis. We conclude that Grassmann manifold optimization outperforms the sequential candidate set algorithm in dimension determination, while the sequential candidate set algorithm is better at basis estimation when conducting dimension reduction. We also applied the methods to real data, which yielded the same result.
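A minimal sketch of the Grassmann step, assuming a generic objective: the function below maximizes tr(Γ'MΓ) over the Grassmann manifold by projected gradient ascent with a QR retraction. The symmetric matrix M is a stand-in for the structured-PFC likelihood term, which is not reproduced here.

```python
import numpy as np

def grassmann_ascent(M, d, steps=500, lr=0.01, seed=0):
    """Maximize tr(G'MG) over the Grassmann manifold Gr(p, d)."""
    rng = np.random.default_rng(seed)
    p = M.shape[0]
    G, _ = np.linalg.qr(rng.standard_normal((p, d)))   # random starting basis
    for _ in range(steps):
        grad = 2.0 * M @ G                             # Euclidean gradient of tr(G'MG)
        tangent = grad - G @ (G.T @ grad)              # project onto the tangent space
        G, _ = np.linalg.qr(G + lr * tangent)          # QR retraction back to the manifold
    return G

# Toy check: the maximizer spans the top-d eigenspace of a symmetric M.
A = np.random.default_rng(1).standard_normal((6, 6))
Gamma = grassmann_ascent(A @ A.T, d=2)
```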

Principal Component Regression by Principal Component Selection

  • Lee, Hosung; Park, Yun Mi; Lee, Seokho
    • Communications for Statistical Applications and Methods, v.22 no.2, pp.173-180, 2015
  • We propose a selection procedure for principal components in principal component regression. Rather than simply taking a small subset of the leading principal components, our method selects principal components through variable selection procedures. The procedure consists of two steps to improve estimation and prediction. First, we reduce the number of principal components using conventional principal component regression to yield a candidate set of principal components; we then select principal components from the candidate set using sparse regression techniques. The performance of our proposals is demonstrated numerically and compared with typical dimension reduction approaches (including principal component regression and partial least squares regression) using synthetic and real datasets.
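A minimal sketch of the two-step procedure, with scikit-learn standing in for the authors' implementation: conventional PCA supplies the candidate components, and a lasso performs the sparse selection among them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X[:, 0] - 2 * X[:, 5] + rng.standard_normal(200)

# Step 1: conventional PCR reduction to a candidate set of components.
pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)

# Step 2: sparse selection among the candidate components.
lasso = LassoCV(cv=5).fit(Z, y)
selected = np.flatnonzero(lasso.coef_)   # indices of the retained components
```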

A concise overview of principal support vector machines and its generalization

  • Jungmin Shin; Seung Jun Shin
    • Communications for Statistical Applications and Methods, v.31 no.2, pp.235-246, 2024
  • In high-dimensional data analysis, sufficient dimension reduction (SDR) has been considered an attractive tool for reducing the dimensionality of predictors while preserving regression information. The principal support vector machine (PSVM) (Li et al., 2011) offers a unified approach for both linear and nonlinear SDR. This article comprehensively explores a variety of SDR methods based on the PSVM, which we call principal machines (PM) for SDR. The PM achieves SDR by solving a sequence of convex optimizations akin to popular supervised learning methods such as the support vector machine, logistic regression, and quantile regression. This makes the PM straightforward to handle and extend in both theoretical and computational aspects, as we will see throughout this article.
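A minimal sketch of the linear PSVM recipe in the spirit of Li et al. (2011): dichotomize the response at several cutpoints, fit a linear SVM to each dichotomy, and take leading eigenvectors of the accumulated normal vectors as the SDR directions. Weighting and centering details of the published method are simplified here.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)

M = np.zeros((5, 5))
for q in np.quantile(y, [0.25, 0.5, 0.75]):          # a sequence of dichotomies of Y
    svm = LinearSVC(C=1.0, max_iter=10000).fit(X, (y > q).astype(int))
    psi = svm.coef_.ravel()                          # normal vector of the separating hyperplane
    M += np.outer(psi, psi)                          # accumulate the candidate matrix

eigvals, eigvecs = np.linalg.eigh(M)
directions = eigvecs[:, ::-1][:, :1]                 # top eigenvector(s) estimate the subspace
```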

Intensive comparison of semi-parametric and non-parametric dimension reduction methods in forward regression

  • Shin, Minju; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.29 no.5, pp.615-627, 2022
  • The principal fitted component (PFC) model is a semi-parametric sufficient dimension reduction (SDR) method originally proposed in Cook (2007). According to Cook (2007), PFC has connections with the usual non-parametric SDR methods, but the connection is limited to sliced inverse regression (Li, 1991) and ordinary least squares. Since there has been no direct comparison between the two approaches across various forward regressions to date, practical guidance is needed for statistical practitioners. To fill this practical need, in this paper we newly derive a connection of PFC to covariance methods (Yin and Cook, 2002), one of the most popular SDR methods. Intensive numerical studies have also been done to closely examine and compare the estimation performance of the semi- and non-parametric SDR methods for various forward regressions. The findings from the numerical studies are confirmed in a real data example.
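For reference, a minimal sketch of sliced inverse regression (Li, 1991), the non-parametric side of the comparison: standardize the predictors, average them within slices of the response, and eigen-decompose the covariance of the slice means.

```python
import numpy as np

def sir(X, y, n_slices=5, d=1):
    """Sliced inverse regression: returns d estimated SDR directions."""
    n, p = X.shape
    Si = np.linalg.inv(np.linalg.cholesky(np.cov(X, rowvar=False)))
    Z = (X - X.mean(0)) @ Si.T                     # whitened predictors
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):   # slice on sorted response
        zbar = Z[chunk].mean(0)
        M += (len(chunk) / n) * np.outer(zbar, zbar)
    w, V = np.linalg.eigh(M)
    return Si.T @ V[:, ::-1][:, :d]                # back-transform the top directions
```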

Development of Preliminary Design Model for Ultra-Large Container Ships by Genetic Algorithm

  • Han, Song-I; Jung, Ho-Seok; Cho, Yong-Jin
    • International Journal of Ocean System Engineering, v.2 no.4, pp.233-238, 2012
  • In this study, we carried out a preliminary investigation of the ultra-large container ship, which is expected to be a high value-added vessel, and studied a preliminary optimized design technique for estimating its principal dimensions. Above all, we developed optimized dimension estimation models to reduce building costs and weight, using data on previous container ships from shipbuilding yards. We also applied a generalized estimation model to estimate the shipping service costs. A genetic algorithm, which uses the RFR (required freight rate) of a container ship as its fitness value, was used in the optimization, and uncertainties in the shipping service environment were handled with a Monte-Carlo simulation. We used several processes to verify the estimated dimensions of an ultra-large container ship: roughly determining the general arrangement up to 1500 TEU, checking the container loading capacity, estimating the weight, and so on. Through these processes, we evaluated the possibility of practical application of the preliminary design model.
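A minimal, hypothetical sketch of the optimization loop: a small genetic algorithm over the principal dimensions (L, B, D) whose fitness is a Monte-Carlo average of a required-freight-rate function. The rfr() economics below are placeholders, not the paper's cost model.

```python
import numpy as np

rng = np.random.default_rng(0)
LO = np.array([350.0, 45.0, 25.0])        # hypothetical lower bounds on L, B, D (m)
HI = np.array([420.0, 62.0, 33.0])        # hypothetical upper bounds

def rfr(dim, market):
    L, B, D = dim
    vol = L * B * D
    cost = 1.3e4 * vol ** 0.9             # placeholder building/operating cost
    teu = 3.5e-2 * vol                    # placeholder container capacity
    return cost / teu * (1.0 + market)    # cost per TEU, shocked by the market

def fitness(dim, n_mc=100):
    shocks = rng.normal(0.0, 0.05, n_mc)  # Monte-Carlo over market uncertainty
    return np.mean([rfr(dim, s) for s in shocks])

pop = rng.uniform(LO, HI, (30, 3))
for _ in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]                   # keep lowest-RFR designs
    i, j = rng.integers(0, 10, 20), rng.integers(0, 10, 20)
    mask = rng.random((20, 3)) < 0.5
    children = np.where(mask, parents[i], parents[j])        # uniform crossover
    children += rng.normal(0.0, 0.5, children.shape)         # mutation
    pop = np.vstack([parents, np.clip(children, LO, HI)])

best = pop[np.argmin([fitness(p) for p in pop])]
```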

Tutorial: Methodologies for sufficient dimension reduction in regression

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.23 no.2, pp.105-117, 2016
  • In this paper, as a sequel to the first tutorial, we discuss sufficient dimension reduction methodologies used to estimate the central subspace (sliced inverse regression, sliced average variance estimation), the central mean subspace (ordinary least squares, principal Hessian directions, iterative Hessian transformation), and the central $k^{th}$-moment subspace (covariance method). Large-sample tests to determine the structural dimensions of the three target subspaces are well derived in most of the methodologies; in addition, a permutation test, which does not require large-sample distributions, is introduced and can be applied to all the methodologies discussed in the paper. Theoretical relationships among the sufficient dimension reduction methodologies are also investigated, and real data analysis is presented for illustration. A seeded dimension reduction approach is then introduced so that the methodologies apply to large p, small n regressions.
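A minimal sketch of the permutation idea for structural dimension, here specialized to testing d = 0 with the SIR kernel: permuting the response destroys any regression structure, so the observed leading eigenvalue is compared to its permutation distribution. The published test extends this idea sequentially to larger candidate dimensions.

```python
import numpy as np

def sir_kernel(X, y, n_slices=5):
    """SIR candidate matrix on the whitened scale."""
    n, p = X.shape
    Z = (X - X.mean(0)) @ np.linalg.inv(np.linalg.cholesky(np.cov(X, rowvar=False))).T
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        zbar = Z[chunk].mean(0)
        M += (len(chunk) / n) * np.outer(zbar, zbar)
    return M

def perm_test_d0(X, y, n_perm=200, seed=0):
    """Permutation p-value for H0: d = 0 (no regression structure)."""
    rng = np.random.default_rng(seed)
    stat = np.linalg.eigvalsh(sir_kernel(X, y))[-1]          # observed top eigenvalue
    null = [np.linalg.eigvalsh(sir_kernel(X, rng.permutation(y)))[-1]
            for _ in range(n_perm)]                           # null from permuted responses
    return np.mean(np.array(null) >= stat)
```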

Model-based inverse regression for mixture data

  • Choi, Changhwan; Park, Chongsun
    • Communications for Statistical Applications and Methods, v.24 no.1, pp.97-113, 2017
  • This paper proposes a method for sufficient dimension reduction (SDR) of mixture data. We consider mixture data containing more than one component, each with a distinct central subspace, and adopt the approach of model-based sliced inverse regression (MSIR) for such data in a simple and intuitive manner. We employ mixture probabilistic principal component analysis (MPPCA) to estimate each central subspace and to cluster the data points. The results from simulation studies and a real data set show that our method satisfactorily captures the appropriate central subspaces and is robust to the number of slices chosen. Discussions of root selection, estimation accuracy, and classification, along with the initial-value issues of MPPCA and related simulation results, are also provided.
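A minimal sketch of the mixture strategy, with scikit-learn's Gaussian mixture standing in for MPPCA: cluster the observations first, then run sliced inverse regression within each component to recover component-specific central subspaces.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sir_directions(X, y, n_slices=5, d=1):
    """Sliced inverse regression within one mixture component."""
    n, p = X.shape
    Si = np.linalg.inv(np.linalg.cholesky(np.cov(X, rowvar=False)))
    Z = (X - X.mean(0)) @ Si.T
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        zbar = Z[chunk].mean(0)
        M += (len(chunk) / n) * np.outer(zbar, zbar)
    return Si.T @ np.linalg.eigh(M)[1][:, ::-1][:, :d]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (150, 4)), rng.normal(3, 1, (150, 4))])
y = np.where(X[:, 0] < 0, X[:, 1], X[:, 2]) + 0.1 * rng.standard_normal(300)

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
subspaces = [sir_directions(X[labels == k], y[labels == k]) for k in range(2)]
```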

A Fuzzy Neural Network Combining Wavelet Denoising and PCA for Sensor Signal Estimation

  • Na, Man-Gyun
    • Nuclear Engineering and Technology, v.32 no.5, pp.485-494, 2000
  • In this work, a fuzzy neural network is used to estimate a relevant sensor signal from other sensor signals. Noise components in the input signals to the fuzzy neural network are removed through a wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information; a lower-dimensional input space also usually reduces the time necessary to train the network. In addition, principal component analysis eases the selection of input signals for the fuzzy neural network. The fuzzy neural network parameters are optimized by two learning methods: a genetic algorithm optimizes the antecedent parameters, and a least-squares algorithm solves for the consequent parameters. The proposed algorithm was verified through application to the pressurizer water level and hot-leg flowrate measurements in pressurized water reactors.
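A minimal sketch of the preprocessing chain (the fuzzy neural network itself is omitted), assuming the PyWavelets and scikit-learn libraries: soft-threshold wavelet coefficients with the universal threshold to denoise each sensor channel, then project the denoised signals onto a few principal components to form a lower-dimensional network input.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise-level estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
raw = np.column_stack([np.sin(2 * np.pi * (k + 1) * t) + 0.3 * rng.standard_normal(1024)
                       for k in range(8)])                # eight noisy sensor channels
denoised = np.apply_along_axis(wavelet_denoise, 0, raw)
inputs = PCA(n_components=3).fit_transform(denoised)      # reduced network inputs
```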

Bayesian inference of the cumulative logistic principal component regression models

  • Kyung, Minjung
    • Communications for Statistical Applications and Methods, v.29 no.2, pp.203-223, 2022
  • We propose a Bayesian approach to the cumulative logistic regression model for ordinal responses, based on the orthogonal principal components obtained via singular value decomposition to account for multicollinearity among predictors. The advantage of the suggested method is that dimension reduction and parameter estimation are considered simultaneously. To evaluate the performance of the proposed model, we conduct a simulation study with a high-dimensional and highly correlated explanatory matrix. We also fit the suggested method to real data concerning sprout- and scab-damaged kernels of wheat and compare it to an EM-based proportional-odds logistic regression model. We argue that, compared to EM-based methods, the proposed model works better for highly correlated high-dimensional data, providing parameter estimates and good predictions.
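A minimal, non-Bayesian sketch of the model structure: orthogonal principal components from the SVD of the centered predictor matrix feed a cumulative (proportional-odds) logit. statsmodels' OrderedModel is a maximum-likelihood stand-in for the paper's posterior sampling.

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 40))  # correlated predictors
latent = 0.05 * (X @ rng.standard_normal(40))
y = np.digitize(latent, np.quantile(latent, [0.33, 0.66]))          # three ordinal levels

Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = U[:, :5] * s[:5]                       # leading orthogonal PC scores

res = OrderedModel(y, Z, distr="logit").fit(method="bfgs", disp=False)
print(res.params)                          # slopes for the PCs plus cutpoints
```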

Principal selected response reduction in multivariate regression

  • Yoo, Jae Keun
    • The Korean Journal of Applied Statistics, v.34 no.4, pp.659-669, 2021
  • Multivariate regression often appears in longitudinal or functional data analysis. Since multivariate regression involves multi-dimensional response variables, it is more strongly affected by the so-called curse of dimensionality than univariate regression. To overcome this issue, Yoo (2018) and Yoo (2019a) proposed three model-based response dimension reduction methodologies. According to various numerical studies in Yoo (2019a), the default method suggested there is the least sensitive to the simulated models, but it is not the best one. To address this issue, this paper proposes a selection algorithm that compares the other two methods with the default one; the approach is called principal selected response reduction. Various simulation studies show that the proposed method provides more accurate estimation results than the default method of Yoo (2019a), confirming its practical and empirical usefulness.
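A minimal, hypothetical sketch of selecting among candidate response reductions: given several candidate basis matrices, keep the one whose reduced response B'Y is best predicted from the predictors by cross-validated linear regression. The paper's actual comparison criterion is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def select_reduction(X, Y, candidates):
    """Pick the candidate basis whose reduced response is most predictable."""
    scores = []
    for B in candidates:                   # each B is (r x d), columns span a reduction
        Z = Y @ B                          # reduced multivariate response
        s = np.mean([cross_val_score(LinearRegression(), X, Z[:, j], cv=5).mean()
                     for j in range(Z.shape[1])])
        scores.append(s)
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = np.column_stack([X @ [1, 0, 0], X @ [0, 1, 0], rng.standard_normal(100)])
best = select_reduction(X, Y, [np.eye(3)[:, :2], np.eye(3)[:, 1:]])  # two toy bases
```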