• Title/Summary/Keyword: Regression algorithm


An efficient algorithm for the non-convex penalized multinomial logistic regression

  • Kwon, Sunghoon; Kim, Dongshin; Lee, Sangin
    • Communications for Statistical Applications and Methods / v.27 no.1 / pp.129-140 / 2020
  • In this paper, we introduce an efficient algorithm for non-convex penalized multinomial logistic regression that can be uniformly applied to a class of non-convex penalties. The class includes most non-convex penalties, such as the smoothly clipped absolute deviation, minimax concave, and bridge penalties. The algorithm is developed based on the concave-convex procedure and a modified local quadratic approximation algorithm. However, the usual quadratic approximation may slow down computation since the dimension of the Hessian matrix depends on the number of categories of the output variable. To address this issue, we use a uniform bound of the Hessian matrix in the quadratic approximation. The algorithm is available in the R package ncpen developed by the authors. Numerical studies via simulations and real data sets are provided for illustration.
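The uniform-bound idea can be sketched as follows. This is a minimal, unpenalized illustration (not the ncpen implementation): the multinomial logistic Hessian is replaced class-wise by the fixed Böhning-style bound B = ½X′X, so B is inverted once rather than refactorizing a Hessian whose size grows with the number of categories.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_multinomial_uniform_bound(X, y, K, n_iter=200):
    """Multinomial logistic regression via majorization-minimization.

    Instead of inverting the full (pK x pK) Hessian at every step, the
    Hessian is replaced by the fixed class-wise bound B = 0.5 * X'X
    (a Bohning-style uniform bound), so B is inverted only once.
    """
    n, p = X.shape
    B = 0.5 * (X.T @ X) + 1e-8 * np.eye(p)  # uniform Hessian bound
    B_inv = np.linalg.inv(B)
    Y = np.eye(K)[y]                        # one-hot responses
    W = np.zeros((p, K))
    for _ in range(n_iter):
        P = softmax(X @ W)
        G = X.T @ (Y - P)                   # score (gradient) per class
        W = W + B_inv @ G                   # Newton-like step with fixed bound
    return W
```

Because B dominates the true Hessian, each step is a valid majorization step and the log-likelihood increases monotonically.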

Regression Trees with Unbiased Variable Selection (변수선택 편향이 없는 회귀나무를 만들기 위한 알고리즘)

  • 김진흠; 김민호
    • The Korean Journal of Applied Statistics / v.17 no.3 / pp.459-473 / 2004
  • It is well known that the exhaustive search algorithm suggested by Breiman et al. (1984) tends to select, as the split variable, a variable that has relatively many possible splits. We propose an algorithm to overcome this variable selection bias and construct unbiased regression trees based on it. The proposed algorithm proceeds in two steps: selecting a split variable, and then determining a binary split rule based on that variable. Simulation studies were performed to compare the proposed algorithm with Breiman et al. (1984)'s CART (Classification and Regression Tree) in terms of the degree of variable selection bias, variable selection power, and MSE (mean squared error). We also illustrate the proposed algorithm with real data sets.
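The two-step idea can be sketched as follows. The variable-selection test here is a Pearson correlation p-value, an illustrative stand-in for the paper's test; the point is that the selection statistic does not depend on how many candidate split points a variable offers, so many-valued variables are not favored.

```python
import numpy as np
from scipy import stats

def select_split(X, y):
    """Two-step split selection for an unbiased regression tree.

    Step 1 picks the split variable with a test whose null distribution
    does not depend on the number of candidate split points (here a
    Pearson correlation p-value -- an illustrative stand-in for the
    paper's test); step 2 runs the exhaustive split search only within
    the chosen variable.
    """
    n, p = X.shape
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(p)]
    j = int(np.argmin(pvals))                 # step 1: split variable
    best_cut, best_sse = None, np.inf
    for c in np.unique(X[:, j])[:-1]:         # step 2: split point
        left, right = y[X[:, j] <= c], y[X[:, j] > c]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_cut, best_sse = c, sse
    return j, best_cut
```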

Sequential Adaptation Algorithm Based on Transformation Space Model for Speech Recognition (음성인식을 위한 변환 공간 모델에 근거한 순차 적응기법)

  • Kim, Dong-Kook; Chang, Joo-Hyuk; Kim, Nam-Soo
    • Speech Sciences / v.11 no.4 / pp.75-88 / 2004
  • In this paper, we propose a new approach to sequential linear regression adaptation of continuous density hidden Markov models (CDHMMs) based on a transformation space model (TSM). The proposed TSM, which characterizes the a priori knowledge of the training speakers associated with maximum likelihood linear regression (MLLR) matrix parameters, is effectively described in terms of latent variable models. The TSM provides various sources of information, such as correlation information, the prior distribution, and prior knowledge of the regression parameters, that are very useful for rapid adaptation. A quasi-Bayes (QB) estimation algorithm is formulated to incrementally update the hyperparameters of the TSM and the regression matrices simultaneously. Experimental results show that the proposed TSM approach outperforms the conventional quasi-Bayes linear regression (QBLR) algorithm when only a small amount of adaptation data is available.
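For orientation, the basic building block being adapted is the MLLR mean transform itself. The sketch below estimates a global transform W by batch maximum likelihood with identity covariances; the paper's contribution, the sequential quasi-Bayes update with a transformation-space prior on W, is omitted here.

```python
import numpy as np

def estimate_mllr_transform(obs, means, assign):
    """Batch ML estimate of a global MLLR mean transform W (d x (d+1)).

    Adapted means are W @ [1, mu]; identity covariances are assumed for
    simplicity.  The paper's sequential quasi-Bayes scheme additionally
    places a transformation-space prior on W, which is omitted here.
    """
    d = means.shape[1]
    Z = np.zeros((d, d + 1))      # sum_t o_t xi_t'
    G = np.zeros((d + 1, d + 1))  # sum_t xi_t xi_t'
    for o, k in zip(obs, assign):
        xi = np.concatenate(([1.0], means[k]))  # extended mean vector
        Z += np.outer(o, xi)
        G += np.outer(xi, xi)
    return Z @ np.linalg.inv(G + 1e-8 * np.eye(d + 1))
```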

Switching Regression Analysis via Fuzzy LS-SVM

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.17 no.2 / pp.609-617 / 2006
  • A new fuzzy c-regression algorithm for switching regression analysis is presented, which combines fuzzy c-means clustering and the least squares support vector machine. The algorithm can detect outliers in switching regression models while simultaneously yielding estimates of the associated parameters together with a fuzzy c-partition of the data. It can be employed for model-free nonlinear regression, which does not assume an underlying form of the regression function. We illustrate the new approach with numerical examples that show how it can be used to fit switching regression models to almost all types of mixed data.
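The alternating structure of fuzzy c-regression can be sketched in the linear case: memberships are updated from per-component squared errors, and each component is refit by weighted least squares. The paper swaps the linear fits for LS-SVMs to obtain model-free nonlinear regression; this sketch keeps the plain linear version.

```python
import numpy as np

def fuzzy_c_regression(x, y, c=2, m=2.0, n_iter=50, seed=0):
    """Linear fuzzy c-regression: alternate fuzzy membership updates and
    weighted least squares line fits (Hathaway-Bezdek style).  The paper
    replaces the linear fits with LS-SVMs to get model-free nonlinear
    regression; this sketch keeps the linear case.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([np.ones(n), x])      # design with intercept
    U = rng.dirichlet(np.ones(c), size=n)      # random fuzzy partition
    for _ in range(n_iter):
        B = []
        for k in range(c):
            w = U[:, k] ** m                   # fuzzified weights
            A = Xb * w[:, None]
            B.append(np.linalg.solve(Xb.T @ A + 1e-8 * np.eye(Xb.shape[1]),
                                     A.T @ y))
        E = np.column_stack([(y - Xb @ b) ** 2 for b in B]) + 1e-12
        U = E ** (-1.0 / (m - 1.0))            # membership ~ 1/error
        U = U / U.sum(axis=1, keepdims=True)
    return np.array(B), U
```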

Nonparametric Regression with Genetic Algorithm (유전자 알고리즘을 이용한 비모수 회귀분석)

  • Kim, Byung-Do; Rho, Sang-Kyu
    • Asia pacific journal of information systems / v.11 no.1 / pp.61-73 / 2001
  • Predicting a variable using other variables in a large data set is a difficult task. It involves selecting the variables to include in a model and determining the shape of the relationship between variables. Nonparametric regression methods such as smoothing splines and neural networks are widely used for such tasks. We propose an alternative method based on a genetic algorithm (GA) to solve this problem. We apply the GA to regression splines, a nonparametric regression method, to estimate functional forms between variables. Using several simulated and real data sets, our technique is shown to outperform traditional nonparametric methods such as smoothing splines and neural networks.
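A GA applied to regression splines can be sketched as follows. This toy version encodes a chromosome as a bit mask over candidate knots and scores fitness with a GCV-style penalized criterion; the encoding and fitness are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def spline_basis(x, knots):
    """Truncated linear spline basis: [1, x, (x-k)_+ for each knot]."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def ga_spline(x, y, candidates, pop=30, gens=40, pmut=0.1, seed=0):
    """Toy genetic algorithm for knot selection in regression splines.
    Chromosome = bit mask over candidate knots; fitness = GCV-style
    penalized training SSE.  Illustrative, not the paper's encoding.
    """
    rng = np.random.default_rng(seed)
    n, q = len(y), len(candidates)

    def fitness(mask):
        B = spline_basis(x, candidates[mask.astype(bool)])
        beta, *_ = np.linalg.lstsq(B, y, rcond=None)
        sse = ((y - B @ beta) ** 2).sum()
        return sse * n / (n - B.shape[1]) ** 2          # GCV-style score

    P = rng.integers(0, 2, size=(pop, q))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        elite = P[np.argsort(f)[: pop // 2]]            # truncation selection
        cut = rng.integers(1, q, size=pop // 2)
        mates = elite[rng.permutation(pop // 2)]
        kids = np.where(np.arange(q) < cut[:, None], elite, mates)  # crossover
        kids ^= (rng.random(kids.shape) < pmut).astype(kids.dtype)  # mutation
        P = np.vstack([elite, kids])
    f = np.array([fitness(ind) for ind in P])
    return P[np.argmin(f)].astype(bool)
```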

Resistant Poisson Regression and Its Application (저항적 포아송 회귀와 활용)

  • Huh, Myung-Hoe; Sung, Nae-Kyung; Lim, Yong-Bin
    • Journal of Korean Society for Quality Management / v.33 no.1 / pp.83-87 / 2005
  • For a count response we normally consider the Poisson regression model. However, the conventional fitting algorithm for Poisson regression is unreliable when the response variable is measured with sizable contamination. In this study, we propose an alternative fitting algorithm that is resistant to outlying values in the response and report a case study from the semiconductor industry.
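One way to make the Poisson fit resistant is to downweight observations with large Pearson residuals inside the IRLS loop. The Huber-type weighting below is an illustrative choice, not the authors' exact proposal.

```python
import numpy as np

def resistant_poisson(X, y, c=2.0, n_iter=50):
    """Poisson regression by IRLS with Huber-type weights on the Pearson
    residuals, so grossly contaminated counts are downweighted.  An
    illustrative resistant fit, not the authors' exact proposal.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = np.clip(X @ beta, -20, 20)
        mu = np.exp(eta)
        r = (y - mu) / np.sqrt(mu)                        # Pearson residuals
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))  # Huber weights
        W = w * mu                                        # working weights
        z = eta + (y - mu) / mu                           # working response
        A = X * W[:, None]
        beta = np.linalg.solve(X.T @ A + 1e-8 * np.eye(p), A.T @ z)
    return beta
```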

Kernel Adatron Algorithm for Support Vector Regression

  • Kyungha Seok; Changha Hwang
    • Communications for Statistical Applications and Methods / v.6 no.3 / pp.843-848 / 1999
  • The support vector machine (SVM) is a new and very promising classification and regression technique developed by Vapnik and his group at AT&T Bell Laboratories. However, it has not yet established itself as a common machine learning tool, partly because the SVM is not easy to implement and its standard implementation requires an optimization package for quadratic programming. In this paper we present a simple iterative Kernel Adatron algorithm for nonparametric regression that is easy to implement and guaranteed to converge to the optimal solution, and we compare it with neural networks and projection pursuit regression.
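An Adatron-style trainer for SVR can be sketched as clipped coordinate-wise gradient ascent on the ε-insensitive dual, which avoids any quadratic programming package. This is a simplified sketch (no bias term), assuming a precomputed kernel matrix, not the paper's exact update rule.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_adatron_svr(K, y, eps=0.05, C=50.0, eta=0.2, n_epochs=300):
    """Kernel-Adatron-style training for support vector regression:
    clipped (projected) gradient ascent on the eps-insensitive SVR dual,
    sweeping one dual pair (alpha_i, alpha_i*) at a time.  No bias term,
    for simplicity; eta should be small relative to 1/max(K_ii).
    """
    n = len(y)
    a, a_star = np.zeros(n), np.zeros(n)
    for _ in range(n_epochs):
        for i in range(n):
            f_i = K[i] @ (a - a_star)                    # current output
            a[i] = np.clip(a[i] + eta * (y[i] - f_i - eps), 0.0, C)
            a_star[i] = np.clip(a_star[i] + eta * (f_i - y[i] - eps), 0.0, C)
    return a - a_star                                    # expansion coefficients
```

Predictions at a new point x are f(x) = Σ_j β_j K(x_j, x) with the returned β = α − α*.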

MULTIPLE OUTLIER DETECTION IN LOGISTIC REGRESSION BY USING INFLUENCE MATRIX

  • Lee, Gwi-Hyun; Park, Sung-Hyun
    • Journal of the Korean Statistical Society / v.36 no.4 / pp.457-469 / 2007
  • Many procedures are available to identify a single outlier or an isolated influential point in linear and logistic regression. But the detection of influential points or multiple outliers is more difficult, owing to masking and swamping problems. Direct procedures for multiple outlier detection in logistic regression have not yet been studied. In this paper we consider direct methods for logistic regression by extending the Peña and Yohai (1995) influence matrix algorithm. We define the influence matrix in logistic regression by using Cook's distance for logistic regression, and test for multiple outliers by using the mean shift model. To demonstrate the accuracy of the proposed multiple outlier detection algorithm, we simulate artificial data that include multiple outliers subject to masking and swamping.
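The single-case diagnostic underlying the influence matrix can be sketched directly: fit by IRLS, then compute one-step Cook's distances from weighted leverages and Pearson residuals. This is only the building block; the paper's influence-matrix extension, which tackles masking and swamping jointly, is not reproduced here.

```python
import numpy as np

def logistic_cooks_distance(X, y, n_iter=25):
    """One-step Cook's distances for logistic regression -- the building
    block of an influence-matrix analysis.  Fit by IRLS, then
    D_i = r_i^2 h_ii / (p (1 - h_ii)^2), with Pearson residuals r_i and
    leverages h_ii from the weighted hat matrix.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):                            # IRLS fit
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = np.maximum(mu * (1 - mu), 1e-10)
        z = X @ beta + (y - mu) / w
        A = X * w[:, None]
        beta = np.linalg.solve(X.T @ A + 1e-8 * np.eye(p), A.T @ z)
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    w = np.maximum(mu * (1 - mu), 1e-10)
    Xs = X * np.sqrt(w)[:, None]
    H = Xs @ np.linalg.solve(Xs.T @ Xs + 1e-8 * np.eye(p), Xs.T)
    h = np.diag(H)
    r = (y - mu) / np.sqrt(w)                          # Pearson residuals
    return r ** 2 * h / (p * (1 - h) ** 2)
```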

Model selection algorithm in Gaussian process regression for computer experiments

  • Lee, Youngsaeng; Park, Jeong-Soo
    • Communications for Statistical Applications and Methods / v.24 no.4 / pp.383-396 / 2017
  • Our approach assumes that computer responses are a realization of a Gaussian process superimposed on a regression model, called a Gaussian process regression model (GPRM). Selecting a subset of variables, or building a good reduced model, in classical regression is an important process for identifying variables influential to the response and for further analysis such as prediction or classification. One reason to select some variables for prediction is to prevent over-fitting or under-fitting the data. The same reasoning and approach are applicable to GPRMs; however, only a few studies have addressed variable selection in GPRMs. In this paper, we propose a new algorithm to build a good prediction model among candidate GPRMs. It is applied as a post-processing step to the algorithm, suggested by previous researchers, that includes the Welch method. The proposed algorithm selects the non-zero regression coefficients (β's) using forward and backward methods along with a Lasso-guided approach. During this process, the covariance parameters (θ's) pre-selected by the Welch algorithm are held fixed. We illustrate the superiority of the proposed models over the Welch method and non-selection models using four test functions and one real data example. Future extensions are also discussed.
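A minimal sketch of mean-term selection in a GPRM with fixed covariance parameters: each candidate mean is scored by the GP log-likelihood (with β profiled out by generalized least squares), and terms are added greedily. The RBF kernel, fixed γ and nugget, and the AIC-style stopping rule are all assumptions standing in for the Welch-selected θ's and the Lasso-guided forward/backward search.

```python
import numpy as np

def gprm_loglik(X, y, cols, gamma=10.0, nugget=1.0):
    """Log-likelihood of a GP regression model (GPRM) whose mean is a
    linear model in the selected columns; the covariance parameters are
    held fixed, playing the role of the Welch-pre-selected theta's.
    """
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2) + nugget * np.eye(n)
    F = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta = np.linalg.solve(F.T @ np.linalg.solve(K, F),
                           F.T @ np.linalg.solve(K, y))   # GLS mean fit
    r = y - F @ beta
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (r @ np.linalg.solve(K, r) + logdet + n * np.log(2 * np.pi))

def forward_select(X, y, penalty=2.0):
    """Greedy forward selection of non-zero mean coefficients (beta's):
    add the regressor that most improves the log-likelihood until the
    gain falls below an AIC-style penalty -- a simple stand-in for the
    paper's Lasso-guided forward/backward search.
    """
    cols, best = [], gprm_loglik(X, y, [])
    remaining = list(range(X.shape[1]))
    while remaining:
        ll, j = max((gprm_loglik(X, y, cols + [j]), j) for j in remaining)
        if ll - best < penalty:
            break
        cols.append(j)
        remaining.remove(j)
        best = ll
    return sorted(cols)
```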

Unified Non-iterative Algorithm for Principal Component Regression, Partial Least Squares and Ordinary Least Squares

  • Kim, Jong-Duk
    • Journal of the Korean Data and Information Science Society / v.14 no.2 / pp.355-366 / 2003
  • A unified procedure for principal component regression (PCR), partial least squares (PLS), and ordinary least squares (OLS) is proposed. The procedure gives solutions for PCR, PLS, and OLS in a unified and non-iterative way. This enables us to see the interrelationships among the three regression coefficient vectors, and it is seen that the so-called E-matrix in the solution expression plays the key role in differentiating the methods. In addition to setting out the procedure, the paper also supplies a robust numerical algorithm for its implementation, which is used to show how the procedure performs on a real-world data set.
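One standard non-iterative view of the three estimators can be sketched from shared building blocks: OLS and PCR from the SVD of X, and PLS as the least-squares solution restricted to the Krylov subspace of (X′X, X′y). This is a well-known unified formulation, not the paper's E-matrix construction, which differs in detail.

```python
import numpy as np

def unified_regressions(X, y, k):
    """OLS, k-component PCR, and k-component PLS coefficient vectors from
    shared, non-iterative building blocks: the SVD of X and the Krylov
    subspace of (X'X, X'y).  One standard unified view; the paper's
    E-matrix formulation differs in detail.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    b_ols = Vt.T @ ((U.T @ y) / s)                      # OLS via SVD
    b_pcr = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])       # PCR: truncated SVD
    A, c = X.T @ X, X.T @ y
    R = np.column_stack([np.linalg.matrix_power(A, j) @ c for j in range(k)])
    b_pls = R @ np.linalg.solve(R.T @ A @ R, R.T @ c)   # PLS: Krylov projection
    return b_ols, b_pcr, b_pls
```

With k equal to the full rank all three coincide, and for any common k the PLS training fit is at least as good as the PCR fit (de Jong, 1993), which the shared formulation makes easy to check.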
