• Title/Summary/Keyword: Group penalty


A numerical study on group quantile regression models

  • Kim, Doyoen; Jung, Yoonsuh
    • Communications for Statistical Applications and Methods / v.26 no.4 / pp.359-370 / 2019
  • Grouping structures in covariates are often ignored in regression models. Recent statistical developments that account for grouping structure show clear advantages; however, reflecting the grouping structure in quantile regression models has been relatively rare in the literature. The grouping structure is usually handled by employing a group penalty. In this work, we extend the idea of a group penalty to quantile regression models. The grouping structure is assumed to be known, which is commonly the case. For example, the dummy variables derived from a single categorical variable can be regarded as one group of covariates. We examine group quantile regression models via two real data analyses and simulation studies, which reveal the beneficial performance of group quantile regression models over their non-group counterparts when a grouping structure exists among the variables.
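
For readers unfamiliar with the form of a group penalty in quantile regression, the following minimal sketch evaluates the kind of objective the abstract describes: the check (pinball) loss plus a group-lasso penalty. It is an illustration under assumed notation (group-lasso form, toy data, variable names), not the authors' implementation.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Check (pinball) loss for quantile level tau."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def group_quantile_objective(beta, X, y, groups, tau, lam):
    """Mean pinball loss + lambda * sum_g sqrt(|g|) * ||beta_g||_2 (group-lasso form)."""
    loss = pinball_loss(y - X @ beta, tau).mean()
    penalty = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return loss + lam * penalty

# toy example: three groups of covariates (hypothetical data)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
beta_true = np.array([1.0, -1.0, 0.0, 0.0, 0.5, 0.5])
y = X @ beta_true + rng.normal(size=100)
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]

print(group_quantile_objective(beta_true, X, y, groups, tau=0.5, lam=0.1))
```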

Variable Selection with Nonconcave Penalty Function on Reduced-Rank Regression

  • Jung, Sang Yong; Park, Chongsun
    • Communications for Statistical Applications and Methods / v.22 no.1 / pp.41-54 / 2015
  • In this article, we propose nonconcave penalties on a reduced-rank regression model to select variables and estimate coefficients simultaneously. We apply the HARD (hard thresholding) and SCAD (smoothly clipped absolute deviation) penalties, symmetric functions that are singular at the origin and bounded by a constant to reduce bias. In our simulation study and real data analysis, the new method is compared with an existing variable selection method using the $L_1$ penalty and exhibits competitive performance in prediction and variable selection. Instead of using only one type of penalty function, we use two or three penalty functions simultaneously, taking advantage of their different strengths in selecting relevant predictors and estimating coefficients to improve the overall performance of model fitting.
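
The SCAD and HARD penalties mentioned above have standard closed forms (Fan and Li, 2001). The sketch below evaluates both elementwise; the tuning constant a = 3.7 is the commonly recommended default and is an assumption here, not necessarily the value used in the paper.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), applied elementwise."""
    t = np.abs(theta)
    linear = lam * t
    quadratic = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    constant = (a + 1) * lam**2 / 2
    return np.where(t <= lam, linear,
                    np.where(t <= a * lam, quadratic, constant))

def hard_penalty(theta, lam):
    """Hard-thresholding penalty: lam^2 - (|theta| - lam)^2 for |theta| < lam, else lam^2."""
    t = np.abs(theta)
    return np.where(t < lam, lam**2 - (t - lam)**2, lam**2)

theta = np.linspace(-3, 3, 7)
print(scad_penalty(theta, lam=1.0))
print(hard_penalty(theta, lam=1.0))
```

Both penalties are singular at the origin (which produces sparsity) and constant for large coefficients (which bounds the bias), as the abstract notes.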

Weighted Support Vector Machines with the SCAD Penalty

  • Jung, Kang-Mo
    • Communications for Statistical Applications and Methods / v.20 no.6 / pp.481-490 / 2013
  • Classification is an important research area, as data can be easily obtained even when the number of predictors becomes huge. The support vector machine (SVM) is widely used to classify a subject into a predetermined group because it has a sound theoretical background and performs better than other methods in many applications. The SVM can be viewed as a penalized method with the hinge loss function and a penalty function. Instead of the $L_2$ penalty function, Fan and Li (2001) proposed the smoothly clipped absolute deviation (SCAD) penalty, which satisfies good statistical properties. Despite their strengths, SVMs have the drawback of non-robustness when there are outliers in the data. We develop a robust SVM method using a weight function with the SCAD penalty function based on the local quadratic approximation. We compare the performance of the proposed SVM with SVMs using the $L_1$ and $L_2$ penalty functions.
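
A hedged sketch of the kind of objective such a weighted SVM minimizes: a case-weighted hinge loss plus a SCAD penalty on the coefficients. The paper's specific weight function and its local quadratic approximation algorithm are not reproduced; the uniform weights, data, and names below are illustrative assumptions.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty (Fan and Li, 2001), elementwise; repeated here for self-containment."""
    t = np.abs(theta)
    return np.where(t <= lam, lam * t,
                    np.where(t <= a * lam,
                             -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1)),
                             (a + 1) * lam**2 / 2))

def weighted_svm_scad_objective(beta, b, X, y, w, lam):
    """Case-weighted hinge loss plus SCAD penalty on the coefficients.
    y must be coded as +1/-1; w are case weights (e.g., downweighting outliers)."""
    margins = y * (X @ beta + b)
    hinge = np.maximum(0.0, 1.0 - margins)
    return np.sum(w * hinge) + np.sum(scad_penalty(beta, lam))

# toy data; uniform weights stand in for the paper's weight function
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = np.sign(X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=50))
w = np.ones(50)
beta0, b0 = np.zeros(5), 0.0
print(weighted_svm_scad_objective(beta0, b0, X, y, w, lam=0.5))
```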

An Algorithm for Resource-Unconstrained Earliness-Tardiness Problem with Partial Precedences (자원 제약이 없는 환경에서 부분 우선순위를 고려한 Earliness-Tardiness 최적 일정계획 알고리즘)

  • Ha, Byung-Hyun
    • Journal of the Korean Operations Research and Management Science Society / v.38 no.2 / pp.141-157 / 2013
  • In this paper, we consider minimizing the total weighted earliness-tardiness penalty of jobs subject to partial precedences between jobs. We present an optimal scheduling algorithm that runs in O(n(n + m log m)) time, where n is the number of jobs and m is the number of partial precedences. In the algorithm, the optimal schedule is constructed iteratively by considering each group of contiguous jobs as a block that is represented by a tree.
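
The block-and-tree optimal algorithm itself is not reproduced here; the sketch below only shows the objective being minimized — the total weighted earliness-tardiness penalty — together with a check of the partial precedences. All data and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Job:
    due: float        # due date d_j
    proc: float       # processing time p_j
    w_early: float    # earliness penalty weight
    w_tardy: float    # tardiness penalty weight

def total_et_penalty(jobs, completion_times):
    """Total weighted earliness-tardiness penalty for given completion times."""
    total = 0.0
    for job, c in zip(jobs, completion_times):
        total += job.w_early * max(0.0, job.due - c) + job.w_tardy * max(0.0, c - job.due)
    return total

def respects_precedences(completion_times, precedences, jobs):
    """Check partial precedences: job i must finish before job j starts."""
    return all(completion_times[i] <= completion_times[j] - jobs[j].proc
               for i, j in precedences)

jobs = [Job(due=5, proc=2, w_early=1, w_tardy=3),
        Job(due=7, proc=3, w_early=2, w_tardy=2),
        Job(due=4, proc=1, w_early=1, w_tardy=4)]
completion = [2.0, 7.0, 4.0]           # a feasible (not necessarily optimal) schedule
precedences = [(0, 1)]                 # job 0 must precede job 1
print(respects_precedences(completion, precedences, jobs),
      total_et_penalty(jobs, completion))
```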

A Study on Determining Single-Center Scheduling using LTV(LifeTime Value) (고객 생애 가치를 활용한 단일 창구 일정계획 수립에 관한 연구)

  • 양광모; 박재현; 강경식
    • Proceedings of the Safety Management and Science Conference / 2003.05a / pp.285-290 / 2003
  • There is only one server available, and arriving work requires service from this server. Jobs are processed by the machine one at a time. The most common objective is to sequence the jobs on the server so as to minimize the penalty for being late, commonly called the tardiness penalty. Depending on the objective, many other criteria may serve as a basis for developing job schedules. Therefore, this study proposes scheduling by customer-needs group to minimize the tardiness penalty and to reduce inventory, product development time, cycle time, and order lead time.
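
As a concrete illustration of a single-server tardiness penalty (the abstract does not specify a sequencing rule, so the earliest-due-date ordering below is a classical example, not the paper's method):

```python
def total_tardiness_penalty(sequence, proc, due, weight):
    """Total weighted tardiness when jobs are processed one at a time on a single server."""
    t, total = 0.0, 0.0
    for j in sequence:
        t += proc[j]                       # completion time of job j
        total += weight[j] * max(0.0, t - due[j])
    return total

# hypothetical jobs: processing times, due dates, tardiness weights
proc   = [4, 2, 6, 3]
due    = [5, 3, 12, 7]
weight = [2, 1, 1, 3]

edd = sorted(range(4), key=lambda j: due[j])     # earliest-due-date sequence
fifo = [0, 1, 2, 3]                              # arrival-order sequence
print(total_tardiness_penalty(edd, proc, due, weight),
      total_tardiness_penalty(fifo, proc, due, weight))
```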


Advanced Evacuation Analysis for Passenger Ship Using Penalty Walking Velocity Algorithm for Obstacle Avoidance (장애물 회피에 페널티 보행 속도 알고리즘을 적용한 여객선 승객 탈출 시뮬레이션)

  • Park, Kwang-Phil; Ha, Sol; Cho, Yoon-Ok; Lee, Kyu-Yeul
    • Journal of the Korea Society for Simulation / v.19 no.4 / pp.1-9 / 2010
  • In this paper, an advanced evacuation analysis simulation of a passenger ship is performed. A velocity-based model is implemented and used to calculate the movement of individual passengers in an evacuation situation. The age and gender of each passenger are considered as factors affecting walking speed. A flocking algorithm is applied to model the passengers' group behavior. A penalty walking velocity is introduced to avoid collisions between passengers and obstacles, and to prevent position overlap among passengers. The application of the flocking algorithm and penalty walking velocity to evacuation simulation is verified by implementing the 11 test problems of IMO (International Maritime Organization) MSC (Maritime Safety Committee) Circular 1238.
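
The paper's exact penalty walking velocity formulation is not given in the abstract; the sketch below illustrates one plausible scheme in which the component of a passenger's desired velocity pointing toward a nearby obstacle is scaled back. All parameters, thresholds, and names are assumptions.

```python
import numpy as np

def penalized_velocity(position, desired_velocity, obstacles, influence_radius=1.0):
    """Reduce the component of the desired walking velocity that points toward
    any obstacle closer than influence_radius (illustrative penalty scheme)."""
    v = desired_velocity.astype(float)
    for obs in obstacles:
        offset = obs - position
        dist = np.linalg.norm(offset)
        if 1e-9 < dist < influence_radius:
            direction = offset / dist
            toward = np.dot(v, direction)
            if toward > 0:                         # moving toward the obstacle
                penalty = 1.0 - dist / influence_radius
                v = v - penalty * toward * direction  # scale back that component
    return v

pos = np.array([0.0, 0.0])
v_desired = np.array([1.2, 0.0])                   # roughly 1.2 m/s toward +x
obstacles = [np.array([0.5, 0.0])]                 # obstacle 0.5 m ahead
print(penalized_velocity(pos, v_desired, obstacles))
```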

How Effectively Safety Incentives Work? A Randomized Experimental Investigation

  • Ahmed, Ishfaq; Faheem, Asim
    • Safety and Health at Work / v.12 no.1 / pp.20-27 / 2021
  • Background: Incentive and penalty (I/P) programs are commonly used to improve employees' safety outcomes, but their influence on those outcomes is under-investigated. Moreover, developing economies often lack a safety culture, and there is a dearth of empirical studies in such settings [1]. Based on these gaps, this study examines the impact of I/P programs on safety outcomes in a developing country. Methods: The study was carried out in three stages. Stage I revealed that the 45 selected organizations were deficient in safety culture and practices, with only three firms found to have good safety practices. At Stage II, these three firms were divided into two clusters (groups), which were probed further at Stage III. At this stage, one group was manipulated by providing incentives (the experimental group), and employees' responses in terms of safety motivation and performance were recorded. Results: The experimental group's safety motivation and performance improved, both immediately and 1 month later. The results were probed again after 3 months, when it was found that the benefits of the I/P program were not long-lasting and had started to fade. Conclusion: The findings lead to the conclusion that safety incentives have only a short-term influence on safety outcomes, and that a long-term, permanent solution should be sought.

Hierarchically penalized sparse principal component analysis (계층적 벌점함수를 이용한 주성분분석)

  • Kang, Jongkyeong; Park, Jaeshin; Bang, Sungwan
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.135-145 / 2017
  • Principal component analysis (PCA) describes the variation of multivariate data in terms of a set of uncorrelated variables. Since each principal component is a linear combination of all variables and the loadings are typically non-zero, the derived principal components are difficult to interpret. Sparse principal component analysis (SPCA) is a specialized technique that uses the elastic net penalty to produce sparse loadings. When data are structured by groups of variables, it is desirable to select variables in a grouped manner. In this paper, we propose a new PCA method that improves variable selection performance when variables are grouped, not only selecting important groups but also removing unimportant variables within the identified groups. To incorporate group information into model fitting, we consider a hierarchical lasso penalty instead of the elastic net penalty in SPCA. Real data analyses demonstrate the performance and usefulness of the proposed method.
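
A minimal sketch of one common hierarchical (group-then-within-group) lasso formulation, in which each loading is decomposed as a group-level factor times a within-group coefficient; the exact penalty form used in the paper may differ, and all names and data here are illustrative.

```python
import numpy as np

def hierarchical_lasso_penalty(gamma, theta, groups, lam_group, lam_within):
    """One common hierarchical lasso form: each loading is decomposed as
    beta_gj = gamma_g * theta_gj, with a penalty on the group factors gamma
    and an L1 penalty on the within-group coefficients theta."""
    group_term = lam_group * np.sum(np.abs(gamma))
    within_term = lam_within * sum(np.sum(np.abs(theta[g])) for g in groups)
    return group_term + within_term

def reconstruct_loadings(gamma, theta, groups):
    """Recover the sparse loading vector beta from the decomposition."""
    beta = np.zeros_like(theta)
    for g_idx, g in enumerate(groups):
        beta[g] = gamma[g_idx] * theta[g]
    return beta

groups = [np.arange(0, 3), np.arange(3, 6)]
gamma = np.array([1.0, 0.0])            # second group removed entirely
theta = np.array([0.8, 0.0, -0.4, 0.5, 0.2, 0.0])  # zeros within the first group
print(reconstruct_loadings(gamma, theta, groups))
print(hierarchical_lasso_penalty(gamma, theta, groups, lam_group=0.5, lam_within=0.1))
```

Setting a group factor to zero drops the whole group, while zeros in the within-group coefficients remove individual variables inside a retained group, mirroring the two levels of selection described above.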

ADMM for least square problems with pairwise-difference penalties for coefficient grouping

  • Park, Soohee; Shin, Seung Jun
    • Communications for Statistical Applications and Methods / v.29 no.4 / pp.441-451 / 2022
  • In the era of big data, scalability is a crucial issue in learning models. Among many others, the Alternating Direction Method of Multipliers (ADMM; Boyd et al., 2011) has gained great popularity for solving large-scale problems efficiently. In this article, we propose applying the ADMM algorithm to solve the least square problem penalized by the pairwise-difference penalty, which is frequently used to identify group structures among coefficients. The ADMM algorithm enables us to solve the high-dimensional problem efficiently in a unified fashion and thus allows several different types of penalty functions, such as the LASSO, elastic net, SCAD, and MCP, to be employed in the penalized problem. Additionally, ADMM extends naturally to distributed computation and real-time updates, both desirable when dealing with large amounts of data.
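
A minimal ADMM sketch for the pairwise-difference problem with the LASSO penalty, using the standard generalized-lasso splitting z = Dβ; the SCAD and MCP variants mentioned in the abstract would replace the soft-thresholding step with their own proximal operators. Data, step sizes, and names are illustrative, not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (LASSO case)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_pairwise_lasso(X, y, lam, rho=1.0, n_iter=200):
    """ADMM for (1/2)||y - X b||^2 + lam * sum_{i<j} |b_i - b_j|."""
    n, p = X.shape
    pairs = list(combinations(range(p), 2))
    D = np.zeros((len(pairs), p))              # pairwise-difference matrix
    for k, (i, j) in enumerate(pairs):
        D[k, i], D[k, j] = 1.0, -1.0
    beta = np.zeros(p)
    z = np.zeros(len(pairs))
    u = np.zeros(len(pairs))                   # scaled dual variable
    A = X.T @ X + rho * D.T @ D                # fixed across iterations
    for _ in range(n_iter):
        beta = np.linalg.solve(A, X.T @ y + rho * D.T @ (z - u))
        z = soft_threshold(D @ beta + u, lam / rho)
        u = u + D @ beta - z
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
beta_true = np.array([2.0, 2.0, 2.0, -1.0, -1.0, -1.0])   # two coefficient groups
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(np.round(admm_pairwise_lasso(X, y, lam=1.0), 2))
```

With a sufficiently large penalty, the estimated coefficients are shrunk toward common values within each group, which is how the pairwise-difference penalty reveals group structure.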

Different penalty methods for assessing interval from first to successful insemination in Japanese Black heifers

  • Setiaji, Asep; Oikawa, Takuro
    • Asian-Australasian Journal of Animal Sciences / v.32 no.9 / pp.1349-1354 / 2019
  • Objective: The objective of this study was to determine the best approach for handling missing records of the interval from first to successful insemination (FS) in Japanese Black heifers. Methods: Of the 2,367 records of heifers born between 2003 and 2015, 206 (8.7%) records of open heifers were missing. Four penalty methods based on the number of inseminations were set as follows: C1, the FS average according to the number of inseminations; C2, a constant number of days, 359; C3, the maximum number of FS days for each insemination; and C4, the average of the FS at the last insemination and the FS of C2. C5 was generated by adding a constant number (21 d) to the highest number of FS days in each contemporary group. The bootstrap method was used to compare the five methods in terms of bias, mean squared error (MSE), and the coefficient of correlation between estimated breeding values (EBV) from non-censored and censored data. Three missing-record percentages (5%, 10%, and 15%) were investigated using a random censoring scheme. A univariate animal model was used for the genetic analysis. Results: The heritability of FS in the non-censored data was $0.012 \pm 0.016$, slightly lower than the average estimate from the five penalty methods. C1, C2, and C3 showed lower standard errors of estimated heritability but gave inconsistent results across the different percentages of missing records. C4 showed moderate standard errors that were more stable across all percentages of missing records, whereas C5 showed the highest standard errors compared with the non-censored data. The MSE of the C4 heritability was $0.633 \times 10^{-4}$, $0.879 \times 10^{-4}$, $0.876 \times 10^{-4}$, and $0.866 \times 10^{-4}$ for 5%, 8.7%, 10%, and 15% missing records, respectively. Thus, C4 showed the lowest and most stable MSE of heritability, and its coefficients of correlation for EBV were 0.88, 0.93, and 0.90 for heifer, sire, and dam, respectively. Conclusion: C4 demonstrated the highest positive correlation with the non-censored data set and was consistent across different percentages of missing records. We conclude that C4 was the best penalty method for missing records, owing to its stable parameter estimates and the highest coefficient of correlation.
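
A hedged sketch of how two of the penalty methods described above could be applied to a table of records (C2: a constant 359 days; C5: the within-contemporary-group maximum plus 21 days); the column names and data are hypothetical, and the remaining methods (C1, C3, C4) would follow the same fill-in pattern with their own reference values.

```python
import numpy as np
import pandas as pd

# hypothetical records: fs = days from first to successful insemination
# (NaN marks an open heifer with a missing record), cg = contemporary group
df = pd.DataFrame({
    "cg": ["A", "A", "A", "B", "B", "B"],
    "fs": [21.0, 63.0, np.nan, 42.0, np.nan, 84.0],
})

# C2: replace missing FS with a constant number of days (359)
df["fs_c2"] = df["fs"].fillna(359.0)

# C5: add 21 days to the highest FS observed within each contemporary group
group_max = df.groupby("cg")["fs"].transform("max")
df["fs_c5"] = df["fs"].fillna(group_max + 21.0)

print(df)
```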