• Title/Summary/Keyword: Variables selection

Search results: 1,189

Genomic Selection for Adjacent Genetic Markers of Yorkshire Pigs Using Regularized Regression Approaches

  • Park, Minsu;Kim, Tae-Hun;Cho, Eun-Seok;Kim, Heebal;Oh, Hee-Seok
    • Asian-Australasian Journal of Animal Sciences / Vol. 27 No. 12 / pp.1678-1683 / 2014
  • This study considers the problem of genomic selection (GS) for adjacent genetic markers of Yorkshire pigs, which are typically correlated. GS has been widely used to efficiently estimate target variables such as molecular breeding values using markers across the entire genome. Recently, GS has been applied to animals as well as plants, especially to pigs. For efficient selection of variables associated with specific traits in pig breeding, a variable selection method should have the following properties: i) it produces a simple model by identifying insignificant variables; ii) it improves the accuracy of prediction for future data; and iii) it can handle high-dimensional data in which the number of variables is larger than the number of observations. In this paper, we applied several variable selection methods, including the least absolute shrinkage and selection operator (LASSO), the fused LASSO, and the elastic net, to data with 47K single nucleotide polymorphisms and litter size for 519 observed sows. Based on the experiments, we observed that the fused LASSO outperforms the other approaches.
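
A minimal, hedged sketch of the kind of analysis described above, using synthetic data and scikit-learn in place of the paper's actual genotype data and fused-LASSO solver; all sizes, seeds, and parameter values are illustrative assumptions.

```python
# Compare LASSO and elastic net for selecting markers from synthetic,
# locally correlated "SNP-like" predictors (illustrative only).
import numpy as np
from sklearn.linear_model import LassoCV, ElasticNetCV

rng = np.random.default_rng(1)
n, p = 300, 1000                            # far fewer samples than markers, as in GS data
z = rng.normal(size=(n, p))
X = 0.7 * z + 0.3 * np.roll(z, 1, axis=1)   # adjacent columns are correlated
beta = np.zeros(p)
beta[100:105] = 0.8                         # a small block of truly associated markers
y = X @ beta + rng.normal(size=n)

for name, model in [("LASSO", LassoCV(cv=5)),
                    ("elastic net", ElasticNetCV(l1_ratio=0.5, cv=5))]:
    fit = model.fit(X, y)
    print(name, "selected", int(np.sum(fit.coef_ != 0)), "of", p, "markers")

# The fused LASSO additionally penalizes |beta_j - beta_{j-1}| so that adjacent,
# correlated markers receive similar coefficients; it requires a generalized-lasso
# solver and is not reproduced here.
```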

기혼여성의 배우자 선택요인과 결혼만족도 (Mate Selection Factors and Marital Satisfaction of Married Women)

  • 이선정;신효식
    • 한국가정과학회지 / Vol. 3 No. 2 / pp.13-26 / 2000
  • The purposes of the present study were to describe general trends in mate selection factors and marital satisfaction among married women, to examine differences in mate selection factors and marital satisfaction according to socio-demographic and psychological variables, and to analyze the effects of these variables on marital satisfaction. The subjects were 276 wives living in Kwangju who had been married for less than five years and had no divorce experience. The major findings were as follows. 1. The most highly rated mate selection factors were personality, values, personal relations, achievement, emotional maturity, self-differentiation, degree of affection expression, sense of humor, charm, and health. Respondents' marital satisfaction score was 91.75, higher than the scale median (62.5). 2. External mate selection factors differed significantly according to level of education, career, order, and sex-role attitude; internal factors differed significantly according to level of education, career, order, self-differentiation, self-esteem, and sex-role attitude. 3. Marital satisfaction was significantly correlated with the mate's personality, values, emotional maturity, personal relations, self-differentiation, health, achievement, charm, sense of humor, and degree of affection expression. 4. Married women's marital satisfaction was influenced by self-esteem, personality, and number of children, which together explained about 38% of the variance. In conclusion, a happy marital life requires valuing oneself and, above all, weighing internal factors such as personality more heavily than external factors in mate selection.

ON THE SELECTION OF INPUT VARIABLES TO BE RETAINED IN A REDUCED-ORDER MODEL

  • Lee, Kun-Yong
    • 대한전기학회:학술대회논문집 / 대한전기학회 1987년도 전기.전자공학 학술대회 논문집(I) / pp.198-200 / 1987
  • This paper addresses the choice of appropriate sets of input variables for large-scale linear multivariable systems. It is shown that selecting a good set of input variables for control can be important when both strong and weak input variables are available. The transmission of information from the inputs to the outputs is investigated, and appropriate scaling procedures for deriving a scaled input matrix are proposed. Singular value decomposition methods make it possible to quantify the system excitation stemming from the various input variables and thus to select an appropriately strong and orthogonal set of inputs. The need for, implementation of, and benefits of reducing the number of input variables are illustrated with a large-scale steam generator model of a real process.
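
A minimal sketch of the general idea (not the paper's actual procedure or steam generator model): use the singular value decomposition of a scaled input matrix to judge how strongly, and how independently, each candidate input excites the system. The matrix below is a made-up 4-state, 3-input example.

```python
# SVD of a scaled input matrix as a guide to input-variable selection (toy example).
import numpy as np

B = np.array([[1.0, 0.05, 0.90],
              [0.5, 0.02, 0.45],
              [0.0, 0.01, 0.10],
              [2.0, 0.03, 1.80]])            # columns = candidate inputs

B_scaled = B / np.linalg.norm(B, axis=0)      # simple column scaling (one possible choice)
U, s, Vt = np.linalg.svd(B_scaled, full_matrices=False)

print("singular values:", np.round(s, 3))     # small values reveal weak/redundant directions
# Each input's loading on the dominant singular directions indicates whether it adds an
# independent, strongly excited direction or merely duplicates another input
# (here columns 1 and 3 are nearly collinear).
print("input loadings on leading directions:\n", np.round(Vt.T[:, :2], 3))
```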

성인의 시판 라면류 선택 속성에 따른 식사 행동 차이에 대한 탐색적 고찰 (A Comprehensive Study on the Meal Intake Behavior according to Ramyun's Selection Attributes for Korean Adults)

  • 정효선;유경진;윤혜현
    • 동아시아식생활학회지 / Vol. 22 No. 6 / pp.895-902 / 2012
  • This study was conducted to understand the ramyun selection attributes of Korean adults and to examine differences in demographic characteristics and meal intake behavior among three groups of respondents divided on the basis of these attributes. Self-administered questionnaires were completed by 702 adults, and the data were subjected to frequency analysis, chi-square analysis, factor analysis, reliability tests, cluster analysis, and discriminant analysis using SPSS. The results were as follows. The ramyun selection attributes investigated were food quality (four variables), price (three variables), and company reliability (four variables). Cluster analysis divided the subjects into three groups according to their selection attributes: a high-selection group, a mid-selection group, and a low-selection group. The three groups differed in demographic characteristics (gender and education level) and meal intake behavior (number of meals, reason for the meal, meal time, and meal size).

Set Covering 기반의 대용량 오믹스데이터 특징변수 추출기법 (Set Covering-based Feature Selection of Large-scale Omics Data)

  • 마정우;안기동;김광수;류홍서
    • 한국경영과학회지 / Vol. 39 No. 4 / pp.75-84 / 2014
  • In this paper, we deal with the feature selection problem for large-scale, high-dimensional biological data such as omics data. Most previous approaches to this problem used a simple score function to reduce the number of original variables and then selected features from the small number of remaining variables. Methods that do not rely on such filtering either do not consider interactions between the variables or generate approximate solutions to a simplified problem. In contrast, by combining set covering and clustering techniques, we developed a new method that can handle the full set of variables and consider the combinatorial effects of variables when selecting good features. To demonstrate the efficacy and effectiveness of the method, we downloaded gene expression datasets from TCGA (The Cancer Genome Atlas) and compared our method with other algorithms, including the feature selection algorithms embedded in WEKA. The experimental results show that our method selects high-quality features for constructing more accurate classifiers than other feature selection algorithms.
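
A rough sketch of the set-covering idea on toy data (an illustration under stated assumptions, not the authors' algorithm): treat each pair of samples from different classes as an element to be covered, say that a feature covers a pair if it separates the two samples by more than a threshold, and greedily add features until all pairs are covered.

```python
# Greedy set-covering feature selection on synthetic two-class data (toy sketch).
import itertools
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 20))
y = np.array([0] * 15 + [1] * 15)
X[y == 1, :3] += 2.0                       # only the first three features are informative

pairs = list(itertools.product(np.where(y == 0)[0], np.where(y == 1)[0]))
covers = {f: {(i, j) for i, j in pairs if abs(X[i, f] - X[j, f]) > 1.5}
          for f in range(X.shape[1])}      # pairs each feature can separate

selected, uncovered = [], set(pairs)
while uncovered:
    best = max(covers, key=lambda f: len(covers[f] & uncovered))
    if not covers[best] & uncovered:       # remaining pairs cannot be covered
        break
    selected.append(best)
    uncovered -= covers[best]

print("selected features:", selected)
```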

Two-Stage Penalized Composite Quantile Regression with Grouped Variables

  • Bang, Sungwan;Jhun, Myoungshic
    • Communications for Statistical Applications and Methods / Vol. 20 No. 4 / pp.259-270 / 2013
  • This paper considers a penalized composite quantile regression (CQR) that performs variable selection in the linear model with grouped variables. An adaptive sup-norm penalized CQR (ASCQR) is proposed to select variables in a grouped manner, and the consistency and oracle property of the resulting estimator are derived under some regularity conditions. To improve the efficiency of estimation and variable selection, this paper suggests the two-stage penalized CQR (TSCQR), which uses the ASCQR to select relevant groups in the first stage and the adaptive lasso penalized CQR to select important variables in the second stage. Simulation studies are conducted to illustrate the finite sample performance of the proposed methods.
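
A minimal numerical sketch of a penalized composite quantile regression objective (an assumption-laden illustration, not the ASCQR/TSCQR estimator or its algorithm): the composite check loss is summed over several quantile levels with quantile-specific intercepts and shared slopes, a group sup-norm penalty is added, and the objective is minimized with a generic derivative-free SciPy optimizer because the penalty is nonsmooth.

```python
# Composite quantile regression with a group sup-norm penalty (toy sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
groups = [[0, 1, 2], [3, 4], [5, 6, 7]]            # hypothetical grouping of 8 predictors
n, p = 200, sum(len(g) for g in groups)
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -1.0, 0.5, 0.0, 0.0, 2.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed noise

taus = np.array([0.25, 0.5, 0.75])                 # quantile levels in the composite loss

def check_loss(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def objective(theta, lam):
    b, beta = theta[:len(taus)], theta[len(taus):]  # per-quantile intercepts, shared slopes
    fit = X @ beta
    loss = sum(check_loss(y - bk - fit, tau).mean() for bk, tau in zip(b, taus))
    penalty = lam * sum(np.max(np.abs(beta[g])) for g in groups)   # sup-norm per group
    return loss + penalty

theta0 = np.zeros(len(taus) + p)
res = minimize(objective, theta0, args=(0.1,), method="Powell")
print(np.round(res.x[len(taus):], 2))               # grouped sparsity pattern in the slopes
```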

고차원 범주형 자료를 위한 비지도 연관성 기반 범주형 변수 선택 방법 (Association-based Unsupervised Feature Selection for High-dimensional Categorical Data)

  • 이창기;정욱
    • 품질경영학회지 / Vol. 47 No. 3 / pp.537-552 / 2019
  • Purpose: The development of information technology has made it easy to utilize high-dimensional categorical data. In this regard, the purpose of this study is to propose a novel method for selecting the proper categorical variables in high-dimensional categorical data. Methods: The proposed feature selection method consists of three steps: (1) The first step defines the goodness-to-pick measure. In this paper, a categorical variable is considered relevant if it has relationships with other variables; according to this definition, the goodness-to-pick measure is the normalized conditional entropy computed with respect to the other variables. (2) The second step finds the relevant feature subset of the original variable set by deciding whether each variable is relevant or not. (3) The third step eliminates redundant variables from the relevant feature subset. Results: Our experimental results showed that the proposed feature selection method generally yielded better classification performance than no feature selection in high-dimensional categorical data, especially as the number of irrelevant categorical variables increased. In addition, as the number of irrelevant categorical variables with imbalanced categorical values increased, the accuracy gap between the proposed method and the existing comparison methods grew. Conclusion: The experimental results confirm that the proposed method consistently produces high classification accuracy in high-dimensional categorical data; therefore, it is promising for effective use in high-dimensional situations.
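
A minimal sketch of an association-based relevance score for categorical variables (the exact definition of the paper's goodness-to-pick measure may differ): for each variable X, average the normalized conditional entropy H(X|Y)/H(X) over the other variables Y; values well below 1 suggest X is associated with the rest.

```python
# Normalized conditional entropy as a relevance score for categorical variables (toy sketch).
import numpy as np
import pandas as pd

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def norm_cond_entropy(x, y):
    """H(X|Y) / H(X) for two categorical pandas Series with a shared index."""
    hx = entropy(x.value_counts(normalize=True).to_numpy())
    if hx == 0:
        return 1.0
    hxy = 0.0
    for _, grp in x.groupby(y):
        hxy += (len(grp) / len(x)) * entropy(grp.value_counts(normalize=True).to_numpy())
    return hxy / hx

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "a": rng.integers(0, 3, 500),
    "c": rng.integers(0, 4, 500),                  # irrelevant noise variable
})
df["b"] = (df["a"] + rng.integers(0, 2, 500)) % 3  # associated with "a"

for col in df.columns:
    others = [c for c in df.columns if c != col]
    score = np.mean([norm_cond_entropy(df[col], df[o]) for o in others])
    print(col, round(score, 3))                    # lower = more associated = more relevant
```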

부분선형모형에서 LARS를 이용한 변수선택 (Variable selection in partial linear regression using the least angle regression)

  • 서한손;윤민;이학배
    • 응용통계연구 / Vol. 34 No. 6 / pp.937-944 / 2021
  • This study deals with the problem of variable selection in partial linear models. Variable selection is not straightforward in partial linear models because they involve both nonparametric estimation, such as estimation of a smoothing parameter, and estimation for the linear explanatory variables. This study proposes variable selection methods that use the least angle regression (LARS), a fast forward selection procedure. The proposed methods apply t-tests, all-possible-regressions comparison, or stepwise selection to the variables screened by LARS. Examples with real data and simulation results are presented to compare the efficiency of the proposed methods.
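
A minimal sketch of the screening-then-testing idea for the linear part only (the nonparametric component of the partial linear model is omitted and the data are synthetic): order variables with LARS, keep the first few, then check them with t-tests from an ordinary least squares refit.

```python
# LARS screening followed by t-tests on the screened variables (toy sketch).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import Lars

rng = np.random.default_rng(4)
n, p = 200, 15
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

lars = Lars(n_nonzero_coefs=5).fit(X, y)      # fast forward-type screening
screened = np.where(lars.coef_ != 0)[0]
print("screened by LARS:", screened)

ols = sm.OLS(y, sm.add_constant(X[:, screened])).fit()
print(ols.pvalues[1:].round(4))               # t-test p-values for the screened variables
```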

토픽 모형을 이용한 텍스트 데이터의 단어 선택 (Feature selection for text data via topic modeling)

  • 장우솔;김예은;손원
    • 응용통계연구 / Vol. 35 No. 6 / pp.739-754 / 2022
  • Text data generally contain many variables, and the variables are highly associated with one another, which can harm the accuracy and efficiency of statistical analyses. To cope with this, in supervised learning with a given target variable, one may select only the words that explain the target variable well and use just those words in the analysis. In unsupervised learning, however, no target variable is available, so the word selection procedures used in supervised learning cannot be applied directly. In this study, we use a topic model to generate topics that can play the role of the target variable in supervised learning and propose a word selection procedure that selects the words most strongly associated with each topic. Applying the proposed procedure to real text data, we found that word selection removes words that frequently appear across many topics and thereby makes the topics easier to identify. When applied to cluster analysis, the procedure also yielded clusters highly associated with the document categories. We further confirmed that classification using the words selected via the topic model, without any information about the target variable, achieved accuracy similar to that obtained when words were selected using the target variable.
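
A minimal sketch of the idea on a tiny toy corpus (illustrative, not the paper's exact procedure): fit a topic model without any target variable, then keep only words whose probability mass is concentrated in individual topics, dropping words that are spread evenly across topics.

```python
# Topic-model-based word selection on a toy corpus (illustrative sketch).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the rocket launch was delayed by bad weather",
    "nasa plans a new rocket for the moon mission",
    "the moon mission crew trains for the rocket launch",
    "the car engine needs new oil and new filters",
    "an oil change improves engine life for the car",
    "electric car sales and engine performance are rising",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Keep words whose probability is concentrated in one topic rather than spread evenly.
concentration = topic_word.max(axis=0) / topic_word.mean(axis=0)
words = np.array(vec.get_feature_names_out())
print(words[np.argsort(concentration)[-8:]])   # the 8 most topic-specific words
```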

다중선형회귀모형에서의 변수선택기법 평가 (Evaluating Variable Selection Techniques for Multivariate Linear Regression)

  • 류나현;김형석;강필성
    • 대한산업공학회지 / Vol. 42 No. 5 / pp.314-326 / 2016
  • The purpose of variable selection techniques is to select a subset of relevant variables for a particular learning algorithm in order to improve the accuracy and efficiency of the prediction model. We conduct an empirical analysis to evaluate and compare seven well-known variable selection techniques for the multiple linear regression model, one of the most commonly used regression models in practice. The variable selection techniques considered are forward selection, backward elimination, stepwise selection, the genetic algorithm (GA), ridge regression, the lasso (least absolute shrinkage and selection operator), and the elastic net. Based on experiments with 49 regression data sets, GA resulted in the lowest error rates, while the lasso reduced the number of variables most significantly. In terms of computational efficiency, forward selection, backward elimination, and the lasso required less time than the other techniques.
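
A minimal sketch comparing a few of the listed techniques on one synthetic data set (an illustration, not the paper's 49-data-set benchmark): forward and backward sequential selection and the lasso, reporting which variables each keeps.

```python
# Forward/backward sequential selection versus the lasso on synthetic data (sketch).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LassoCV, LinearRegression

X, y = make_regression(n_samples=300, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

for direction in ("forward", "backward"):
    sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5,
                                    direction=direction, cv=5).fit(X, y)
    print(direction, "selection chose variables", np.where(sfs.get_support())[0])

lasso = LassoCV(cv=5).fit(X, y)
print("lasso kept", int(np.sum(lasso.coef_ != 0)), "of 20 variables")
```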