• Title/Summary/Keyword: elastic-net regularization method

Time delay estimation algorithm using Elastic Net (Elastic Net를 이용한 시간 지연 추정 알고리즘)

  • Jun-Seok Lim; Keunwa Lee
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.364-369 / 2023
  • Time-delay estimation between two receivers is a technique that has been applied in a variety of fields, from underwater acoustics to room acoustics and robotics. There are two types of time-delay estimation techniques: one estimates the delay from the correlation between receivers, and the other parametrically models the time delay between receivers and estimates the parameters by system identification. The latter has the characteristic that only a small fraction of the system's parameters are directly related to the delay. This sparsity can be exploited to improve estimation accuracy by methods such as Lasso regularization. However, Lasso regularization can also discard coefficients that carry information needed for the estimate. In this paper, we propose a method using the Elastic Net, which adds Ridge regularization to Lasso regularization to compensate for this. Comparing the proposed method with the conventional Generalized Cross-Correlation (GCC) method and the Lasso-regularized method, we show that its estimation variance is very small for both white Gaussian and colored signal sources.
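
The parametric approach this abstract describes can be sketched in a few lines: model one receiver's signal as a sparse FIR filtering of the other's, fit the taps with an elastic net, and read the delay off the dominant tap. The signal length, filter length, and penalty weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n, true_delay, filt_len = 2000, 12, 64      # assumed toy values

x1 = rng.standard_normal(n)                                    # receiver 1
x2 = np.roll(x1, true_delay) + 0.05 * rng.standard_normal(n)   # delayed copy + noise

# Convolution (Toeplitz) matrix so that X @ h is the causal FIR filtering of x1.
X = toeplitz(x1, np.zeros(filt_len))

# The L1 part keeps the tap vector sparse; the L2 (Ridge) part retains
# correlated taps that pure Lasso would drop.
model = ElasticNet(alpha=0.01, l1_ratio=0.7, fit_intercept=False)
model.fit(X, x2)

est_delay = int(np.argmax(np.abs(model.coef_)))
print(f"estimated delay: {est_delay} samples (true: {true_delay})")
```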

A Study on Regularization Methods to Evaluate the Sediment Trapping Efficiency of Vegetative Filter Strips (식생여과대 유사 저감 효율 산정을 위한 정규화 방안)

  • Bae, JooHyun; Han, Jeongho; Yang, Jae E; Kim, Jonggun; Lim, Kyoung Jae; Jang, Won Seok
    • Journal of The Korean Society of Agricultural Engineers / v.61 no.6 / pp.9-19 / 2019
  • A Vegetative Filter Strip (VFS) is a best management practice that has been widely used to mitigate water pollutants from agricultural fields by reducing runoff and sediment. This study was conducted to improve an equation for estimating the sediment trapping efficiency of VFS using several different regularization methods (i.e., ordinary least squares, LASSO, ridge regression, and elastic net). The four regularization methods were each employed to develop the sediment trapping efficiency equation of VFS, and each showed high accuracy in estimating the sediment trapping efficiency. Among the four methods, ridge regression gave the most accurate results, with $R^2$, RMSE, and MAPE of 0.94, 7.31%, and 14.63%, respectively. The equation developed in this study can be applied in watershed-scale hydrological models to estimate the sediment trapping efficiency of VFS in agricultural fields for effective watershed management in Korea.
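
A minimal sketch of the comparison this abstract describes: fit all four regularization methods on a common design matrix and score them by cross-validated $R^2$. The synthetic predictors below are stand-ins for the VFS characteristics used in the study, which are not listed here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))   # hypothetical VFS predictors
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0]) + rng.standard_normal(200)

# The same four regularization methods the study compares.
models = {
    "OLS": LinearRegression(),
    "LASSO": Lasso(alpha=0.1),
    "Ridge": Ridge(alpha=1.0),
    "Elastic Net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}
for name, m in models.items():
    r2 = cross_val_score(m, X, y, cv=5, scoring="r2").mean()
    print(f"{name:12s} mean CV R^2 = {r2:.3f}")
```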

Joint Identification of Multiple Genetic Variants of Obesity in a Korean Genome-wide Association Study

  • Oh, So-Hee; Cho, Seo-Ae; Park, Tae-Sung
    • Genomics & Informatics / v.8 no.3 / pp.142-149 / 2010
  • In recent years, genome-wide association (GWA) studies have successfully led to many discoveries of genetic variants affecting common complex traits, including height, blood pressure, and diabetes. Although GWA studies have made much progress in finding single nucleotide polymorphisms (SNPs) associated with many complex traits, such SNPs have been shown to explain only a very small proportion of the underlying genetic variance of complex traits. This is partly due to the fact that most current GWA studies rely on single-marker approaches, which identify genetic factors one at a time and cannot consider the joint effects of multiple genetic factors on complex traits. Joint identification of multiple genetic factors is more powerful and provides better prediction of complex traits, since it utilizes combined information across variants. Recently, a new statistical method for joint identification of genetic variants for common complex traits via the elastic-net regularization method was proposed. In this study, we applied this joint identification approach to a large-scale GWA dataset (8,842 samples and 327,872 SNPs) to identify genetic variants of obesity in the Korean population. In addition, to test the biological significance of the jointly identified SNPs, gene ontology and pathway enrichment analyses were conducted.
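
A hedged sketch of joint variant selection via elastic-net-penalized logistic regression, in contrast to single-marker testing. The genotype matrix, causal SNPs, and penalty settings are simulated stand-ins, not the Korean GWA data or the paper's tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_samples, n_snps = 500, 1000
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # genotypes 0/1/2

beta = np.zeros(n_snps)
beta[[10, 50, 200]] = [0.8, -0.6, 0.7]      # three assumed causal SNPs
logit = X @ beta
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(logit - logit.mean()))))

# Standardize for the saga solver, which supports the elastic-net penalty;
# all SNPs enter the model jointly rather than one test at a time.
X = (X - X.mean(axis=0)) / X.std(axis=0)
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.1, max_iter=5000)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])
print(f"{selected.size} SNPs jointly selected; first few: {selected[:10]}")
```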

Network-based regularization for analysis of high-dimensional genomic data with group structure (그룹 구조를 갖는 고차원 유전체 자료 분석을 위한 네트워크 기반의 규제화 방법)

  • Kim, Kipoong; Choi, Jiyun; Sun, Hokeun
    • The Korean Journal of Applied Statistics / v.29 no.6 / pp.1117-1128 / 2016
  • In genetic association studies with high-dimensional genomic data, regularization procedures based on penalized likelihood are often applied to identify genes or genetic regions associated with diseases or traits. A network-based regularization procedure can utilize biological network information (such as genetic pathways and signaling pathways) and offers selection performance superior to other regularization procedures such as the lasso and elastic net. However, network-based regularization has a limitation: it cannot be applied to high-dimensional genomic data with a group structure. In this article, we propose to combine data dimension reduction techniques, such as principal component analysis and partial least squares, with network-based regularization for the analysis of high-dimensional genomic data with a group structure. The selection performance of the proposed method was evaluated by extensive simulation studies. The proposed method was also applied to real DNA methylation data generated from the Illumina Infinium HumanMethylation27K BeadChip, where methylation beta values of around 20,000 CpG sites over 12,770 genes were compared between 123 ovarian cancer patients and 152 healthy controls. This analysis also identified a few cancer-related genes.
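
A sketch of the dimension-reduction step this abstract proposes: collapse each gene's group of CpG sites to its first principal component score, then fit a penalized classifier on the gene-level scores. scikit-learn has no network (Laplacian) penalty, so an elastic-net penalty stands in for the paper's network-based regularization here; all sizes and labels are toy values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_samples, n_genes, sites_per_gene = 275, 100, 5   # toy sizes, not the paper's

# Simulated methylation beta values, one block of CpG sites per gene.
gene_blocks = [rng.random((n_samples, sites_per_gene)) for _ in range(n_genes)]
y = rng.binomial(1, 0.5, n_samples)                # toy case/control labels

# Collapse each gene's group to its first principal component score,
# turning the grouped high-dimensional data into one column per gene.
scores = np.column_stack(
    [PCA(n_components=1).fit_transform(block).ravel() for block in gene_blocks]
)

# Elastic net substitutes for the network penalty in this sketch.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000)
clf.fit(scores, y)
print("genes with nonzero coefficients:", np.flatnonzero(clf.coef_[0]).size)
```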

The Doubly Regularized Quantile Regression

  • Choi, Ho-Sik; Kim, Yong-Dai
    • Communications for Statistical Applications and Methods / v.15 no.5 / pp.753-764 / 2008
  • The $L_1$-regularized estimator in quantile regression performs parameter estimation and model selection simultaneously and has been shown to perform well. However, the $L_1$-regularized estimator has a drawback: when there are several highly correlated variables, it tends to pick only a few of them. To remedy this, the proposed method adopts a doubly regularized framework with a mixture of $L_1$ and $L_2$ norms. As a result, the proposed method can select significant variables while encouraging highly correlated variables to be selected together. One of the most appealing features of the new algorithm is that it constructs the entire solution path of the doubly regularized quantile estimator. We investigate its performance through simulations and real data analysis.
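
A minimal sketch of the doubly regularized objective: pinball (quantile) loss plus a mixture of $L_1$ and $L_2$ penalties, minimized numerically at a single penalty setting. This is a generic reimplementation for illustration; the paper's contribution, computing the entire solution path, is not reproduced here, and the data and penalty weights are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 8))
y = X @ np.array([2.0, 2.0, 0, 0, 0, 0, 1.0, 0]) + rng.standard_normal(300)

tau, lam1, lam2 = 0.5, 0.01, 0.01   # quantile level and assumed penalty weights

def objective(beta):
    r = y - X @ beta
    # Pinball loss for quantile tau, plus the L1 + L2 double penalty.
    pinball = np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))
    return pinball + lam1 * np.abs(beta).sum() + lam2 * np.square(beta).sum()

# Powell is derivative-free, so the nonsmooth L1 term poses no problem.
res = minimize(objective, np.zeros(X.shape[1]), method="Powell")
print("estimated coefficients:", np.round(res.x, 2))
```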

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.501-514 / 2023
  • When analyzing high-dimensional data such as text data, using all the variables as explanatory variables may cause statistical learning procedures to suffer from over-fitting, and computational efficiency can deteriorate as the number of variables grows. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. Sparse principal component analysis (SPCA) is a regularized least squares method that employs an elastic net-type objective function. SPCA can be used to remove insignificant principal components and to identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on SPCA. Applying the proposed procedure to real data, we find that the reduced feature set retains sufficient information from the text while its size shrinks through the removal of redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for classifiers such as the k-nearest neighbors algorithm.
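
A sketch of SPCA-based word selection: build a TF-IDF matrix, fit sparse principal components, and keep only the terms with nonzero loadings. Note that scikit-learn's SparsePCA uses an $L_1$ penalty rather than the exact elastic net-type objective the abstract mentions, and the tiny corpus here is invented for illustration.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "elastic net keeps correlated words together",
    "sparse pca removes redundant text features",
    "feature selection improves nearest neighbor classification",
    "regularized least squares with sparse loadings",
]   # invented toy corpus

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()          # documents x terms
vocab = np.array(vec.get_feature_names_out())

# Sparse loadings contain many exact zeros, so the nonzero entries
# select the retained words; alpha controls the degree of sparsity.
spca = SparsePCA(n_components=2, alpha=0.1, random_state=0)
spca.fit(X)

selected = vocab[np.abs(spca.components_).sum(axis=0) > 0]
print("selected words:", selected)
```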