• Title/Summary/Keyword: Robust variable selection

Search Result 31, Processing Time 0.02 seconds

The Regional Homogeneity in the Presence of Heteroskedasticity

  • Chung, Kyoun-Sup;Lee, Sang-Yup
    • Korean System Dynamics Review
    • /
    • v.8 no.2
    • /
    • pp.25-49
    • /
    • 2007
  • An important assumption of the classical linear regression model is that the disturbances appearing in the population regression function are homoskedastic; that is, they all have the same variance. If we persist in using the usual testing procedures despite heteroskedasticity, whatever conclusions we draw or inferences we make may be very misleading. The contribution of this paper is a concrete procedure for proper estimation when heteroskedasticity does exist in the data, because the quality of dependent-variable predictions, i.e., the estimated variance of the dependent variable, can be improved by giving consideration to regional homogeneity and/or heteroskedasticity across the research area. With respect to estimation, specific attention should be paid to the selection of an appropriate strategy in terms of the auxiliary regression model. The paper shows that testing for heteroskedasticity, and using robust methods whether or not heteroskedasticity is present, provides more efficient statistical inferences.
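The auxiliary-regression testing step this abstract relies on can be sketched with a Breusch-Pagan-style LM check: regress the squared residuals on the regressors and look at n·R². This is an illustrative Python sketch on simulated data, not the paper's procedure:

```python
import numpy as np

def breusch_pagan_lm(x, y):
    """Auxiliary-regression heteroskedasticity check (Breusch-Pagan style).

    Regress y on x, then regress the squared residuals on x;
    LM = n * R^2 of the auxiliary regression, which is large when the
    residual variance varies systematically with the regressor.
    """
    n = len(y)
    X = np.column_stack([np.ones(n), x])           # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u2 = (y - X @ beta) ** 2                       # squared residuals
    gamma, *_ = np.linalg.lstsq(X, u2, rcond=None)
    fitted = X @ gamma
    r2 = 1.0 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    return n * r2                                  # LM statistic

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 3.0, 500)
y_hom = 1.0 + 2.0 * x + rng.normal(size=500)      # constant error variance
y_het = 1.0 + 2.0 * x + x * rng.normal(size=500)  # error s.d. grows with x
lm_hom = breusch_pagan_lm(x, y_hom)
lm_het = breusch_pagan_lm(x, y_het)
```

Under homoskedasticity the statistic behaves like a small chi-squared draw; systematic variance growth inflates it. In practice one would compare it against a chi-squared critical value before switching to robust standard errors.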

Implementation of HMM Based Speech Recognizer with Medium Vocabulary Size Using TMS320C6201 DSP (TMS320C6201 DSP를 이용한 HMM 기반의 음성인식기 구현)

  • Jung, Sung-Yun;Son, Jong-Mok;Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.1E
    • /
    • pp.20-24
    • /
    • 2006
  • In this paper, we focused on the real-time implementation of a medium-vocabulary speech recognition system, considering its application to a mobile phone. First, we developed a PC-based variable-vocabulary word recognizer, keeping the program memory and the total size of the acoustic models as small as possible. To reduce the memory size of the acoustic models, linear discriminant analysis and phonetic tied-mixture modeling were applied in the feature selection process and in training the HMMs, respectively. In addition, a state-based Gaussian selection method with real-time cepstral normalization was used to reduce the computational load and for robust recognition. Then, we verified real-time operation of the implemented recognition system on the TMS320C6201 EVM board. The implemented system uses about 610 kbytes of memory, including both program memory and data memory. The recognition rate was 95.86% for the ETRI 445DB, and 96.4%, 97.92%, and 87.04% for three kinds of name databases collected through mobile phones.
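The decoding core of any HMM-based recognizer like the one described is the Viterbi search. A generic log-domain sketch on a toy two-state model (not the paper's medium-vocabulary models or the TMS320C6201 implementation):

```python
import numpy as np

def viterbi(log_a, log_b, log_pi):
    """Log-domain Viterbi search: the most likely state path given
    transition (log_a), emission (log_b, T x N), and initial (log_pi) scores."""
    T, N = log_b.shape
    delta = log_pi + log_b[0]               # best score ending in each state
    psi = np.zeros((T, N), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_a     # predecessor-by-successor scores
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_b[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# toy left-to-right model: state 0 emits "a"-like frames, state 1 "b"-like
log_a = np.log(np.array([[0.7, 0.3], [1e-9, 1.0 - 1e-9]]))
log_pi = np.log(np.array([0.99, 0.01]))
log_b = np.log(np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]))
best_path = viterbi(log_a, log_b, log_pi)   # expected: [0, 0, 1, 1]
```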

Watermarking for still images in the DCT domain (DCT 영역에서의 정지 영상 Watermarking)

  • 권오형;김영식;박래홍
    • Journal of Broadcast Engineering
    • /
    • v.4 no.1
    • /
    • pp.68-75
    • /
    • 1999
  • In this paper, we propose a digital watermarking method for still images in the discrete cosine transform (DCT) domain. An adaptive watermark insertion method in the high-frequency region of a given image is employed to increase the invisibility of the inserted watermark, in which a variable block-size method is used for selection of the high-frequency region. Experimental results show that the proposed watermarking method is robust to several common image processing techniques, including Joint Photographic Experts Group (JPEG) compression, lowpass filtering, multiple watermarking, and cropping.
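A minimal version of DCT-domain embedding with correlation detection can be sketched as below. This toy example marks a single 8x8 block with a fixed strength; the paper's adaptive, variable block-size selection of the high-frequency region is not reproduced:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix (inverse is its transpose)
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def embed(block, wm, alpha=20.0):
    """Add a pseudo-random watermark to the high-frequency DCT quadrant
    of one 8x8 block (simplified sketch, not the paper's adaptive scheme)."""
    C = dct_matrix(8)
    D = C @ block @ C.T
    D[4:, 4:] += alpha * wm          # modify high-frequency coefficients
    return C.T @ D @ C               # inverse 2-D DCT

def detect(block, wm):
    # normalized correlation between high-frequency coefficients and watermark
    C = dct_matrix(8)
    hf = (C @ block @ C.T)[4:, 4:].ravel()
    return hf @ wm.ravel() / (np.linalg.norm(hf) * np.linalg.norm(wm) + 1e-12)

rng = np.random.default_rng(7)
img = rng.uniform(0, 255, (8, 8))          # stand-in image block
wm = rng.choice([-1.0, 1.0], (4, 4))       # binary pseudo-random watermark
marked = embed(img, wm)
score_marked = detect(marked, wm)          # high correlation when mark present
score_clean = detect(img, wm)              # near-zero on unmarked block
```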

Control of SRM with Modified C-dump Converter in Cooling System of Automobiles (Modified C-dump 컨버터를 이용한 자동차 냉각시스템 SRM 제어)

  • Yoon, Yong-Ho
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.8
    • /
    • pp.1297-1302
    • /
    • 2017
  • Recently, SRMs have been used in automobiles for power-assisted steering, accessory motion control, and traction drives. Especially in motion control and traction drives, safety and efficiency are of paramount importance. The paper describes the essential elements faced in designing and constructing driving circuits for a switched reluctance motor for automobiles. An important factor in the selection of a motor and a drive for industrial applications is cost. The switched reluctance motor (SRM) is a simple, low-cost, and robust motor suitable for variable-speed as well as servo-type applications. With relatively simple converter and control requirements, the SRM is gaining increasing attention in the drive industry. This paper presents a modified C-dump converter for SRM application in the cooling system of automobiles. Experiments are performed to verify the applicability of the proposed control method on a 6/4 salient-type SRM.

Empirical variogram for achieving the best valid variogram

  • Mahdi, Esam;Abuzaid, Ali H.;Atta, Abdu M.A.
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.5
    • /
    • pp.547-568
    • /
    • 2020
  • Modeling the statistical autocorrelations in spatial data is often achieved through the estimation of variograms, where the selection of an appropriate valid variogram model, especially for small samples, is crucial for achieving precise spatial prediction results from kriging interpolation. To estimate such a variogram, we traditionally start by computing the empirical variogram (the traditional Matheron, the robust Cressie-Hawkins, or kernel-based nonparametric approaches). In this article, we conduct numerical studies comparing the performance of these empirical variograms. In most situations, the nonparametric empirical variable nearest-neighbor (VNN) variogram showed better performance than its competitors (Matheron, Cressie-Hawkins, and Nadaraya-Watson). The analysis of the spatial groundwater dataset used in this article suggests that the wave variogram model, with a hole-effect structure, fitted to the empirical VNN variogram is the most appropriate choice. This selected variogram is used with the ordinary kriging model to produce a predicted pollution map of the nitrate concentrations in the groundwater dataset.
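The contrast between the classical Matheron estimator and the robust Cressie-Hawkins estimator named in the abstract is easy to see at a single lag. An illustrative sketch on a 1-D transect with one outlier (the VNN and kernel estimators are not reproduced here):

```python
import numpy as np

def matheron(diffs):
    # classical estimator: half the mean of squared increments
    return 0.5 * np.mean(diffs ** 2)

def cressie_hawkins(diffs):
    # robust estimator: fourth power of the mean root-absolute increment,
    # bias-corrected by the (0.457 + 0.494/N) factor
    n = len(diffs)
    return 0.5 * np.mean(np.sqrt(np.abs(diffs))) ** 4 / (0.457 + 0.494 / n)

z = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 25.0, 1.1, 0.95])  # one gross outlier
d = z[1:] - z[:-1]                  # lag-1 increments along the transect
gamma_m = matheron(d)               # blown up by the squared outlier
gamma_ch = cressie_hawkins(d)       # far less inflated
```

The squared increments make the Matheron estimate orders of magnitude larger than the robust one here, which is exactly why robust empirical variograms matter for small, contaminated samples.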

Lasso Regression of RNA-Seq Data based on Bootstrapping for Robust Feature Selection (안정적 유전자 특징 선택을 위한 유전자 발현량 데이터의 부트스트랩 기반 Lasso 회귀 분석)

  • Jo, Jeonghee;Yoon, Sungroh
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.9
    • /
    • pp.557-563
    • /
    • 2017
  • When large-scale gene expression data are analyzed using lasso regression, the estimation of regression coefficients may be unstable due to the highly correlated expression values of associated genes. This irregularity, in which the coefficients are shrunk by L1 regularization, causes difficulty in variable selection. To address this problem, we propose a regression model that applies repeated bootstrapping to the gene expression values prior to lasso regression. The genes selected with high frequency were used to build each regression model. Our experimental results show that several genes were consistently selected in all regression models, and we verified that these genes were not false positives. We also identified that the sign distribution of the regression coefficients of the selected genes from each model was correlated with the real dependent variables.
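The bootstrap selection-frequency idea can be sketched as below. The coordinate-descent lasso, the penalty value, the 0.8 frequency threshold, and the simulated "expression" data are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Minimal coordinate-descent lasso (soft-thresholding each coefficient)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(1)
n, p = 120, 10
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)  # only features 0, 1 matter

B = 30
freq = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)              # bootstrap resample of the rows
    b = lasso_cd(X[idx], y[idx], lam=0.3)
    freq += (b != 0)                         # count selections per feature
freq /= B
stable = np.where(freq >= 0.8)[0]            # keep consistently selected features
```

Features selected in nearly every bootstrap replicate are treated as stable; features that appear only sporadically are discarded as likely artifacts of correlated noise.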

Sparse Adaptive Equalizer for ATSC DTV in Fast Fading Channels (고속페이딩 채널 극복을 위한 ATSC DTV용 스파스 적응 등화기)

  • Heo No-Ik;Oh Hae-Sock;Han Dong Seog
    • Journal of Broadcast Engineering
    • /
    • v.10 no.1 s.26
    • /
    • pp.4-13
    • /
    • 2005
  • An equalization algorithm is proposed to guarantee stable performance in fast fading channels for digital television (DTV) systems based on the Advanced Television Systems Committee (ATSC) standard. In channels with high Doppler shifts, the conventional equalization algorithm shows severe performance degradation. Although the conventional equalizer compensates for poor channel conditions to some degree, the long filter taps required to overcome long delay profiles are not suitable for fast fading channels. The proposed sparse equalization algorithm is robust to multipaths with long delay profiles as well as to fast fading, by utilizing channel estimation and equalizer initialization. It can compensate for fast fading channels with high Doppler shifts using a filter tap selection technique as well as variable step-sizes. Under the ATSC test channels, the proposed algorithm is analyzed and compared with the conventional equalizer. Although the proposed algorithm uses a small number of filter taps compared to the conventional equalizer, it is stable and has the advantages of fast convergence and channel tracking.

Doubly-robust Q-estimation in observational studies with high-dimensional covariates (고차원 관측자료에서의 Q-학습 모형에 대한 이중강건성 연구)

  • Lee, Hyobeen;Kim, Yeji;Cho, Hyungjun;Choi, Sangbum
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.3
    • /
    • pp.309-327
    • /
    • 2021
  • Dynamic treatment regimes (DTRs) are decision-making rules designed to provide personalized treatment to individuals in multi-stage randomized trials. Unlike classical methods, in which all individuals are prescribed the same type of treatment, DTRs prescribe patient-tailored treatments that take into account individual characteristics that may change over time. The Q-learning method, one of the regression-based algorithms for finding optimal treatment rules, has become popular because it can be easily implemented. However, the performance of the Q-learning algorithm relies heavily on the correct specification of the Q-function for the response, especially in observational studies. In this article, we examine a number of doubly-robust weighted least-squares estimation methods for Q-learning in high-dimensional settings, where treatment models for the propensity score and penalization for sparse estimation are also investigated. We further consider flexible ensemble machine learning methods for the treatment model to achieve double robustness, so that the optimal decision rule can be correctly estimated as long as at least one of the outcome model or the treatment model is correct. Extensive simulation studies show that the proposed methods work well with practical sample sizes. The practical utility of the proposed methods is demonstrated with a real data example.
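The double-robustness property, consistency when at least one of the outcome or treatment (propensity) models is correct, can be illustrated with a single-stage augmented-IPW estimator. The multi-stage Q-learning, penalization, and ensemble treatment models of the paper are beyond this sketch; the models and simulated data here are assumptions for illustration:

```python
import numpy as np

def fit_propensity(X, a, n_iter=200, lr=0.1):
    # simple gradient-ascent logistic regression for the treatment model
    Xc = np.column_stack([np.ones(len(a)), X])
    b = np.zeros(Xc.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xc @ b))
        b += lr * Xc.T @ (a - p) / len(a)
    return 1.0 / (1.0 + np.exp(-Xc @ b))

def fit_ols(X, y):
    # linear outcome model fitted within one treatment arm
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return lambda Xn: np.column_stack([np.ones(len(Xn)), Xn]) @ beta

def aipw_ate(X, A, Y):
    """Augmented-IPW (doubly-robust) average treatment effect: consistent
    if either the outcome models or the propensity model is correct."""
    e = fit_propensity(X, A)
    m1 = fit_ols(X[A == 1], Y[A == 1])(X)     # predicted outcome under treat
    m0 = fit_ols(X[A == 0], Y[A == 0])(X)     # predicted outcome under control
    dr1 = m1 + A * (Y - m1) / e
    dr0 = m0 + (1 - A) * (Y - m0) / (1 - e)
    return np.mean(dr1 - dr0)

rng = np.random.default_rng(11)
n = 2000
X = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-0.5 * X[:, 0]))          # confounded treatment
A = (rng.uniform(size=n) < p).astype(float)
Y = 1.0 + X @ np.array([1.0, -0.5]) + 2.0 * A + rng.normal(size=n)  # true effect 2
ate = aipw_ate(X, A, Y)   # close to the true effect of 2
```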

Estimation of Spatial Distribution Using the Gaussian Mixture Model with Multivariate Geoscience Data (다변량 지구과학 데이터와 가우시안 혼합 모델을 이용한 공간 분포 추정)

  • Kim, Ho-Rim;Yu, Soonyoung;Yun, Seong-Taek;Kim, Kyoung-Ho;Lee, Goon-Taek;Lee, Jeong-Ho;Heo, Chul-Ho;Ryu, Dong-Woo
    • Economic and Environmental Geology
    • /
    • v.55 no.4
    • /
    • pp.353-366
    • /
    • 2022
  • Spatial estimation of geoscience data (geo-data) is challenging due to spatial heterogeneity, data scarcity, and high dimensionality. A novel spatial estimation method is needed to consider the characteristics of geo-data. In this study, we proposed the application of the Gaussian Mixture Model (GMM), a machine learning algorithm, with multivariate data for robust spatial predictions. The performance of the proposed approach was tested on soil chemical concentration data from a former smelting area. The concentrations of As and Pb determined by ex-situ ICP-AES were the primary variables to be interpolated, while the other metal concentrations by ICP-AES and all data determined by in-situ portable X-ray fluorescence (PXRF) were used as auxiliary variables in GMM and ordinary cokriging (OCK). Among the multidimensional auxiliary variables, important variables were selected using a variable selection method based on the random forest. GMM with important multivariate auxiliary data decreased the root mean-squared error (RMSE) to 0.11 for As and 0.33 for Pb and increased the correlation (r) to 0.31 for As and 0.46 for Pb, compared to ordinary kriging and OCK using univariate or bivariate data. The use of GMM improved the performance of spatial interpolation of anthropogenic metals in soil. The multivariate spatial approach can be applied to understand complex and heterogeneous geological and geochemical features.
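At the heart of the GMM approach is expectation-maximization. A bare-bones 1-D, two-component EM sketch (the study itself is multivariate with auxiliary variables; this shows only the fitting principle):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Bare-bones EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])   # deterministic, well-separated init
    var = np.full(2, x.var())
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: reestimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(8.0, 1.0, 300)])
w, mu, var = em_gmm_1d(x)   # means recover approximately 0 and 8
```

The responsibilities computed in the E-step are what make the mixture usable for spatial prediction: each location can be assigned a soft cluster membership rather than a hard label.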

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and the feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of the base classifiers were also investigated.
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
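The random-subspace KNN ensemble this abstract builds on can be sketched as below. The genetic-algorithm optimization of k and the feature subsets is omitted, and the toy data, member count, and subspace size are illustrative assumptions:

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=5):
    # brute-force k-nearest-neighbour majority vote
    preds = []
    for x in Xte:
        d = np.sum((Xtr - x) ** 2, axis=1)
        nn = ytr[np.argsort(d)[:k]]
        preds.append(np.bincount(nn).argmax())
    return np.array(preds)

def random_subspace_knn(Xtr, ytr, Xte, n_members=15, subspace=4, k=5, seed=0):
    """Random-subspace ensemble: each KNN member sees a random feature
    subset; member predictions are combined by majority vote."""
    rng = np.random.default_rng(seed)
    p = Xtr.shape[1]
    votes = np.zeros((len(Xte), n_members), dtype=int)
    for m in range(n_members):
        feats = rng.choice(p, subspace, replace=False)   # random feature subset
        votes[:, m] = knn_predict(Xtr[:, feats], ytr, Xte[:, feats], k)
    return np.array([np.bincount(row).argmax() for row in votes])

# toy two-class data: classes differ only in the first two of ten features
rng = np.random.default_rng(3)
n = 200
X0 = rng.normal(0.0, 1.0, (n, 10))
X1 = rng.normal(0.0, 1.0, (n, 10))
X1[:, :2] += 3.0
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n, dtype=int), np.ones(n, dtype=int)]
idx = rng.permutation(2 * n)
Xtr, ytr = X[idx[:300]], y[idx[:300]]
Xte, yte = X[idx[300:]], y[idx[300:]]
acc = np.mean(random_subspace_knn(Xtr, ytr, Xte) == yte)
```

Because KNN is very sensitive to the feature space, the random subsets produce genuinely diverse members, and the majority vote recovers high accuracy even though some members see only noise features.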