• Title/Summary/Keyword: Data imputation


A Novel on Auto Imputation and Analysis Prediction Model of Data Missing Scope based on Machine Learning (머신러닝기반의 데이터 결측 구간의 자동 보정 및 분석 예측 모델에 대한 연구)

  • Jung, Se-Hoon;Lee, Han-Sung;Kim, Jun-Yeong;Sim, Chun-Bo
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.257-268 / 2022
  • When raw data contain missing values, ignoring them and proceeding with the analysis decreases accuracy because the sample size shrinks. Imputing missing values and then analyzing patterns and significant values compensates for the degraded analysis quality and accuracy caused by bias, rather than simply removing the missing values. In this study, we investigate irregular data patterns and missing-data processing methods using machine learning techniques for the correction of missing values. We propose replacing a missing value with data from a similar past point in time, identified from the situation at the time the missing data occurred. Unlike previous studies, the data correction technique presents new algorithms using DNN and KNN-MLE techniques. In the performance evaluation, the ANAE measurement showed an improvement of about 0.041 to 0.321 over the existing missing-section correction algorithm.
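The "similar past point in time" idea in this abstract can be illustrated with a minimal sketch. The paper's actual DNN and KNN-MLE algorithms are not public here, so the neighbour search below is a hypothetical simplification: a gap is described by its nearest observed neighbours, and the k closest past observations to that local context are averaged.

```python
def knn_impute(series, k=3):
    """Fill gaps (None) in a series with the mean of the k observed past
    values most similar to the gap's local context.

    Hypothetical sketch only: it illustrates the 'similar past point'
    idea, not the paper's DNN / KNN-MLE algorithms.
    """
    filled = list(series)
    for i, v in enumerate(series):
        if v is not None:
            continue
        # Describe the gap by its two nearest observed neighbours.
        left = next((series[j] for j in range(i - 1, -1, -1)
                     if series[j] is not None), None)
        right = next((series[j] for j in range(i + 1, len(series))
                      if series[j] is not None), None)
        context = [x for x in (left, right) if x is not None]
        if not context:
            continue
        target = sum(context) / len(context)
        # Rank observed past values by distance to the local context.
        candidates = sorted(
            (abs(x - target), x) for j, x in enumerate(series)
            if x is not None and j < i)
        neighbours = [x for _, x in candidates[:k]]
        if neighbours:
            filled[i] = sum(neighbours) / len(neighbours)
    return filled
```

For example, `knn_impute([10, 11, None, 12], k=2)` averages the two past values closest to the gap's local mean of 11.5.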

A two-sample test with interval censored competing risk data using multiple imputation (다중대체방법을 이용한 구간 중도 경쟁 위험 모형에서의 이표본 검정)

  • Kim, Yuwon;Kim, Yang-Jin
    • The Korean Journal of Applied Statistics / v.30 no.2 / pp.233-241 / 2017
  • Interval censored data frequently occur in observational studies where subjects are followed periodically. In this paper, our interest is to suggest a test statistic to compare the CIFs of two groups with interval censored failure time data in the presence of competing risks. Gray (1988) suggested a test statistic for right censored data that motivated the well-known Fine and Gray subdistribution hazard model. A multiple imputation technique is adopted to extend Gray's test statistic to interval censored data. The powers and sizes of the suggested method are investigated through diverse simulation schemes. The main merit of the suggested method is that it is simple to implement with existing software for right censored data. The method is illustrated by analyzing Bangkok's HIV cohort dataset.

A Modified Grey-Based k-NN Approach for Treatment of Missing Value

  • Chun, Young-M.;Lee, Joon-W.;Chung, Sung-S.
    • Journal of the Korean Data and Information Science Society / v.17 no.2 / pp.421-436 / 2006
  • Huang proposed a grey-based nearest neighbor approach to accurately predict missing attribute values in 2004. Our study proposes a way to decide the number of nearest neighbors using not only Deng's grey relational grade but also Wen's grey relational grade. In addition, our study uses a weighted mean rather than an arithmetic (unweighted) mean, and the GRG is used as a weight when we impute missing values. This yields four different methods: DU, DW, WU, and WW. The WW method (Wen's GRG with a weighted mean) performs best among them. Huang had proven that his method was much better than the mean imputation method and the multiple imputation method; the performance of our method is far superior to that of Huang's.
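As a rough illustration of GRG-weighted imputation, the sketch below computes Deng's grey relational grade (with the usual distinguishing coefficient ζ = 0.5) and uses it both to select and to weight donor records. Wen's GRG and the exact DU/DW/WU/WW variants from the paper are not reproduced; this is only a "WW-style" stand-in.

```python
def grg_weighted_impute(target, donors, k=2, zeta=0.5):
    """Impute the single missing attribute (None) in `target` using a
    GRG-weighted mean over the k most related donors.

    Sketch only: Deng's GRG stands in for Wen's, and all donors are
    assumed fully observed.
    """
    miss = target.index(None)
    obs = [j for j in range(len(target)) if j != miss]
    ref = [target[j] for j in obs]
    # Absolute difference matrix; Deng's dmin/dmax are global extrema.
    deltas = [[abs(r - d[j]) for r, j in zip(ref, obs)] for d in donors]
    flat = [x for row in deltas for x in row]
    dmin, dmax = min(flat), max(flat)
    # Grey relational grade: mean of per-attribute relational coefficients.
    grades = [sum((dmin + zeta * dmax) / (x + zeta * dmax) for x in row)
              / len(row) for row in deltas]
    # Keep the k donors with the highest grade and weight by grade.
    top = sorted(zip(grades, donors), key=lambda t: -t[0])[:k]
    total = sum(g for g, _ in top)
    return sum(g * d[miss] for g, d in top) / total
```

The weighted mean is what distinguishes this from the unweighted ("U") variants, where the k donors would simply be averaged.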


Denoising Self-Attention Network for Mixed-type Data Imputation (혼합형 데이터 보간을 위한 디노이징 셀프 어텐션 네트워크)

  • Lee, Do-Hoon;Kim, Han-Joon;Chun, Joonghoon
    • The Journal of the Korea Contents Association / v.21 no.11 / pp.135-144 / 2021
  • Recently, data-driven decision-making technology has become a key technology leading the data industry, and the machine learning technology behind it requires high-quality training datasets. However, real-world data contain missing values for various reasons, which degrades the performance of prediction models learned from such poor training data. Therefore, in order to build a high-performance model from real-world datasets, many studies on automatically imputing missing values in initial training data have been actively conducted. Many conventional machine learning-based imputation techniques for handling missing data are very time-consuming and cumbersome because they apply only to numeric columns or build an individual predictive model for each column. Therefore, this paper proposes a new data imputation technique called the 'Denoising Self-Attention Network (DSAN)', which can be applied to mixed-type datasets containing both numerical and categorical columns. DSAN can learn robust feature expression vectors by combining self-attention and denoising techniques, and can automatically impute multiple missing variables in parallel through multi-task learning. To verify the validity of the proposed technique, data imputation experiments were performed after arbitrarily generating missing values in several mixed-type training datasets. We then show the validity of the proposed technique by comparing the performance of binary classification models trained on the imputed data, together with the errors between the original and imputed values.

Sparse Web Data Analysis Using MCMC Missing Value Imputation and PCA Plot-based SOM (MCMC 결측치 대체와 주성분 산점도 기반의 SOM을 이용한 희소한 웹 데이터 분석)

  • Jun, Sung-Hae;Oh, Kyung-Whan
    • The KIPS Transactions: Part D / v.10D no.2 / pp.277-282 / 2003
  • Knowledge discovery from the web has been studied in many works. There are difficulties in using web logs as training data for efficient predictive models. In this paper, we study a method to eliminate sparseness from web log data and to perform web user clustering. Using missing value imputation by Bayesian inference via MCMC, the sparseness of web data is removed. Web user clustering is then performed using self-organizing maps based on a 3-D plot of principal components. Finally, using KDD Cup data, our experimental results show the problem-solving process and the performance evaluation.

Imputation method for missing data based on clustering and measure of property (군집화 및 특성도를 이용한 결측치 대체 방법)

  • Kim, Sunghyun;Kim, Dongjae
    • The Korean Journal of Applied Statistics / v.31 no.1 / pp.29-40 / 2018
  • There are various reasons for missing values when collecting data. Missing values influence the analysis and its results; consequently, various methods of processing missing values have been studied to solve the problem. In repeated measurement data, later time points may be affected by the value at the initial time point; however, existing methods do not impute missing values using this idea. Therefore, in this study we propose a new missing value imputation method that uses clustering at the initial time point of repeated measurement data together with the measure of property proposed by Kim and Kim (The Korean Communications in Statistics, 30, 463-473, 2017). We also applied Monte Carlo simulations to compare the performance of the established and suggested methods on repeated measurement data.
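A minimal sketch of the cluster-then-impute idea (1-D k-means on the initial time point, then cluster means for later time points) might look as follows. The "measure of property" weighting from Kim and Kim is omitted, so this is only a structural illustration under the assumption that the first time point is always observed.

```python
def cluster_impute(records, n_clusters=2, iters=20):
    """Impute missing repeated measurements (None) using clusters formed
    on the first time point, which is assumed observed for every subject.

    Simplified stand-in for the paper's method: plain 1-D k-means plus
    cluster means, without the measure-of-property weighting.
    """
    first = [r[0] for r in records]
    lo, hi = min(first), max(first)
    # Spread the initial centers evenly over the observed range.
    centers = [lo + (hi - lo) * i / (n_clusters - 1)
               for i in range(n_clusters)]
    for _ in range(iters):
        labels = [min(range(n_clusters), key=lambda c: abs(v - centers[c]))
                  for v in first]
        for c in range(n_clusters):
            members = [v for v, l in zip(first, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    # Impute each gap with its cluster's mean at that time point.
    imputed = [list(r) for r in records]
    for i, r in enumerate(records):
        for t, v in enumerate(r):
            if v is None:
                vals = [records[j][t] for j, l in enumerate(labels)
                        if l == labels[i] and records[j][t] is not None]
                if vals:
                    imputed[i][t] = sum(vals) / len(vals)
    return imputed
```

A subject whose initial value places it in the "low" cluster thus borrows later-time-point values only from similarly low-starting subjects.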

Imputation Model for Link Travel Speed Measurement Using UTIS (UTIS 구간통행속도 결측치 보정모델)

  • Ki, Yong-Kul;Ahn, Gye-Hyeong;Kim, Eun-Jeong;Bae, Kwang-Soo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.6 / pp.63-73 / 2011
  • Travel speed is an important parameter for measuring road traffic. UTIS (Urban Traffic Information System) was developed as a mobile detector for measuring link travel speeds in South Korea. After investigation, we found that UTIS data include some missing values caused by the lack of probe vehicles on road segments, system failures, and other factors. Imputation is the practice of filling in missing data with estimated values. In this paper, we suggest a new model for imputing missing data to provide accurate link travel speeds to the public. In the field test, the new model showed a travel speed measuring accuracy of 93.6%. Therefore, it can be concluded that the proposed model significantly improves travel speed measuring accuracy.

Enhancement of durability of tall buildings by using deep-learning-based predictions of wind-induced pressure

  • K.R. Sri Preethaa;N. Yuvaraj;Gitanjali Wadhwa;Sujeen Song;Se-Woon Choi;Bubryur Kim
    • Wind and Structures / v.36 no.4 / pp.237-247 / 2023
  • The emergence of high-rise buildings has necessitated frequent structural health monitoring and maintenance for safety reasons. Wind causes damage and structural changes on tall structures; thus, safe structures should be designed. The pressure developed on tall buildings has been utilized in previous research studies to assess the impacts of wind on structures. The wind tunnel test is a primary research method commonly used to quantify the aerodynamic characteristics of high-rise buildings. Wind pressure is measured by placing pressure sensor taps at different locations on tall buildings, and the collected data are used for analysis. However, sensors may malfunction and produce erroneous data; these data losses make it difficult to analyze aerodynamic properties. Therefore, it is essential to reconstruct missing data relative to the original data obtained from neighboring pressure sensor taps at various intervals. This study proposes a deep learning-based deep convolutional generative adversarial network (DCGAN) to restore missing data associated with faulty pressure sensors installed on high-rise buildings. The performance of the proposed DCGAN is validated against a standard imputation model known as the generative adversarial imputation network (GAIN). The average mean-square error (AMSE) and average R-squared (ARSE) are used as performance metrics. The ARSE values calculated by DCGAN on the building model's front, back, left, and right sides are 0.970, 0.972, 0.984, and 0.978, respectively. The AMSE produced by DCGAN on the four sides of the building model is 0.008, 0.010, 0.015, and 0.014. The average standard deviations of the actual pressure sensor measurements on the four sides of the model were 0.1738, 0.1758, 0.2234, and 0.2278. The average standard deviations of the pressure values generated by the proposed DCGAN imputation model were closer to those of the actual measurements, with values of 0.1736, 0.1746, 0.2191, and 0.2239 on the four sides, respectively. In comparison, the standard deviations of the values predicted by GAIN are 0.1726, 0.1735, 0.2161, and 0.2209, which are farther from the actual values. The results demonstrate that the DCGAN model fits data imputation better than the GAIN model, with improved accuracy and fewer errors. Additionally, the DCGAN is used to estimate the wind pressure in regions of the building where no pressure sensor taps are available; the model yielded greater prediction accuracy than GAIN.

A Concordance Study of the Preprocessing Orders in Microarray Data (마이크로어레이 자료의 사전 처리 순서에 따른 검색의 일치도 분석)

  • Kim, Sang-Cheol;Lee, Jae-Hwi;Kim, Byung-Soo
    • The Korean Journal of Applied Statistics / v.22 no.3 / pp.585-594 / 2009
  • Researchers of microarray experiments transform processed images of raw data into data suitable for statistical analysis: this is preprocessing. Microarray preprocessing includes image filtering, imputation, and normalization. Several different methods of normalization and imputation have been studied, but the order of the procedures has not; there is no further guidance on whether normalization or imputation should come first. This study examines the identification of differentially expressed genes (DEGs) according to the order of the preprocessing steps, using two-dye cDNA microarrays for colon cancer and gastric cancer. That is, we compare which combinations of imputation and normalization steps can detect the DEGs. We used imputation methods (K-nearest neighbor, Bayesian principal component analysis) and normalization methods (global, within-print-tip group, variance stabilization), giving 12 preprocessing combinations. We measured the concordance of the DEGs identified across the datasets to which the 12 different preprocessing orders were applied. When variance stabilization was used as the normalization method, the sensitivity of DEG detection showed little variation.

Filling in Hydrological Missing Data Using Imputation Methods (Imputation Method를 활용한 수문 결측자료의 보정)

  • Kang, Tae-Ho;Hong, Il-Pyo;Kim, Young-Oh
    • Proceedings of the Korea Water Resources Association Conference / 2009.05a / pp.1254-1259 / 2009
  • Historically observed hydrological data are used, through analysis, to evaluate and predict various hydrological models and to inform water resources policy decisions. However, collected data contain missing values due to malfunctioning observation equipment and the limits of the observation range. Simply excluding vectors that contain missing values, or filling them with a mean under the assumption that the interval containing the missing data is linear, can distort the statistical characteristics of the data. This study investigates ways to minimize the loss and distortion of the information the data carry when correcting missing values. Missing data are broadly classified as missing completely at random (MCAR), missing at random (MAR), and nonrandom missingness. Although the period containing missing values in hydrological data is not statistically identical to other periods, such data generally belong to MAR, where the missing values can be estimated, so the correction was performed under this assumption. Local Least Squares Imputation (LLSimpute) was used to estimate the missing values and was compared with the commonly used linear interpolation. To assess applicability, 1-5% missing data were randomly generated in the daily inflow series of Soyanggang Dam. For each amount of missing data, 100 sets were used to compare and evaluate the uncertainty range of the correction for each method, and the change in correction performance with increasing missingness was examined. Evaluating the two methods with the Normalized Root Mean Squared Error (NRMSE) showed that (1) at low missing-data ratios, simple linear interpolation was the more effective correction; (2) as the missing ratio increased, linear interpolation showed progressively larger uncertainty and lower correction performance; while (3) LLSimpute exhibited consistent correction performance and a stable uncertainty range regardless of the increase in missingness.
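The baseline linear interpolation and the NRMSE score used in the comparison can be sketched as follows; LLSimpute itself, which estimates each gap by local least squares regression on similar series, is not reproduced here.

```python
def linear_interpolate(series):
    """Fill None gaps by straight-line interpolation between the nearest
    observed neighbours (the baseline method in the study)."""
    out = list(series)
    for i, v in enumerate(series):
        if v is not None:
            continue
        l = next((j for j in range(i - 1, -1, -1)
                  if series[j] is not None), None)
        r = next((j for j in range(i + 1, len(series))
                  if series[j] is not None), None)
        if l is not None and r is not None:
            w = (i - l) / (r - l)
            out[i] = series[l] * (1 - w) + series[r] * w
    return out

def nrmse(truth, filled, mask):
    """NRMSE over the imputed positions in `mask`, normalized by the
    range of the true series."""
    err = [(truth[i] - filled[i]) ** 2 for i in mask]
    rmse = (sum(err) / len(err)) ** 0.5
    return rmse / (max(truth) - min(truth))
```

Note that linear interpolation is exact wherever the true series is locally linear, which is consistent with the finding that it performs well at low missing-data ratios.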
