• Title/Summary/Keyword: Data imputation


A Study of Labor Entry of Conditional Welfare Recipients : An Exploration of the Predictors (취업대상 조건부수급자의 경제적 자활로의 진입에 영향을 미치는 요인에 관한 연구)

  • Kim, Kyo-Seong; Kang, Chul-Hee
    • Korean Journal of Social Welfare, v.52, pp.5-32, 2003
  • This paper examines the labor entry of conditional welfare recipients, focusing on two questions: first, what percentage of conditional welfare recipients achieve labor entry? Second, what are the predictors of labor entry and of the duration to entry? Using data on 917 welfare recipients who participated in the self-sufficiency programs of the Offices for Secure Employment in Seoul, this paper attempts to answer these questions. Logistic regression analysis and survival analysis are adopted to identify variables predicting labor entry, and a multiple imputation method is used to deal with missing values in some variables. The major findings are as follows: about 43.8% of the conditional welfare recipients achieved successful labor entry, and gender, household, information and referral services for employment, health, and willingness for self-sufficiency are statistically significant predictors of both labor entry and the duration to entry. Among these variables, health and willingness for self-sufficiency are the most noticeable; programs that care for the health of welfare recipients seeking labor entry, and counseling programs that strengthen their willingness to work, are very important for successful labor entry. This paper provides basic knowledge about the realities of conditional welfare recipients' labor entry, identifies areas for further research, and develops policy implications for recipients' self-sufficiency.
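
The paper does not publish its imputation code; the following is a minimal sketch of the general multiple-imputation-then-model workflow it describes, using scikit-learn's IterativeImputer on synthetic stand-in data. All variable names and dimensions are illustrative assumptions.

```python
# A hedged sketch of multiple imputation followed by logistic regression,
# in the spirit of the study's approach; data and columns are synthetic.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(917, 5))            # stand-in for recipient covariates
X[rng.random(X.shape) < 0.1] = np.nan    # inject ~10% missingness
y = rng.integers(0, 2, size=917)         # stand-in for labor-entry outcome

# Draw several imputed datasets and pool the fitted coefficients,
# which is the essence of multiple imputation.
coefs = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    X_imp = imputer.fit_transform(X)
    model = LogisticRegression(max_iter=1000).fit(X_imp, y)
    coefs.append(model.coef_.ravel())
pooled = np.mean(coefs, axis=0)  # a full analysis would also pool variances (Rubin's rules)
print(pooled)
```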


An Intelligent Framework for Feature Detection and Health Recommendation System of Diseases

  • Mavaluru, Dinesh
    • International Journal of Computer Science & Network Security, v.21 no.3, pp.177-184, 2021
  • All over the world, people are affected by many chronic diseases, and medical practitioners are working hard to identify the symptoms of and remedies for these diseases. Many researchers focus on feature detection for disease and on building better health recommendation systems; detecting features automatically is necessary to provide the most relevant solution for a disease. This research presents a Health Recommendation System (HRS) framework for identifying relevant, non-redundant features in a dataset for the prediction and recommendation of diseases. The system consists of three phases: pre-processing, feature selection, and performance evaluation. Missing and noisy data are handled by the proposed imputation of missing data and noise detection based pre-processing algorithm (IMDNDP), and features are selected from the pre-processed dataset by the proposed ensemble-based feature selection using expert knowledge (EFS-EK). Detecting and monitoring diseases manually is difficult and time-consuming and requires domain expertise. Finally, prediction and recommendation are performed using Support Vector Machine (SVM) and rule-based approaches.
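
IMDNDP and EFS-EK are the paper's own algorithms and are not available as code; this sketch only illustrates the three-phase pipeline shape with standard scikit-learn stand-ins (mean imputation, univariate feature selection, SVM) on a placeholder dataset.

```python
# A minimal stand-in for the paper's pre-processing -> feature selection -> SVM
# pipeline; the actual IMDNDP and EFS-EK algorithms are replaced by
# off-the-shelf components for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in clinical dataset

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),   # phase 1: pre-processing
    ("select", SelectKBest(f_classif, k=10)),     # phase 2: feature selection
    ("clf", SVC(kernel="rbf")),                   # phase 3: SVM prediction
])
print(cross_val_score(pipeline, X, y, cv=5).mean())  # phase 3: evaluation
```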

Exploiting Patterns for Handling Incomplete Coevolving EEG Time Series

  • Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, Sun-Hee
    • International Journal of Contents, v.9 no.4, pp.1-10, 2013
  • The electroencephalogram (EEG) time series is a measure of electrical activity received from multiple electrodes placed on the scalp, providing a direct measurement for characterizing the dynamic aspects of brain activity. EEG signals form a series of spatial and temporal data with multiple dimensions, and missing data can occur due to faulty electrodes. Missing data can cause distortion and reduce the effectiveness of analysis algorithms, while current methodologies for EEG analysis require a complete EEG data matrix as input. An accurate and reliable imputation approach for missing values is therefore necessary to avoid incomplete data sets and to improve the performance of analysis techniques. This research proposes a new method to automatically recover random consecutive missing data from real-world EEG data based on a Linear Dynamical System. The proposed method aims to capture the optimal patterns based on two main characteristics of coevolving EEG time series: (i) dynamics, via discovering temporal evolving behaviors, and (ii) correlations, by identifying the relationships between multiple brain signals. From these, the proposed method identifies a few hidden variables and discovers their dynamics to impute missing values. It offers a robust and scalable approach with computation time linear in the length of the sequences. A comparative study assesses the effectiveness of the proposed method against interpolation and missing values via Singular Value Decomposition (MSVD); the experimental simulations demonstrate that the proposed method improves reconstruction performance by up to 49% and 67% over the MSVD and interpolation approaches, respectively.
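
The paper's hidden-variable discovery is richer than this, but the core idea of imputing a consecutive gap with a Linear Dynamical System can be sketched with pykalman's EM-fitted Kalman smoother and a masked array marking the missing block; the toy two-channel signal and the choice of four hidden states are assumptions.

```python
# A minimal LDS imputation sketch: learn dynamics by EM on observations with a
# masked gap, then smooth through the gap and project back to signal space.
import numpy as np
from pykalman import KalmanFilter

rng = np.random.default_rng(0)
t = np.arange(500)
eeg = np.column_stack([np.sin(0.05 * t), np.cos(0.05 * t)])  # toy 2-channel signal
eeg += 0.05 * rng.normal(size=eeg.shape)

obs = np.ma.asarray(eeg)
obs[100:140] = np.ma.masked  # a consecutive missing block, as in the paper

kf = KalmanFilter(n_dim_state=4, n_dim_obs=2)   # a few hidden variables
kf = kf.em(obs, n_iter=5)                       # learn the dynamics by EM
smoothed_state, _ = kf.smooth(obs)              # infer states through the gap
reconstruction = smoothed_state @ kf.observation_matrices.T
print(reconstruction[100:140])                  # imputed values for the gap
```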

Probability Estimation Method for Imputing Missing Values in Data Expansion Technique (데이터 확장 기법에서 손실값을 대치하는 확률 추정 방법)

  • Lee, Jong Chan
    • Journal of the Korea Convergence Society, v.12 no.11, pp.91-97, 2021
  • This paper uses a data expansion technique, originally designed for the rule refinement problem, to handle incomplete data. The technique is characterized in that each event can carry a weight indicating its importance, and each variable can be expressed as a probability value. Since the key problem is to find the probability closest to the missing value and replace the missing value with that probability, three different algorithms are used to estimate the probability for a missing value and store it in this data structure format. For evaluation, each probability structure is used to train the SVM classification algorithm to classify each information area, and the result is compared with the original information to measure how well they match. The three algorithms for estimating the imputation probability of a missing value share the same data structure but differ in their approach, so they are expected to be usable for various purposes depending on the application field.
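
The abstract does not specify the three algorithms, so the following is only one plausible variant of the described representation: a missing categorical value is replaced by a weighted empirical probability vector over the variable's levels, matching the event-weight and probability-value format of the data expansion technique. The encoding convention (-1 for missing) is an assumption.

```python
# A hedged sketch of probability-valued imputation in a data-expansion format:
# the missing value becomes a probability vector, not a single hard value.
import numpy as np

def impute_as_probability(values, weights, n_levels):
    """Estimate a probability vector over a variable's levels from the
    observed (non-missing) values, weighted by event importance."""
    probs = np.zeros(n_levels)
    for v, w in zip(values, weights):
        if v >= 0:                 # convention: -1 marks a missing value
            probs[v] += w
    return probs / probs.sum()

# Toy events: variable levels {0, 1, 2}, -1 = missing, one weight per event.
observed = [0, 2, 2, -1, 1, 2]
weights = [1.0, 0.5, 1.0, 1.0, 2.0, 1.0]
print(impute_as_probability(observed, weights, n_levels=3))
# The event with the missing value is then represented by this probability
# vector in the expanded data structure.
```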

Development of Truck Axle Load Estimation Model Using Weigh-In-Motion Data (WIM 자료를 활용한 화물차량의 축중량 추정 모형 개발에 관한 연구)

  • Oh, Ju Sam
    • KSCE Journal of Civil and Environmental Engineering Research, v.31 no.4D, pp.511-518, 2011
  • Truck weight data are essential for road infrastructure design, maintenance, and management. WIM (Weigh-In-Motion) systems provide highway planners, researchers, and officials with statistical data, and recently high-speed WIM data have also been used to support vehicle weight regulation and enforcement activities. This paper develops axle load estimation models from high-speed WIM data collected on national highways and suggests a method to estimate axle load using a simple regression model for WIM systems. The proposed model yielded better axle load estimates for all vehicle classes than the conventional model. The developed model will be used in ongoing and re-calibration procedures to ensure an adequate level of WIM system performance, and can also be used for imputing missing axle load data in the future.
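
The paper's per-class regression coefficients are not reproduced here; this is a minimal sketch of the simple-regression idea on synthetic WIM-like records, with an assumed relationship between gross vehicle weight and axle load.

```python
# A hedged sketch: fit a simple linear model of axle load on gross weight,
# then use it to fill in a missing axle-load record. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
gross_weight = rng.uniform(5, 40, size=200)               # tonnes, synthetic WIM records
axle_load = 0.35 * gross_weight + rng.normal(0, 0.5, 200)  # assumed relationship

model = LinearRegression().fit(gross_weight.reshape(-1, 1), axle_load)
print(model.coef_, model.intercept_)

# The fitted model can then impute a missing axle-load record:
print(model.predict(np.array([[25.0]])))  # estimated axle load for a 25 t truck
```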

Incomplete data handling technique using decision trees (결정트리를 이용하는 불완전한 데이터 처리기법)

  • Lee, Jong Chan
    • Journal of the Korea Convergence Society, v.12 no.8, pp.39-45, 2021
  • This paper discusses how to handle incomplete data, including missing values. Optimally processing a missing value means obtaining, from the information contained in the training data, an estimate as close as possible to the original value, and replacing the missing value with this estimate. The way to achieve this is to use the decision tree completed by the classifier in the process of classifying information. That is, the decision tree is obtained by training the C4.5 classifier only on the complete records, those without missing values, among all training data. The nodes of this decision tree carry classification variable information: nodes closer to the root contain more information, and each leaf node forms a classification region via its path from the root. In addition, the average of the classified data events is recorded in each region. An event containing a missing value is input to this decision tree, and the region closest to the event is found by traversing the tree according to the information at each node. The average value recorded in that region is taken as the estimate of the missing value, completing the imputation process.
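
The paper uses C4.5; as a hedged approximation, the same leaf-mean idea can be sketched with a scikit-learn regression tree trained on the complete rows to predict the column that is missing, since each leaf of a regression tree stores the mean of the training targets routed into it.

```python
# A sketch of leaf-mean imputation: train a tree on complete rows only, then
# route rows with a missing value to a leaf and take that leaf's mean.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
X[:, 3] = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)
X[rng.random(300) < 0.2, 3] = np.nan      # 20% missing in the last column

complete = ~np.isnan(X[:, 3])
tree = DecisionTreeRegressor(max_depth=4)  # each leaf records a region mean
tree.fit(X[complete, :3], X[complete, 3])  # learn from complete rows only

X[~complete, 3] = tree.predict(X[~complete, :3])  # impute with leaf means
```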

Personalized Data Restoration Algorithm to Improve Wearable Device Service (웨어러블 디바이스 서비스 향상을 위한 개인 맞춤형 데이터 복원 알고리즘)

  • Kikun Park; Hye-Rim Bae
    • The Journal of Bigdata, v.6 no.2, pp.51-60, 2021
  • The market for wearable devices is growing rapidly every year, and manufacturers around the world are introducing products that exploit the devices' unique characteristics to keep up with demand. Among them, smart watches are wearable devices with a very high share of sales, and they provide a variety of services to users based on information collected in real time. The quality of service depends on the accuracy of the data collected by the smart watch, but measurement may not always be possible depending on the situation. This paper introduces a method to restore data that a smart watch failed to collect. It deals with a similarity calculation method for trajectory information measured over time and introduces a procedure for restoring missing sections according to that similarity. To demonstrate the performance of the proposed methodology, a comparative experiment with a machine learning algorithm was conducted. Finally, the expected effects of this study and future research directions are discussed.
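
The paper's similarity measure and restoration procedure are not given in detail in the abstract; this sketch illustrates only the general idea of finding the historical trajectory most similar on the observed portion and copying its values into the missing section. The Euclidean similarity and the synthetic trajectories are assumptions.

```python
# A hedged sketch of similarity-based gap restoration for a 1-D trajectory.
import numpy as np

def restore_gap(partial, history, gap):
    """partial: trajectory with NaNs inside `gap`; history: past trajectories."""
    observed = ~np.isnan(partial)
    # similarity = (negative) Euclidean distance over the observed samples
    dists = [np.linalg.norm(partial[observed] - h[observed]) for h in history]
    best = history[int(np.argmin(dists))]
    restored = partial.copy()
    restored[gap] = best[gap]          # fill the gap from the best match
    return restored

rng = np.random.default_rng(0)
history = [np.sin(np.linspace(0, 6, 100) + p) for p in rng.uniform(0, 1, 20)]
today = np.sin(np.linspace(0, 6, 100) + 0.4)
gap = slice(40, 55)
partial = today.copy()
partial[gap] = np.nan
print(np.abs(restore_gap(partial, history, gap)[gap] - today[gap]).max())
```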

Variational Mode Decomposition with Missing Data (결측치가 있는 자료에서의 변동모드분해법)

  • Choi, Guebin; Oh, Hee-Seok; Lee, Youngjo; Kim, Donghoh; Yu, Kyungsang
    • The Korean Journal of Applied Statistics, v.28 no.2, pp.159-174, 2015
  • Dragomiretskiy and Zosso (2014) developed a new decomposition method, termed variational mode decomposition (VMD), which is efficient for tone detection and the separation of signals. However, VMD may be inefficient in the presence of missing data since it is based on the fast Fourier transform (FFT) algorithm. To overcome this problem, we propose a new approach based on a novel combination of VMD and the hierarchical (or h-) likelihood method. The h-likelihood provides an effective imputation methodology for missing data when VMD decomposes the signal into several meaningful modes. A simulation study and real data analysis demonstrate that the proposed method produces substantially effective results.
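
The h-likelihood machinery is not reproduced here; as a simplified stand-in, the sketch below alternates interpolation-based imputation with VMD reconstruction in an EM-flavored loop. It assumes the vmdpy package, whose VMD() returns the decomposed modes; the signal, gap, and VMD parameters are all illustrative.

```python
# A hedged impute-then-decompose loop: initialize the gap by interpolation,
# decompose with VMD, and refresh the gap from the summed modes.
import numpy as np
from vmdpy import VMD

t = np.linspace(0, 1, 1000)
signal = np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 40 * t)
x = signal.copy()
missing = slice(300, 340)
x[missing] = np.nan

# initialize the gap by linear interpolation
idx = np.arange(len(x))
obs = ~np.isnan(x)
x[missing] = np.interp(idx[missing], idx[obs], x[obs])

for _ in range(5):  # alternate: decompose, then refresh the gap
    u, _, _ = VMD(x, alpha=2000, tau=0.0, K=2, DC=0, init=1, tol=1e-7)
    x[missing] = u.sum(axis=0)[missing]  # summed modes reconstruct the signal

print(np.abs(x[missing] - signal[missing]).max())
```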

Store Sales Prediction Using Gradient Boosting Model (그래디언트 부스팅 모델을 활용한 상점 매출 예측)

  • Choi, Jaeyoung; Yang, Heeyoon; Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.2, pp.171-177, 2021
  • With the rapid development of machine learning, diverse applications have emerged not only in industrial fields but also in daily life, and implementations of machine learning on financial data have also been of interest. Herein, we apply machine learning algorithms to store sales data and present future applications for fintech enterprises. We use diverse missing data processing methods to handle missing values and apply gradient boosting algorithms (XGBoost, LightGBM, CatBoost) to predict the future revenue of individual stores. As a result, we found that median imputation of missing data combined with the XGBoost algorithm gives the best accuracy. By employing the proposed method, fintech enterprises and customers alike can benefit: stores can receive financial assistance in advance from fintech companies, while those corporations can offer financial support to stores at low risk.
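
A minimal sketch of the winning combination reported here, median imputation followed by an XGBoost regressor, on synthetic stand-in data; the features, target, and hyperparameters are placeholders, not the paper's store-sales setup.

```python
# A hedged sketch: median-impute missing entries, then fit XGBoost.
import numpy as np
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # stand-in store features
X[rng.random(X.shape) < 0.15] = np.nan          # 15% missing entries
y = 3.0 * np.nan_to_num(X[:, 0]) + rng.normal(size=1000)  # stand-in revenue

X_imp = SimpleImputer(strategy="median").fit_transform(X)
model = XGBRegressor(n_estimators=200, learning_rate=0.1)
model.fit(X_imp, y)
print(model.predict(X_imp[:5]))                 # predicted revenue for 5 stores
```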

Smoothed RSSI-Based Distance Estimation Using Deep Neural Network (심층 인공신경망을 활용한 Smoothed RSSI 기반 거리 추정)

  • Hyeok-Don Kwon; Sol-Bee Lee; Jung-Hyok Kwon; Eui-Jik Kim
    • Journal of Internet of Things and Convergence, v.9 no.2, pp.71-76, 2023
  • In this paper, we propose a smoothed received signal strength indicator (RSSI)-based distance estimation scheme using a deep neural network (DNN) for accurate distance estimation in an environment where a single receiver is used. The proposed scheme performs data preprocessing consisting of data splitting, missing value imputation, and smoothing steps to improve distance estimation accuracy, thereby deriving smoothed RSSI values. The smoothed RSSI values are used as input to a Multi-Input Single-Output (MISO) DNN model and are returned as an estimated distance by the output layer after passing through the input and hidden layers. To verify the superiority of the proposed scheme, we compared its performance with that of a linear regression-based distance estimation scheme. As a result, the proposed scheme showed 29.09% higher distance estimation accuracy than the linear regression-based scheme.
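
The paper's exact network configuration is not given in the abstract; the sketch below only illustrates the described pipeline shape: impute missing RSSI samples, smooth with a moving average, then feed a small MISO network. The log-distance RSSI model, window sizes, and layer widths are all assumptions.

```python
# A hedged sketch of the preprocessing-then-DNN pipeline on synthetic RSSI.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
dist = rng.uniform(1, 10, size=2000)
rssi = -40 - 20 * np.log10(dist) + rng.normal(0, 2, size=2000)  # assumed log-distance model
rssi[rng.random(2000) < 0.05] = np.nan

# preprocessing: missing-value imputation, then moving-average smoothing
idx = np.arange(len(rssi))
obs = ~np.isnan(rssi)
rssi[~obs] = np.interp(idx[~obs], idx[obs], rssi[obs])
window = 10
smoothed = np.convolve(rssi, np.ones(window) / window, mode="valid")

X = np.lib.stride_tricks.sliding_window_view(smoothed, 5)  # MISO: 5 inputs
y = dist[window - 1 + 4 : window - 1 + 4 + len(X)]         # aligned distances

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                    # single output: distance
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
```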