• Title/Summary/Keyword: synthetic data sampling

Search results: 45

Predicting Reports of Theft in Businesses via Machine Learning

  • JungIn, Seo;JeongHyeon, Chang
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.499-510 / 2022
  • This study examines the reporting factors of crime against businesses in Korea and proposes a corresponding predictive model using machine learning. While many previous studies have focused on the individual factors of theft victims, there is a lack of evidence on the reporting factors for crimes against businesses that serve the public good, as opposed to those that protect private property. We therefore propose a crime prevention model for the factors behind the willingness to report theft in businesses. This study used data collected through the 2015 Commercial Crime Damage Survey conducted by the Korea Institute for Criminal Policy and analyzed data from 834 businesses that had experienced theft during a 2016 crime investigation. The data exhibited a class imbalance problem; to solve it, we jointly applied the Synthetic Minority Over-sampling Technique (SMOTE) and the Tomek link technique to the training data. Two kinds of prediction models were implemented. One was a statistical model using logistic regression and an elastic net. The other comprised a support vector machine model, tree-based machine learning models (e.g., random forest, extreme gradient boosting), and a stacking model. As a result, the features of theft price, invasion, and remedy, which are known to have significant effects on the reporting of theft offences, can be predicted as determinants of such offences in companies. Finally, we verified and compared the proposed predictive models using several popular metrics. Based on our evaluation of the importance of the features used in each model, we suggest a more accurate criterion for predicting var.
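The SMOTE-plus-Tomek-link preprocessing described above can be sketched in miniature. This is an illustrative pure-Python version, not the study's implementation (which presumably used a library); the neighbour count `k`, the Euclidean metric, and the assumption that label 0 is the majority class are all choices made here for illustration:

```python
import math
import random

def smote(minority, n_new, k=3, rng=None):
    """Generate n_new synthetic minority points by interpolating between
    a random minority point and one of its k nearest minority neighbours."""
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neigh = sorted((q for q in minority if q is not p),
                       key=lambda q: math.dist(p, q))[:k]
        q = rng.choice(neigh)
        t = rng.random()
        synthetic.append(tuple(pi + t * (qi - pi) for pi, qi in zip(p, q)))
    return synthetic

def tomek_links(points, labels):
    """Indices of majority points in Tomek links: pairs that are mutual
    nearest neighbours yet carry opposite labels (label 0 assumed majority)."""
    def nn(i):
        return min((j for j in range(len(points)) if j != i),
                   key=lambda j: math.dist(points[i], points[j]))
    links = set()
    for i in range(len(points)):
        j = nn(i)
        if labels[i] != labels[j] and nn(j) == i:
            links.add(i if labels[i] == 0 else j)  # drop the majority side
    return links
```

In the combined scheme, SMOTE is applied first to grow the minority class, then the Tomek-link indices are dropped to clean the class boundary.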

A Scalable Clustering Method for Categorical Sequences (범주형 시퀀스들에 대한 확장성 있는 클러스터링 방법)

  • Oh, Seung-Joon;Kim, Jae-Yearn
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.136-141 / 2004
  • There has been enormous growth in the amount of commercial and scientific data, such as retail transactions, protein sequences, and web-logs. Such datasets consist of sequence data that have an inherent sequential nature. However, few clustering algorithms consider sequentiality. In this paper, we study how to cluster sequence datasets. We propose a new similarity measure to compute the similarity between two sequences. We also present an efficient method for determining the similarity measure and develop a clustering algorithm. Due to the high computational complexity of hierarchical clustering algorithms for clustering large datasets, a new clustering method is required. Therefore, we propose a new scalable clustering method using sampling and a k-nearest-neighbor method. Using a real dataset and a synthetic dataset, we show that the quality of clusters generated by our proposed approach is better than that of clusters produced by traditional algorithms.
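The abstract does not reproduce the paper's similarity measure, but a common way to make a measure sequentiality-aware, offered here purely as a stand-in, is to normalise the longest common subsequence by the longer sequence:

```python
def lcs_length(a, b):
    # classic dynamic-programming longest common subsequence
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def seq_similarity(a, b):
    """Similarity in [0, 1]: longest common subsequence length
    normalised by the longer of the two sequences."""
    if not a and not b:
        return 1.0
    return lcs_length(a, b) / max(len(a), len(b))
```

Unlike a set-based measure, this rewards items appearing in the same order, which is the property the clustering of retail transactions or web-logs needs.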

Enhancing Malware Detection with TabNetClassifier: A SMOTE-based Approach

  • Rahimov Faridun;Eul Gyu Im
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.294-297 / 2024
  • Malware detection has become increasingly critical with the proliferation of end devices. To improve detection rates and efficiency, the research focus in malware detection has shifted towards leveraging machine learning and deep learning approaches. This shift is particularly relevant in the context of the widespread adoption of end devices, including smartphones, Internet of Things devices, and personal computers. Machine learning techniques are employed to train models on extensive datasets and evaluate various features, while deep learning algorithms have been extensively utilized to achieve these objectives. In this research, we introduce TabNet, a novel architecture designed for deep learning with tabular data, specifically tailored for enhancing malware detection techniques. Furthermore, the Synthetic Minority Over-Sampling Technique is utilized in this work to counteract the challenges posed by imbalanced datasets in machine learning. SMOTE efficiently balances class distributions, thereby improving model performance and classification accuracy. Our study demonstrates that SMOTE can effectively neutralize class imbalance bias, resulting in more dependable and precise machine learning models.

Machine learning application to seismic site classification prediction model using Horizontal-to-Vertical Spectral Ratio (HVSR) of strong-ground motions

  • Francis G. Phi;Bumsu Cho;Jungeun Kim;Hyungik Cho;Yun Wook Choo;Dookie Kim;Inhi Kim
    • Geomechanics and Engineering / v.37 no.6 / pp.539-554 / 2024
  • This study explores the development of a prediction model for seismic site classification by integrating machine learning techniques with horizontal-to-vertical spectral ratio (HVSR) methodologies. To improve model accuracy, the research employs outlier detection methods and the synthetic minority over-sampling technique (SMOTE) for data balancing, and evaluates seven machine learning models using seismic data from KiK-net. Notably, the light gradient boosting method (LGBM), gradient boosting, and decision tree models exhibit improved performance when coupled with SMOTE, while multiple linear regression (MLR) and support vector machine (SVM) models show reduced efficacy. Outlier detection techniques significantly enhance accuracy, particularly for LGBM, gradient boosting, and voting boosting. The ensemble of LGBM with the isolation forest and SMOTE achieves the highest accuracy of 0.91, with LGBM and the local outlier factor yielding the highest F1-score of 0.79. Consistently outperforming the other models, LGBM proves most efficient for seismic site classification when supported by appropriate preprocessing. These findings show the significance of outlier detection and data balancing for precise seismic soil classification, offering insights into the potential of machine learning for optimizing site classification accuracy.
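A minimal sketch of the HVSR computation that underlies the model's features: given amplitude spectra of the two horizontal components and the vertical component, the ratio per frequency bin is commonly taken as the quadratic mean of the horizontals over the vertical. The exact smoothing and averaging conventions here are assumptions, not necessarily the paper's:

```python
import math

def hvsr(ns_spec, ew_spec, v_spec):
    """Per-bin horizontal-to-vertical spectral ratio: quadratic mean of the
    NS and EW amplitude spectra divided by the vertical amplitude spectrum."""
    return [math.sqrt((ns * ns + ew * ew) / 2.0) / v
            for ns, ew, v in zip(ns_spec, ew_spec, v_spec)]

def predominant_frequency(freqs, ratios):
    """Frequency of the HVSR peak, a standard site-classification feature."""
    return freqs[max(range(len(ratios)), key=ratios.__getitem__)]
```

The peak frequency and peak amplitude of this curve are typical inputs to a site-class predictor.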

EXTRACTION OF INTERPRETIVE WAVELETS BY MODIFIED WIENER FILTER METHOD - TEST AND EVALUATION WITH MARINE SEISMIC DATA - (修正 위너필터 方法에 依한 解釋波의 抽出 -海洋彈性波 探査資料에 依한 實驗 및 評價)

  • Youn, Oong Koo;Han, Sang-Joon;Park, Byung Kwon
    • 한국해양학회지 / v.18 no.2 / pp.117-124 / 1983
  • Piazza's synthetic model, a modified Wiener filter method, was tested to establish a procedure for extracting desirable interpretive wavelets and to assess its applicability to marine seismic exploration, using several approaches with real offshore seismic data from Southeast Asia. Noise spectrum acquisition is difficult, and no assumption about the noise produced interpretive wavelets as good as the synthetic model results of Piazza (1979). However, the resolution could be improved with spiking deconvolution followed by a zero-phase bandpass filter, and the testing procedure and evaluation of the results can hopefully contribute to future study and practical evaluation of Piazza's method.


Evaluation of Multivariate Stream Data Reduction Techniques (다변량 스트림 데이터 축소 기법 평가)

  • Jung, Hung-Jo;Seo, Sung-Bo;Cheol, Kyung-Joo;Park, Jeong-Seok;Ryu, Keun-Ho
    • The KIPS Transactions:PartD / v.13D no.7 s.110 / pp.889-900 / 2006
  • Even though sensor networks differ in user requirements and data characteristics depending on the application area, existing research on the stream data transmission problem has focused on improving the performance of particular methods rather than considering the original characteristics of stream data. In this paper, we introduce a hierarchical or distributed sensor network architecture and data model, and then evaluate multivariate data reduction methods suited to user requirements and data features so that reduction methods can be applied selectively. To assess the relative performance of the proposed multivariate data reduction methods, we used conventional techniques, such as Wavelet, HCL (Hierarchical Clustering), Sampling, and SVD (Singular Value Decomposition), as well as experimental data sets comprising multivariate time series, synthetic data, and robot execution failure data. The experimental results show that the SVD and Sampling methods are superior to Wavelet and HCL with respect to relative error ratio and execution time. In particular, since the relative error ratio of each data reduction method differs according to data characteristics, good performance is obtained by selecting the reduction method for each experimental data set. The findings reported in this paper can serve as a useful guideline for the design and construction of sensor network applications involving multivariate stream data.
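The relative error ratio used to compare the reduction methods can be illustrated with the simplest of them, sampling. This sketch uses systematic sampling with zero-order-hold reconstruction; the paper's exact sampling and reconstruction schemes are not given in the abstract, so these are assumptions:

```python
def sample_reduce(series, step):
    """Keep every step-th point (systematic sampling)."""
    return series[::step]

def reconstruct(samples, step, n):
    """Hold each kept sample until the next one (zero-order hold)."""
    return [samples[min(i // step, len(samples) - 1)] for i in range(n)]

def relative_error_ratio(original, approx):
    """sum |x - x_hat| / sum |x|: the reduction-quality metric."""
    return (sum(abs(o - a) for o, a in zip(original, approx))
            / sum(abs(o) for o in original))
```

Each reduction method would be scored the same way: reduce, reconstruct, then compare against the original stream.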

Optimal Ratio of Data Oversampling Based on a Genetic Algorithm for Overcoming Data Imbalance (데이터 불균형 해소를 위한 유전알고리즘 기반 최적의 오버샘플링 비율)

  • Shin, Seung-Soo;Cho, Hwi-Yeon;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.49-55 / 2021
  • Recently, with the development of databases, it has become possible to store the large amounts of data generated in finance, security, and networks, and these data are analyzed with classifiers based on machine learning. The main problem here is data imbalance: when we train on imbalanced data, classification accuracy may be degraded by over-fitting to the majority class. To overcome this, oversampling strategies that increase the quantity of minority-class data are widely used, but they require tuning of the method and its parameters to suit the data distribution. To improve this process, we propose a strategy that explores and optimizes oversampling combinations and ratios, drawn from methods such as the synthetic minority oversampling technique and generative adversarial networks, using genetic algorithms. After resampling a credit card fraud detection data set, a representative case of data imbalance, with the proposed strategy and with single oversampling strategies, we compare the performance of classifiers trained on each resulting data set. As a result, the strategy optimized by exploring the ratio of each method with genetic algorithms was superior to the previous strategies.
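The genetic-algorithm search over oversampling ratios can be sketched as follows. The single-gene encoding, truncation selection, arithmetic crossover, and the toy fitness function are all illustrative assumptions, not the paper's setup (where fitness would be a classifier's validation score after resampling at the candidate ratio):

```python
import random

def ga_optimize(fitness, pop_size=20, generations=30, rng=None):
    """Minimal real-valued GA over one gene in [0, 1] (the oversampling
    ratio); maximises `fitness` by truncation selection, arithmetic
    crossover, and Gaussian mutation."""
    rng = rng or random.Random(42)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                # arithmetic crossover
            child += rng.gauss(0.0, 0.05)        # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness: score peaks when the minority class is oversampled to ~60%
# of the majority; a stand-in, not the paper's classifier-based fitness
best = ga_optimize(lambda r: -(r - 0.6) ** 2)
```

In the paper's setting the gene would be a vector of per-method ratios rather than a single scalar, but the evolutionary loop is the same.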

A FREQUENCY DOMAIN RAW SIGNAL SIMULATOR FOR SAR

  • Kwak Sunghee;Kim Moon-Gyu;Shin Dongseok;Shin Jae-Min
    • Proceedings of the KSRS Conference / 2005.10a / pp.530-533 / 2005
  • A raw signal simulator for synthetic aperture radar (SAR) is a useful tool for the design and implementation of a SAR system, and it is also required to analyze and verify a developed SAR processor. Moreover, a test system is needed to help design new SAR sensors and SAR missions, and the derived parameters of the simulator help in developing accurate SAR processing algorithms. Although the ultimate goal of this research is a general-purpose SAR simulator, this paper presents a frequency-domain SAR simulator as a first step. The proposed simulator generates the raw signal from various simulation parameters, such as antenna, modulation, and sampling parameters. It also uses statistics from an actual SAR image to imitate actual physical scattering. This paper introduces the procedures and parameters of the simulator and presents simulation results. Experiments were conducted by comparing the simulated raw data with an original raw SAR image, and the simulated raw data were further verified with commercial SAR processing software.
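At the core of any SAR raw-signal simulator is the linear-FM (chirp) pulse defined by the modulation and sampling parameters. A minimal sketch of baseband chirp generation follows; the parameter names and the complex (I/Q) sampling convention are assumptions for illustration, not the paper's interface:

```python
import cmath

def lfm_chirp(bandwidth, duration, fs):
    """Baseband linear-FM pulse s(t) = exp(j*pi*K*t^2), with chirp rate
    K = bandwidth / duration, sampled at complex rate fs and centred on
    the middle of the pulse."""
    k = bandwidth / duration
    n = int(duration * fs)
    return [cmath.exp(1j * cmath.pi * k * (i / fs - duration / 2) ** 2)
            for i in range(n)]
```

For complex sampling, fs should be at least the chirp bandwidth; the simulator would then convolve scene reflectivity with this pulse (in range) and with the azimuth phase history.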


Performance Characteristics of an Ensemble Machine Learning Model for Turbidity Prediction With Improved Data Imbalance (데이터 불균형 개선에 따른 탁도 예측 앙상블 머신러닝 모형의 성능 특성)

  • HyunSeok Yang;Jungsu Park
    • Ecology and Resilient Infrastructure / v.10 no.4 / pp.107-115 / 2023
  • High turbidity in source water can adversely affect water treatment plant operations and aquatic ecosystems, necessitating turbidity management; consequently, research aimed at predicting river turbidity continues. This study developed a multi-class classification model for predicting turbidity using LightGBM (Light Gradient Boosting Machine), a representative ensemble machine learning algorithm. The model used data classified into four classes, from 1 (low turbidity) to 4 (high turbidity), with 945, 763, 95, and 25 input data points for classes 1 to 4, respectively. The developed model exhibited precisions of 0.85, 0.71, 0.26, and 0.30 and recalls of 0.82, 0.76, 0.19, and 0.60 for classes 1 to 4, respectively, tending to perform less well on the minority classes because of their limited data. To address the data imbalance, the SMOTE (Synthetic Minority Over-sampling Technique) algorithm was applied, improving model performance: for classes 1 to 4, the precision of the improved model was 0.88, 0.71, 0.26, and 0.25 and its recall was 0.79, 0.76, 0.38, and 0.60, respectively. This demonstrates that alleviating data imbalance led to a significant enhancement in the recall of the model. Furthermore, to analyze the impact of input data composition, input data were constructed with various class ratios and the resulting model performances were compared. The results indicate that an appropriate composition ratio of the model input data improves the performance of the machine learning model.
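The per-class precision and recall figures quoted above follow directly from confusion counts. A minimal sketch of the computation for one class:

```python
def precision_recall(y_true, y_pred, cls):
    """Precision and recall for one class in a multi-class problem,
    computed from true-positive, false-positive, and false-negative counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Evaluating this per class, rather than overall accuracy, is what exposes the weak recall on the rare high-turbidity classes before SMOTE is applied.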

Applying Spitz Trace Interpolation Algorithm for Seismic Data (탄성파 자료를 이용한 Spitz 보간 알고리즘의 적용)

  • Yang Jung Ah;Suh Jung-Hee
    • Geophysics and Geophysical Exploration / v.6 no.4 / pp.171-179 / 2003
  • In land and marine seismic surveys, receivers are generally laid out at equal intervals on the assumption that the spatial sampling interval is sufficiently fine. However, because the costs of seismic data acquisition and processing are high, the receiver interval must be designed appropriately, and spatial aliasing occurs when the sampling interval is too coarse. Processing spatially aliased data does not yield a good image, so trace interpolation is used to improve the quality of multichannel seismic data processing. In this study, we applied the Spitz algorithm, which is widely used in seismic data processing and works well regardless of the dip of complex subsurface structures. Using a prediction filter and the original traces with linear events, we interpolated in the f-x domain. We verified the algorithm on synthetic data and marine field data. After interpolation, the receiver interval became narrower and the number of traces increased, and the continuity of the traces was more linear than before. By applying this interpolation algorithm to seismic data with spatial aliasing, we may obtain a better migration image.
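The Spitz method estimates a prediction filter in the f-x domain and is dip-independent; reproducing it faithfully is beyond a short sketch. As a deliberately simplified stand-in that only illustrates the end effect, inserting a trace between each pair of neighbours to narrow the receiver interval, a linear midpoint interpolation might look like:

```python
def interpolate_traces(traces):
    """Insert a midpoint trace between each adjacent pair of traces
    (each trace is a list of samples), halving the receiver interval.
    A linear stand-in for f-x prediction-filter interpolation, which,
    unlike this average, handles dipping and aliased events correctly."""
    out = []
    for a, b in zip(traces, traces[1:]):
        out.append(a)
        out.append([(x + y) / 2.0 for x, y in zip(a, b)])
    out.append(traces[-1])
    return out
```

For n input traces this produces 2n - 1 traces; the Spitz algorithm fills the same midpoint positions but predicts them from the spatial spectrum rather than averaging.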