• Title/Summary/Keyword: oversampled

Search Results: 25

Comparison of Machine Learning Model Performance based on Observation Methods using Naked-eye and Visibility-meter (머신러닝을 이용한 안개 예측 시 목측과 시정계 계측 방법에 따른 모델 성능 차이 비교)

  • Changhyoun Park; Soon-hwan Lee
    • Journal of the Korean Earth Science Society, v.44 no.2, pp.105-118, 2023
  • In this study, we predicted the presence of fog one hour ahead using the XGBoost DART machine learning algorithm for Andong, which had the highest occurrence of fog among inland stations from 2016 to 2020. We used six datasets: meteorological data, agricultural observation data, additional derived data, and their expanded versions. The weather phenomenon numbers obtained through naked-eye observations and the visibility distances measured by visibility meters were classified as fog [1] or no-fog [0]. We set up twelve machine learning modeling experiments and used data from 2021 for model validation. We evaluated model performance mainly with recall and AUC-ROC, considering the harmful effects of fog on society and local communities. The combination of oversampled meteorological data features and the target derived from the weather phenomenon numbers showed the best performance. This result highlights the importance of naked-eye observations in predicting fog using machine learning algorithms.
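
A minimal sketch of the pipeline this abstract describes, under stated assumptions: synthetic stand-in features, naive random oversampling of the fog class, and an XGBoost classifier with the DART booster scored by recall and AUC-ROC. The feature set, oversampling method, and hyperparameters are illustrative, not the authors' configuration.

```python
# Sketch (not the authors' code): XGBoost with the DART booster on an
# oversampled binary fog/no-fog dataset, scored with recall and AUC-ROC.
# Feature names and the oversampling choice are illustrative assumptions.
import numpy as np
import xgboost as xgb
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))             # stand-in meteorological features
y = (rng.random(5000) < 0.05).astype(int)  # rare fog events (label 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive random oversampling of the minority (fog) class in the training set.
fog_idx = np.where(y_tr == 1)[0]
extra = rng.choice(fog_idx, size=(y_tr == 0).sum() - fog_idx.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

model = xgb.XGBClassifier(
    booster="dart",          # DART: dropout-regularized boosted trees
    n_estimators=300,
    learning_rate=0.1,
    rate_drop=0.1,
    eval_metric="auc",
)
model.fit(X_bal, y_bal)

prob = model.predict_proba(X_te)[:, 1]
print("recall :", recall_score(y_te, prob > 0.5))
print("AUC-ROC:", roc_auc_score(y_te, prob))
```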

Method of Harmonic Magnitude Quantization for Harmonic Coder Using the Straight Line and DCT (Discrete Cosine Transform) (하모닉 코더를 위한 직선과 이산코사인변환 (DCT)을 이용한 하모닉 크기값 (Magnitude) 양자화 기법)

  • Choi, Ji-Wook; Jeong, Gyu-Hyeok; Lee, In-Sung
    • The Journal of the Acoustical Society of Korea, v.27 no.4, pp.200-206, 2008
  • This paper presents a quantization method that extracts quantization parameters using straight lines and the DCT (Discrete Cosine Transform) for two split frequency bands. Because the number of harmonics varies from frame to frame, the harmonics in the low-frequency band are oversampled to fix the dimension, straight lines represent the spectral envelope, and the discontinuity points of those straight lines in the low band are sent to the quantizer. Extraction of quantization parameters using straight lines therefore yields a fixed dimension. Harmonics in the high-frequency band use a variable-size DCT to obtain quantization parameters, and this paper proposes a quantization method that combines the straight lines with the DCT. The proposed method is measured by the spectral distortion (SD) of the spectral magnitudes. As a result, the proposed quantization method improves SD by 0.3 dB compared with HVXC.
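
The key step in the abstract, forcing a variable per-frame harmonic count into a fixed dimension before quantization, might be sketched as follows. The band split, target dimensions, and use of linear interpolation are assumptions for illustration, not the paper's exact scheme.

```python
# Illustrative sketch of fixed-dimension harmonic magnitude coding:
# low-band magnitudes are oversampled (interpolated) to a fixed length,
# high-band magnitudes are represented by truncated DCT coefficients.
# The split point and dimensions are assumptions, not the paper's values.
import numpy as np
from scipy.fft import dct, idct

def encode_frame(magnitudes, low_count=10, low_dim=16, dct_keep=8):
    low, high = magnitudes[:low_count], magnitudes[low_count:]
    # Oversample the low band to a fixed dimension by linear interpolation.
    x_old = np.linspace(0.0, 1.0, num=len(low))
    x_new = np.linspace(0.0, 1.0, num=low_dim)
    low_fixed = np.interp(x_new, x_old, low)
    # Variable-length high band -> fixed number of DCT coefficients.
    coeffs = dct(high, norm="ortho")[:dct_keep]
    return low_fixed, coeffs, len(high)

def decode_high(coeffs, high_len):
    # Reconstruct the high band from the kept DCT coefficients.
    full = np.zeros(high_len)
    full[:len(coeffs)] = coeffs
    return idct(full, norm="ortho")

# Example: a frame with 27 harmonics (the count varies frame to frame).
mags = np.abs(np.random.default_rng(1).normal(size=27))
low_fixed, high_coeffs, n_high = encode_frame(mags)
recon_high = decode_high(high_coeffs, n_high)
print(low_fixed.shape, high_coeffs.shape, recon_high.shape)
```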

Deep learning based crack detection from tunnel cement concrete lining (딥러닝 기반 터널 콘크리트 라이닝 균열 탐지)

  • Bae, Soohyeon; Ham, Sangwoo; Lee, Impyeong; Lee, Gyu-Phil; Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association, v.24 no.6, pp.583-598, 2022
  • Human-based tunnel inspections are affected by the subjective judgment of the inspector, which makes continuous history management difficult. Much deep learning-based automatic crack detection research has appeared recently. However, the large public crack datasets used in most studies differ significantly from those found in tunnels, and additional work is required to build sophisticated crack labels under current tunnel evaluation practice. Therefore, we present a method to improve crack detection performance by feeding existing datasets into a deep learning model. We evaluate and compare the performance of deep learning models trained by combining existing tunnel datasets, high-quality tunnel datasets, and public crack datasets. As a result, DeepLabv3+ with a cross-entropy loss function performed best when trained on the public datasets together with the patchwise-classified and oversampled tunnel datasets. In the future, we expect this work to contribute to establishing a plan to efficiently utilize data from the tunnel image acquisition system for deep learning model training.
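
A hedged sketch of the training setup the abstract describes: a DeepLab-style segmentation network trained with a cross-entropy loss on crack/background masks. torchvision ships DeepLabV3 (not V3+), so it is used here as a stand-in; the patch size, optimizer, and synthetic batch are assumptions.

```python
# Stand-in training loop: DeepLabV3 with cross-entropy on crack masks.
# Not the authors' configuration; data and hyperparameters are illustrative.
import torch
from torch import nn, optim
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
model = deeplabv3_resnet50(weights=None, num_classes=2).to(device)  # 0=background, 1=crack
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: 2 RGB patches of 256x256 with integer class masks.
images = torch.rand(2, 3, 256, 256, device=device)
masks = torch.randint(0, 2, (2, 256, 256), device=device)

model.train()
for step in range(3):                       # a real run would loop over a DataLoader
    optimizer.zero_grad()
    logits = model(images)["out"]           # shape (N, 2, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.4f}")
```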

Enhancing machine learning-based anomaly detection for TBM penetration rate with imbalanced data manipulation (불균형 데이터 처리를 통한 머신러닝 기반 TBM 굴진율 이상탐지 개선)

  • Kibeom Kwon; Byeonghyun Hwang; Hyeontae Park; Ju-Young Oh; Hangseok Choi
    • Journal of Korean Tunnelling and Underground Space Association, v.26 no.5, pp.519-532, 2024
  • Anomaly detection for the penetration rate of tunnel boring machines (TBMs) is crucial for effective risk management in TBM tunnel projects. However, previous machine learning models for predicting the penetration rate have struggled with imbalanced data between normal and abnormal penetration rates. This study aims to enhance the performance of machine learning-based anomaly detection for the penetration rate by utilizing a data augmentation technique to address this data imbalance. Initially, six input features were selected through correlation analysis. The lowest and highest 10% of the penetration rates were designated as abnormal classes, while the remaining penetration rates were categorized as a normal class. Two prediction models were developed: an XGB (extreme gradient boosting) model trained on the original training set, and an XGB-SMOTE model trained on an oversampled training set constructed using SMOTE (synthetic minority oversampling technique). The prediction results showed that the XGB model performed poorly for the abnormal classes, despite performing well for the normal class. In contrast, the XGB-SMOTE model consistently exhibited superior performance across all classes. These findings can be attributed to the data augmentation of the abnormal penetration rates using SMOTE, which enhances the model's ability to learn the patterns between geological and operational factors that lead to abnormal penetration rates. Consequently, this study demonstrates the effectiveness of employing data augmentation to manage imbalanced data in anomaly detection for TBM penetration rates.
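
The XGB versus XGB-SMOTE comparison could be reproduced in outline as below, with synthetic stand-in features and the abstract's 10%/80%/10% class split; the feature construction and hyperparameters are assumptions.

```python
# Sketch of the XGB vs. XGB-SMOTE comparison described above, with synthetic
# stand-in features. Lowest 10% and highest 10% of the penetration rates are
# treated as abnormal classes, the rest as normal, mirroring the abstract.
import numpy as np
import xgboost as xgb
from imblearn.over_sampling import SMOTE
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(4000, 6))                         # six stand-in input features
rate = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=4000)
lo, hi = np.quantile(rate, [0.1, 0.9])
y = np.where(rate < lo, 0, np.where(rate > hi, 2, 1))  # 0=low, 1=normal, 2=high

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# XGB: trained on the original (imbalanced) training set.
xgb_plain = xgb.XGBClassifier(n_estimators=200).fit(X_tr, y_tr)

# XGB-SMOTE: trained on a SMOTE-oversampled training set.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
xgb_smote = xgb.XGBClassifier(n_estimators=200).fit(X_bal, y_bal)

for name, m in [("XGB", xgb_plain), ("XGB-SMOTE", xgb_smote)]:
    print(name)
    print(classification_report(y_te, m.predict(X_te), digits=3))
```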

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik; Kwon, Jong Gu
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.125-140, 2013
  • We call a data set in which the records of one class far outnumber those of the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such. Specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to produce a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN), and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I and ANN_I models are constructed using the imbalanced data set, and the SVM_B model is constructed using the balanced data set. The SVM_I model is superior in sensitivity and the SVM_B model is superior in specificity. For a record on which both the SVM_I and SVM_B models make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by the discrimination rules obtained from the ANN and the decision tree. For a record on which the SVM_I and SVM_B models make different predictions, a decision tree model is constructed using the ANN_I output value as input and the actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn'. The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285. Therefore, the threshold value in the above discrimination rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
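
A compact sketch of the hybrid structure described here: SVM_I trained on the imbalanced set, SVM_B on an oversampled balanced set, and an ANN output threshold applied only where the two SVMs disagree. The data, the random oversampler, and the 0.5 threshold (standing in for the paper's data-derived 0.285) are illustrative assumptions.

```python
# Minimal sketch of the hybrid SVM structure: where SVM_I (imbalanced data)
# and SVM_B (balanced data) agree, take their prediction; where they disagree,
# apply a threshold rule on the ANN_I output. Data and threshold are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = make_classification(n_samples=3000, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balanced set: randomly oversample the minority class up to the majority count.
min_idx, maj_idx = np.where(y_tr == 1)[0], np.where(y_tr == 0)[0]
over = resample(min_idx, n_samples=maj_idx.size, replace=True, random_state=0)
bal = np.concatenate([maj_idx, over])

svm_i = SVC().fit(X_tr, y_tr)                        # SVM_I: imbalanced data
svm_b = SVC().fit(X_tr[bal], y_tr[bal])              # SVM_B: balanced data
ann_i = MLPClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)

p_i, p_b = svm_i.predict(X_te), svm_b.predict(X_te)
agree = p_i == p_b
final = p_i.copy()
# Where the SVMs disagree, use a threshold rule on the ANN_I output
# (0.5 here; the paper derives its own threshold from a decision tree).
ann_prob = ann_i.predict_proba(X_te)[:, 1]
final[~agree] = (ann_prob[~agree] >= 0.5).astype(int)
print("accuracy:", (final == y_te).mean())
```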