• Title/Summary/Keyword: Gradient boosting machines


Prediction models of rock quality designation during TBM tunnel construction using machine learning algorithms

  • Byeonghyun Hwang; Hangseok Choi; Kibeom Kwon; Young Jin Shin; Minkyu Kang
    • Geomechanics and Engineering, v.38 no.5, pp.507-515, 2024
  • An accurate estimation of the geotechnical parameters in front of tunnel faces is crucial for the safe construction of underground infrastructure using tunnel boring machines (TBMs). This study was aimed at developing a data-driven model for predicting the rock quality designation (RQD) of the ground formation ahead of tunnel faces. The dataset used for the machine learning (ML) model comprises seven geological and mechanical features and 564 RQD values, obtained from an earth pressure balance (EPB) shield TBM tunneling project beneath the Han River in the Republic of Korea. Four ML algorithms were employed in developing the RQD prediction model: k-nearest neighbor (KNN), support vector regression (SVR), random forest (RF), and extreme gradient boosting (XGB). The grid search and five-fold cross-validation techniques were applied to optimize the prediction performance of the developed model by identifying the optimal hyperparameter combinations. The prediction results revealed that the RF algorithm-based model exhibited superior performance, achieving a root mean square error of 7.38% and coefficient of determination of 0.81. In addition, the Shapley additive explanations (SHAP) approach was adopted to determine the most relevant features, thereby enhancing the interpretability and reliability of the developed model with the RF algorithm. It was concluded that the developed model can successfully predict the RQD of the ground formation ahead of tunnel faces, contributing to safe and efficient tunnel excavation.
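
The grid search with five-fold cross-validation described in this abstract can be sketched roughly as follows. This is a minimal illustration using synthetic stand-in data and an assumed hyperparameter grid, not the study's actual TBM dataset or configuration, and it shows only the random forest branch of the comparison.

```python
# Hypothetical sketch of grid search + five-fold CV for RQD regression.
# Data, feature count, and hyperparameter ranges are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.random((564, 7))       # 7 geological/mechanical features (synthetic stand-in)
y = rng.random(564) * 100.0    # RQD values in percent (synthetic stand-in)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {                 # assumed ranges, not taken from the paper
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
search.fit(X_train, y_train)

y_pred = search.best_estimator_.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"RMSE: {rmse:.2f}%, R^2: {r2_score(y_test, y_pred):.2f}")
```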

Development of an AI-based remaining trip time prediction system for nuclear power plants

  • Sang Won Oh; Ji Hun Park; Hye Seon Jo; Man Gyun Na
    • Nuclear Engineering and Technology, v.56 no.8, pp.3167-3179, 2024
  • In abnormal states of nuclear power plants (NPPs), operators undertake mitigation actions to restore a normal state and prevent reactor trips. However, in abnormal states, the NPP condition fluctuates rapidly, which can lead to human error. If human error occurs, the condition of an NPP can deteriorate, leading to reactor trips. Sudden shutdowns, such as reactor trips, can result in the failure of numerous NPP facilities and economic losses. This study develops a remaining trip time (RTT) prediction system as part of an operator support system to reduce possible human errors and improve the safety of NPPs. The RTT prediction system consists of an algorithm that utilizes artificial intelligence (AI) and explainable AI (XAI) methods, such as autoencoders, light gradient-boosting machines, and Shapley additive explanations. AI methods provide diagnostic information about the abnormal states that occur and predict the remaining time until a reactor trip occurs. The XAI method improves the reliability of AI by providing a rationale for RTT prediction results and information on the main variables of the status of NPPs. The RTT prediction system includes an interface that can effectively provide the results of the system.
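
A rough sketch of the gradient-boosting and SHAP portion of such a pipeline is given below. The feature dimensions, targets, and model settings are assumptions for illustration only, and the autoencoder-based diagnosis stage of the system is omitted.

```python
# Hypothetical sketch: LightGBM regression for remaining-trip-time prediction,
# explained with SHAP. Data and dimensions are synthetic placeholders.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(1)
X = rng.random((1000, 12))      # plant process variables (placeholder)
y = rng.random(1000) * 600.0    # remaining time until reactor trip, seconds (placeholder)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

# SHAP attributes each prediction to the input variables, providing the kind of
# rationale the XAI component of the system is meant to supply.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
print(shap_values.shape)        # (100, 12): per-sample feature contributions
```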

Enhancing machine learning-based anomaly detection for TBM penetration rate with imbalanced data manipulation (불균형 데이터 처리를 통한 머신러닝 기반 TBM 굴진율 이상탐지 개선)

  • Kibeom Kwon; Byeonghyun Hwang; Hyeontae Park; Ju-Young Oh; Hangseok Choi
    • Journal of Korean Tunnelling and Underground Space Association, v.26 no.5, pp.519-532, 2024
  • Anomaly detection for the penetration rate of tunnel boring machines (TBMs) is crucial for effective risk management in TBM tunnel projects. However, previous machine learning models for predicting the penetration rate have struggled with imbalanced data between normal and abnormal penetration rates. This study aims to enhance the performance of machine learning-based anomaly detection for the penetration rate by utilizing a data augmentation technique to address this data imbalance. Initially, six input features were selected through correlation analysis. The lowest and highest 10% of the penetration rates were designated as abnormal classes, while the remaining penetration rates were categorized as a normal class. Two prediction models were developed, each trained on an original training set and an oversampled training set constructed using SMOTE (synthetic minority oversampling technique): an XGB (extreme gradient boosting) model and an XGB-SMOTE model. The prediction results showed that the XGB model performed poorly for the abnormal classes, despite performing well for the normal class. In contrast, the XGB-SMOTE model consistently exhibited superior performance across all classes. These findings can be attributed to the data augmentation for the abnormal penetration rates using SMOTE, which enhances the model's ability to learn patterns between geological and operational factors that contribute to abnormal penetration rates. Consequently, this study demonstrates the effectiveness of employing data augmentation to manage imbalanced data in anomaly detection for TBM penetration rates.
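
The XGB-SMOTE approach described above can be illustrated with the following minimal sketch. The data, class thresholds, and hyperparameters are synthetic assumptions rather than the study's actual values.

```python
# Hypothetical sketch: oversample the abnormal penetration-rate classes with
# SMOTE, then train an XGBoost classifier. All data are synthetic placeholders.
import numpy as np
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
X = rng.random((2000, 6))                  # six selected input features (placeholder)
rate = rng.random(2000)                    # stand-in for the penetration rate
y = np.where(rate < 0.1, 0, np.where(rate > 0.9, 2, 1))   # low / normal / high classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (abnormal) classes in the training set only
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(X_res, y_res)

print(classification_report(y_te, model.predict(X_te)))
```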

Comparative Analysis of Machine Learning Techniques for IoT Anomaly Detection Using the NSL-KDD Dataset

  • Zaryn Good; Waleed Farag; Xin-Wen Wu; Soundararajan Ezekiel; Maria Balega; Franklin May; Alicia Deak
    • International Journal of Computer Science & Network Security, v.23 no.1, pp.46-52, 2023
  • With billions of IoT (Internet of Things) devices populating various emerging applications across the world, detecting anomalies on these devices has become incredibly important. Advanced Intrusion Detection Systems (IDS) are trained to detect abnormal network traffic, and Machine Learning (ML) algorithms are used to create detection models. In this paper, the NSL-KDD dataset was adopted to comparatively study the performance and efficiency of IoT anomaly detection models. The dataset was developed for various research purposes and is especially useful for anomaly detection. It was used with typical machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), Support Vector Machines (SVM), and Deep Convolutional Neural Networks (DCNN), to identify and classify anomalies present within IoT applications. Our results show that the XGBoost algorithm outperformed both the SVM and DCNN algorithms, achieving the highest accuracy. Each algorithm was assessed based on accuracy, precision, recall, and F1 score. We also compared the execution time taken by each algorithm when running anomaly detection: the XGBoost algorithm was 425.53% faster than the SVM algorithm and 2,075.49% faster than the DCNN algorithm. According to our experimental testing, XGBoost is the most accurate and efficient method.
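
A simplified version of such a comparison, using a synthetic binary dataset in place of NSL-KDD and omitting the DCNN for brevity, might look like the following.

```python
# Hypothetical sketch: compare XGBoost and an SVM on accuracy, precision,
# recall, F1 score, and wall-clock time. Uses synthetic data, not NSL-KDD.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("XGBoost", XGBClassifier(n_estimators=200)),
                  ("SVM", SVC(kernel="rbf"))]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    elapsed = time.perf_counter() - start
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
          f"prec={precision_score(y_te, y_pred):.3f} "
          f"rec={recall_score(y_te, y_pred):.3f} "
          f"f1={f1_score(y_te, y_pred):.3f} "
          f"time={elapsed:.2f}s")
```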

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram; Shim, Jae-Seung; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.127-137, 2019
  • Recidivism prediction has been a subject of constant research since the early 1970s, and it has grown in importance as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted recidivism risk assessment reports as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors began in Korea during the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or on prediction accuracy, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as likely to reoffend is lower than the cost of misclassifying a person who will reoffend as unlikely to reoffend: the former only increases monitoring costs, whereas the latter incurs substantial social and economic costs. Therefore, this paper proposes an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step, XGB, which is recognized as a high-performance ensemble method in data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold was optimized to minimize the total misclassification cost, defined as the weighted average of the FNE (false negative error) and FPE (false positive error). To verify its usefulness, the model was applied to a real recidivism dataset. The results confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
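
The threshold-optimization step can be sketched as below. The cost weights, dataset, and search grid are illustrative assumptions rather than the study's actual values; the cost is computed here as a weighted sum of false-negative and false-positive counts, which is equivalent to the weighted-average formulation up to scaling, and the threshold is selected on a held-out split.

```python
# Hypothetical sketch: train XGBoost, then pick the classification threshold
# that minimizes a weighted misclassification cost with asymmetric FN/FP costs.
# Data and cost weights are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from xgboost import XGBClassifier

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, learning_rate=0.1).fit(X_tr, y_tr)
proba = model.predict_proba(X_val)[:, 1]      # predicted probability of recidivism

C_FN, C_FP = 5.0, 1.0                         # assumed asymmetric error costs (FN costlier than FP)
best_t, best_cost = 0.5, float("inf")
for t in np.linspace(0.05, 0.95, 91):
    tn, fp, fn, tp = confusion_matrix(y_val, (proba >= t).astype(int)).ravel()
    cost = C_FN * fn + C_FP * fp              # weighted misclassification cost at threshold t
    if cost < best_cost:
        best_t, best_cost = t, cost

print(f"selected threshold: {best_t:.2f}, total cost: {best_cost:.0f}")
```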