• Title/Abstract/Keywords: Machine Learning Models

Search results: 1,395 items (processing time: 0.027 seconds)

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu; Dai-Nhan Le; Thai-Hoan Pham; Wei Gao; Sawekchai Tangaramvong
    • Steel and Composite Structures, Vol. 51, No. 6, pp. 679-695, 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested on a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models, including extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms, are utilized in this work. The MFO algorithm is applied to find optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results assessed with several performance metrics reveal that the MFO-CAT model provides superior accuracy compared to the other considered models. The accuracy of the MFO-CAT model is validated by comparing its predictive results with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined by employing the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time to examine the models' responses to variations of the input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
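
The general workflow described above — a metaheuristic tuning the hyperparameters of a boosting regressor, followed by SHAP attribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation: synthetic data stands in for the 125 CFDST tests, a plain random search stands in for the moth-flame optimizer, and scikit-learn's GradientBoostingRegressor stands in for CatBoost/XGBoost.

```python
# Sketch only: random search as a stand-in for MFO, synthetic data as a stand-in
# for the CFDST test database.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the CFDST dataset (geometry and material inputs -> ACS).
X, y = make_regression(n_samples=125, n_features=7, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Hyperparameter space the optimizer explores (the role MFO plays in the paper).
param_space = {
    "n_estimators": [100, 200, 400],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4, 5],
}
search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_space, n_iter=20, cv=5, scoring="r2", random_state=0,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test R2:", r2_score(y_test, search.best_estimator_.predict(X_test)))

# Feature attribution in the spirit of the paper's SHAP analysis
# (requires the third-party `shap` package):
# import shap
# explainer = shap.TreeExplainer(search.best_estimator_)
# shap_values = explainer.shap_values(X_test)
```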

A Study on Development Environments for Machine Learning (머신러닝 자동화를 위한 개발 환경에 관한 연구)

  • 김동길; 박용순; 박래정; 정태윤
    • 대한임베디드공학회논문지, Vol. 15, No. 6, pp. 307-316, 2020
  • The performance of machine learning models is strongly affected by their data, and preprocessing is needed to enable analysis of various types of data, such as letters, numbers, and special characters. This paper proposes a development environment that handles categorical and continuous data according to the type of missing values in stage 1, selects the best-performing algorithm in stage 2, and automates the checking of model performance in stage 3. Using this environment, machine learning models can be created without prior knowledge of data preprocessing.
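
The three stages described above map naturally onto a small pipeline. The sketch below is an illustrative, hedged interpretation using scikit-learn (the paper's actual environment is not reproduced): type-aware imputation, model selection by cross-validated score, and an automated performance report; the candidate models and thresholds are assumptions.

```python
# Minimal sketch of the three-stage flow: (1) impute categorical and continuous
# columns separately, (2) pick the best-scoring algorithm, (3) report its score.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

def auto_fit(df: pd.DataFrame, target: str):
    X, y = df.drop(columns=[target]), df[target]
    cat_cols = X.select_dtypes(include="object").columns.tolist()
    num_cols = X.select_dtypes(include="number").columns.tolist()

    # Stage 1: type-aware missing-value handling.
    preprocess = ColumnTransformer([
        ("num", SimpleImputer(strategy="mean"), num_cols),
        ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                          ("onehot", OneHotEncoder(handle_unknown="ignore"))]), cat_cols),
    ])

    # Stage 2: select the best-performing algorithm by cross-validated accuracy.
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "rf": RandomForestClassifier(random_state=0),
    }
    scores = {name: cross_val_score(Pipeline([("prep", preprocess), ("model", m)]),
                                    X, y, cv=5).mean()
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)

    # Stage 3: report performance of the selected model and return it fitted.
    print(f"selected: {best}, cv accuracy: {scores[best]:.3f}")
    return Pipeline([("prep", preprocess), ("model", candidates[best])]).fit(X, y)
```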

Identifying the Optimal Machine Learning Algorithm for Breast Cancer Prediction

  • ByungJoo Kim
    • International Journal of Advanced Smart Convergence, Vol. 13, No. 3, pp. 80-88, 2024
  • Breast cancer remains a significant global health burden, necessitating accurate and timely detection for improved patient outcomes. Machine learning techniques have demonstrated remarkable potential in assisting breast cancer diagnosis by learning complex patterns from multi-modal patient data. This study comprehensively evaluates several popular machine learning models, including logistic regression, decision trees, random forests, support vector machines (SVMs), naive Bayes, k-nearest neighbors (KNN), XGBoost, and ensemble methods for breast cancer prediction using the Wisconsin Breast Cancer Dataset (WBCD). Through rigorous benchmarking across metrics like accuracy, precision, recall, F1-score, and area under the ROC curve (AUC), we identify the naive Bayes classifier as the top-performing model, achieving an accuracy of 0.974, F1-score of 0.979, and highest AUC of 0.988. Other strong performers include logistic regression, random forests, and XGBoost, with AUC values exceeding 0.95. Our findings showcase the significant potential of machine learning, particularly the robust naive Bayes algorithm, to provide highly accurate and reliable breast cancer screening from fine needle aspirate (FNA) samples, ultimately enabling earlier intervention and optimized treatment strategies.
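
A benchmarking loop of this kind is straightforward to set up. The sketch below is only a structural illustration, assuming scikit-learn's bundled Wisconsin diagnostic breast cancer data as a stand-in for the authors' WBCD split; it compares three of the listed classifiers on accuracy, F1, and AUC, and the numbers it prints will not match the abstract's reported figures.

```python
# Sketch of benchmarking a few of the listed classifiers on the Wisconsin breast
# cancer data that ships with scikit-learn (diagnostic variant).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    pred = model.predict(X_test)
    print(name,
          "acc=%.3f" % accuracy_score(y_test, pred),
          "f1=%.3f" % f1_score(y_test, pred),
          "auc=%.3f" % roc_auc_score(y_test, proba))
```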

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • 라윤선; 최흥식; 김선웅
    • 지능정보연구, Vol. 22, No. 4, pp. 177-192, 2016
  • Machine learning is a branch of artificial intelligence and an area of computer science concerned with training machines on data so that they can analyze data and make predictions on their own. Among machine learning methods, the support vector machine (SVM) is a model used mainly for classification and regression analysis. Given information about data belonging to two groups, an SVM model determines, based on the given dataset, which group a new data point belongs to. Recently, many financial experts have been focusing on machine learning, seeing the potential of combining it with the finance sector, where massive amounts of data exist. Accordingly, financial firms have quickly begun to offer robo-advisor services, a portmanteau of "robot" and "advisor," which perform various financial tasks using advanced algorithms and big data. Considering these trends in finance, this study proposes a method for improving trading performance using SVM, one of the machine learning methods. The prediction target is VKOSPI, the Korean volatility index. VKOSPI affects the price of options, a type of financial derivative. VKOSPI corresponds to what is commonly called volatility, and its value is directly proportional to option prices regardless of the option type. Therefore, accurate prediction of VKOSPI is one of the important factors for generating profits in option trading. To date, no study has addressed machine-learning-based prediction of VKOSPI. In this study, intraday VKOSPI was predicted using SVM, the applicability of the predictions to option trading was tested, and improved trading performance was demonstrated.
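
The core modeling step, classifying the next intraday move of a volatility index with an SVM, can be sketched as below. This is a hedged illustration only: the VKOSPI series, the lagged-change features, and the two-class direction label are synthetic stand-ins, and none of the paper's actual inputs or trading rules are reproduced.

```python
# Sketch of an SVM classifier for intraday volatility-index direction.
# The series and features are purely illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
vkospi = 15 + np.cumsum(rng.normal(0, 0.2, 1000))   # synthetic volatility-index path

# Lagged changes as features, next-step direction (up/down) as the label.
diffs = np.diff(vkospi)
X = np.column_stack([diffs[i:len(diffs) - 5 + i] for i in range(5)])
y = (diffs[5:] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("directional accuracy:", accuracy_score(y_test, model.predict(X_test)))
```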

Predicting the compressive strength of SCC containing nano silica using surrogate machine learning algorithms

  • Neeraj Kumar Shukla; Aman Garg; Javed Bhutto; Mona Aggarwal; Mohamed Abbas; Hany S. Hussein; Rajesh Verma; T.M. Yunus Khan
    • Computers and Concrete, Vol. 32, No. 4, pp. 373-381, 2023
  • Fly ash, granulated blast furnace slag, and marble waste powder are just some of the by-products of other sectors that the construction industry is looking to incorporate into the many types of concrete it produces. This research uses surrogate machine learning methods to forecast the compressive strength of self-compacting concrete (SCC). The surrogate models were developed using Gradient Boosting Machine (GBM), Support Vector Machine (SVM), Random Forest (RF), and Gaussian Process Regression (GPR) techniques. Compressive strength is used as the output variable, with nano silica content, cement content, coarse aggregate content, fine aggregate content, superplasticizer, curing duration, and water-binder ratio as input variables. Of the four models, GBM had the highest accuracy in determining the compressive strength of SCC, while GPR gave the poorest predictions. The compressive strength of SCC with nano silica is found to be most affected by curing time and least affected by fine aggregate content.
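
The four-model comparison follows a common surrogate-modeling pattern, sketched below under stated assumptions: synthetic regression data stands in for the mix-design dataset, and the seven feature names are only annotations mirroring the abstract's input variables.

```python
# Sketch of the four surrogate regressors compared in the abstract, fitted on
# synthetic stand-in data (the real SCC mix-design dataset is not used here).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

features = ["nano_silica", "cement", "coarse_agg", "fine_agg",
            "superplasticizer", "curing_days", "w_b_ratio"]  # intended input columns
X, y = make_regression(n_samples=300, n_features=len(features), noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "GBM": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    "GPR": GaussianProcessRegressor(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_test, pred):.3f}, RMSE={rmse:.2f}")
```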

Water level forecasting for extended lead times using preprocessed data with variational mode decomposition: A case study in Bangladesh

  • Shabbir Ahmed Osmani; Roya Narimani; Hoyoung Cha; Changhyun Jun; Md Asaduzzaman Sayef
    • 한국수자원학회 학술대회논문집, 한국수자원학회 2023년도 학술발표회, pp. 179-179, 2023
  • This study suggests a new approach to water level forecasting for extended lead times using data preprocessing with variational mode decomposition (VMD). Two machine learning algorithms, light gradient boosting machine (LGBM) and random forest (RF), were considered for forecasting water levels at extended lead times (i.e., 5, 10, 15, 20, 25, 30, 40, and 50 days). First, the original data at two water level stations (SW173 and SW269 in Bangladesh) and their decomposed components from VMD were arranged by antecedent lag times to build datasets for the different lead times. Mean absolute error (MAE), root mean squared error (RMSE), and mean squared error (MSE) were used to evaluate the performance of the machine learning models in water level forecasting. The results show that errors were lower when the decomposed datasets were used to predict water levels than when the original data were used alone. LGBM also produced lower MAE, RMSE, and MSE values than RF, indicating better performance. For instance, at the SW173 station with a 30-day lead time, LGBM outperformed RF on both decomposed and original data, with MAE values of 0.511 and 1.566 compared to RF's 0.719 and 1.644, respectively. Model performance decreased with increasing lead time. In summary, preprocessing the original data with decomposition techniques and then applying machine learning models has shown promising results for water level forecasting at longer lead times. This approach can assist water management authorities in taking precautionary measures based on forecasted water levels, which is crucial for sustainable water resource utilization.
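
The lead-time setup, antecedent lags as inputs and the value many days ahead as the target, is the part most easily sketched. The example below is a hedged illustration on a synthetic series: LightGBM (a third-party package) is compared with random forest by MAE and RMSE for a single 30-day lead time, and the VMD step is only noted in a comment (e.g., a package such as vmdpy could supply the decomposed modes) rather than performed.

```python
# Sketch of extended-lead-time forecasting with lagged inputs, comparing LightGBM
# and random forest on a synthetic water-level series. In the study the inputs
# would additionally include the VMD modes of the series; that decomposition step
# is omitted here for brevity.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(2000)
level = 5 + 2 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.3, t.size)  # synthetic stage data

def make_supervised(series, n_lags=10, lead=30):
    """Antecedent lags as features, the value `lead` steps ahead as the target."""
    X, y = [], []
    for i in range(n_lags, len(series) - lead):
        X.append(series[i - n_lags:i])
        y.append(series[i + lead])
    return np.array(X), np.array(y)

X, y = make_supervised(level, n_lags=10, lead=30)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in {"LGBM": LGBMRegressor(), "RF": RandomForestRegressor(random_state=0)}.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "MAE=%.3f" % mean_absolute_error(y_te, pred),
          "RMSE=%.3f" % (mean_squared_error(y_te, pred) ** 0.5))
```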


Applications of Machine Learning Models for the Estimation of Reservoir CO2 Emissions (저수지 CO2 배출량 산정을 위한 기계학습 모델의 적용)

  • 유지수; 정세웅; 박형석
    • 한국물환경학회지, Vol. 33, No. 3, pp. 326-333, 2017
  • Lakes and reservoirs have been reported as important sources of carbon emissions to the atmosphere in many countries. Although field experiments and theoretical investigations based on fundamental gas exchange theory have proposed quantitative amounts of the Net Atmospheric Flux (NAF) in various climate regions, large uncertainties remain in global-scale estimation. Mechanistic models can be used to understand and estimate the temporal and spatial variations of the NAFs, considering the complicated hydrodynamic and biogeochemical processes in a reservoir, but these models require extensive and expensive datasets and model parameters. On the other hand, data-driven machine learning (ML) algorithms are likely to be alternative tools for estimating the NAFs in response to independent environmental variables. The objective of this study was to develop random forest (RF) and multi-layer artificial neural network (ANN) models for the estimation of the daily CO2 NAFs in Daecheong Reservoir, located on the Geum River of Korea, and to compare the models' performance against the multiple linear regression (MLR) model proposed in a previous study (Chung et al., 2016). As a result, the RF and ANN models showed much enhanced performance in estimating the high NAF values, while the MLR model significantly underestimated them. A cross-validation with 10-fold random samplings was applied to evaluate the performance of the three models, and indicated that the ANN model is best, followed by the RF and MLR models.
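
The three-way comparison under 10-fold cross-validation can be sketched as follows. This is an illustrative setup only: synthetic regression data stands in for the Daecheong Reservoir NAF dataset, and scikit-learn's MLPRegressor plays the role of the multi-layer ANN, with linear regression as the MLR baseline.

```python
# Sketch of the RF / ANN / MLR comparison with 10-fold cross-validation on
# synthetic stand-in data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=400, n_features=6, noise=8.0, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)

models = {
    "RF": RandomForestRegressor(random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R2 = {r2.mean():.3f}")
```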

Feature Selection with Ensemble Learning for Prostate Cancer Prediction from Gene Expression

  • Abass, Yusuf Aleshinloye; Adeshina, Steve A.
    • International Journal of Computer Science & Network Security, Vol. 21, No. 12spc, pp. 526-538, 2021
  • Machine and deep learning-based models are emerging techniques that are being used to address prediction problems in biomedical data analysis. DNA sequence prediction is a critical problem that has attracted a great deal of attention in the biomedical domain. Machine and deep learning-based models have been shown to provide more accurate results when compared to conventional regression-based models. The prediction of the gene sequences that lead to cancerous diseases, such as prostate cancer, is crucial. Identifying the most important features in a gene sequence is a challenging task. Extracting the components of the gene sequence that can provide an insight into the types of mutation in the gene is of great importance, as it will lead to effective drug design and the promotion of the new concept of personalised medicine. In this work, we extracted the exons in the prostate gene sequences used in the experiment. We built Deep Neural Network (DNN) and Bi-directional Long Short-Term Memory (Bi-LSTM) models using a k-mer encoding for the DNA sequence and one-hot encoding for the class label. The models were evaluated using different classification metrics. Our experimental results show that the DNN model offers a training accuracy of 99 percent and a validation accuracy of 96 percent, while the Bi-LSTM model has a training accuracy of 95 percent and a validation accuracy of 91 percent.
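
The two encoding steps named in the abstract, k-mer tokenization of the DNA sequence and one-hot encoding of the class labels, are easy to show in isolation. The sketch below is illustrative: the toy sequences, the choice of k = 6, and the label names are assumptions, and the downstream DNN/Bi-LSTM models are not built here.

```python
# Sketch of k-mer encoding for DNA sequences and one-hot encoding for class labels.
import numpy as np

def kmers(seq: str, k: int = 6):
    """Overlapping k-mers of a DNA sequence, e.g. 'ATGCGT' -> ['ATGCGT'] for k=6."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def one_hot(labels):
    """One-hot encode class labels, with columns in sorted label order."""
    classes = sorted(set(labels))
    return np.array([[1 if lab == c else 0 for c in classes] for lab in labels])

sequences = ["ATGCGTACGTTAG", "ATGCCTACGTAAG"]   # toy exon fragments
labels = ["tumour", "normal"]                    # illustrative class labels

tokens = [" ".join(kmers(s, k=6)) for s in sequences]  # space-joined k-mer "sentences"
print(tokens[0])
print(one_hot(labels))
```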

A Comparison Study on Forecasting Models for Air Compressor Power Consumption (공압기 소비전력에 대한 예측 모형의 비교연구)

  • 김주헌; 장문수; 김예진; 허요섭; 정현상; 박소영
    • 한국산업융합학회 논문집, Vol. 26, No. 4-2, pp. 657-668, 2023
  • Air compressors in the industrial sector are major energy consumers, accounting for a significant portion of total energy costs in manufacturing plants, ranging from 12% to 40%. To address this issue, forecasting models that can predict the power consumption of air compressors were compared. The forecasting models incorporate variables such as flow rate, pressure, temperature, humidity, and dew point, utilizing statistical methods, machine learning, and deep learning techniques. Model performance was compared using measures such as RMSE, MAE, and SMAPE. Out of the 21 models tested, the Elastic Net, a statistical method, proved to be the most effective for power consumption forecasting.
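
An Elastic Net on the five predictors listed above, scored with the same three metrics, can be sketched as follows. This is a hedged illustration: the data are synthetic, the regularization settings are arbitrary, and SMAPE is defined by hand since scikit-learn has no built-in for it.

```python
# Sketch of an Elastic Net power-consumption model on synthetic stand-ins for the
# abstract's predictors (flow rate, pressure, temperature, humidity, dew point).
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))                                   # five illustrative predictors
y = 50 + X @ np.array([8.0, 5.0, 2.0, 1.0, 0.5]) + rng.normal(0, 3.0, n)

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    return 100 * np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE:", mean_absolute_error(y_te, pred))
print("SMAPE(%):", smape(y_te, pred))
```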

Enhancing Heart Disease Prediction Accuracy through Soft Voting Ensemble Techniques

  • Byung-Joo Kim
    • International Journal of Internet, Broadcasting and Communication, Vol. 16, No. 3, pp. 290-297, 2024
  • We investigate the efficacy of ensemble learning methods, specifically the soft voting technique, for enhancing heart disease prediction accuracy. Our study uniquely combines Logistic Regression, SVM with RBF Kernel, and Random Forest models in a soft voting ensemble to improve predictive performance. We demonstrate that this approach outperforms individual models in diagnosing heart disease. Our research contributes to the field by applying a well-curated dataset with normalization and optimization techniques, conducting a comprehensive comparative analysis of different machine learning models, and showcasing the superior performance of the soft voting ensemble in medical diagnosis. This multifaceted approach allows us to provide a thorough evaluation of the soft voting ensemble's effectiveness in the context of heart disease prediction. We evaluate our models based on accuracy, precision, recall, F1 score, and Area Under the ROC Curve (AUC). Our results indicate that the soft voting ensemble technique achieves higher accuracy and robustness in heart disease prediction compared to individual classifiers. This study advances the application of machine learning in medical diagnostics, offering a novel approach to improve heart disease prediction. Our findings have significant implications for early detection and management of heart disease, potentially contributing to better patient outcomes and more efficient healthcare resource allocation.
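
The ensemble described above, logistic regression, an RBF-kernel SVM, and a random forest combined by soft voting, can be sketched directly with scikit-learn's VotingClassifier. This is an illustrative setup only: synthetic classification data stands in for the curated heart-disease dataset, and the per-model settings are assumptions.

```python
# Sketch of a soft-voting ensemble of logistic regression, RBF-kernel SVM, and
# random forest, evaluated with accuracy and AUC on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=600, n_features=13, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",   # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
print("AUC:", roc_auc_score(y_te, proba))
```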