• Title/Summary/Keyword: ensemble learning models

Transfer Learning-Based Feature Fusion Model for Classification of Maneuver Weapon Systems

  • Jinyong Hwang;You-Rak Choi;Tae-Jin Park;Ji-Hoon Bae
    • Journal of Information Processing Systems / v.19 no.5 / pp.673-687 / 2023
  • Convolutional neural network-based deep learning is the most commonly used technology for image identification, but it requires large-scale training data. Its application in fields where data acquisition is limited, such as the military, can therefore be challenging. In particular, the identification of ground weapon systems is a very important mission that requires high identification accuracy. Accordingly, various studies have attempted to achieve high performance using small-scale data. Among them, the ensemble method, which achieves excellent performance by averaging the predictions of pre-trained models, is the most representative approach; however, finding the optimal combination of ensemble models requires considerable time and effort. In addition, the prediction results obtained with an ensemble method face a performance limitation, and it is difficult to obtain the ensemble effect from models with imbalanced classification accuracies. In this paper, we propose a transfer learning-based feature fusion technique for heterogeneous models that extracts and fuses the features of pre-trained heterogeneous models and then fine-tunes the hyperparameters of the fully connected layer to improve the classification accuracy. The experimental results indicate that the limitations of existing ensemble methods can be overcome by improving classification accuracy through transfer learning-based feature fusion between heterogeneous models.
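
As a rough illustration of the fusion idea described in this abstract, the following PyTorch sketch concatenates the pooled features of two frozen, ImageNet-pretrained heterogeneous backbones and fine-tunes only a fully connected head. The choice of ResNet18 and DenseNet121, the head sizes, and the freezing scheme are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of transfer learning-based feature fusion between two
# heterogeneous pre-trained backbones (model and layer choices are assumptions).
import torch
import torch.nn as nn
from torchvision import models

class FeatureFusionClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Two heterogeneous ImageNet-pretrained backbones, classifiers removed.
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone_a = nn.Sequential(*list(resnet.children())[:-1])    # -> 512-d
        densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        self.backbone_b = nn.Sequential(densenet.features,
                                        nn.ReLU(inplace=True),
                                        nn.AdaptiveAvgPool2d(1))          # -> 1024-d
        # Freeze both backbones; only the fused fully connected head is fine-tuned.
        for p in self.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(512 + 1024, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes))

    def forward(self, x):
        fa = torch.flatten(self.backbone_a(x), 1)
        fb = torch.flatten(self.backbone_b(x), 1)
        return self.head(torch.cat([fa, fb], dim=1))   # feature fusion

# Example: fused logits for a batch of 224x224 RGB images, 8 hypothetical classes.
model = FeatureFusionClassifier(num_classes=8)
logits = model(torch.randn(4, 3, 224, 224))
```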

A Study on Predicting Lung Cancer Using RNA-Sequencing Data with Ensemble Learning (앙상블 기법을 활용한 RNA-Sequencing 데이터의 폐암 예측 연구)

  • Geon AN;JooYong PARK
    • Journal of Korea Artificial Intelligence Association / v.2 no.1 / pp.7-14 / 2024
  • In this paper, we explore the application of RNA-sequencing data and ensemble machine learning to predict lung cancer, a leading cause of cancer mortality worldwide, and to inform its treatment strategies. The research utilizes Random Forest, XGBoost, and LightGBM models to analyze gene expression profiles from extensive datasets, aiming to enhance predictive accuracy for lung cancer prognosis. The methodology focuses on preprocessing RNA-seq data to standardize expression levels across samples and applying ensemble algorithms to maximize prediction stability and reduce model overfitting. Key findings indicate that ensemble models, especially XGBoost, substantially outperform traditional predictive models. Significant genetic markers such as ADGRF5 are identified as crucial for predicting lung cancer outcomes. In conclusion, ensemble learning using RNA-seq data proves highly effective in predicting lung cancer, suggesting a potential shift towards more precise and personalized treatment approaches. The results advocate for further integration of molecular and clinical data to refine diagnostic models and improve clinical outcomes, underscoring the critical role of advanced molecular diagnostics in enhancing patient survival rates and quality of life. This study lays the groundwork for future research in the application of RNA-sequencing data and ensemble machine learning techniques in clinical settings.
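
A minimal scikit-learn sketch of the ensemble setup named in this abstract (soft voting over Random Forest, XGBoost, and LightGBM) is shown below; the stand-in data, split, and hyperparameters are placeholder assumptions, not the RNA-seq pipeline from the study.

```python
# Soft-voting ensemble over Random Forest, XGBoost, and LightGBM (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# X: standardized gene-expression matrix (samples x genes); y: tumor/normal labels.
X, y = np.random.rand(200, 500), np.random.randint(0, 2, 200)   # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=300, learning_rate=0.05,
                              eval_metric="logloss", random_state=0)),
        ("lgbm", LGBMClassifier(n_estimators=300, learning_rate=0.05,
                                random_state=0)),
    ],
    voting="soft",   # average predicted probabilities for stability
)
ensemble.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```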

A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm (앙상블 학습 알고리즘을 이용한 컨벌루션 신경망의 분류 성능 분석에 관한 연구)

  • Park, Sung-Wook;Kim, Jong-Chan;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.665-675 / 2019
  • In this paper, we compare and analyze the classification performance of convolutional neural networks (CNNs) according to ensemble generation and combining techniques. We used several CNN models (VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, and GoogLeNet) to create 10 ensemble generation combinations and applied six combining techniques (average, weighted average, maximum, minimum, median, and product) to the optimal combination. In the experiments, the DenseNet169-VGG16-GoogLeNet combination for ensemble generation and the product rule for ensemble combining showed the best performance. Based on this, we conclude that ensembling different models with high benchmark scores is another way to obtain good results.
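
The six combining rules listed in this abstract can be expressed compactly as operations on stacked class-probability outputs. The sketch below is a generic illustration with placeholder predictions, not the paper's benchmark models.

```python
# Generic implementation of six ensemble combining rules over model probabilities.
import numpy as np

def combine(probs, rule="average", weights=None):
    """probs: array of shape (n_models, n_samples, n_classes)."""
    probs = np.asarray(probs)
    if rule == "average":
        fused = probs.mean(axis=0)
    elif rule == "weighted_average":
        w = np.asarray(weights).reshape(-1, 1, 1)
        fused = (w * probs).sum(axis=0) / w.sum()
    elif rule == "maximum":
        fused = probs.max(axis=0)
    elif rule == "minimum":
        fused = probs.min(axis=0)
    elif rule == "median":
        fused = np.median(probs, axis=0)
    elif rule == "product":
        fused = probs.prod(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return fused.argmax(axis=1)   # final class decision per sample

# Example: three models, four samples, three classes (placeholder probabilities).
preds = np.random.dirichlet(np.ones(3), size=(3, 4))
print(combine(preds, rule="product"))
```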

An ensemble learning based Bayesian model updating approach for structural damage identification

  • Guangwei Lin;Yi Zhang;Enjian Cai;Taisen Zhao;Zhaoyan Li
    • Smart Structures and Systems / v.32 no.1 / pp.61-81 / 2023
  • This study presents an ensemble learning based Bayesian model updating approach for structural damage diagnosis. In the developed framework, the structure is initially decomposed into a set of substructures. An autoregressive moving average with exogenous input (ARMAX) model is first established for structural damage localization based on the structural motion equation. Wavelet packet decomposition is utilized to extract the damage-sensitive node energy in different frequency bands for constructing structural surrogate models. Four methods, namely the Kriging predictor (KRG), radial basis function neural network (RBFNN), support vector regression (SVR), and multivariate adaptive regression splines (MARS), are selected as candidate surrogate models. These models are then resampled by bootstrapping and combined into an ensemble model by probabilistic ensembling. Meanwhile, the maximum entropy principle is adopted to search for new design points for sample space updating, yielding a more robust ensemble model. Through these iterations, a surrogate ensemble learning based model updating framework with high model construction efficiency and accuracy is obtained. The specifics of the method are discussed and investigated in a case study.
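
A much-simplified sketch of a bootstrapped, performance-weighted surrogate ensemble is given below, using a Gaussian process (Kriging), SVR, and an MLP as stand-ins for the candidate surrogates (MARS is omitted, and the plain inverse-error weighting here is an assumption, not the paper's probabilistic ensemble and maximum entropy resampling).

```python
# Bootstrap-resampled surrogate models combined with error-based weights (illustrative).
import numpy as np
from sklearn.base import clone
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.utils import resample
from sklearn.metrics import mean_squared_error

def fit_surrogate_ensemble(X, y, X_val, y_val, n_boot=10):
    bases = [GaussianProcessRegressor(alpha=1e-6), SVR(),
             MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)]
    members, errors = [], []
    for model in bases:
        for b in range(n_boot):
            Xb, yb = resample(X, y, random_state=b)         # bootstrap resampling
            m = clone(model).fit(Xb, yb)
            members.append(m)
            errors.append(mean_squared_error(y_val, m.predict(X_val)))
    # Smaller validation error -> larger ensemble weight.
    weights = 1.0 / (np.asarray(errors) + 1e-12)
    weights /= weights.sum()
    return members, weights

def predict_ensemble(members, weights, X):
    preds = np.stack([m.predict(X) for m in members])       # (n_members, n_samples)
    return weights @ preds                                   # weighted average
```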

Investment, Export, and Exchange Rate on Prediction of Employment with Decision Tree, Random Forest, and Gradient Boosting Machine Learning Models (투자와 수출 및 환율의 고용에 대한 의사결정 나무, 랜덤 포레스트와 그래디언트 부스팅 머신러닝 모형 예측)

  • Chae-Deug Yi
    • Korea Trade Review / v.46 no.2 / pp.281-299 / 2021
  • This paper analyzes the feasibility of using machine learning methods to forecast employment. Machine learning methods such as the decision tree and the artificial neural network, and ensemble models such as the random forest and the gradient boosting regression tree, were used to forecast employment in the Busan regional economy. The main findings of the comparison of their predictive abilities were as follows. First, the machine learning methods can predict employment well. Second, the employment forecasts from the decision tree models differed somewhat according to the depth of the trees. Third, the artificial neural network model did not show high predictive power. Fourth, the ensemble models, namely the random forest and the gradient boosting regression tree, showed higher predictive power. Thus, since machine learning methods can accurately predict employment, the accuracy of employment forecasting should be improved through their use.
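
The tree-based comparison described here can be reproduced in outline with scikit-learn; the sketch below uses synthetic stand-ins for the investment, export, and exchange rate features rather than the Busan employment series, and includes two tree depths to mirror the depth comparison mentioned in the abstract.

```python
# Comparing a decision tree, random forest, and gradient boosting regressor (illustrative).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

X = np.random.rand(120, 3)   # columns: investment, export, exchange rate (stand-ins)
y = 10 + X @ np.array([0.5, 0.3, -0.2]) + 0.05 * np.random.randn(120)   # employment proxy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)          # keep time order

models = {
    "tree_depth3": DecisionTreeRegressor(max_depth=3, random_state=0),
    "tree_depth6": DecisionTreeRegressor(max_depth=6, random_state=0),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gbrt": GradientBoostingRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, mean_absolute_percentage_error(y_te, model.predict(X_te)))
```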

Forecasting Sow's Productivity using the Machine Learning Models (머신러닝을 활용한 모돈의 생산성 예측모델)

  • Lee, Min-Soo;Choe, Young-Chan
    • Journal of Agricultural Extension & Community Development / v.16 no.4 / pp.939-965 / 2009
  • Machine learning has been identified as a promising approach to knowledge-based system development. This study examines the ability of machine learning techniques to support farmers' decision making and develops a reference model for using pig farm data. We compared five machine learning techniques: logistic regression, decision tree, artificial neural network, k-nearest neighbor, and ensemble. All models predicted the sow's productivity well in all parities, showing over 87.6% predictability. The predictability of total litter size was highest at 91.3% in the third parity and decreased as parity increased. The ensemble performed well in predicting the sow's productivity. The neural network and logistic regression were excellent classifiers for all parities, whereas the decision tree and the k-nearest neighbor were not. Performance varied across models, showing up to a 104% difference in lift values; the artificial neural network and ensemble models produced the highest lift values, implying the best performance among the models.
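
The five classifier families compared in this study map directly onto scikit-learn estimators; the sketch below is a generic comparison on synthetic data, with a soft-voting ensemble standing in for the paper's ensemble model (the farm dataset and parity-specific features are not reproduced).

```python
# Comparing five classifier families by cross-validated accuracy (illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)

classifiers = {
    "logistic": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=7),
}
# Simple ensemble: soft voting over the four base classifiers.
classifiers["ensemble"] = VotingClassifier(
    estimators=list(classifiers.items()), voting="soft")

for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```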

A comparative assessment of bagging ensemble models for modeling concrete slump flow

  • Aydogmus, Hacer Yumurtaci;Erdal, Halil Ibrahim;Karakurt, Onur;Namli, Ersin;Turkan, Yusuf S.;Erdal, Hamit
    • Computers and Concrete / v.16 no.5 / pp.741-757 / 2015
  • In the last decade, several modeling approaches have been proposed and applied to estimate high-performance concrete (HPC) slump flow. Because HPC is a highly complex material, modeling its behavior is very difficult, so the selection and application of proper modeling methods remains a crucial task. Like many other applications, HPC slump flow prediction suffers from noise, which negatively affects prediction accuracy and increases variance. In recent years, ensemble learning methods have been introduced to improve prediction accuracy and reduce prediction error. This study investigates the potential of bagging (Bag), one of the most popular ensemble learning methods, for building ensemble models. Four well-known artificial intelligence models (classification and regression trees CART, support vector machines SVM, multilayer perceptron MLP, and radial basis function neural networks RBF) are deployed as base learners. The bagging ensemble models (Bag-SVM, Bag-RT, Bag-MLP, and Bag-RBF) are found to be superior to their base learners (SVM, CART, MLP, and RBF), and bagging noticeably improves the prediction accuracy and reduces the prediction error of the proposed predictive models.
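
The bagging setup with heterogeneous base learners can be sketched with scikit-learn's BaggingRegressor as below; the dataset is synthetic, and the RBF network used in the paper has no direct scikit-learn counterpart, so only CART, SVM, and MLP base learners are shown.

```python
# Bagging with three different base regressors, scored by cross-validated R^2 (illustrative).
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=7, noise=10.0, random_state=0)

base_learners = {
    "Bag-RT": DecisionTreeRegressor(random_state=0),   # regression tree (CART)
    "Bag-SVM": SVR(C=10.0),
    "Bag-MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
}
for name, base in base_learners.items():
    # Note: scikit-learn >= 1.2 uses `estimator=`; older versions use `base_estimator=`.
    bag = BaggingRegressor(estimator=base, n_estimators=25, random_state=0)
    print(name, cross_val_score(bag, X, y, cv=5, scoring="r2").mean())
```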

Prediction of electricity consumption in A hotel using ensemble learning with temperature (앙상블 학습과 온도 변수를 이용한 A 호텔의 전력소모량 예측)

  • Kim, Jaehwi;Kim, Jaehee
    • The Korean Journal of Applied Statistics / v.32 no.2 / pp.319-330 / 2019
  • Forecasting electricity consumption by analyzing past electricity consumption is advantageous for energy planning and policy. Machine learning is widely used to predict electricity consumption. Among its methods, ensemble learning avoids model overfitting and reduces variance to improve prediction accuracy. However, when applied to daily data, ensemble learning has the disadvantage of predicting a central value without capturing the peaks. In this study, we overcome this shortcoming by considering the temperature trend. We compare nine models and propose a model using a random forest with the linear trend of temperature.
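
One plausible reading of "random forest with the linear trend of temperature" is a hybrid in which a linear model captures the temperature-driven trend and a random forest explains the residuals; the sketch below implements that hybrid on synthetic daily data and is an assumption about the model form, not the authors' exact specification.

```python
# Hybrid forecast: linear temperature trend plus a random forest on the residuals (illustrative).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
temp = rng.uniform(-5, 30, size=365)                        # daily mean temperature
calendar = rng.integers(0, 7, size=365).reshape(-1, 1)      # e.g., day of week
load = 500 + 8 * np.abs(temp - 18) + rng.normal(0, 20, 365) # synthetic daily kWh

# Step 1: capture the temperature-driven trend with a linear model.
trend = LinearRegression().fit(temp.reshape(-1, 1), load)
residual = load - trend.predict(temp.reshape(-1, 1))

# Step 2: let a random forest explain what the trend misses (peaks, calendar effects).
X = np.hstack([temp.reshape(-1, 1), calendar])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, residual)

# Final forecast = linear temperature trend + forest-predicted residual.
forecast = trend.predict(temp.reshape(-1, 1)) + rf.predict(X)
print("in-sample MAE:", np.mean(np.abs(load - forecast)))
```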

On successive machine learning process for predicting strength and displacement of rectangular reinforced concrete columns subjected to cyclic loading

  • Bu-seog Ju;Shinyoung Kwag;Sangwoo Lee
    • Computers and Concrete / v.32 no.5 / pp.513-525 / 2023
  • Recently, research on predicting the behavior of reinforced concrete (RC) columns using machine learning methods has been actively conducted. However, most studies have focused on predicting the ultimate strength of RC columns with a regression algorithm. Therefore, this study develops a successive machine learning process for predicting multiple nonlinear behaviors of rectangular RC columns. The process consists of three stages: single machine learning, bagging ensemble, and stacking ensemble. For strength prediction, sufficient accuracy is confirmed even in the first stage. For displacement, sufficient accuracy is not achieved in the first and second stages, but the stacking ensemble model in the third stage performs better than the machine learning models of the earlier stages. In addition, the performance of the final prediction models is verified by comparing the backbone curves and hysteresis loops obtained from the predicted outputs with actual experimental data.
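
The three-stage idea (single learners, bagging them, then stacking) can be sketched with scikit-learn as follows; the learner choices, hyperparameters, and synthetic data are illustrative and do not correspond to the RC-column models in the paper.

```python
# Stage 1: single learners; Stage 2: bagged learners; Stage 3: stacking (illustrative).
from sklearn.ensemble import BaggingRegressor, StackingRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=400, n_features=10, noise=15.0, random_state=0)

# Stage 1: single machine learning models.
singles = {"svr": SVR(C=10.0),
           "mlp": MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)}

# Stage 2: bagging ensembles built on the single models.
bagged = {f"bag_{name}": BaggingRegressor(estimator=m, n_estimators=20, random_state=0)
          for name, m in singles.items()}

# Stage 3: stacking the bagged models with a simple meta-learner.
stack = StackingRegressor(estimators=list(bagged.items()), final_estimator=Ridge())

for name, model in {**singles, **bagged, "stacking": stack}.items():
    print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```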

Development of Highway Traffic Information Prediction Models Using the Stacking Ensemble Technique Based on Cross-validation (스태킹 앙상블 기법을 활용한 고속도로 교통정보 예측모델 개발 및 교차검증에 따른 성능 비교)

  • Yoseph Lee;Seok Jin Oh;Yejin Kim;Sung-ho Park;Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.6 / pp.1-16 / 2023
  • Accurate traffic information prediction is considered one of the most important aspects of intelligent transport systems (ITS), as it can guide users of transportation facilities to avoid congested routes. Various deep learning models have been developed for accurate traffic prediction. Recently, ensemble techniques have been utilized to combine the strengths and weaknesses of various models in various ways to improve prediction accuracy and stability. Therefore, in this study, we developed traffic information prediction models using various deep learning models and evaluated their performance as a stacking ensemble. The individual models showed error rates within 10% for traffic volume prediction and within 3% for speed prediction. The ensemble model showed higher accuracy than the other models when no cross-validation was performed, and when cross-validation was performed it showed a uniform error rate in long-term forecasting.
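
The role of cross-validation in a stacking ensemble, where the meta-learner is trained on out-of-fold base-model predictions, can be illustrated as below; the base models here are generic regressors, not the deep learning traffic models developed in the study.

```python
# Stacking with cross-validated (out-of-fold) meta-features (illustrative).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

base_models = [RandomForestRegressor(n_estimators=200, random_state=0),
               GradientBoostingRegressor(random_state=0)]

# Out-of-fold predictions become the meta-features, so the meta-learner never
# sees predictions made on the data a base model was trained on.
meta_features = np.column_stack(
    [cross_val_predict(m, X, y, cv=cv) for m in base_models])
meta_learner = Ridge().fit(meta_features, y)

# At prediction time the base models are refit on all data.
for m in base_models:
    m.fit(X, y)
new_meta = np.column_stack([m.predict(X) for m in base_models])
print("stacked prediction (first 3):", meta_learner.predict(new_meta)[:3])
```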