• Title/Summary/Keyword: boosting regression trees


An Ensemble Cascading Extremely Randomized Trees Framework for Short-Term Traffic Flow Prediction

  • Zhang, Fan; Bai, Jing; Li, Xiaoyu; Pei, Changxing; Havyarimana, Vincent
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.1975-1988, 2019
  • Short-term traffic flow prediction plays an important role in intelligent transportation systems (ITS) in areas such as transportation management, traffic control and guidance. For short-term traffic flow regression predictions, the main challenge stems from the non-stationary property of traffic flow data. In this paper, we design an ensemble cascading prediction framework based on extremely randomized trees (extra-trees) using a boosting technique, called EET, to predict the short-term traffic flow under non-stationary environments. Extra-trees is a tree-based ensemble method that essentially consists of strongly randomizing both the attribute and cut-point choices while splitting a tree node. This mechanism reduces the variance of the model and is, therefore, more suitable for traffic flow regression prediction in non-stationary environments. Moreover, the framework combines the extra-trees algorithm with a boosting ensemble technique and model averaging to improve predictive accuracy and control overfitting. To the best of our knowledge, this is the first time that extra-trees have been used as fundamental building blocks in boosting committee machines. The proposed approach predicts traffic flow 5 min in advance using real-time data while inherently considering temporal and spatial correlations. Experiments demonstrate that the proposed method achieves higher accuracy and lower variance and computational complexity when compared to the existing methods.
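
As a rough illustration of the idea (not the authors' EET framework), the sketch below boosts extremely randomized trees with scikit-learn's AdaBoost.R2 on synthetic data; the estimator= keyword assumes scikit-learn 1.2 or later, and the lagged-feature setup is only a stand-in for real traffic-flow inputs.

```python
# Minimal sketch: boosting extremely randomized trees for a regression task.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import ExtraTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
# Synthetic stand-in for lagged traffic-flow features (real data would use
# recent counts from the target detector and its spatial neighbours).
X = rng.uniform(0, 1, size=(2000, 8))
y = 50 * np.sin(2 * np.pi * X[:, 0]) + 10 * X[:, 1] + rng.normal(0, 2, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost.R2 with a single extremely randomized tree as the weak learner:
# split points are drawn at random, which lowers the variance of each tree.
model = AdaBoostRegressor(
    estimator=ExtraTreeRegressor(max_depth=6, random_state=0),
    n_estimators=100,
    learning_rate=0.1,
    random_state=0,
)
model.fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"test RMSE: {rmse:.3f}")
```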

A review of tree-based Bayesian methods

  • Linero, Antonio R.
    • Communications for Statistical Applications and Methods, v.24 no.6, pp.543-559, 2017
  • Tree-based regression and classification ensembles form a standard part of the data-science toolkit. Many commonly used methods take an algorithmic view, proposing greedy methods for constructing decision trees; examples include the classification and regression trees algorithm, boosted decision trees, and random forests. Recent history has seen a surge of interest in Bayesian techniques for constructing decision tree ensembles, with these methods frequently outperforming their algorithmic counterparts. The goal of this article is to survey the landscape surrounding Bayesian decision tree methods, and to discuss recent modeling and computational developments. We provide connections between Bayesian tree-based methods and existing machine learning techniques, and outline several recent theoretical developments establishing frequentist consistency and rates of convergence for the posterior distribution. The methodology we present is applicable for a wide variety of statistical tasks including regression, classification, modeling of count data, and many others. We illustrate the methodology on both simulated and real datasets.
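
For a concrete flavour of the Bayesian tree ensembles surveyed here, the sketch below fits a BART-style sum-of-trees regression. It assumes the third-party pymc-bart package (pmb.BART) together with PyMC; the call pattern follows that package's documented usage and does not come from the article itself.

```python
# Hedged sketch of a BART-style regression, assuming the pymc-bart package.
import numpy as np
import pymc as pm
import pymc_bart as pmb

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, 200)

with pm.Model() as model:
    # Sum-of-trees prior over the regression function (m trees).
    mu = pmb.BART("mu", X, y, m=50)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(draws=500, tune=500, random_seed=0)

# Posterior mean of the regression function at the training points.
post_mean = idata.posterior["mu"].mean(dim=("chain", "draw"))
print(post_mean.shape)
```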

Study on the ensemble methods with kernel ridge regression

  • Kim, Sun-Hwa; Cho, Dae-Hyeon; Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society, v.23 no.2, pp.375-383, 2012
  • The purpose of ensemble methods is to increase the accuracy of prediction by combining many classifiers. According to recent studies, random forests and forward stagewise regression achieve good accuracy in classification problems. However, they suffer from large prediction errors near separation boundaries because they use decision trees as base learners. In this study, we use kernel ridge regression instead of decision trees as the base learner in random forests and boosting. The usefulness of the proposed ensemble methods is demonstrated by simulation results on the prostate cancer and Boston housing data.
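
A minimal sketch of the paper's idea, using scikit-learn: kernel ridge regression replaces decision trees as the base learner inside a bagging (random-forest-like) ensemble and an AdaBoost.R2 ensemble. The data and hyperparameters are illustrative only, not the study's settings.

```python
# Kernel ridge regression as the base learner in bagging and boosting.
import numpy as np
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(500, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.3, 500)

krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)

# Bagging with random feature subsets: a random-forest-like committee of KRRs.
bag = BaggingRegressor(estimator=krr, n_estimators=50,
                       max_samples=0.8, max_features=0.6, random_state=0)

# AdaBoost.R2 with KRR weak learners (KernelRidge.fit accepts sample_weight).
boost = AdaBoostRegressor(estimator=krr, n_estimators=50,
                          learning_rate=0.5, random_state=0)

for name, est in [("bagged KRR", bag), ("boosted KRR", boost)]:
    scores = cross_val_score(est, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```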

Investment, Export, and Exchange Rate on Prediction of Employment with Decision Tree, Random Forest, and Gradient Boosting Machine Learning Models (투자와 수출 및 환율의 고용에 대한 의사결정 나무, 랜덤 포레스트와 그래디언트 부스팅 머신러닝 모형 예측)

  • Chae-Deug Yi
    • Korea Trade Review, v.46 no.2, pp.281-299, 2021
  • This paper analyzes the feasibility of using machine learning methods to forecast employment. Machine learning methods such as decision trees and artificial neural networks, and ensemble models such as random forest and gradient boosting regression trees, were used to forecast employment in the Busan regional economy. The comparison of their predictive abilities yielded the following main findings. First, machine learning methods can predict employment well. Second, the employment forecasts produced by the decision tree models varied somewhat with the depth of the trees. Third, the artificial neural network model did not show high predictive power. Fourth, the ensemble models, random forest and the gradient boosting regression tree, showed higher predictive power. Since machine learning methods can accurately predict employment, forecasting accuracy should be improved through their use.
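
An illustrative sketch of the kind of comparison described, on synthetic data with hypothetical feature names (investment, export, exchange_rate): a decision tree at several depths, a random forest, and a gradient boosting regression tree, compared by test error.

```python
# Comparing tree depth, random forest, and gradient boosting for regression.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(1)
n = 400
df = pd.DataFrame({
    "investment": rng.normal(100, 20, n),      # hypothetical feature names
    "export": rng.normal(200, 40, n),
    "exchange_rate": rng.normal(1100, 80, n),
})
employment = (0.8 * df["investment"] + 0.3 * df["export"]
              - 0.05 * df["exchange_rate"] + rng.normal(0, 10, n))

X_tr, X_te, y_tr, y_te = train_test_split(df, employment, random_state=1)

models = {f"tree depth={d}": DecisionTreeRegressor(max_depth=d, random_state=1)
          for d in (2, 4, 8)}
models["random forest"] = RandomForestRegressor(n_estimators=300, random_state=1)
models["gradient boosting"] = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3, random_state=1)

for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name}: test MAE = {mean_absolute_error(y_te, m.predict(X_te)):.2f}")
```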

Pruning the Boosting Ensemble of Decision Trees

  • Yoon, Young-Joo; Song, Moon-Sup
    • Communications for Statistical Applications and Methods, v.13 no.2, pp.449-466, 2006
  • We propose to use variable selection methods based on penalized regression for pruning decision tree ensembles. Pruning methods based on LASSO and SCAD are compared with the cluster pruning method. Comparative studies are performed on artificial and real datasets. According to the results, the proposed methods based on penalized regression reduce the size of boosting ensembles without significantly decreasing accuracy and perform better than the cluster pruning method. In the presence of classification noise, the proposed pruning methods can mitigate the weakness of AdaBoost to some degree.
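
A hedged sketch of LASSO-based pruning of a boosted tree ensemble (the paper also uses SCAD, which scikit-learn does not provide): the labels are regressed on the individual trees' outputs, and trees whose coefficients shrink to zero are dropped. This is an illustration of the general idea, not the authors' exact procedure.

```python
# LASSO-based pruning of an AdaBoost decision-tree ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=200, random_state=0).fit(X_tr, y_tr)

def margins(ensemble, X):
    # Each column is one tree's prediction encoded as +/-1.
    return np.column_stack([2 * t.predict(X) - 1 for t in ensemble.estimators_])

# Regress the +/-1 labels on the tree outputs; zeroed coefficients mean pruning.
lasso = LassoCV(cv=5, random_state=0).fit(margins(ada, X_tr), 2 * y_tr - 1)
kept = np.flatnonzero(lasso.coef_)
print(f"kept {kept.size} of {len(ada.estimators_)} trees")

# Predict with the pruned, re-weighted committee.
score = margins(ada, X_te)[:, kept] @ lasso.coef_[kept] + lasso.intercept_
y_pred = (score > 0).astype(int)
print(f"full ensemble acc:   {accuracy_score(y_te, ada.predict(X_te)):.3f}")
print(f"pruned ensemble acc: {accuracy_score(y_te, y_pred):.3f}")
```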

A Study for Improving the Performance of Data Mining Using Ensemble Techniques (앙상블기법을 이용한 다양한 데이터마이닝 성능향상 연구)

  • Jung, Yon-Hae; Eo, Soo-Heang; Moon, Ho-Seok; Cho, Hyung-Jun
    • Communications for Statistical Applications and Methods, v.17 no.4, pp.561-574, 2010
  • We studied the performance of eight data mining algorithms, including decision trees, logistic regression, LDA, QDA, neural networks, and SVM, in combination with two ensemble techniques, bagging and boosting. In this study, we utilized 13 data sets with binary responses. Sensitivity, specificity, and misclassification error were used as criteria for comparison.
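
A minimal sketch of the evaluation criteria used in the study: sensitivity, specificity, and misclassification error for a base decision tree and its bagged and boosted versions, computed from a confusion matrix on synthetic data.

```python
# Sensitivity, specificity, and misclassification error for a base classifier
# and its bagged and boosted ensembles.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=1000, n_features=15, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

base = DecisionTreeClassifier(max_depth=4, random_state=2)
models = {
    "single tree": base,
    "bagging": BaggingClassifier(estimator=base, n_estimators=100, random_state=2),
    "boosting": AdaBoostClassifier(estimator=base, n_estimators=100, random_state=2),
}

for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    err = (fp + fn) / (tn + fp + fn + tp)
    print(f"{name}: sensitivity={sens:.3f} specificity={spec:.3f} error={err:.3f}")
```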

IoT Enabled Intelligent System for Radiation Monitoring and Warning Approach using Machine Learning

  • Muhammad Saifullah; Imran Sarwar Bajwa; Muhammad Ibrahim; Mutyyba Asgher
    • International Journal of Computer Science & Network Security, v.23 no.5, pp.135-147, 2023
  • The Internet of Things has revolutionized every field of life through the use of artificial intelligence and machine learning. It is successfully being used for radiation monitoring and for the prediction of ultraviolet and electromagnetic rays. However, no particular system is available that can monitor and detect such waves. Therefore, in the present study, an IoT-enabled intelligent system based on machine learning was developed for the prediction of radiation and its effects on human beings. Moreover, a sensor-based system was installed to detect harmful radiation in the environment; this system can alert humans within range of the danger zone with a buzzer so that they can move to a safer place. Along with this automatic sensor system, a dataset was created in which the sensor values were recorded. Furthermore, to study the effects of these rays, the researchers used Support Vector Machine, Gaussian Naïve Bayes, Decision Trees, Extra Trees, Bagging Classifier, Random Forests, Logistic Regression, and Adaptive Boosting classifiers. The results show high accuracy and demonstrate that the proposed system is reliable and accurate for the detection and monitoring of waves. For prediction of the outcome, the Adaptive Boosting classifier showed the best accuracy of 81.77% compared with the other classifiers.
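
A hedged sketch of the classifier comparison, with synthetic data standing in for the study's sensor readings: several of the listed models, including Adaptive Boosting, compared by test accuracy.

```python
# Comparing several classifiers, including AdaBoost, by test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic "radiation level" data; the real study used self-collected sensor values.
X, y = make_classification(n_samples=2000, n_features=10, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

classifiers = {
    "SVM": SVC(),
    "Gaussian NB": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=3),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=3),
}

for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```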

Predicting rock brittleness indices from simple laboratory test results using some machine learning methods

  • Davood Fereidooni; Zohre Karimi
    • Geomechanics and Engineering, v.34 no.6, pp.697-726, 2023
  • Brittleness, as an important property of rock, plays a crucial role both in the failure process of intact rock and in the response of rock masses to excavation in engineering geological and geotechnical projects. Generally, rock brittleness indices are calculated from the mechanical properties of rocks, such as uniaxial compressive strength, tensile strength, and modulus of elasticity. These properties are generally determined from complicated, expensive, and time-consuming laboratory tests. For this reason, the present research attempts to predict rock brittleness indices from simple, inexpensive, and quick laboratory test results, namely dry unit weight, porosity, slake-durability index, P-wave velocity, Schmidt rebound hardness, and point load strength index, using multiple linear regression, exponential regression, support vector machine (SVM) with various kernels, a generated fuzzy inference system, and a regression tree ensemble (RTE) with a boosting framework, which could be considered an innovation of the present research. For this purpose, 39 rock samples, including five igneous, twenty-six sedimentary, and eight metamorphic, were collected from different regions of Iran. Mineralogical, physical, and mechanical properties, as well as five well-known rock brittleness indices (i.e., B1, B2, B3, B4, and B5), were measured for the selected rock samples before application of the above-mentioned machine learning techniques. The performance of the developed models was evaluated based on several statistical metrics, such as mean square error, relative absolute error, root relative absolute error, determination coefficient, variance accounted for, mean absolute percentage error, and standard deviation of the error. The comparison of the obtained results revealed that, among the studied methods, SVM is the most suitable for predicting B1, B2, and B5, while RTE predicts B3 and B4 better than the other methods.
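
An illustrative sketch, on synthetic data with stand-in features, of the model families compared in the paper: SVR with different kernels versus a boosted regression-tree ensemble (scikit-learn's GradientBoostingRegressor as an analogue of RTE with boosting), scored with a few of the listed metrics.

```python
# SVR with different kernels vs. a boosted regression-tree ensemble.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score

rng = np.random.RandomState(4)
n = 300
# Stand-ins for dry unit weight, porosity, P-wave velocity, Schmidt hardness, etc.
X = rng.uniform(0, 1, size=(n, 6))
brittleness = 5 + 3 * X[:, 0] - 2 * X[:, 1] + np.sin(4 * X[:, 2]) + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, brittleness, random_state=4)

models = {
    "SVR (rbf)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10)),
    "SVR (poly)": make_pipeline(StandardScaler(), SVR(kernel="poly", degree=2, C=10)),
    "boosted regression trees": GradientBoostingRegressor(
        n_estimators=300, learning_rate=0.05, max_depth=3, random_state=4),
}

for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    print(f"{name}: MSE={mean_squared_error(y_te, pred):.3f} "
          f"MAPE={mean_absolute_percentage_error(y_te, pred):.3f} "
          f"R2={r2_score(y_te, pred):.3f}")
```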

Accuracy Evaluation of Machine Learning Model for Concrete Aging Prediction due to Thermal Effect and Carbonation (콘크리트 탄산화 및 열효과에 의한 경년열화 예측을 위한 기계학습 모델의 정확성 검토)

  • Kim, Hyun-Su
    • Journal of Korean Association for Spatial Structures, v.23 no.4, pp.81-88, 2023
  • Numerous factors contribute to the deterioration of reinforced concrete structures. Elevated temperatures significantly alter the composition of the concrete ingredients, consequently diminishing the concrete's strength properties. With the escalation of global CO2 levels, the carbonation of concrete structures has emerged as a critical challenge, substantially affecting concrete durability research. Assessing and predicting concrete degradation due to thermal effects and carbonation are crucial yet intricate tasks. To address this, multiple prediction models for concrete carbonation and compressive strength under thermal impact have been developed. This study employs seven machine learning algorithms (multiple linear regression, decision trees, random forest, support vector machines, k-nearest neighbors, artificial neural networks, and extreme gradient boosting) to formulate predictive models for concrete carbonation and thermal impact. Two distinct datasets, derived from reported experimental studies, were utilized for training these predictive models. Performance evaluation relied on metrics such as root mean square error, mean square error, mean absolute error, and coefficient of determination. The optimization of hyperparameters was achieved through k-fold cross-validation and grid search techniques. The analytical outcomes demonstrate that neural networks and extreme gradient boosting algorithms outperform the remaining five machine learning approaches, showing outstanding predictive performance for concrete carbonation and thermal effect modeling.
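
A minimal sketch of the hyperparameter tuning step described: grid search with k-fold cross-validation over an extreme gradient boosting regressor. It assumes the xgboost package (scikit-learn's GradientBoostingRegressor would work the same way with GridSearchCV); the data and parameter grid are illustrative.

```python
# Grid search with k-fold cross-validation for an XGBoost regressor.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.RandomState(5)
# Synthetic stand-in for mix/exposure variables vs. carbonation depth.
X = rng.uniform(0, 1, size=(500, 6))
y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 500)

param_grid = {
    "n_estimators": [200, 400],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror", random_state=5),
    param_grid,
    cv=KFold(n_splits=5, shuffle=True, random_state=5),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best CV RMSE:", -search.best_score_)
```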

A Study on the Employee Turnover Prediction using XGBoost and SHAP (XGBoost와 SHAP 기법을 활용한 근로자 이직 예측에 관한 연구)

  • Lee, Jae Jun; Lee, Yu Rin; Lim, Do Hyun; Ahn, Hyun Chul
    • The Journal of Information Systems, v.30 no.4, pp.21-42, 2021
  • Purpose In order for companies to continue to grow, they should properly manage human resources, which are the core of corporate competitiveness. Employee turnover means the loss of talent in the workforce. When an employee voluntarily leaves his or her company, the company loses its hiring and training investment, may see the withdrawal of key personnel, and incurs new costs to train a replacement. From an employee's viewpoint, moving to another company is also risky because it can be time-consuming and costly. Therefore, in order to reduce the social and economic costs caused by employee turnover, it is necessary to accurately predict employee turnover intention, identify the factors affecting it, and manage them appropriately within the company. Design/methodology/approach Prior studies have mainly used logistic regression and decision trees, which have explanatory power but poor predictive accuracy. In order to develop a more accurate prediction model, XGBoost is proposed as the classification technique. Then, to compensate for the lack of explainability, SHAP, one of the XAI techniques, is applied. As a result, the prediction accuracy of the proposed model is improved compared to conventional methods such as logistic regression and decision trees. By applying SHAP to the proposed model, the factors affecting overall employee turnover intention as well as a specific sample's turnover intention are identified. Findings Experimental results show that the prediction accuracy of XGBoost is superior to that of logistic regression and decision trees. Using SHAP, we find that jobseeking, annuity, eng_test, comm_temp, seti_dev, seti_money, equl_ablt, and sati_safe significantly affect overall employee turnover intention. In addition, it is confirmed that the factors affecting an individual's turnover intention are more diverse. Our research findings imply that companies should adopt a personalized approach for each employee in order to effectively prevent his or her turnover.
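
A hedged sketch of the modelling pipeline: an XGBoost classifier explained with SHAP's TreeExplainer. It assumes the xgboost and shap packages; the feature names echo some mentioned in the abstract, but the data and the relationship are synthetic.

```python
# XGBoost turnover classifier explained with SHAP (synthetic data).
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(6)
n = 1000
X = pd.DataFrame(rng.uniform(0, 1, size=(n, 4)),
                 columns=["jobseeking", "annuity", "seti_money", "sati_safe"])
# Synthetic turnover label loosely driven by two of the features.
y = (0.8 * X["jobseeking"] - 0.5 * X["sati_safe"]
     + rng.normal(0, 0.2, n) > 0.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss", random_state=6)
model.fit(X_tr, y_tr)

# Global and per-sample attributions of the turnover prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:")
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns))
# shap.summary_plot(shap_values, X_te)  # beeswarm plot of global importance
```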