• Title/Summary/Keyword: random forests model

51 search results, processing time 0.023 seconds

A Prediction Model for the Development of Cataract Using Random Forests (based on health-screening data from a single university hospital)

  • Han, Eun-Jeong; Song, Ki-Jun; Kim, Dong-Geon
    • The Korean Journal of Applied Statistics, v.22 no.4, pp.771-780, 2009
  • Cataract is the main cause of blindness and visual impairment; in particular, age-related cataract accounts for about half of the 32 million cases of blindness worldwide. As life expectancy rises and the elderly population expands, cataract cases increase as well, creating a serious economic and social burden for the country. However, the incidence of cataract can be reduced dramatically through early diagnosis and prevention. In this study, we developed a prediction model for the early diagnosis of cataract using hospital data on 3,237 subjects who first received a screening test and later visited a medical center for cataract check-ups between 1994 and 2005. To develop the prediction model, we used random forests and compared its predictive performance with other common discriminant models, such as logistic regression, discriminant analysis, decision trees, and naive Bayes, and two popular ensemble models, bagging and arcing. The accuracy of random forests was 67.16%, sensitivity was 72.28%, and the main factors included in the model were age, diabetes, WBC, platelet, triglyceride, BMI, and so on. The results showed that the model could predict about 70% of cataract cases from the screening test alone, without any information from a direct eye examination by an ophthalmologist. We expect that our model may contribute to diagnosing cataract and help prevent it at an early stage.
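
The setup described above — a random forest classifier on screening variables, evaluated by accuracy and sensitivity — can be sketched as follows. This is a minimal illustration on synthetic data, not the hospital data or the exact model from the study.

```python
# Hedged sketch: RF classifier with accuracy and sensitivity (recall on
# the positive class), the two figures the abstract reports. Synthetic
# features stand in for the screening variables (age, diabetes, WBC, ...).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

accuracy = accuracy_score(y_te, pred)   # overall accuracy
sensitivity = recall_score(y_te, pred)  # recall on the positive (cataract) class
print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f}")
```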

Application of Random Forests to Association Studies Using Mitochondrial Single Nucleotide Polymorphisms

  • Kim, Yoon-Hee; Kim, Ho
    • Genomics & Informatics, v.5 no.4, pp.168-173, 2007
  • In previous nuclear genomic association studies, Random Forests (RF), one of several up-to-date machine learning methods, has been used successfully to generate evidence of association between genetic polymorphisms and diseases or other phenotypes. Compared with traditional statistical methods, such as chi-square tests or logistic regression models, RF has advantages in handling large numbers of predictor variables and in examining gene-gene interactions without a specific model. Here, we applied RF to find associations between mitochondrial single nucleotide polymorphisms (mtSNPs) and diabetes risk. The results from a chi-square test validated the use of RF for association studies using mtDNA. Variable importance indexes such as the Gini index and the mean decrease in accuracy performed well compared with chi-square tests in identifying mtSNPs associated with a real disease example, type 2 diabetes.
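
The two importance measures named above correspond to what scikit-learn exposes as impurity-based (Gini) importance and permutation importance (mean decrease in accuracy). A minimal sketch with synthetic 0/1 markers standing in for mtSNP genotypes (not the study's data):

```python
# Gini importance vs. mean-decrease-in-accuracy (permutation) importance
# for an RF classifier; markers 0 and 1 are constructed to drive the label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 8)).astype(float)       # 8 fake SNP markers
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 500) > 1).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

gini = rf.feature_importances_                             # Gini index
perm = permutation_importance(rf, X, y, n_repeats=10,
                              random_state=0).importances_mean
print("top by Gini:", int(np.argmax(gini)),
      "top by permutation:", int(np.argmax(perm)))
```

Both measures should single out markers 0 and 1, which actually determine the outcome here.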

Generalized Partially Linear Additive Models for Credit Scoring

  • Shim, Ju-Hyun; Lee, Young-K.
    • The Korean Journal of Applied Statistics, v.24 no.4, pp.587-595, 2011
  • Credit scoring is an objective and automatic system to assess the credit risk of each customer. The logistic regression model is one of the popular methods of credit scoring used to predict the default probability; however, despite its advantages of interpretability and low computational cost, it may fail to detect possible nonlinear features of the predictors. In this paper, we propose to use a generalized partially linear model as an alternative to logistic regression. We also introduce modern ensemble techniques such as bagging, boosting, and random forests. We compare these methods via a simulation study and illustrate them on a German credit dataset.
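
The comparison above — logistic regression against bagging, boosting, and random forests on a default/no-default task — can be sketched like this. The data here are synthetic, not the German credit dataset, and the generalized partially linear model itself is omitted.

```python
# Hedged sketch of the model comparison: cross-validated AUC for logistic
# regression vs. three ensembles on a synthetic credit-scoring-style task.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "bagging": BaggingClassifier(n_estimators=100, random_state=1),
    "boosting": GradientBoostingClassifier(random_state=1),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=1),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
          for name, m in models.items()}
for name, auc in scores.items():
    print(f"{name}: AUC={auc:.3f}")
```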

Ensemble approach for improving prediction in kernel regression and classification

  • Han, Sunwoo; Hwang, Seongyun; Lee, Seokho
    • Communications for Statistical Applications and Methods, v.23 no.4, pp.355-362, 2016
  • Ensemble methods often increase the prediction ability of various predictive models by combining multiple weak learners and reducing the variability of the final predictive model. In this work, we demonstrate that ensemble methods also enhance prediction accuracy for kernel ridge regression and kernel logistic regression classification. We apply bagging and random forests to these two kernel-based predictive models and present the procedure by which bagging and random forests can be embedded in them. Our proposals are tested on numerous synthetic and real datasets and compared with the plain kernel-based predictive models and their subsampling approach. Numerical studies demonstrate that the ensemble approach outperforms the plain kernel-based predictive models.
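
One way to embed bagging in a kernel-based model, in the spirit of the paper above (a generic sketch, not the authors' exact procedure): use kernel ridge regression as the base learner inside a bagging ensemble and compare against the plain kernel model.

```python
# Bagged kernel ridge regression vs. plain kernel ridge regression on a
# synthetic 1-D regression task (y = sin(x) + noise).
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = KernelRidge(kernel="rbf", alpha=1.0).fit(X_tr, y_tr)
# Base estimator passed positionally to stay compatible across sklearn versions.
bagged = BaggingRegressor(KernelRidge(kernel="rbf", alpha=1.0),
                          n_estimators=50, random_state=0).fit(X_tr, y_tr)

mse_plain = mean_squared_error(y_te, plain.predict(X_te))
mse_bag = mean_squared_error(y_te, bagged.predict(X_te))
print(f"plain KRR MSE={mse_plain:.4f}  bagged KRR MSE={mse_bag:.4f}")
```

Each bagged learner is fit on a bootstrap resample, and predictions are averaged, which is exactly the variance-reduction mechanism the abstract credits for the improvement.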

Usage of coot optimization-based random forests analysis for determining the shallow foundation settlement

  • Yi, Han; Xingliang, Jiang; Ye, Wang; Hui, Wang
    • Geomechanics and Engineering, v.32 no.3, pp.271-291, 2023
  • Settlement estimation in cohesive materials is a crucial topic because of the complexity of cohesive soil texture, which can otherwise be handled only by rough approximate solutions. The goal of this research was to apply recently developed machine learning methods to predict the settlement (Sm) of shallow foundations over cohesive soils. These models include support vector regression (SVR) and random forests (RF), hybridized with the coot optimization algorithm (COM) and the black widow optimization algorithm (BWOA). The results indicate that all created systems simulated the Sm accurately, with an R2 better than 0.979 and 0.9765 for the training and test data phases, respectively. This indicates extraordinary efficiency and a good correlation between the experimental and simulated Sm. The models' results outperformed the ANFIS - PSO results, and the COM - RF findings were markedly better than those reported in the literature. By analyzing the established designs from different angles, such as various error criteria, Taylor diagrams, uncertainty analyses, and error distributions, it was possible to conclude that the recommended COM - RF was the best-performing approach for forecasting the Sm of shallow foundations, while the other techniques were also reliable.
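
The hybrid models above tune RF hyperparameters with a metaheuristic (COM/BWOA). As a simpler, generic stand-in for that search, a randomized search over the same kind of hyperparameters can be sketched on synthetic regression data, reporting the train/test R2 values the study uses as its criterion. The data, search space, and optimizer here are illustrative assumptions, not the paper's.

```python
# Hedged sketch: random search in place of the metaheuristic tuner,
# optimizing RF hyperparameters and reporting train/test R2.
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_regression(n_samples=600, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={"n_estimators": randint(50, 200),
                         "max_depth": randint(3, 20),
                         "min_samples_leaf": randint(1, 10)},
    n_iter=8, cv=3, random_state=0)
search.fit(X_tr, y_tr)

r2_train = search.score(X_tr, y_tr)  # R2, training phase
r2_test = search.score(X_te, y_te)   # R2, test phase
print(f"best params: {search.best_params_}")
print(f"R2 train={r2_train:.4f}  R2 test={r2_test:.4f}")
```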

Estimation of frost durability of recycled aggregate concrete by hybridized Random Forests algorithms

  • Rui Liang; Behzad Bayrami
    • Steel and Composite Structures, v.49 no.1, pp.91-107, 2023
  • An effective approach to promoting sustainability within the construction industry is the use of recycled aggregate concrete (RAC) as a substitute for natural aggregates. Ensuring the frost resilience of RAC technologies is crucial to facilitate their adoption in regions characterized by cold temperatures. The main aim of this study was to use the Random Forests (RF) approach to forecast the frost durability of RAC in cold locations, with a focus on the durability factor (DF) value. Three optimization algorithms, the sine-cosine optimization algorithm (SCA), the black widow optimization algorithm (BWOA), and the equilibrium optimizer (EO), were considered for determining optimal values of the RF hyperparameters. The findings show that all developed systems faithfully represented the DF, with an R2 better than 0.9539 and 0.9777 for the training and test data phases, respectively. In the two assessment and learning stages, EO - RF was found to be superior to BWOA - RF and SCA - RF. The performance of the best model (EO - RF) was superior to that of an ANN from the literature, raising the R2 values and reducing the RMSE values. Considering these justifications, together with the metric comparisons and the Taylor diagram findings, it can be concluded that, although the other RF models were equally reliable in predicting the frost durability of RAC from the DF value in cold climates, the developed EO - RF strategy excelled them all.
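
The studies above rank models by R2 and RMSE. Both are simple functions of the observed values y and the model predictions yhat, and can be computed directly (the numbers below are made up for illustration):

```python
# R2 (coefficient of determination) and RMSE from observed vs. predicted.
import numpy as np

def r2_score(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

y = np.array([2.0, 4.0, 6.0, 8.0])
yhat = np.array([2.1, 3.9, 6.2, 7.8])
print(r2_score(y, yhat), rmse(y, yhat))  # 0.995 and ~0.158
```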

Simple Graphs for Complex Prediction Functions

  • Huh, Myung-Hoe; Lee, Yong-Goo
    • Communications for Statistical Applications and Methods, v.15 no.3, pp.343-351, 2008
  • By supervised learning with p predictors, we frequently obtain a prediction function of the form $y = f(x_1, \ldots, x_p)$. When $p \geq 3$, it is not easy to understand the inner structure of f, except when the function is additive. In this study, we propose to use p simple graphs for the visual understanding of complex prediction functions produced by supervised learning engines such as LOESS, neural networks, support vector machines, and random forests.
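
One common way to obtain such per-predictor graphs (a generic partial-dependence-style profile, not necessarily the authors' exact construction): sweep one predictor over its observed range while averaging the fitted f over the sample for the remaining coordinates.

```python
# One simple curve per predictor j: average prediction of a fitted model
# as x_j sweeps its observed range, other coordinates held at the data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
f = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def profile(model, X, j, n_grid=20):
    """Average prediction as predictor j varies; other predictors kept at data."""
    grid = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
    Xc = X.copy()
    out = []
    for v in grid:
        Xc[:, j] = v                          # set predictor j everywhere to v
        out.append(model.predict(Xc).mean())  # average over the sample
    return grid, np.array(out)

grid, curve = profile(f, X, j=0)  # one such graph for each j = 0..3
print(curve.shape)
```

Plotting `curve` against `grid` for each predictor gives the p simple graphs the abstract describes.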

Application of a comparative analysis of random forest programming to predict the strength of environmentally-friendly geopolymer concrete

  • Ying Bi; Yeng Yi
    • Steel and Composite Structures, v.50 no.4, pp.443-458, 2024
  • The construction industry, one of the biggest producers of greenhouse emissions, is under great pressure as a result of growing worries about how climate change may affect local communities. Geopolymer concrete (GPC) has emerged as a feasible choice of construction material because of the environmental issues connected to cement manufacture. The findings of this study contribute to the development of machine learning methods for estimating the properties of eco-friendly concrete, which might be used in lieu of traditional concrete to reduce CO2 emissions in the building industry. In the present work, the compressive strength (fc) of GPC, in which natural zeolite (NZ) and silica fume (SF) replace ground granulated blast-furnace slag (GGBFS), is calculated using the random forests regression (RFR) methodology. A comprehensive set of experiments on GPC samples, totaling 254 data rows, was compiled from the literature. The RFR was integrated with artificial hummingbird optimization (AHA), the black widow optimization algorithm (BWOA), and the chimp optimization algorithm (ChOA), abbreviated as ARFR, BRFR, and CRFR, respectively. The RFR models showed satisfactory performance across all evaluation metrics. For the R2 metric, the CRFR model attained 0.9988 and 0.9981 on the training and test data sets, higher than BRFR (0.9982 and 0.9969), followed by ARFR (0.9971 and 0.9956). Other error and distribution metrics showed a roughly 50% improvement for CRFR with respect to ARFR.

Evaluating the quality of a baseball pitch using PITCHf/x

  • Park, Sungmin; Jang, Woncheol
    • The Korean Journal of Applied Statistics, v.33 no.2, pp.171-184, 2020
  • Major League Baseball (MLB) records and releases trajectory data, called PITCHf/x, for every pitch, using three high-speed cameras installed in every stadium. In a previous study, the quality of a pitch was assessed as the expected number of bases yielded, estimated from PITCHf/x data. However, the number of bases yielded does not always translate into runs. In this paper, we assess the quality of a pitch by combining the baseball analytics metrics Run Expectancy and Run Value using a Random Forests model. We compare the quality of pitches evaluated with Run Value to the quality of pitches evaluated with the expected number of bases yielded.
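
The Run Value metric mentioned above is conventionally defined as the change in Run Expectancy across an event, plus any runs that score on it. A minimal sketch with made-up Run Expectancy numbers for a few base-out states (the real RE table is estimated from play-by-play data, and the paper's RF model is not reproduced here):

```python
# Run Value = RE(state after) - RE(state before) + runs scored.
# RE values below are illustrative placeholders, not estimated figures.
run_expectancy = {
    ("empty", 0): 0.48,  # bases empty, 0 outs
    ("empty", 1): 0.26,  # bases empty, 1 out
    ("1st", 0): 0.85,    # runner on first, 0 outs
}

def run_value(state_before, state_after, runs_scored=0):
    return (run_expectancy[state_after]
            - run_expectancy[state_before] + runs_scored)

# A leadoff walk: empty/0 outs -> runner on 1st/0 outs.
walk_rv = round(run_value(("empty", 0), ("1st", 0)), 2)
# A leadoff strikeout: empty/0 outs -> empty/1 out.
so_rv = round(run_value(("empty", 0), ("empty", 1)), 2)
print(walk_rv, so_rv)  # 0.37 -0.22
```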

Comparison of survival prediction models for pancreatic cancer: Cox model versus machine learning models

  • Kim, Hyunsuk; Park, Taesung; Jang, Jinyoung; Lee, Seungyeoun
    • Genomics & Informatics, v.20 no.2, pp.23.1-23.9, 2022
  • A survival prediction model has recently been developed to evaluate the prognosis of resected nonmetastatic pancreatic ductal adenocarcinoma based on a Cox model, using two nationwide databases: Surveillance, Epidemiology and End Results (SEER) and the Korea Tumor Registry System-Biliary Pancreas (KOTUS-BP). In this study, we applied two machine learning methods, random survival forests (RSF) and support vector machines (SVM), for survival analysis and compared their prediction performance using the SEER and KOTUS-BP datasets. Three schemes were used for model development and evaluation. First, we used data from SEER for model development and data from KOTUS-BP for external evaluation. Second, the two datasets were swapped, with data from KOTUS-BP used for model development and data from SEER for external evaluation. Finally, we mixed the two datasets half and half and used the mixed datasets for model development and validation. We used 9,624 patients from SEER and 3,281 patients from KOTUS-BP to construct a prediction model with seven covariates: age, sex, histologic differentiation, adjuvant treatment, resection margin status, and the American Joint Committee on Cancer 8th edition T-stage and N-stage. Comparing the three schemes, the performance of the Cox model, RSF, and SVM was better when using the mixed datasets than when using the unmixed datasets. When using the mixed datasets, the C-index and the 1-year, 2-year, and 3-year time-dependent areas under the curve for the Cox model were 0.644, 0.698, 0.680, and 0.687, respectively. The Cox model performed slightly better than RSF and SVM.
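
The C-index used above to compare the Cox model, RSF, and SVM measures concordance in censored survival data: among comparable patient pairs, the fraction where the model assigns the higher risk score to the patient who dies earlier. A minimal implementation on toy numbers (ties are ignored for brevity):

```python
# Harrell-style concordance index for right-censored survival data.
import numpy as np

def c_index(time, event, risk):
    """time: observed times; event: 1 = death, 0 = censored; risk: model score."""
    concordant, comparable = 0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable when i has the event before j's time.
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
    return concordant / comparable

time = np.array([2.0, 5.0, 7.0, 9.0])
event = np.array([1, 1, 0, 1])
risk = np.array([0.9, 0.7, 0.4, 0.2])  # perfectly anti-ordered with time
print(c_index(time, event, risk))       # 1.0
```

A model with no discriminative ability scores about 0.5, so the 0.644 reported for the Cox model sits well above chance but far from perfect concordance.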