• Title/Summary/Keyword: support vector regression.

Search Results: 554

Estimating the tensile strength of geopolymer concrete using various machine learning algorithms

  • Danial Fakhri;Hamid Reza Nejati;Arsalan Mahmoodzadeh;Hamid Soltanian;Ehsan Taheri
    • Computers and Concrete / v.33 no.2 / pp.175-193 / 2024
  • Researchers have been actively investigating alternative materials as a response to the mounting environmental and economic challenges associated with traditional concrete-based construction materials, such as reinforced concrete. Examining concrete's mechanical properties with laboratory methods is a complex, time-consuming, and costly endeavor, so models that can overcome these drawbacks are urgently needed. Fortunately, the ever-increasing availability of data has paved the way for machine learning methods, which can provide powerful, efficient, and cost-effective models. This study explores the potential of twelve machine learning algorithms for predicting the tensile strength of geopolymer concrete (GPC) under various curing conditions. To this end, 221 datasets comprising tensile strength test results of GPC with diverse mix ratios and curing conditions were employed, and a number of unseen datasets were used to assess the overall performance of the machine learning models. Through a comprehensive analysis of statistical indices and a comparison of the models' behavior with laboratory tests, nearly all the models were found to estimate the tensile strength of GPC satisfactorily, with the artificial neural network and support vector regression models proving the most robust. Both the laboratory tests and the machine learning outcomes showed that GPC composed of 30% fly ash and 70% ground granulated blast furnace slag, mixed with 14 M NaOH solution, and cured in an oven at 300°F for 28 days exhibited the highest tensile strength.
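
The abstract describes fitting support vector regression (among other learners) to tabular mix-design data. As a minimal, hedged sketch of that kind of workflow, the snippet below fits an RBF-kernel SVR with scikit-learn; the feature names (fly ash fraction, NaOH molarity, curing temperature, curing age) and all data values are illustrative assumptions, not the paper's 221 laboratory records.

```python
# Hedged sketch: SVR on synthetic geopolymer-concrete-like data.
# Feature names and values are illustrative, not the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 221  # same size as the study's dataset; contents are synthetic
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),      # fly ash fraction of binder
    rng.uniform(8.0, 16.0, n),     # NaOH molarity
    rng.uniform(20.0, 150.0, n),   # curing temperature (°C)
    rng.uniform(3.0, 28.0, n),     # curing age (days)
])
y = 2.0 + 0.8 * X[:, 1] / 16 + 0.01 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}  RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}")
```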

In-depth exploration of machine learning algorithms for predicting sidewall displacement in underground caverns

  • Hanan Samadi;Abed Alanazi;Sabih Hashim Muhodir;Shtwai Alsubai;Abdullah Alqahtani;Mehrez Marzougui
    • Geomechanics and Engineering / v.37 no.4 / pp.307-321 / 2024
  • This paper provides a critical assessment of predicting sidewall displacement in underground caverns using nine distinct machine learning techniques. Accurate prediction of sidewall displacement is essential for ensuring the structural safety and stability of underground caverns, which are prone to various geological challenges. The dataset comprises 310 data points, each containing 13 relevant parameters extracted from 10 underground cavern projects located in Iran and other regions. To facilitate a comprehensive evaluation, the dataset is divided into training and testing subsets. The study employs a diverse array of machine learning models, including a recurrent neural network, a back-propagation neural network, K-nearest neighbors, normalized and ordinary radial basis function networks, a support vector machine, weight estimation, feed-forward stepwise regression, and a fuzzy inference system. These models are used to develop predictors that can accurately forecast sidewall displacement in underground caverns. The training phase uses 80% of the dataset (248 data points), while the remaining 20% (62 data points) are reserved for testing and validation. The findings highlight the back-propagation neural network (BPNN) model as the most effective in providing accurate predictions: it demonstrates a very high coefficient of determination (R² = 0.99) and a low error (RMSE = 4.27E-05), indicating superior performance in predicting sidewall displacement in underground caverns. This research contributes valuable insights into the application of machine learning techniques for enhancing the safety and stability of underground structures.
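
As a rough illustration of the evaluation protocol described above (an 80/20 split scored with R² and RMSE), the sketch below trains a back-propagation neural network regressor with scikit-learn's MLPRegressor; the 310x13 input matrix only mimics the dataset's shape, and every value is synthetic.

```python
# Hedged sketch: back-propagation neural network (MLP) regression
# with an 80/20 split, mirroring the evaluation described above.
# The 310x13 synthetic matrix only mimics the dataset's shape.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(310, 13))                                   # 13 placeholder cavern parameters
y = X @ rng.normal(size=13) + rng.normal(scale=0.1, size=310)    # placeholder displacement

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)  # 248 / 62
bpnn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1),
)
bpnn.fit(X_tr, y_tr)
pred = bpnn.predict(X_te)
print(f"R2={r2_score(y_te, pred):.4f}  RMSE={mean_squared_error(y_te, pred) ** 0.5:.2e}")
```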

GeoAI-Based Forest Fire Susceptibility Assessment with Integration of Forest and Soil Digital Map Data

  • Kounghoon Nam;Jong-Tae Kim;Chang-Ju Lee;Gyo-Cheol Jeong
    • The Journal of Engineering Geology / v.34 no.1 / pp.107-115 / 2024
  • This study assesses forest fire susceptibility in Gangwon-do, South Korea, which hosts the largest forested area in the nation and constitutes ~21% of the country's forested land. With 81% of its terrain forested, Gangwon-do is particularly susceptible to wildfires, as evidenced by the fact that seven out of the ten most extensive wildfires in Korea have occurred in this region, with significant ecological and economic implications. Here, we analyze 480 historical wildfire occurrences in Gangwon-do between 2003 and 2019 using 17 predictor variables of wildfire occurrence. We utilized three machine learning algorithms—random forest, logistic regression, and support vector machine—to construct wildfire susceptibility prediction models and identify the best-performing model for Gangwon-do. Forest and soil map data were integrated as important indicators of wildfire susceptibility and enhanced the precision of the three models in identifying areas at high risk of wildfires. Of the three models examined, the random forest model showed the best predictive performance, with an area-under-the-curve value of 0.936. The findings of this study, especially the maps generated by the models, are expected to offer important guidance to local governments in formulating effective management and conservation strategies. These strategies aim to ensure the sustainable preservation of forest resources and to enhance the well-being of communities situated in areas adjacent to forests. Furthermore, the outcomes of this study are anticipated to contribute to the safeguarding of forest resources and biodiversity and to the development of comprehensive plans for forest resource protection, biodiversity conservation, and environmental management.
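
A minimal sketch of the susceptibility-modelling step, assuming a binary fire/no-fire label and 17 placeholder predictor columns, is shown below; it trains a random forest and scores it with the area under the ROC curve, as in the abstract, but none of the data corresponds to the Gangwon-do study.

```python
# Hedged sketch: random-forest susceptibility model scored by AUC.
# The 17 synthetic predictor columns stand in for the study's
# variables (topography, forest map, soil map layers, etc.).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(960, 17))                      # fire and non-fire points (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=960) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=2)
rf = RandomForestClassifier(n_estimators=500, random_state=2)
rf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"AUC={auc:.3f}")
```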

Resume Classification System using Natural Language Processing & Machine Learning Techniques

  • Irfan Ali;Nimra;Ghulam Mujtaba;Zahid Hussain Khand;Zafar Ali;Sajid Khan
    • International Journal of Computer Science & Network Security / v.24 no.7 / pp.108-117 / 2024
  • Selecting and recommending a suitable job applicant from a pool of thousands of applications is often a daunting task for an employer, and the recommendation and selection process significantly increases the workload of the department concerned. A Resume Classification System based on Natural Language Processing (NLP) and Machine Learning (ML) techniques can automate this tedious process and ease the employer's job; automation can also significantly expedite the selection process and make it more transparent, with minimal human involvement. Various machine learning approaches have been proposed for building resume classification systems; this study presents an automated NLP- and ML-based system that classifies resumes into job categories with performance guarantees. It employs various ML algorithms and NLP techniques to measure the accuracy of resume classification and proposes a solution with better accuracy and reliability across different settings. To demonstrate the significance of NLP and ML techniques for processing and classifying resumes, the extracted features were tested on nine machine learning models: Support Vector Machines (Linear, SGD, SVC, and NuSVC variants), Naïve Bayes (Bernoulli, Multinomial, and Gaussian), K-Nearest Neighbors (KNN), and Logistic Regression (LR). The Term Frequency-Inverse Document Frequency (TF-IDF) feature representation scheme proved suitable for the resume classification task. The developed models were evaluated using F-ScoreM, RecallM, PrecisionM, and overall accuracy. The experimental results indicate that, using a one-vs-rest classification strategy for this multi-class resume classification task, the SVM family of algorithms performed best on the study dataset, with over 96% overall accuracy. These promising results suggest that the NLP and ML techniques employed in this study can be used for the resume classification task.
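
The pipeline described above (TF-IDF features fed to a one-vs-rest SVM) can be sketched as follows; the toy resume snippets and job categories are placeholders introduced for illustration, not the study's dataset.

```python
# Hedged sketch: TF-IDF features + one-vs-rest linear SVM for
# multi-class resume classification. The tiny toy corpus and job
# labels are placeholders, not the study's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

resumes = [
    "python machine learning pandas scikit-learn data analysis",
    "java spring microservices rest api backend",
    "recruitment onboarding payroll employee relations",
    "deep learning tensorflow computer vision research",
    "kotlin android mobile application development",
    "talent acquisition hr policies compensation benefits",
]
labels = ["Data Science", "Software Eng.", "HR", "Data Science", "Software Eng.", "HR"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    OneVsRestClassifier(LinearSVC()),
)
clf.fit(resumes, labels)
print(clf.predict(["nlp text classification python research"]))
print(classification_report(labels, clf.predict(resumes), zero_division=0))
```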

Change Analysis of Aboveground Forest Carbon Stocks According to the Land Cover Change Using Multi-Temporal Landsat TM Images and Machine Learning Algorithms (다시기 Landsat TM 영상과 기계학습을 이용한 토지피복변화에 따른 산림탄소저장량 변화 분석)

  • LEE, Jung-Hee;IM, Jung-Ho;KIM, Kyoung-Min;HEO, Joon
    • Journal of the Korean Association of Geographic Information Studies / v.18 no.4 / pp.81-99 / 2015
  • The acceleration of global warming has made a better understanding of carbon cycles over local and regional areas, such as the Korean peninsula, necessary. Since forests serve as a carbon sink that stores a large amount of terrestrial carbon, accurate estimation of forest carbon sequestration is in demand. In Korea, the National Forest Inventory (NFI) has been used to estimate forest carbon stocks based on the amount of growing stock per hectare measured at sampled locations. However, because such data are point (i.e., plot) measurements, it is difficult to identify the spatial distribution of forest carbon stocks. This study focuses on urban areas, which have a limited number of NFI samples and have undergone rapid land cover change, to estimate grid-based forest carbon stocks following UNFCCC Approach 3 and Tier 3. Land cover change and forest carbon stocks were estimated using Landsat 5 TM data acquired in 1991, 1992, 2010, and 2011, high-resolution airborne images, and the 3rd and 5th-6th NFI data. Machine learning techniques (i.e., random forest and support vector machines/regression) were used for land cover change classification and forest carbon stock estimation. Forest carbon stocks were estimated from reflectance, band ratios, vegetation indices, and topographic indices. Results showed that 33.23 tonC/ha of carbon was sequestered in forest areas that remained unchanged between 1991 and 2010, while 36.83 tonC/ha was sequestered in areas converted from other land-use types to forest, and 7.35 tonC/ha was released from areas converted from forest to other land-use types. This study provides a quantitative understanding of forest carbon stock change according to land cover change, and its results can contribute to effective forest management.
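
As a hedged sketch of the regression step, the snippet below maps Landsat-style predictors (red and NIR reflectance, NDVI, elevation) to per-pixel carbon stock with support vector regression; the variables and values are synthetic stand-ins for the study's reflectance, band-ratio, vegetation-index, and topographic inputs.

```python
# Hedged sketch: support vector regression mapping Landsat-derived
# predictors (reflectance, NDVI, a topographic index) to forest
# carbon stock (tonC/ha). All values are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
red = rng.uniform(0.02, 0.15, n)
nir = rng.uniform(0.2, 0.5, n)
ndvi = (nir - red) / (nir + red)
elevation = rng.uniform(50, 800, n)
X = np.column_stack([red, nir, ndvi, elevation])
carbon = 20 + 40 * ndvi + 0.01 * elevation + rng.normal(0, 3, n)   # tonC/ha (synthetic)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=50.0))
scores = cross_val_score(svr, X, carbon, cv=5, scoring="r2")
print("5-fold R2:", scores.round(3))
```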

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1109-1123 / 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides, so it is important to identify the severity of a wildfire as well as the burned area in order to manage the forest sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A C-band Synthetic Aperture Radar data and Sentinel-2A MultiSpectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used to develop wildfire severity mapping models with three machine learning algorithms (i.e., Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. A cross-site validation to examine the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases spanning different seasons and areas are added in the future.
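
The cross-site validation mentioned above can be sketched with a leave-one-site-out split: train on two wildfire sites and test on the held-out one. The site names echo the three cases in the abstract, but the pixel features and severity classes are entirely synthetic assumptions.

```python
# Hedged sketch: leave-one-site-out validation of a random-forest
# severity classifier, illustrating the cross-site test described
# above. Pixel features and severity labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
sites = np.repeat(["Samcheok", "Gangreung-Donghae", "Gosung-Sokcho"], 200)
X = rng.normal(size=(600, 6))        # e.g. dNBR, dNDVI, SAR backscatter change (placeholders)
y = rng.integers(0, 4, size=600)     # 4 synthetic severity classes

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=sites):
    rf = RandomForestClassifier(n_estimators=300, random_state=4)
    rf.fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[test_idx], rf.predict(X[test_idx]))
    print(f"held-out site: {sites[test_idx][0]:<18s} accuracy={acc:.3f}")
```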

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research is a form of document classification, so document classification techniques have been widely used in this area, whereas document summarization techniques have been inconspicuous. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. The effect of document summarization technology therefore needs to be studied in the domestic news data environment. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a summarized-news-based detection model, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; for DT (Decision Tree), the full-text-based model performed somewhat better, while for LR (Logistic Regression) the summary-based model performed best. Nonetheless, the results did not show a statistically significant difference between the summary-based and the full-text-based models. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model shows the possibility of performance improvement. This study is an experimental application of extractive summarization to fake news detection research employing various machine learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison between different summarization technologies, so an in-depth analysis applying various analytical techniques to a larger data volume would be helpful in the future.
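
As a loose illustration of extractive summarization feeding a detection model, the sketch below scores sentences by mean TF-IDF weight and keeps the top ones; this heuristic and the example text are assumptions made for illustration, not the summarization method used in the paper.

```python
# Hedged sketch: naive extractive summarization by TF-IDF sentence
# scoring, of the kind that could feed a downstream fake-news
# classifier. The scoring heuristic and example text are
# assumptions, not the paper's summarization pipeline.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(text: str, k: int = 2) -> str:
    """Keep the k sentences with the highest mean TF-IDF weight."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) <= k:
        return text
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-k:])           # preserve original sentence order
    return " ".join(sentences[i] for i in keep)

article = ("The ministry denied the viral claim on Monday. "
           "Officials said no such policy was ever drafted. "
           "Social media posts had alleged a secret tax increase. "
           "Fact-checkers traced the rumor to a satirical site.")
print(extractive_summary(article, k=2))
```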

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion during trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors also began in Korea during the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of wrongly classifying a person who will not reoffend as a recidivist is lower than the cost of wrongly classifying a person who will reoffend as a non-recidivist, because the former only adds monitoring costs while the latter incurs broader social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB)-based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression analysis), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, defined as the weighted average of the FNE (False Negative Error) and FPE (False Positive Error). To verify its usefulness, the model was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
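
The threshold-optimization step, choosing the cutoff that minimizes a weighted sum of false negatives and false positives, can be sketched as below. A scikit-learn gradient-boosting classifier stands in for XGBoost to keep the example dependency-free, and the cost weights and data are illustrative assumptions.

```python
# Hedged sketch: gradient-boosting classifier plus a decision
# threshold chosen to minimize an asymmetric misclassification cost
# (false negatives weighted more heavily than false positives).
# GradientBoostingClassifier is a stand-in for XGBoost; cost weights
# and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

C_FN, C_FP = 5.0, 1.0                      # assumed relative error costs

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.7], random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

clf = GradientBoostingClassifier(random_state=5).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

def total_cost(threshold: float) -> float:
    pred = (proba >= threshold).astype(int)
    fn = np.sum((pred == 0) & (y_te == 1))   # missed recidivists
    fp = np.sum((pred == 1) & (y_te == 0))   # extra monitoring
    return C_FN * fn + C_FP * fp

thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f}  total cost: {total_cost(best):.0f}")
```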

The big data method for flash flood warning (돌발홍수 예보를 위한 빅데이터 분석방법)

  • Park, Dain;Yoon, Sanghoo
    • Journal of Digital Convergence / v.15 no.11 / pp.245-250 / 2017
  • A flash flood is defined as flooding caused by intense rainfall over a relatively small area, flowing rapidly through rivers and valleys in a short time with no advance warning, so it can cause property damage and casualties. This study establishes a flash-flood warning system using 38 accident records reported by the National Disaster Information Center and a Land Surface Model (TOPLATS) between 2009 and 2012. Three variables from the Land Surface Model were used: precipitation, soil moisture, and surface runoff. The values of these three variables over the 6 hours preceding each flash flood were reduced to three factors through factor analysis. Decision tree, random forest, Naive Bayes, support vector machine, and logistic regression models were considered as big data methods, and their prediction performance was evaluated by comparing Accuracy, Kappa, TP Rate, FP Rate, and F-Measure. The best method was suggested based on a reproducibility evaluation at each point of flash flood occurrence and a comparison of predicted versus actual counts using 4 years of data.
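
A minimal sketch of the factor-analysis-then-classify pipeline follows: compress the 6-hour antecedent series of precipitation, soil moisture, and surface runoff into three factors and feed them to a classifier. The synthetic matrix only follows the shape implied by the abstract; none of the values are the study's data.

```python
# Hedged sketch: factor analysis to compress the 6-hour antecedent
# series of precipitation, soil moisture and surface runoff into
# 3 factors, followed by a random-forest flash-flood classifier.
# Data are synthetic; only the structure follows the abstract.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n = 300
X = rng.normal(size=(n, 18))               # 3 variables x 6 preceding hours
y = (X[:, :6].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)  # flash flood yes/no

model = make_pipeline(FactorAnalysis(n_components=3, random_state=6),
                      RandomForestClassifier(n_estimators=300, random_state=6))
print("5-fold accuracy:", cross_val_score(model, X, y, cv=5).round(3))
```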

A study on entertainment TV show ratings and the number of episodes prediction (국내 예능 시청률과 회차 예측 및 영향요인 분석)

  • Kim, Milim;Lim, Soyeon;Jang, Chohee;Song, Jongwoo
    • The Korean Journal of Applied Statistics / v.30 no.6 / pp.809-825 / 2017
  • The number of TV entertainment shows is increasing, and competition among programs in the entertainment market is intensifying as cable channels air many entertainment TV shows, so there is now a need for research on program ratings and the number of episodes. This study presents predictive models for entertainment TV show ratings and the number of episodes, using various data mining techniques such as linear regression, logistic regression, LASSO, random forests, gradient boosting, and support vector machines. The analysis shows that the average program rating before the first broadcast is affected by the broadcasting company, the average rating of the previous season, the starting year, and the number of articles, while the average rating after the first broadcast is influenced by the rating of the first broadcast, the broadcasting company, and the program type. We also found that the predicted average rating, starting year, program type, and broadcasting company are important variables in predicting the number of episodes.
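
As a hedged sketch of one of the listed techniques, the snippet below fits a LASSO regression predicting average ratings from placeholder features named after those discussed in the abstract (previous-season rating, first-episode rating, article count, broadcasting company); the data are synthetic.

```python
# Hedged sketch: LASSO regression for average program ratings using
# placeholder features named after those discussed above. Data are
# synthetic and do not come from the study.
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 200
df = pd.DataFrame({
    "prev_season_rating": rng.uniform(1, 15, n),
    "first_episode_rating": rng.uniform(1, 15, n),
    "n_articles": rng.integers(0, 500, n),
    "channel": rng.choice(["terrestrial", "cable"], n),
})
rating = 0.5 * df["prev_season_rating"] + 0.3 * df["first_episode_rating"] + rng.normal(0, 1, n)

pre = make_column_transformer(
    (OneHotEncoder(), ["channel"]),
    (StandardScaler(), ["prev_season_rating", "first_episode_rating", "n_articles"]),
)
model = make_pipeline(pre, LassoCV(cv=5))
print("5-fold R2:", cross_val_score(model, df, rating, cv=5).round(3))
```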